\section{Introduction}
Studying stability and instability of nonlinear waves and coherent structures informs our understanding of spatially extended nonlinear systems, with examples of applications that are of particular relevance to the present work ranging from instability in fluids \cite{doi:10.1146/annurev.fluid.37.061903.175810}, spatial ecology \cite{doi:10.1073/pnas.1420171112}, biology \cite{PhysRevE.105.014602}, to material science \cite{PhysRevB.83.064113}. In models one analyzes stability of coherent structures using a variety of methods: explicitly \cite{MR1897705,MR1177566}, perturbatively \cite{MR1878337}, based on topological arguments \cite{MR3789546}, or, most often, using numerical methods that approximate the infinite domains by finite-domain boundary-value problems \cite{stablab}. The analysis is commonly split into two parts, separating the stability in the far-field, with typically simple, spatially constant or periodic states, and the core region. The far field is usually more easily tractable, while detailed information on the core is rarely available explicitly or even asymptotically. In function spaces, the distinction between core and far-field is reflected in the distinction between point and essential spectra of the linearization, respectively; see \cite{fiedler03,MR3100266,sandstede02} for an overview and references therein. Essential spectra can be determined by algebraic computations after Fourier transform (or by solving boundary value problems after Bloch wave transforms in the case of asymptotically periodic states). Point spectra can be well approximated by problems in bounded domains with exponential convergence away from absolute spectra \cite{ssabs}.
Our focus here is on essentially one-dimensional systems, with one unbounded spatial direction, where spatial-dynamics methods have helped establish a wealth of results on existence and stability. Our interest is in identifying pointwise temporal growth rates, that is, exponential growth rates in time when initial conditions are compactly supported and growth is measured in a bounded region of space. One finds that such growth rates correspond to singularities in the spectral parameter $\lambda$ of the resolvent Green's function $G_\lambda(x,y)$ and we refer to those here as \emph{pointwise spectral values}. Such pointwise spectral values can not generally be identified as eigenvalues in an appropriate function space: they include resonances, that is, eigenvalues hidden by the essential spectrum, and branch points of the dispersion relation. Also, perturbation results for pointwise spectral values are more subtle: unlike spectra, they are in general not upper semicontinuous with respect to system parameters.
Nevertheless, we propose here an iterative method that identifies pointwise spectral values using methods very much inspired by the power method, which is at the heart of computational methods for most eigenvalue problems. As a specific objective, we focus on a basic algorithmic challenge: given a reference point $\lambda_0\in\mathbb{C}$:
\begin{center}
\emph{Find the pointwise spectral value $\lambda$ closest to $\lambda_0$!}
\end{center}
Questions of this type arise when investigating resonances in Schr\"odinger operators and in nonlinear optics, although algorithms of the nature proposed here do not appear to have been used in the literature. Even in constant- or periodic-coefficient problems, such tasks present challenging problems, relating to many questions in fluid mechanics \cite{doi:10.1146/annurev.fluid.37.061903.175810,vansaarloos03}, material science \cite{PhysRevB.83.064113}, and ecology \cite{doi:10.1073/pnas.1420171112}. Current methods require an intricate parameter continuation of eigenvalue problems and may at times miss leading pointwise growth rates; see for instance \cite{brevdo_linear_1999,MR2183609}.
Our focus on pointwise spectral values originates in work on pointwise Green's functions in the context of shock stability \cite{zumbrun98}. We are further motivated by the inherently pointwise nature of the analysis of coherent structures and the Evans function in many examples \cite{MR1068805}, the vast literature in fluid dynamics concerned with convective and absolute (pointwise) instabilities \cite{doi:10.1146/annurev.fluid.37.061903.175810}, and, lastly, the role of pointwise stability in the selection of fronts propagating into unstable states \cite{holzerscheel14,https://doi.org/10.48550/arxiv.2012.06443}.
Our point of view is shaped by the perspective of \emph{nonlinear eigenvalue problems}, that is, matrix or operator families that depend analytically but nonlinearly on a spectral parameter and where spectral parameter values for which the inverse of the operator is not analytic
are the object of interest. This point of view allows us to simultaneously treat far-field and core, to preserve structure of eigenvalue problems, and to develop iterative methods that provably converge to leading eigenvalues. Theoretically, our first contribution is a formulation of the problem of finding pointwise spectral values as a nonlinear eigenvalue problem, where local power series are readily computed from a homological equation. Our second contribution develops an inverse power method for this nonlinear eigenvalue problem that provably converges to the nearest spectral value. We prove in particular that, curiously, the method detects eigenvalues even past the radius of convergence of the local power series expansion.
The approach developed here is complementary to Evans function methods. The Evans function is a popular and well-developed analytical and computational tool for the analysis of point spectra and resonances, a Wronskian-type complex analytic function that enables one to find eigenvalues as roots of an analytic function, exploiting for instance winding number computations to count numbers of unstable eigenvalues and to thereby robustly establish stability or instability; see for instance \cite{MR1068805,sandstede02}. The Evans function is computed either via differential forms or, more directly, by taking a determinant of bases of bounded solutions to the linearized equation at spatial $\pm \infty$. It can in fact be related to an operator-theoretic, non-pointwise Fredholm determinant \cite{MR2350362}. In many ways, the most challenging problems arise when studying point spectra located near or embedded in essential spectra. The approach here provides a more canonical computational view on these spectral problems while, at the same time, emphasizing the {pointwise} character of the stability questions of interest. By avoiding determinants, it has the potential to perform better in large systems.
\paragraph{Outline.} The remainder of the paper is organized as follows. We set up a somewhat general framework for eigenvalue problems and formulate the nonlinear pointwise eigenvalue problem in \S\ref{s:2}. We discuss an inverse power method for nonlinear eigenvalue problems and its convergence properties in \S\ref{s:3}, and discuss implementation, both for the inverse power method and for the derivation of the nonlinear eigenvalue problem on the Grassmannian, in \S\ref{s:4}. We conclude with example computations of pointwise spectral values in constant and variable-coefficient problems in \S\ref{s:5} and a brief summary in \S\ref{s:6}.
\section{Pointwise nonlinear eigenvalue problems from linearization at heteroclinic profiles}\label{s:2}
\subsection{First-order ODEs from eigenvalue problems}
We consider eigenvalue problems that arise in the linearization at traveling waves, of the form
\begin{equation}\label{e:twlin}
u_x=A(x;\lambda)u,\quad x\in\mathbb{R},\ u\in\mathbb{C}^N,
\end{equation}
with matrix coefficients $A(x;\lambda)\in \mathbb{C}^{N\times N}$, continuous in $x$ and analytic in $\lambda$. We focus on the simplest case of asymptotically constant coefficients
\begin{equation}\label{e:aprop}
\lim_{x\to\pm\infty}A(x;\lambda)=A_\pm(\lambda).
\end{equation}
These equations arise when casting the linearization in the comoving frame as a first-order ODE, substituting $\mathrm{e}^{\lambda t}$ for time dependence.
\begin{Example}\label{ex:1} We explain the transformations in the case of a simple example, the scalar nonlinear diffusion equation
\begin{equation}\label{e:kpp}
w_t=w_{xx}+w-w^3,
\end{equation}
with traveling fronts $w=w_*(x-ct)$ connecting $w=w_-$ at $x=-\infty$ to $w=w_+$ at $x=+\infty$, $w_\pm\in \{-1,0,1\}$. The linearization at such a front satisfies
\begin{equation}\label{e:kppl}
w_t=w_{xx}+cw_x + (1-3w_*^2)w =: \mathcal{L}w,
\end{equation}
which leads to the formulation in the form \eqref{e:twlin},
\begin{equation}
u_x=A(x;\lambda)u,\qquad A(x;\lambda)=
\begin{pmatrix}
0&1\\
-1+3w_*^2(x)+\lambda & -c
\end{pmatrix},
\end{equation} with
\begin{equation}
A_\pm(\lambda)=
\begin{pmatrix}
0&1\\
-1+\lambda & -c
\end{pmatrix}, \text{ if }w_\pm=0, \text{ or }
\quad A_\pm(\lambda)=
\begin{pmatrix}
0&1\\
2+\lambda & -c
\end{pmatrix}, \text{ if }|w_\pm|=1.
\end{equation}
\end{Example}
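For concreteness, a minimal Python sketch assembling $A(x;\lambda)$ and $A_\pm(\lambda)$ for this example, assuming the explicit standing layer $w_*(x)=\tanh(x/\sqrt{2})$ with $c=0$ that reappears in Example \ref{ex:1ctd} (the helper names are our illustrative choices, not part of the analysis):
\begin{verbatim}
import numpy as np

def w_star(x):
    # explicit standing layer connecting w = -1 and w = +1 (speed c = 0)
    return np.tanh(x / np.sqrt(2))

def A(x, lam, c=0.0):
    # coefficient matrix of u_x = A(x; lambda) u from the linearized equation
    return np.array([[0.0, 1.0],
                     [-1.0 + 3.0 * w_star(x) ** 2 + lam, -c]])

def A_pm(lam, w_infty, c=0.0):
    # asymptotic matrices A_+(lambda), A_-(lambda) for the limiting states w_infty
    return np.array([[0.0, 1.0],
                     [-1.0 + 3.0 * w_infty ** 2 + lam, -c]])
\end{verbatim}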
Such a formalism has been extended to many other situations, including asymptotically periodic coefficients $A_\pm=A_\pm(x;\lambda)=A_\pm(x+L_\pm;\lambda)$ or ill-posed equations on an infinite-dimensional state space $u\in X$ for problems in infinite cylinders or modulated waves and it would be interesting to pursue the methods developed here in such contexts as well \cite{ssmodstab,ssmorse,MR1759902}. We note that we explicitly allow nonlinear, polynomial dependence of $A(x;\lambda)$ on $\lambda$, for cases with higher-order time derivatives, for instance the wave equation, or for cases where the spectral parameter is replaced by a polynomial to resolve branch points in the dispersion relation; see for instance Examples \ref{ex:source} and \ref{ex:1ctd}, below.
One can in much generality relate properties of the operator $\mathcal{T}(\lambda)=\frac{\mathrm{d}}{\mathrm{d} x}-A(x;\lambda)$ to properties of the linearization of the traveling wave, in our example the operator $\mathcal{L}$, both in function spaces and in a pointwise sense; see for instance \cite{sandstede02,ssmorse,holzerscheel14}.
We will therefore focus on properties of the (linear) operator pencil $\mathcal{T}$ without trying to relate back to the traveling-wave linearization in any generality.
It is not hard to see \cite{palmer,ssmorse} that $\mathcal{T}(\lambda)$ is Fredholm as a closed, densely defined operator on, say, $L^2(\mathbb{R},\mathbb{C}^N)$ with domain of definition $H^1(\mathbb{R},\mathbb{C}^N)$ if and only if the asymptotic matrices $A_\pm(\lambda)$ are hyperbolic, that is, $\mathrm{spec}\, A_\pm(\lambda)\cap \mathrm{i}\mathbb{R} = \emptyset$. The Fredholm index is then given by the difference of Morse indices,
\begin{equation}\label{e:fm}
\mathrm{ind}\,(\mathcal{T}(\lambda))=i_\mathrm{M}(A_-(\lambda))-i_\mathrm{M}(A_+(\lambda)),
\end{equation}
where $i_\mathrm{M}(A)$ counts the eigenvalues of $A$ with positive real part with multiplicity; see for instance \cite{ssmorse} and references therein. For well-posed equations, $\mathcal{L}-\lambda$ and thereby $\mathcal{T}(\lambda)$ are invertible for $\Re\lambda\gg 1$, such that the Morse index there is constant, $i_\mathrm{M}(A_+(\lambda))\equiv i_\infty=i_\mathrm{M}(A_-(\lambda))$. Fredholm properties, that is, closedness of range and dimensions of kernel and cokernel, of $\mathcal{T}(\lambda)$ and of $\mathcal{L}-\lambda$ agree.
In the Fredholm 0 region, the analytic Fredholm theorem guarantees that generalized multiplicities of isolated eigenvalues of $\mathcal{L}$ are finite. In fact, generalized multiplicities of an eigenvalue $\lambda$ of $\mathcal{L}$ agree with the multiplicity of an eigenvalue of $\mathcal{T}(\lambda)$ when the latter is defined as follows; see \cite{trofimov,mennicken,gohberg} for the introduction of this concept and context, respectively.
\begin{Definition}[Algebraic multiplicities and Jordan chains]\label{d:jch}
Suppose $\mathcal{T}(\lambda_*)$ is Fredholm of index 0 with nontrivial kernel. We say a polynomial $u(\lambda)$ of degree $p$ is a root function if $\mathcal{T}(\lambda)u(\lambda)=\mathcal{O}((\lambda-\lambda_*)^{p+1})$. For root functions $u(\lambda)=\sum_{j=0}^p u_j(\lambda-\lambda_*)^j$, we refer to the $u_j$, $j>0$, as generalized eigenvectors. Note that $u_0$ is always an eigenvector, that is, $\mathcal{T}(\lambda_*)u_0=0$. We define the algebraic multiplicity of $\lambda_*$ as the dimension of the (linear) space of root functions (of arbitrary degree $p$).
\end{Definition}
A quick calculation verifies that the definitions here agree with the usual definitions of algebraic multiplicity in the case of standard eigenvalue problems.
\begin{Example}
In our example, a generalized eigenvector to $\lambda=0$ of $\mathcal{L}$ solves $\mathcal{L}w_1+w_0=0$, $\mathcal{L}w_0=0$. Defining $u_j=(w_j,w_{j,x})$, $j=0,1$, we find immediately from algebraic manipulation that $\mathcal{T}(0)u_0=0$ and $\mathcal{T}(0)u_1+\mathcal{T}'(0)u_0=0$, showing how Jordan chains are equivalent.
\end{Example}
Since we did not formally introduce a general class of operators $\mathcal{L}$, we only state informally that in addition to Fredholm properties, also algebraic multiplicities of eigenvalues in the Fredholm index 0 region coincide for $\mathcal{L}-\lambda $ and $\mathcal{T}(\lambda)$.
\subsection{The Grassmannian and pointwise formulations of eigenvalue problems}
Our aim here is to develop a pointwise-in-$x$ formulation of the spectral problem for $\mathcal{T}(\lambda)$. Such formulations have been used extensively in the context of Schr\"odinger operators and developed also more generally in connection with stability of nonlinear waves in \cite{zumbrun98}.
We start by considering the ODE \eqref{e:twlin} in the Fredholm index 0 regime where $i_\mathrm{M}(A_\pm(\lambda))=i_\infty$. The linear equation induces a flow on $k$-dimensional (complex) subspaces $\mathrm{Gr}(k,N)$. We write $E^\mathrm{s/u}_\pm(\lambda)$ as the generalized eigenspaces of $A_\pm(\lambda)$ to eigenvalues $\nu$ with $\Re\nu<0$ and $\Re\nu>0$, respectively.
These subspaces are invariant under $A_\pm(\lambda)$, respectively, and thereby invariant under the flow to $u'=A_\pm(\lambda)u$. One finds that $E^\mathrm{s}_+(\lambda)$ is unstable and $E^\mathrm{u}_-(\lambda)$ is stable for the dynamics on $\mathrm{Gr}(N-i_\infty,N)$ and $\mathrm{Gr}(i_\infty,N)$ , respectively, that is, eigenvalues of the linearization at those equilibria all have positive or negative real part, respectively.
One can then find unique subspaces $E^\mathrm{s}_+(x;\lambda)$ and $E^\mathrm{u}_-(x;\lambda)$, continuous in $x$ and locally analytic in $\lambda$,
which are invariant under the flow on the Grassmannian induced by \eqref{e:twlin} and converge to $E^\mathrm{s}_+(\lambda)$ and $E^\mathrm{u}_-(\lambda)$,
for $x\to +\infty$ and $x\to -\infty$, respectively. In particular, $\lambda$ is an eigenvalue if and only if the intersection $E^\mathrm{s}_+(0;\lambda)\cap E^\mathrm{u}_-(0;\lambda)$ is nontrivial.
\begin{Lemma}[Analytic bases]\label{l:ab}
For any fixed compact region $\Omega\subset\mathbb{C}$ where $E^\mathrm{s/u}_\pm(0;\lambda)$ are analytic, there exist analytic bases $w_j^\mathrm{u}(\lambda)$, $1\leq j\leq i_\mathrm{M}$, and $w_j^\mathrm{s}(\lambda)$, $i_\mathrm{M}+1\leq j\leq N$, that span $E^\mathrm{u}_-(0;\lambda)$ and $E^\mathrm{s}_+(0;\lambda)$, respectively.
\end{Lemma}
\begin{Proof}
The existence of such bases is an immediate consequence of \cite[Rem. 2]{shubin}, which guarantees the existence of an analytic complement and thereby analytic projections onto $E^\mathrm{s/u}_\pm(0;\lambda)$, respectively, and \cite[\S II.4.2]{kato}, which concludes the existence of analytic bases for subspaces given as the range of an analytic projection. A more constructive approach was described in \cite{MR2221065}, constructing analytic bases to $E^\mathrm{s/u}_\pm(\lambda)$ first, lifting them to nearby subspaces at $x=\pm L$, $L\gg 1$, and then transporting bases with the flow to the ODE \eqref{e:twlin}.
We describe a third approach here that relates to our specific choice of bases, below. Write $E(\lambda)$ for an analytic family of subspaces, either $E^\mathrm{s}_+(0;\lambda)$ or $E^\mathrm{u}_-(0;\lambda)$, choose a complement $F_0$ for $E_0:=E(\lambda_0)$, and choose a basis $w_1,\ldots,w_m$ in $E(\lambda_0)$. Write $P_0$ for the projection along $F_0$ onto $E(\lambda_0)$. The subspace $E(\lambda)$ is then given as the graph of a map $H(\lambda):E_0\to F_0$, whenever $E(\lambda)\cap F_0=\{0\}$. We claim that the coefficients of $H(\lambda)$ are meromorphic, with isolated poles of finite order located where $E(\lambda)\cap F_0\neq\{0\}$. For this, fix $\lambda_1$ where $H(\lambda)$ is singular, and choose complementary subspaces $E_1,F_1$ so that $E(\lambda)=\mathrm{graph}\,(H_1(\lambda))$, $H_1(\lambda):E_1\to F_1$ analytic for $\lambda\sim \lambda_1$. The map $H(\lambda)$ is then explicitly found from $H(\lambda)=(\mathrm{id}-P_0)(\mathrm{id}+H_1(\lambda))\left(P_0(\mathrm{id}+H_1(\lambda))\right)^{-1}$, where the inverse yields a meromorphic function with isolated poles.
We therefore find basis vectors $W_j(\lambda)=w_j+H(\lambda)w_j$, $1\leq j\leq m$, for all $\lambda$ except for a finite set of points where the $W_j$ have poles. For each of the $W_j$, we can however remove the pole singularity at a point $\lambda_\ell$ multiplying the singular basis vector $W_j$ by $(\lambda-\lambda_\ell)^p$, where $p$ is the maximal order of the pole in the components of $W_j$. We thereby obtain analytic vectors $\tilde{W}_j$ which form a basis for all $\lambda$.
\end{Proof}
The same result applies in the case where bases have branch points which are resolved writing $\lambda=\varphi(\gamma)$. Subspaces that are analytic in $\gamma$ then have analytic bases.
\begin{Definition}[Pointwise eigenvalue problem]\label{d:iota}
We define the trivialization of the bundles $E^\mathrm{s}_+(0;\lambda)$ and $E^\mathrm{u}_-(0;\lambda)$ through maps
\begin{align*}
\iota^\mathrm{u}(\lambda):\mathbb{C}^{i_M}\to E^\mathrm{u}_-(0;\lambda),\qquad & u\mapsto \sum_{j=1}^{i_\mathrm{M}}
u_j w_j^\mathrm{u}(\lambda),\\
\iota^\mathrm{s}(\lambda):\mathbb{C}^{N-i_M}\to E^\mathrm{s}_+(0;\lambda),\qquad & u\mapsto \sum_{j=i_\mathrm{M}+1}^{N}
u_j w_j^\mathrm{s}(\lambda) ,
\end{align*}
where the bases $w_j^\mathrm{s/u}(\lambda)$ were constructed in Lemma \ref{l:ab}. We then define the intersection map
\[
\iota_\mathrm{sec}(\lambda):E^\mathrm{u}_-(0;\lambda)\times E^\mathrm{s}_+(0;\lambda)\to \mathbb{C}^N,\qquad (w^\mathrm{u},w^\mathrm{s})\mapsto w^\mathrm{u}-w^\mathrm{s},
\]
and its trivialization
\begin{equation}
\iota(\lambda)=\iota_\mathrm{sec}(\lambda)\circ \left(\iota^\mathrm{u}(\lambda),\iota^\mathrm{s}(\lambda)\right).
\end{equation}
We also define the associated Evans function
\begin{equation}\label{e:Evans}
\mathcal{E}(\lambda)=\mathrm{det}\,\iota(\lambda).
\end{equation}
\end{Definition}
\begin{Proposition}\label{e:ptwisemult}
The nonlinear eigenvalue problems $\mathcal{T}(\lambda)$ and $\iota(\lambda)$ are equivalent in the sense that geometric and algebraic multiplicities agree in a region $\Omega$ where $\mathcal{T}(\lambda)$ is Fredholm of index 0. In particular, the algebraic multiplicity of an eigenvalue of $\mathcal{T}(\lambda)$ equals the order of the root of the Evans function $ \mathcal{E}(\lambda)=\mathrm{det}\,\iota(\lambda)$.
\end{Proposition}
\begin{Proof}
We claim that root functions for $\mathcal{T}$ and $\iota$ are in 1-1 correspondence. Indeed, given a root function $u^0(\lambda)$ for $\iota$, we can construct functions $u(x;\lambda)$ by solving the initial-value problem at $x=0$ and find bounded solutions up to the order of the root function. Conversely, restricting root functions for $\mathcal{T}$ to $x=0$ yields root functions for $\iota$. For finite-dimensional nonlinear eigenvalue problems as the one defined by $\iota$, the algebraic multiplicity is as defined in Definition \ref{d:jch} and agrees with the order of the root of the determinant \cite{trofimov}.
\end{Proof}
We are also interested in a version of Proposition \ref{e:ptwisemult} concerned with the analytic extension of $\iota(\lambda)$ past the essential spectrum. As an analytic function, $\iota$ has a uniquely defined analytic extension to some open set $\Omega\subset \mathbb{C}$. The motivation for considering this extension is rooted in the relation between this extension of $\iota$ and pointwise singularities of the Green's function.
\begin{Proposition}[Singularities of the pointwise Green's functions and $\iota$]\label{p:ptwiseext}
Consider the Green's function of $\mathcal{T}(\lambda)$, the solution to $\mathcal{T}(\lambda)G(x,y;\lambda)=\delta(x-y)\mathrm{id}$. Then $G(x,y;\lambda)$, with $x,y$ fixed but arbitrary, possesses an analytic extension in $\lambda$ into the region where $\iota(\lambda)^{-1}$ possesses an analytic extension. On the other hand, $G(x,y;\lambda)$ is not analytic when
\begin{enumerate}
\item $E^\mathrm{u}_-(0;\lambda)$ or $E^\mathrm{s}_+(0;\lambda)$ are not analytic, or when
\item $E^\mathrm{u}_-(0;\lambda)$ and $E^\mathrm{s}_+(0;\lambda)$ intersect nontrivially.
\end{enumerate}
\end{Proposition}
Note that the poles of $\iota(\lambda)$ do not necessarily contribute to singularities of $\iota(\lambda)^{-1}$. Case (ii) corresponds to zeros of an extension of the Evans function, yielding resonances or embedded eigenvalues, both of which we refer to as extended point spectrum, following \cite{ssabs,rademacher07}. Analyticity of $E^\mathrm{u}_-(0;\lambda)$ and $E^\mathrm{s}_+(0;\lambda)$ follows from analyticity of $E^\mathrm{u}_-(\lambda)$ and $E^\mathrm{s}_+(\lambda)$ with sufficiently rapid convergence of the matrices $A(x;\lambda)$ by results usually referred to as ``Gap Lemmas'' \cite{kapitula98,gardner98}. Absent such conditions, subspaces $E^\mathrm{u}_-(0;\lambda)$ and $E^\mathrm{s}_+(0;\lambda)$ may exhibit essential singularities \cite{sandstede04}. Singularities of the asymptotic subspaces correspond to branch point singularities at infinity, since subspaces are obtained from algebraic equations; see \cite{holzerscheel14} for an extensive discussion of those singularities, referred to there as right-sided pointwise growth modes.
\begin{Proof}[ of Prop. \ref{p:ptwiseext}]
Setting without loss of generality $y=0$, we need to solve $\mathcal{T}(\lambda)G(x,0;\lambda)=\delta(x)v$, $v\in\mathbb{C}^N$. Clearly, this requires a solution to the ODE defined by $\mathcal{T}$ with a jump at $x=0$ of size $v$. In the region where $\mathcal{T}$ is invertible, such a solution can be obtained uniquely by solving $\iota(\lambda)(w^\mathrm{u},-w^\mathrm{s})=v$, and extending the initial condition $u_-=\sum_{j=1}^{i_\mathrm{M}} w^\mathrm{u}_j u_j$ to $x<0$ and extending the initial condition $u_+=\sum_{j=i_\mathrm{M}+1}^{N} w^\mathrm{s}_j u_j$ to $x>0$. This construction clearly shows analyticity of $G$ given analyticity of $\iota^{-1}$, and, on the other hand, that conditions (i) and (ii) are necessary for analyticity of $G$.
\end{Proof}
Information on the Green's kernel $G$ translates via Laplace transform directly into pointwise information on the solutions $\mathrm{e}^{\mathcal{L}t}w_0$, which we state here only informally. Given compactly supported initial conditions $w_0(x)$, $\sup_{|y|\leq K}\left(\mathrm{e}^{\mathcal{L}t}w_0\right)(y)$ decays uniformly for any $K$ if $\iota(\lambda)^{-1}$ is analytic in $\{\Re\lambda>0\}$. Conversely, the supremum grows exponentially if $\iota(\lambda)^{-1}$ has a singularity in $\{\Re\lambda>0\}$, since a direct Laplace transform argument would otherwise imply analyticity of $G$; see for instance \cite[Cor. 2.3]{holzerscheel14}.
In a way similar to the case of point spectrum, one can associate Jordan chains to points $\lambda$ where $\iota$ is not invertible.
In the following, we assume that a meromorphic realization of $\iota$ via meromorphic choices of bases, that is, of trivializations $\iota^\mathrm{u/s}$, has been fixed in the region where $E^\mathrm{u}_-(0;\lambda)$ and $E^\mathrm{s}_+(0;\lambda)$ are analytic.
\begin{Definition}[Spectral values]\label{d:sv}
We say $\lambda_0$ is a spectral value of $\iota$ if $\iota^{-1}(\lambda)$ is not analytic at $\lambda_0$. Equivalently, conditions (i) or (ii) in Proposition \ref{p:ptwiseext} are violated.
\end{Definition}
\begin{Remark}[Removing branch points]
Singularities stemming from singularities of the asymptotic subspaces are branch points and can be removed using a polynomial reparameterization of the spectral parameter, $\lambda=\varphi(\gamma)$. Considering the new spectral problem with eigenvalue parameter $\gamma$, all of the above considerations apply again.
\end{Remark}
\begin{Example}\label{ex:source}
As a simple first example, we consider
\[
w_t=w_{xx}-2\,\mathrm{sign}(x) w_x,
\]
which leads to the spatial ODE
\begin{equation}\label{e:source}
u_x=v,\qquad v_x=2\,\mathrm{sign}(x) v+\lambda u,
\end{equation}
with
\[
E_+^\mathrm{s}(\lambda)=\begin{pmatrix}1\\1-\sqrt{1+\lambda} \end{pmatrix},\qquad
E_-^\mathrm{u}(\lambda)=\begin{pmatrix}1\\-1+\sqrt{1+\lambda}\end{pmatrix},
\]
and
\[
\mathcal{E}(\lambda)=2\left( 1-\sqrt{1+\lambda}\right).
\]
We find a zero at $\lambda=0$, case (ii) above, and a branch point at $\lambda=-1$, case (i). Note that the branch point corresponds to a spectral value of $\iota$, which can be removed by passing to a Riemann surface, that is, replacing $\lambda=-1+\gamma^2$ in \eqref{e:source}.
\end{Example}
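As a sanity check, the root of $\mathcal{E}$ in this example can be reproduced numerically; the Python sketch below (the helper name \texttt{evans} is our illustrative choice) assembles $\iota(\lambda)$ from numerically computed eigenvectors of the asymptotic matrices. Numerical eigenvector normalization changes $\mathcal{E}$ by a nonvanishing factor and destroys analyticity in $\lambda$, so only the location of the root at $\lambda=0$, not the value $2(1-\sqrt{1+\lambda})$, is recovered; compare the discussion of normalizations in Example \ref{ex:1ctd} below.
\begin{verbatim}
import numpy as np

def evans(lam):
    # asymptotic matrices of u_x = v, v_x = 2 sign(x) v + lam u
    Ap = np.array([[0.0, 1.0], [lam, 2.0]], dtype=complex)
    Am = np.array([[0.0, 1.0], [lam, -2.0]], dtype=complex)
    nu_p, Vp = np.linalg.eig(Ap)
    nu_m, Vm = np.linalg.eig(Am)
    es = Vp[:, np.argmin(nu_p.real)]   # stable direction at +infinity
    eu = Vm[:, np.argmax(nu_m.real)]   # unstable direction at -infinity
    return np.linalg.det(np.column_stack([eu, -es]))

print(abs(evans(0.0)), abs(evans(3.0)))   # ~0 at the root, nonzero away from it
\end{verbatim}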
\begin{Example}\label{ex:1ctd}
Returning to Example \ref{ex:1}, we consider the (explicit) case of layers $w_*(x)=\tanh(x/\sqrt{2})$ connecting $w_\pm=\pm 1$ at $x=\pm\infty$. The eigenvalue problem $w_{xx}+(1-3\tanh^2(x/\sqrt{2}))w=\lambda w$ can be converted into the first order system $u_x=A(x;\lambda)u$ with asymptotic matrices $A_\pm(\lambda)=\begin{pmatrix} 0 &1\\ \lambda+2 & 0\end{pmatrix}$. We have $i_\infty=1$ and stable and unstable subspaces are well defined outside of $\{\lambda\leq -2\}$. Solving the ODE explicitly, one finds the solution, substituting $\gamma=\sqrt{\lambda+2}$,
\[
u_1^\mathrm{u}(x)= \begin{pmatrix} u_+(x)\\ u_+'(x) \end{pmatrix},\qquad
u_2^\mathrm{s}(x)= \begin{pmatrix} u_+(-x)\\ -u_+'(-x) \end{pmatrix},
\]
where
\[
u_+(x)=
(1 + \mathrm{e}^{\sqrt{2} x})^2
\mathrm{e}^{
-\frac{
\sqrt{2}\gamma (\sqrt{2} - 3 \gamma + \sqrt{2} \gamma^2)
}{
2 - 3 \sqrt{2} \gamma + 2 \gamma^2
}
x
}
(2 - 3 \sqrt{2} \gamma + 2 \gamma^2 +
4 \mathrm{e}^{\sqrt{2} x} (-2 + \gamma^2) +
\mathrm{e}^{2 \sqrt{2} x} (2 + 3 \sqrt{2} \gamma + 2 \gamma^2)),
\]
such that
\[
\iota(\lambda)=\begin{pmatrix} u_1^\mathrm{u}(0)&u_2^\mathrm{s}(0)\\
(u_1^\mathrm{u})'(0)&(u_2^\mathrm{s})'(0)
\end{pmatrix}
= \begin{pmatrix} -1+2\gamma^2 & -1+2\gamma^2 \\
-2 \gamma (-2 + \gamma^2)&2 \gamma (-2 + \gamma^2)
\end{pmatrix},\qquad
\mathcal{E}(\lambda)=\mathrm{det}(\iota(\lambda))=-4 \gamma (-2 + \gamma^2) (-1 + 2 \gamma^2).
\]
Clearly, $\iota$ is analytic in $\gamma\in \mathbb{C}$ in this case, with zeros alias eigenvalues at $\gamma = 0,\pm \sqrt{2}, \pm 1/\sqrt{2}$. Only positive values of $\gamma$ correspond to eigenfunctions, negative values to resonance poles (exponentially growing solutions) and $\gamma=0$ to an embedded eigenvalue at the edge of the essential spectrum. Note that all roots of $\mathcal{E}$ are simple in this case. We see that $\iota$ is analytic on the Riemann surface defined by $\gamma$.
We emphasize that our choice of $u_+(x)$ is by no means unique. One can clearly multiply $u_1^\mathrm{u}$ and $u_2^\mathrm{s}$ by non-vanishing analytic functions $\alpha_\pm(\lambda)$. In fact, canonical computations of the bases may well lead to choices where $\alpha_\pm(\lambda)$ have poles in the complex plane, which one then simply removes by multiplying by suitable polynomials. A simple example of such a scaling is when one insists on a normalization $u_+(0)=1$, introducing a singularity $(1-2\gamma^2)^{-1}$ with two poles. Less fortunate choices may introduce factors that exhibit additional branch points or other singularities in the parameterization. An example of such a difficulty arises when attempting the common normalization $\mathcal{E}\to 1$ for $\lambda\to\infty$, which one could accomplish by normalizing $u_+(0)=(-1+2\gamma^2)/\gamma^{5/2}$, clearly introducing additional branch singularities. Another natural choice of normalization would be $|u_1^\mathrm{u}(0)|=1$, which would, in addition to singularities, introduce terms involving $\bar{\gamma}$, destroying analyticity entirely.
\end{Example}
\begin{Example}[Lack of continuity]\label{ex:cpw}
In function spaces, one readily concludes that invertibility is an open property in the spectral parameter, also under large classes of perturbations, which establishes upper semicontinuity of the spectrum under perturbations. This is, in general, not true for singularities of the pointwise resolvent as can be seen in the following example, borrowed from \cite{holzerscheel14},
\begin{equation}\label{e:cpw0}
u_t=-u_x+{\varepsilon} v,\qquad v_t=v_x,
\end{equation}
which leads to the first order spatial spectral ODE
\begin{equation}\label{e:cpw0s}
u_x=-\lambda u+{\varepsilon} v,\qquad v_x=\lambda v,
\end{equation}
and globally analytic stable and unstable subspaces,
\[
E_+^\mathrm{s}(\lambda)=\begin{pmatrix}1\\0 \end{pmatrix},\qquad
E_-^\mathrm{u}(\lambda)=\begin{pmatrix}{\varepsilon}\\2\lambda \end{pmatrix},
\]
that intersect nontrivially at $\lambda=0$, $\mathcal{E}(\lambda)=2\lambda$. For ${\varepsilon}=0$, however, the basis of $E_-^\mathrm{u}(\lambda)$ is degenerate at $\lambda=0$ so that a reparameterization is needed, for instance
\[
E_+^\mathrm{s}(\lambda)=\begin{pmatrix}1\\0 \end{pmatrix},\qquad
E_-^\mathrm{u}(\lambda)=\begin{pmatrix}0\\1 \end{pmatrix}.
\]
As a result, the intersection is always nontrivial and $\mathcal{E}(\lambda)=1$. Put in the context of perturbation theory, the pointwise resolvent does not have a singularity for ${\varepsilon}=0$, but upon arbitrarily small perturbations, such a singularity can be created.
The effect is of course also visible in the (explicit) solution to the equation, which for ${\varepsilon}=0$ simply advects compactly supported initial conditions to the left ($u$-equation) and to the right ($v$-equation), which constitutes an effective super-exponential decay to zero. Coupling with ${\varepsilon}\neq 0$ causes $u$ to converge to a constant, effectively integrating the initial mass in the $v$-equation. The effect appears also in less obvious examples, including for instance diffusion in \eqref{e:cpw0} or more general coupled amplitude equations \cite{faye17}.
We return to this example in \S\ref{s:5}, demonstrating how our algorithm correctly identifies the subtle dependence on the presence of a coupling term.
\end{Example}
\begin{Example}[Branch poles vs branch points]\label{ex:bp}
In the trivial example $w_t=w_{xx}$, one finds $E_+^\mathrm{s}(\lambda)=(1,-\sqrt{\lambda})^T$, $E_-^\mathrm{u}(\lambda)=(1,\sqrt{\lambda})^T$, so $\mathcal{E}(\lambda)=2\sqrt{\lambda}$, which is \emph{both} not analytic at $\lambda=0$ due to a branch point in the eigenspaces and vanishes, so that $\iota^{-1}$ possesses a singularity of type $\sqrt{\lambda}^{-1}$. Passing to the Riemann surface by introducing $\gamma=\sqrt{\lambda}$, corresponding to considering $u_{tt}=u_{xx}$, one finds a simple pole at $\gamma=0$.
Considering $w_t=w_{xx}$ in $x>0$ with Robin boundary condition $n_1 w + n_2 w_x=0$ at $x=0$, one forms the Evans function from $E_\mathrm{bc}=(n_2,-n_1)^T$ and $E_+^\mathrm{s}(\lambda)=(1,-\sqrt{\lambda})^T$ so that $\mathcal{E}(\lambda)=n_2\sqrt{\lambda}-n_1$, which still possesses a branch point singularity at $\lambda=0$, but does not vanish when $n_1\neq 0$. On the Riemann surface, we find a root $\gamma=n_1/n_2$, which corresponds to an eigenvalue when $n_1n_2>0$ and to a resonance otherwise.
\end{Example}
We refer to \cite{MR3100266} for many more examples and context.
\subsection{Determinants and numerical methods}
We briefly comment on other numerical approaches related to this pointwise formulation with the aim of differentiating our approach from others in the literature. Finding spectral values, that is, points $\lambda$ where the inverse of $\iota(\lambda)$ is not analytic, can be reduced to taking a determinant of $\iota$ and finding roots of the resulting analytic function --- after first identifying branch points as a source of non-analyticity in the far field. For this, one needs to overcome several obstacles, starting with the computation of analytic bases in stable and unstable subspaces. One can track subspaces using differential forms, at the expense of a possibly high-dimensional system, or compute orthogonalized stable bases, at the expense of losing analyticity; see for instance \cite{MR2221065} and references therein. Analyticity can be restored on the level of a determinant \cite{MR2253406,MR2676976}, thereby yielding efficient methods for computing subspaces and finding eigenvalues through winding number computations \cite{MR3413592}. In fact, from this point of view the pointwise nature of the computation can be relaxed to improve numerical stability, still exploiting a determinant formulation and computing winding numbers \cite{MR3157977}. There do not appear to be algorithms that do not involve a separate treatment of core and far field, and most algorithms rely to some extent on determinants and winding number computations. In contrast, the approach that we present in the next section treats core and far field simultaneously and avoids determinants and winding numbers altogether, thus presenting a useful ad hoc tool for the initial study of stability problems.
\section{Inverse power methods for locally analytic operator pencils}\label{s:3}
Motivated by the previous derivation of nonlinear eigenvalue problems, we study families of matrices $\iota(\lambda)\in\mathbb{C}^{N\times N}$, in a domain $\lambda\in U\subset \mathbb{C}$, and wish to find values $\lambda_*$ such that the inverse $\iota(\lambda)^{-1}$ is not analytic at $\lambda=\lambda_*$. We assume that $\iota$ is meromorphic on a Riemann surface, that is, $\iota(\varphi(\gamma))$ is meromorphic in $\gamma$, where $\varphi$ resolves potential branch points. We do not assume that $\varphi$ is a priori known. There are many methods available that find poles of $\iota(\lambda)^{-1}$ in the case where $\iota$ is analytic; see in particular \cite{gutteltisseur} for a recent review. Many methods ultimately rely on particular polynomial interpolations of $\iota(\lambda)$ and subsequent root finding or linearization of the matrix pencil \cite{MR3144797}. Much of the suitability of a method depends on what is known about $\iota$, or, in other words, how it is actually computed. In our case, one usually starts computing $\iota$ at a fixed point $\lambda_0$, computing stable and unstable subspaces and choosing bases. The main difficulty now is to continue these bases to nearby values of $\lambda$ in an analytic fashion. A key obstacle is that a naive parameterization of the subspace as a graph over the reference subspace at $\lambda=\lambda_0$ may fail at isolated points, leading to singularities in $\iota$ induced by the parameterization, as exemplified in Example \ref{ex:1ctd} when normalizing $u_+(0)=1$. Alternatively, orthogonalizing bases for the parameterization destroys analyticity; see again Example \ref{ex:1ctd}.
Our approach relies only on local power series from the graph parameterization, yet finds spectral values of $\iota$ even past the radius of convergence of the power series and past potential singularities induced by the parameterization. The local power series, as we shall explain in the next section, is readily computable by solving homological Sylvester equations.
To set up the analysis, we fix a reference value $\lambda_0$ with the goal of finding spectral values of $\iota(\lambda)$ closest to $\lambda_0$. We assume without loss of generality that $\lambda_0=0$ possibly redefining $\lambda$. We assume that the matrix function $\iota$ has a local expansion in a convergent power series with radius of convergence $R$,
\begin{equation}
\iota(\lambda)=\sum_{k=0}^\infty \iota_k \lambda^k, \qquad |\lambda|< R.
\end{equation}
If $\iota_0$ is not invertible, $\lambda=0$ is a spectral value and we assume henceforth that $\iota_0$ is invertible. Consider then the infinite-matrix operator acting on infinite sequences $\underline{u}=(u_j)_{j=1,2,\ldots}$,
\begin{equation}\label{e:A}
\mathcal{A}:\underline{u}\mapsto \mathcal{A}\underline{u},\qquad
(\mathcal{A}\underline{u})_j=
\left\{\begin{array}{ll}
-\iota_0^{-1}\left( \iota_1 u_1+\iota_2 u_2+\ldots \right) ,& j=1,\\
u_{j-1},& j>1.
\end{array}\right.,
\qquad \text{or }\mathcal{A}=
\left(\begin{array}{cccc}
-\iota_0^{-1} \iota_1 & -\iota_0^{-1} \iota_2 & -\iota_0^{-1} \iota_3 & \cdots\\
1 & 0 & 0 & \cdots\\
0 & 1 & 0 & \cdots\\
0 & 0 & 1 & \cdots\\
\vdots & \vdots & \vdots & \ddots
\end{array}
\right).
\end{equation}The form of $\mathcal{A}$ is motivated by the case where $\iota$ is a polynomial and $\mathcal{A}$ can act on finite sequences. The polynomial $\iota$ can then be thought of as the characteristic equation to a multi-term recursion, which in turn can be written as a first-order recursion in a higher-dimensional ambient space. Iterating $\mathcal{A}$ is, in this case, simply the inverse power method for this matrix representation.
In our case, eigenfunctions that solve $\mathcal{A}\underline{u}=z\underline{u}$ are of the form $u_{j+1}=z^{-1} u_j$, for $j\geq 1$, so that $u_j=z^{-j}u_0$ for some vector $u_0$. Setting $\lambda=z^{-1}$, the first equation in $\mathcal{A}\underline{u}=z\underline{u}$ gives
\[
-\iota_0^{-1}\left( \iota_1\lambda +\iota_2\lambda^2+\ldots \right)u_0=z(\lambda u_0),
\]
which after multiplying by $\iota_0$ and rearranging gives
\[
\iota(\lambda)u_0=0.
\]
In other words, we ``linearized'' the nonlinear matrix pencil, that is, spectral values $\lambda$ of the nonlinear pencil $\iota$ now correspond to spectral values $z=\lambda^{-1}$ of the (regular) eigenvalue problem for $\mathcal{A}$.
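For a polynomial pencil, this correspondence can be checked directly on the finite companion matrix mentioned above. The following minimal Python sketch (the helper name \texttt{companion} and the scalar test pencil are illustrative choices, not part of the analysis) builds the matrix from coefficients $\iota_0,\ldots,\iota_p$ and recovers the spectral values as reciprocals of its eigenvalues.
\begin{verbatim}
import numpy as np

def companion(iotas):
    # block companion matrix for iota(lam) = sum_k iotas[k] lam^k, iotas[k] N x N
    N, p = iotas[0].shape[0], len(iotas) - 1
    i0_inv = np.linalg.inv(iotas[0])
    A = np.zeros((N * p, N * p), dtype=complex)
    for k in range(1, p + 1):                      # first block row: -iota_0^{-1} iota_k
        A[:N, (k - 1) * N:k * N] = -i0_inv @ iotas[k]
    for k in range(1, p):                          # identity blocks on the subdiagonal
        A[k * N:(k + 1) * N, (k - 1) * N:k * N] = np.eye(N)
    return A

# scalar example: iota(lam) = 2 - 3 lam + lam^2 with spectral values lam = 1, 2
iotas = [np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])]
print(np.sort(1.0 / np.linalg.eigvals(companion(iotas))))   # approximately [1, 2]
\end{verbatim}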
To compute these regular spectral values, one can now apply traditional methods for eigenvalue problems. The idea we pursue here is to iteratively compute $\mathcal{A}^k\underline{u}_0$ and expect that iterates grow with the spectral radius of $\mathcal{A}$, aligning with the eigenvector to the largest eigenvalue, for random initial vectors $\underline{u}_0$. Such convergence does depend on the nature of the spectrum of $\mathcal{A}$, and we will study three cases of interest in the subsequent three subsections, characterized in terms of the spectral value of $\iota(\lambda)$ in the sense of Definition \ref{d:sv}:
\begin{enumerate}
\item the singularity of $\iota(\lambda)^{-1}$ closest to $\lambda_0=0$ is a pole and lies within the radius of convergence $R$, \S\ref{s:3.1};
\item the singularity of $\iota(\lambda)^{-1}$ closest to $\lambda_0=0$ is a pole and lies within a ball where $\iota(\lambda)$ is meromorphic, \S\ref{s:3.2};
\item the singularity of $\iota(\lambda)^{-1}$ closest to $\lambda_0=0$ is a branch point singularity, \S\ref{s:3.3}.
\end{enumerate}
\subsection{Isolated point spectrum}\label{s:3.1}
Clearly, $\mathcal{A}$ is a finite-rank, hence compact, perturbation of the right-shift operator, so that one can readily compute Fredholm properties in typical function spaces explicitly. Defining for instance $\ell^p_\rho$ for $\rho>0$ as the space of sequences such that $(u_j \rho^{-j})_j\in \ell^p$, we find
\[
\mathrm{spec}_{\mathrm{ess}, \ell^p_\rho}(\mathcal{A})=\{|z|\leq \rho^{-1}\}.
\]
On the other hand, the first row $\mathcal{A}_1:\ell^p_\rho\to \mathbb{C}^N$ is bounded only when $\rho<R$. Choosing $\rho$ arbitrarily close to $R$, we can thereby find eigenvalues of $\mathcal{A}$ within $\{|z| >R^{-1}\}$ as point spectrum. Equivalently, any spectral value $\lambda$ of the operator pencil $\iota(\lambda)$ that lies within the radius of convergence of the power series can be found as an eigenvalue in the point spectrum of $\mathcal{A}$ in an appropriately chosen weighted space. In particular, if $\iota(\lambda)$ possesses a spectral value $\lambda$ with $|\lambda|<R$, the power method applied to $\mathcal{A}$ generically identifies the eigenvalue of $\mathcal{A}$ with largest modulus, that is, the spectral value of $\iota$ closest to the origin.
\begin{Proposition}[Inverse Power Method --- point spectrum within radius of convergence]\label{p:pm1}
Assume that the nonlinear matrix pencil $\iota(\lambda)$ with radius of convergence $R>0$ possesses a unique smallest spectral value $\lambda_0$ with $|\lambda_0|<R$.
In particular, $\iota(\lambda)^{-1}$ is analytic in $|\lambda|< |\lambda_0|+\delta,\,\lambda\neq \lambda_0$, for some $\delta>0$. Then the associated inverse power iteration
\[
\underline{u}_{k+1}= \mathcal{A}\underline{u}_k,
\]
defined on $\ell^p_\rho$ with $1\leq p\leq \infty$ and $|\lambda_0|<\rho <R$ converges, for initial vectors $\underline{u}_0$ in the complement $V$ of a strict subspace of $\ell^p_\rho$, to an eigenvalue-eigenvector pair in the sense that
\[
\underline{u}_k/|\underline{u}_k| \to \underline{u}_*, \qquad \mathcal{A}\underline{u}_*=\lambda_0^{-1} \underline{u}_*,\qquad \iota(\lambda_0)(\underline{u}_*)_1=0.
\]
In particular, $V$ contains sequences $\underline{u}$ with $\underline{u}_j=0,j\geq 2$ and $\underline{u}_1\in V_0$, the complement of a strict subspace of $\mathbb{C}^N$.
\end{Proposition}
\begin{Remark}\label{r:p1}
\begin{enumerate}
\item By the Analytic Fredholm Theorem, eigenvalues of $\mathcal{A}$ in $\{|z|< \rho^{-1}\} $ are isolated and of finite algebraic multiplicity. Shifting $\lambda\mapsto \lambda-\lambda_\mathrm{s}$ by a small generic shift would therefore guarantee that the assumption of the proposition holds.
\item Straightforward extensions of this result can establish that iteration of generic two-dimensional subspaces yields the eigenspace of $\mathcal{A}$ associated with the two spectral values of $\iota$ closest to the origin, showing as a consequence the convergence of a $QR$-type iteration scheme.
\item The rate of convergence can be readily obtained from the proof as the ratio between $\lambda_0$ and the next-smallest spectral value $\lambda_1$. We may compute for instance the sequence of approximate spectral values $\lambda_{0,k}$ via
\[
\lambda_{0,k}^{-1}=\langle \underline{u}_{k+1},\underline{u}_k\rangle/\langle \underline{u}_{k},\underline{u}_k\rangle,
\]
with, say, $\langle \underline{u},\underline{v}\rangle = (u_1,v_1)$, the standard complex scalar product in $\mathbb{C}^N$.
One finds from the proof below that $\underline{u}_k= \lambda_0^{-k}\underline{u}_*+\mathcal{O}(\lambda_1^{-k})$, so that
\begin{equation}\label{e:asyeig}
\lambda_{0,k}^{-1}=\lambda_0^{-1}+\mathcal{O}((\lambda_1/\lambda_0)^{-k}).
\end{equation}
\end{enumerate}
\end{Remark}
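For reference, a minimal Python sketch of this iteration and of the estimate \eqref{e:asyeig}, acting on truncated Taylor coefficients (the function name \texttt{inverse\_power}, the truncation, and the scalar test pencil are illustrative choices of this sketch):
\begin{verbatim}
import numpy as np

def inverse_power(iotas, n_iter=60, seed=0):
    # iterate A on sequences with finite support; iotas = [iota_0, iota_1, ...], N x N arrays
    N = iotas[0].shape[0]
    i0_inv = np.linalg.inv(iotas[0])
    rng = np.random.default_rng(seed)
    u = [rng.standard_normal(N) + 1j * rng.standard_normal(N)]   # support on the first entry
    estimates = []
    for _ in range(n_iter):
        # first component of A u: -iota_0^{-1} (iota_1 u_1 + iota_2 u_2 + ...)
        first = -i0_inv @ sum(iotas[k] @ u[k - 1]
                              for k in range(1, min(len(iotas) - 1, len(u)) + 1))
        z = np.vdot(u[0], first) / np.vdot(u[0], u[0])   # approximates lambda_0^{-1}
        estimates.append(1.0 / z)
        u = [v / np.linalg.norm(first) for v in [first] + u]     # shift and renormalize
    return estimates

iotas = [np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])]   # spectral values 1 and 2
print(inverse_power(iotas)[-1])   # converges to the spectral value closest to 0, here 1.0
\end{verbatim}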
\begin{Proof}
By the analytic Fredholm theorem, we can decompose $X=\ell^p_\rho=X_0+X_1$ into $\mathcal{A}$-invariant subspaces so that $\mathcal{A}|_{X_0}=\lambda_0^{-1}\mathrm{id}+N$ with $N$ nilpotent, $X_0$ finite-dimensional, and the spectral radius of $\mathcal{A}|_{X_1}$ is strictly less than $\lambda_0^{-1}$. Within $X_0$, we can analyze the iteration in Jordan Normal Form and find convergence of vectors to the eigenspace. The component in $X_1$ will decay exponentially due to the renormalization.
It remains to show that choosing sequences with support on the first entry is sufficient to achieve growth. We therefore need to show that there exists a vector in the kernel of the adjoint $\mathcal{A}^*-z$ whose first component does not vanish. For any such vector $\underline{w}$, we quickly find, writing
$\iota^M(\lambda)=\sum_{\ell=0}^M\iota_\ell \lambda^\ell$,
\[
w_j=\sum_{k=0}^{j-1} z^{j-1-k} \iota_k^T v_1 = z^{j-1} (\iota^{j-1})^T(z^{-1})\,v_1,
\]
for some vector $v_1$. In order for $\underline{w}\in \ell^q_{\rho^{-1}}$, we need $w_j\rho^j\in\ell^q$, in particular $w_jz^{-j}\to 0$, so that in fact $\iota(z^{-1})^Tv_1=0$, that is, $v_1$ belongs to the kernel of the adjoint.
Clearly, $w_j=0$ for all $j$ if $v_1=0$, so that for a nontrivial element in the kernel $v_1\neq 0$ and therefore $w_1=\iota_0^Tv_1\neq 0$ using invertibility of $\iota_0$. This concludes the proof.
\end{Proof}
\subsection{Extended point spectrum}\label{s:3.2}
We now turn to the case where $\iota(\lambda)$ does not have spectral values in $\{|\lambda|<R\}$. We assume however here that $\iota(\lambda)$ does have a meromorphic continuation in $\{|\lambda|<M\}$ and a spectral value in this disk. Note that, by uniqueness of the extension of $\iota$, the notion of spectral value in this larger disk is well defined, while the notion of eigenvalue for the associated operator $\mathcal{A}$ is not well defined since infinite sums do not converge when substituting a potential eigenvector to a spectral value $\lambda$ with $|\lambda|>R$ into the expression for the first component $(\mathcal{A}\underline{u})_1$.
\begin{Proposition}[Inverse Power Method --- point spectrum within meromorphic domain]\label{p:pm2}
Assume that the nonlinear matrix pencil $\iota(\lambda)$ is meromorphic in $|\lambda|<M$ and possesses a unique smallest spectral value $\lambda_0$ with $|\lambda_0|<M$, that is, $\iota(\lambda)^{-1}$ is analytic in $|\lambda|< M,\,\lambda\neq \lambda_0$. Then, for any $K\geq 1$, the associated inverse power iteration
\[
\underline{u}_{k+1}= \mathcal{A}\underline{u}_k
\]
with compactly supported initial data, $(\underline{u}_0)_j=0$ for all $j>K$, converges locally uniformly for all initial vectors $(\underline{u}_0)_{1\leq j\leq K}$ outside a subspace of positive codimension. More precisely, for any $N$, the restriction to the first $N$ components $R_N\underline{u}=(u_1,\ldots,u_N)$ converges to the restriction of a formal eigenvector,
\[
R_N\underline{u}_k/|R_N\underline{u}_k|
\ \to R_N\underline{u}_*,
\]
and
\[
R_N(\mathcal{A}\underline{u}_k-\lambda_0^{-1} \underline{u}_k )\to 0, \quad \text{ for } k\to\infty.
\]
\end{Proposition}
\begin{Remark}\label{r:p2}
\begin{enumerate}
\item Similar to the comments in Remark \ref{r:p1}, one can generalize to multiple leading eigenvalues using iteration of subspaces with appropriate orthogonalization strategies.
\item Convergence is again exponential, with rate given by the ratio between $\lambda_0$ and the next-smallest spectral value $\lambda_1$ as in \eqref{e:asyeig}.
\end{enumerate}
\end{Remark}
To prepare for the proof, we introduce a pointwise description of iterates. We wish to obtain a pointwise representation of $\mathcal{A}^k$, that is, of the matrix entries $(\mathcal{A}^k)_{\ell m}=(\mathcal{A}^k \delta_{m})_\ell$ for fixed $\ell$ and $m$, where $\delta_m$ denotes the sequence supported in the $m$'th entry. We wish to use Dunford's resolvent identity and start with an expression for the resolvent $(z-\mathcal{A})^{-1}$. We therefore fix $m$ arbitrary and solve
\[
\left((z-\mathcal{A})\underline{u}\right)_m=f, \qquad \left((z-\mathcal{A})\underline{u}\right)_j=0, \ j\neq m,
\]
explicitly.
We find, solving the equation for all $j>1$,
\begin{equation}\label{e:pr1}
u_j=z^{-j}u_0,\ j<m,\qquad u_j=z^{-j}u_0 + z^{m-j-1}f,\ j\geq m.
\end{equation}
Inserting into the equation for $j=1$ gives, for $m>1$ (the case $m=1$ is analogous and leads to the same formula below),
\begin{align*}
0&= -\iota_0^{-1}\left(\iota_1 z^{-1}+\iota_2 z^{-2}+\ldots\right)u_0-u_0
-\iota_0^{-1}\left(\iota_m z^{-1}+\iota_{m+1} z^{-2}+\ldots\right) f\\
&=-\iota_0^{-1}\left(\iota(\lambda)u_0 + \lambda^{1-m}\left(\iota(\lambda)-\iota^{m-1}(\lambda)\right)f\right),
\end{align*}
where $\iota^{p}(\lambda)=\iota_0+\ldots+\iota_p\lambda^p$ is the Taylor jet up to order $p$. Solving this matrix equation with matrix entries in the field of meromorphic functions for $u_0$ gives
\begin{equation}\label{e:pr2}
u_0=-\lambda^{1-m}\iota(\lambda)^{-1}\left(\iota(\lambda)-\iota^{m-1}(\lambda)\right)f,
\end{equation}
which together with \eqref{e:pr1} defines the pointwise resolvent $u_j=\mathcal{R}(z;\mathcal{A})_{jm}f$ when the right-hand side is supported in the $m$'th component. We write $\mathcal{R}(z;\mathcal{A})$ for the infinite matrix $1\leq j,m<\infty$.
From the form of \eqref{e:pr1}--\eqref{e:pr2}, we obtain the following lemma.
\begin{Lemma}
The pointwise resolvent $\left((z-\mathcal{A})^{-1}\right)_{jk}$ possesses an analytic extension into the connected component of the region $\{z=1/\lambda\}$ where $\iota(\lambda)$ is meromorphic and $\iota(\lambda)^{-1}$ is analytic. Moreover, if $\iota(\lambda)^{-1}$ has a pole at $\lambda_0$, then the components $\left((z-\mathcal{A})^{-1}\right)_{j1}$ of the pointwise resolvent have a singularity at $z_0=\lambda_0^{-1}$.
\end{Lemma}
\begin{Proof}
We only need to show that the pointwise resolvent cannot be analytic when $\iota(\lambda)^{-1}$ is not analytic. This follows by setting $m=1$ in \eqref{e:pr2} so that, with \eqref{e:pr1},
\[
u_j=\lambda^{j}\iota(\lambda)^{-1}\iota_0 f.
\]
Here, $\iota_0$ is invertible and $\lambda^{j}$ does not vanish at $\lambda_0$ (recall that $\lambda_0\neq 0$ since $\iota_0=\iota(0)$ is invertible), so that $u_j$ inherits the singularity of $\iota(\lambda)^{-1}$ at $\lambda_0$.
\end{Proof}
From the form of \eqref{e:pr1}--\eqref{e:pr2}, it is clear that the pointwise resolvent possesses an analytic extension into the region where $\iota(\lambda)^{-1}$ is analytic and $\iota(\lambda)$ is meromorphic.
\begin{Proof}[ of Proposition \ref{p:pm2}.]
Choosing a contour $\Gamma=\{|z|=R\}$ with $R$ large, oriented counter-clockwise, one obtains from Dunford's calculus that
\[
\underline{u}^k:= \mathcal{A}^k \underline{f}=\frac{1}{2\pi\mathrm{i}}\int_\Gamma z^k (z-\mathcal{A})^{-1}\underline{f} \mathrm{d} z.
\]
For $\underline{f}$ compactly supported, and evaluating both sides in a compact region $j\leq J$, we may deform the contour $\Gamma$ in the region where the pointwise resolvent $((z-\mathcal{A})^{-1})_{jk}$ is analytic, that is, within the region where it is meromorphic but outside of the extended point spectrum. We choose to deform the contour into $\tilde{\Gamma}=\Gamma_0\cup\Gamma_1$, where $\Gamma_1=\{|z|=R_2<|\lambda_0|^{-1}\}$ and $\Gamma_0=\{\lambda_0^{-1}+z|\,|z|={\varepsilon}\} $ for some sufficiently small ${\varepsilon}>0$. For the contribution from $\Gamma_1$, one readily finds componentwise decay $|\underline{u}^k_j|\leq C R_2^k$. The contribution from $\Gamma_0$ can be evaluated by computing residues after expanding the pointwise resolvent in a Laurent series, which gives a contribution $\sum_{\ell=0}^{\ell_0} Q_{j,\ell}\, k^\ell \lambda_0^{-k}$. From this splitting, the claim follows readily, in complete analogy to the finite-dimensional convergence of the power method.
\end{Proof}
\begin{Remark}[Zeros of meromorphic functions]\label{r:z}
The strategy employed here can of course be most easily tested as an algorithm to find roots of meromorphic functions $f(\lambda)$ in the plane $z\in\mathbb{C}$. More precisely, our algorithm finds the zero $\lambda_*$ of $f(\lambda)$ closest to a fixed reference point $\lambda_0$ using only the Taylor expansion of $f$ at $\lambda_0$. One simply iterates
\[
u_k=\frac{-1}{f(\lambda_0)}\left( f'(\lambda_0)u_{k-1}+ \frac{1}{2}f''(\lambda_0)u_{k-2}+ \frac{1}{6}f'''(\lambda_0)u_{k-3}+\ldots \right), \qquad u_0=1,\ u_j=0\text{ for } j<0,
\]
and obtains $\lambda_*-\lambda_0=\lim_{k\to\infty} u_k/u_{k+1}$.
Our result here states that this iterative algorithm identifies zeros past the radius of convergence of the local power series. Of course, this approach is useful only when access to Taylor series coefficients is preferred to simple evaluation of a function.
\end{Remark}
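As a minimal illustration of this recursion, assume $f(\lambda)=\mathrm{e}^{\lambda}-3$ with $\lambda_0=0$ and an (arbitrary) truncation of the Taylor series at order $M=30$; the Python sketch below then recovers the closest zero $\lambda_*=\log 3$.
\begin{verbatim}
import numpy as np
from math import factorial, log

M = 30                                        # truncation order of the Taylor series
c = [1.0 / factorial(k) for k in range(M + 1)]
c[0] -= 3.0                                   # coefficients of f(lam) = exp(lam) - 3 at 0

u = [1.0]                                     # u_0 = 1, u_j = 0 for j < 0
for k in range(1, 60):
    u.append(-sum(c[j] * u[k - j] for j in range(1, min(M, k) + 1)) / c[0])

print(u[-2] / u[-1], log(3.0))                # both approximately 1.0986...
\end{verbatim}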
\subsection{Branch points}\label{s:3.3}
A third typical possibility appears when the largest singularity of $(z-\mathcal{A})^{-1}$ is a branch point singularity. We say that $\iota$ has a branch pole of order $p$ for some $p\in\mathbb{N}$ at $\lambda_0$ if $\iota(\lambda_0+\gamma^q)^{-1}$ is componentwise meromorphic in $\gamma$ near $\gamma=0$ with a simple pole at $\gamma=0$ for $p=q$, but is not meromorphic for $1\leq q<p$. We focus here on the case $p=2$.
For any $\lambda_0\neq 0$, let $S_\theta(\lambda_0)$ be the sector $\{\lambda\,|\,|\mathrm{arg}((\lambda-\lambda_0)/\lambda_0)|<\theta\}$ and $B_r=\{\lambda\,|\,|\lambda|<r\}$.
\begin{Proposition}[Inverse Power Method --- branch points within meromorphic domain]\label{p:a3}
Given $\lambda_0\neq 0$, $|\lambda_0|=M$, $\delta>0$, and $\theta<\pi/2$, define
$\Omega =B_{M+\delta}\setminus \overline{S_\theta(\lambda_0)}$.
Assume that the nonlinear matrix pencil $\iota(\lambda)$ is pointwise meromorphic in $\Omega$ and has a branch pole of order 2 at $\lambda_0$.
Then the associated inverse power iteration
\[
\underline{u}^{k+1}= \mathcal{A}\underline{u}^k,
\]
with compactly supported initial data, $(\underline{u}^0)_j=0$ for $j>K$, asymptotically exhibits pointwise exponential growth with rate $1/\lambda_0$ and an algebraic correction,
\[
\underline{u}^k_j = \lambda_0^{-k} k^{-1/2} P_j \underline{u}^0 \left(1+{\scriptstyle\mathcal{O}}_1(k^{-1})\right),
\]
for some non-vanishing linear map $P_j$ defined on compactly supported sequences.
\end{Proposition}
\begin{Remark}\label{r:p3}
\begin{enumerate}
\item For higher-order branch points with Riemann surface covering $\lambda=\lambda_0+\gamma^p$, one finds in an equivalent fashion asymptotics with growth $\lambda_0^{-k}k^{1/p-1}$.
\item Another case of interest arises in $x$-dependent problems when $\iota$ possesses a branch point singularity but $\iota^{-1}$ is continuous. In this case, for $p=2$, one finds pointwise rates $\lambda_0^{-k}k^{-3/2}$ in analogy to the pointwise decay for the heat equation on the half line with Dirichlet boundary condition.
\item From the asymptotics for $\underline{u}^k$ with rate $\lambda_0^{-k}k^{-\alpha}$, one readily derives asymptotics of the approximations $\lambda_{0,k}$ as in Remark \ref{r:p1} (iii),
\begin{equation}\label{e:bplamasy}
\lambda_{0,k}\sim\lambda_0+\frac{\alpha\lambda_0}{k}.
\end{equation}
In particular, predictions for the branch point converge algebraically, with rate $k^{-1}$, regardless of the order of the branch point and $\alpha$, but with a prefactor $\lambda_0$ which is small for good initial guesses, suggesting effective shift strategies. Using a finite number $K$ of iterates to find a new initial guess $\lambda^K_0$ and restarting the iteration there, one finds exponential convergence in $k$. We demonstrate this strategy in \S\ref{s:5}.
\end{enumerate}
\end{Remark}
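The algebraic convergence \eqref{e:bplamasy} is easily observed in a scalar model problem; a Python sketch with $\iota(\lambda)=\sqrt{1-\lambda}$, which has a branch pole of order 2 at $\lambda_0=1$ (the truncation order $M=400$ is an arbitrary choice of this sketch), shows $\lambda_{0,k}\to\lambda_0$ with correction $\alpha\lambda_0/k$, $\alpha=1/2$:
\begin{verbatim}
import numpy as np
from scipy.special import binom

M = 400
c = [binom(0.5, j) * (-1.0) ** j for j in range(M + 1)]   # Taylor coefficients of sqrt(1-lam)

u = [1.0]
for k in range(1, M + 1):
    u.append(-sum(c[j] * u[k - j] for j in range(1, k + 1)) / c[0])

for k in (50, 100, 200, 400):
    lam_k = u[k - 1] / u[k]               # approximate spectral value lambda_{0,k}
    print(k, lam_k, k * (lam_k - 1.0))    # lam_k -> 1 algebraically, k*(lam_k - 1) -> 1/2
\end{verbatim}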
\begin{Proof}
The inverse power operator $\mathcal{A}$ associated with $\iota$ is invertible in $z\in \overline{\Omega'}$, where $\Omega'=1/\Omega$ contains all inverses $\lambda^{-1}$ of elements in $\Omega$.
We can therefore write, in a pointwise sense,
\[
\underline{u}^k= \mathcal{A}^k \underline{f}=\frac{1}{2\pi\mathrm{i}}\int_\Gamma z^k (z-\mathcal{A})^{-1}\underline{f} \mathrm{d} z,
\]
for $\Gamma=\partial\Omega'$. Here, we use that the singularity of $\iota(\lambda)^{-1}$ at $\lambda_0$ due to the simple pole in $\gamma$ is integrable, $\mathcal{O}((\lambda-\lambda_0)^{-1/2})$, leading to an integrable singularity of $(z-\mathcal{A})^{-1}$ on $\Gamma$.
In the following, we assume for simplicity that $\lambda_0=1$, the general case can be easily obtained from there by scaling and complex rotation.
Expanding the pointwise resolvent of $\mathcal{A}$ near $z_*=1/\lambda_0=1$, we write $(z-\mathcal{A})^{-1}=(z-1)^{-1/2}\mathcal{B}_0 +\mathcal{O}(1)$, which gives
\[
\underline{u}^k=\frac{1}{2\pi\mathrm{i}}\int_\Gamma z^k \left((z-1)^{-1/2}\mathcal{B}_0 +\mathcal{O}(1)\right)\underline{f} \mathrm{d} z.
\]
Ignoring contributions from $\Gamma$ where $|z|<1-\delta$ for some $\delta>0$, we parameterize $\Gamma=\Gamma_+\cup\overline{\Gamma_+}$, with $\Gamma_+=\{z=1-\mathrm{e}^{\mathrm{i}\theta}\tau,\,0\leq \tau\leq \delta\}$, and find
\begin{align*}
\underline{u}^k\sim &\frac{\mathrm{e}^{\mathrm{i}\theta}}{2\pi\mathrm{i}}\int_0^\delta (1-\mathrm{e}^{\mathrm{i}\theta}\tau)^k \left((-\mathrm{e}^{\mathrm{i}\theta}\tau)^{-1/2}\mathcal{B}_0 +\mathcal{O}(1)\right)\underline{f} \mathrm{d} \tau -\frac{\mathrm{e}^{-\mathrm{i}\theta}}{2\pi\mathrm{i}}\int_0^\delta (1-\mathrm{e}^{-\mathrm{i}\theta}\tau)^k \left((-\mathrm{e}^{-\mathrm{i}\theta}\tau)^{-1/2}\mathcal{B}_0 +\mathcal{O}(1)\right)\underline{f} \mathrm{d} \tau \\
=& -\frac{1}{\pi}\int_0^\delta (1-\tau)^k(\tau^{-1/2}\mathcal{B}_0+\mathcal{O}(1))\underline{f}\,\mathrm{d}\tau=k^{-1/2}P\underline{f}\left(1+{\scriptstyle\mathcal{O}}_1(k^{-1})\right).
\end{align*}
\end{Proof}
\section{Implementation of algorithms}\label{s:4}
Practically, we wish to start with an ``explicit'' matrix-valued family $A(x;\lambda)$ and asymptotic matrices $A_\pm(\lambda)$ as in \eqref{e:twlin}, all polynomial in $\lambda$. In order to apply the inverse power method as described above, we need to
\begin{enumerate}
\item find a basis for $E^\mathrm{u}_-(\lambda_0)$ and for $E^\mathrm{s}_+(\lambda_0)$;
\item compute Taylor expansions for $E^\mathrm{u}_-(\lambda)$ and for $E^\mathrm{s}_+(\lambda)$ at $\lambda=\lambda_0$;
\item assemble the map $\iota(\lambda)$ represented by a power series and implement the inverse power iteration.
\end{enumerate}
We describe these somewhat practical issues in the next three sections.
\subsection{Finding invariant subspaces and computing Taylor jets}\label{s:inv}
We describe how to obtain invariant subspaces, expand in $\lambda$, and continue using Newton's method.
\paragraph{Schur decomposition.} The typical starting point for spectral computations is the region where stable and unstable subspaces actually correspond to the $k$ most unstable and $n-k$ most stable eigenvalues, respectively. Of course, we are particularly interested in situations where this splitting is no longer valid at the relevant eigenvalue $\lambda$, but subspaces at these values are the analytic continuation from values where the splitting is valid. We use a Schur decomposition sorting by real parts of eigenvalues to find an orthonormal basis and an orthonormal complement to $E^\mathrm{u}_-$ and $E^\mathrm{s}_+$ from the matrices $A_\pm(\lambda_0)$, all arranged in orthonormal matrices $U^\mathrm{s/u}_\pm$.
\paragraph{Taylor jets.} Computing Taylor jets for subspaces is a special case of computing Taylor expansions for invariant manifolds, which one readily sees by appending the trivial equation $\lambda'=0$. We outline the relevant steps, here. We first shift the polynomial pencil evaluating derivatives at $\lambda_0$ and then conjugate with $U^\mathrm{s/u}$ so that $(U^\mathrm{s/u})^TA_\pm(\lambda+\lambda_0) U^\mathrm{s/u}$ possesses the trivial invariant subspace spanned by the first $k$ or $n-k$ coordinate vectors at $\lambda=0$, respectively. In the following, we therefore outline how to compute expansions near $\lambda=0$ for a polynomial pencil of degree $p$ with block form corresponding to the decomposition $\mathbb{C}^n=E_0\oplus E_1$ into canonical eigenspaces,
\[
A(\lambda)=\begin{pmatrix}
A_{00}(\lambda) & A_{01}(\lambda)\\ A_{10}(\lambda) & A_{11}(\lambda)
\end{pmatrix},
\qquad A_{10}(0)=0, \quad A_{00}\ k\times k-\text{matrix}, \ A_{11}\ (n-k)\times (n-k)-\text{matrix}. \
\]
We write the invariant subspace as a graph of $H(\lambda):E_0\to E_1$, $H(0)=0$, giving the column representation $E^\mathrm{s/u}\sim U^\mathrm{s/u}(F_0+H(\lambda)F_0) (U^\mathrm{s/u})^T$, where the $n\times k$-matrix $F_0$ forms the canonical basis in $E_0$. Invariance of $\text{graph}(H)$ gives the equation
\begin{equation}\label{e:hom}
A_{10}(\lambda)+A_{11}(\lambda)H(\lambda)=H(\lambda)A_{00}+H(\lambda)A_{01}(\lambda)H(\lambda).
\end{equation}
Expanding $H$ and the $A_{jk}$ in $\lambda$ via
\[
A_{jk}(\lambda)=\sum_{\ell=0}^pA_{jk}^\ell \lambda^\ell,\qquad H(\lambda)=\sum_{\ell=0}^\infty H^\ell \lambda^\ell,
\]
we find that $A_{10}^0=0,\ H^0=0$, and, at order $\ell$,
\begin{equation}
A_{11}^0 H^\ell - H^\ell A^0_{00} = R^\ell, \qquad R^\ell= \sum_{j=1}^{\ell-1}\left( H^jA_{00}^{\ell-j} - A_{11}^{\ell-j} H^j \right) -A_{10}^\ell + \sum_{\substack{i+j+k=\ell\\0\leq j\leq p\\1\leq i,k\leq \ell-1}}H^i A_{01}^jH^k.
\end{equation}
At each order $\ell=1,2,\ldots$, this equation can be solved for $H^\ell$ as a linear Sylvester equation, with the linear operator explicit on the left-hand side. The Sylvester equation can be solved effectively by putting $A_{00}$ and $A_{11}$ into upper triangular form using a Schur decomposition. For finite (low) order $p$, the right-hand side requires $\mathcal{O}(\ell)$ matrix multiplications so that the overall effort is quadratic in the maximal order $\ell$.
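For concreteness, the recursion can be implemented in a few lines; the following \textsc{Python} sketch (variable names and input conventions are ours, purely illustrative, and not the code of the repository referenced below) solves the Sylvester equation at each order with \texttt{scipy.linalg.solve\_sylvester}, which internally uses the Schur-based triangularization mentioned above.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

def taylor_jet(A00, A01, A10, A11, order):
    """Taylor coefficients H^1..H^order of the invariant graph H(lambda).
    Each A_jk is the list of its polynomial coefficients A_jk[l]
    (degree p), in the block convention above, with A10[0] = 0."""
    p = len(A00) - 1
    k = A00[0].shape[0]
    m = A11[0].shape[0]
    H = [np.zeros((m, k), dtype=complex)]            # H^0 = 0
    for l in range(1, order + 1):
        R = -(A10[l] if l <= p else np.zeros((m, k), dtype=complex))
        for j in range(1, l):                        # linear terms
            if l - j <= p:
                R = R + H[j] @ A00[l - j] - A11[l - j] @ H[j]
        for j in range(0, min(p, l) + 1):            # quadratic terms
            for i in range(1, l):
                kk = l - i - j
                if 1 <= kk <= l - 1:
                    R = R + H[i] @ A01[j] @ H[kk]
        # Sylvester equation  A11^0 H^l - H^l A00^0 = R^l
        # (solve_sylvester solves A X + X B = Q, hence the minus sign)
        H.append(solve_sylvester(A11[0], -A00[0], R))
    return H
\end{verbatim}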
\paragraph{Newton's method and continuation.}
We note that the formulation here also lends itself to direct Newton and continuation approaches, which we shall exploit when restarting the inverse power iteration. An approximate invariant subspace solves \eqref{e:hom} for some $\lambda_*$ with a small residual. Using Newton's method, solving again a Sylvester equation at each step, we can find a nearby actual invariant subspace. We can also implement continuation in $\lambda$, choosing for instance a generic complex path between two spectral parameter values $\lambda_0$ and $\lambda_1$ of the form
\[
\lambda(\tau)=\lambda_0+\tau(\lambda_1-\lambda_0)+ \mathrm{i} \rho (\lambda_1-\lambda_0)\tau(1-\tau), \quad \rho\in [-1,1] \text{ fixed}.
\]
For a generic choice of $\rho$, the path would avoid isolated poles of $H$ or branch point singularities of the subspace so that arclength continuation would successfully find the desired invariant subspace at $\lambda_1$, even if that subspace is not actually the unstable subspace.
\subsection{Assembling $\iota$}
We illustrate how to assemble $\iota$ in the simple case of a discretization based on the second-order trapezoidal rule. Let $(u_j)_{j=1\ldots N+1}$ be the values at grid points $x_j$ and $u_\mathrm{bc}=(u^\mathrm{u},u^\mathrm{s})\in\mathbb{C}^k\times \mathbb{C}^{n-k}$ a vector parameterizing boundary conditions. The differential equation is then encoded in the $nN\times n(N+2)$-matrix corresponding to $\frac{1}{h} (u_{j+1}-u_j)=\frac{1}{2}\left(A(x_{j+1};\lambda)u_{j+1}+ A(x_j;\lambda)u_j\right)$, with zero columns at the end corresponding to $u_\mathrm{bc}=(u^\mathrm{u},u^\mathrm{s})$. We add $2n$ rows corresponding to $u_1=U^\mathrm{u}(\lambda)u^\mathrm{u}$ and $u_{N+1}=U^\mathrm{s}(\lambda)u^\mathrm{s}$, where $U^\mathrm{u}(\lambda)$ and $U^\mathrm{s}(\lambda)$ are bases for $E^\mathrm{u}_-(\lambda)$ and $E^\mathrm{s}_+(\lambda)$, respectively. The resulting $n(N+2)\times n(N+2)$ square matrix is the desired nonlinear matrix family $\iota(\lambda)$. It is sparse at any order in $\lambda$, with entries in $n\times 2n$ blocks along the diagonal at orders $\ell\leq p$ and with nonzero entries only in the bottom right $2n\times n$-corner for orders $\ell>p$.
For constant coefficients, the differential equation can of course be ignored and $\iota$ is simply given by the $n\times n$-matrix $(U^\mathrm{u}(\lambda)|U^\mathrm{s}(\lambda))$.
We implemented the family $\iota(\lambda)=\iota^0+\iota^1\lambda+\ldots $ as a sparse matrix $\iota=(\iota^0|\iota^1|\iota^2|\ldots|\iota^M)$, allowing easy extraction of the orders of $\iota$ for the inverse power iteration.
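A minimal sketch of this assembly, with dense matrices for readability and hypothetical input conventions (coefficients of $A(x;\lambda)$ and Taylor coefficients of the subspace bases; uniform grid), could look as follows; a practical implementation would use sparse storage as just described.
\begin{verbatim}
import numpy as np

def assemble_iota_coefficients(A_coef, Uu, Us, x, M):
    """Taylor coefficients iota^0,...,iota^M of the discretized pencil
    (trapezoidal rule), unknowns ordered (u_1,...,u_{N+1}, u^u, u^s).
    A_coef[l] is a callable returning the n x n coefficient of lambda^l
    of A(x;lambda), l=0..p; Uu[l], Us[l] are Taylor coefficients of the
    boundary subspace bases (n x k and n x (n-k)).  Sketch only."""
    n, k = Uu[0].shape
    N = len(x) - 1
    p = len(A_coef) - 1
    assert p <= M, "expand the pencil at least to the degree of A"
    h = x[1] - x[0]                       # uniform grid assumed
    dim = n * (N + 2)
    iotas = [np.zeros((dim, dim), dtype=complex) for _ in range(M + 1)]
    for j in range(N):                    # interior (trapezoidal) rows
        r = slice(j * n, (j + 1) * n)
        cj = slice(j * n, (j + 1) * n)
        cj1 = slice((j + 1) * n, (j + 2) * n)
        iotas[0][r, cj] += -np.eye(n) / h
        iotas[0][r, cj1] += np.eye(n) / h
        for l in range(p + 1):
            iotas[l][r, cj] -= 0.5 * A_coef[l](x[j])
            iotas[l][r, cj1] -= 0.5 * A_coef[l](x[j + 1])
    ru = slice(n * N, n * (N + 1))        # rows: u_1 = U^u(lam) u^u
    rs = slice(n * (N + 1), n * (N + 2))  # rows: u_{N+1} = U^s(lam) u^s
    cu = slice(n * (N + 1), n * (N + 1) + k)
    cs = slice(n * (N + 1) + k, n * (N + 2))
    iotas[0][ru, 0:n] = np.eye(n)
    iotas[0][rs, n * N:n * (N + 1)] = np.eye(n)
    for l in range(min(M + 1, len(Uu))):
        iotas[l][ru, cu] -= Uu[l]
    for l in range(min(M + 1, len(Us))):
        iotas[l][rs, cs] -= Us[l]
    return iotas
\end{verbatim}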
\subsection{Implementing the inverse power method}
We initiate the inverse power iteration iterating $\mathcal{A}$ in \eqref{e:A} with a random complex starting $n$-vector $u_1$. Note that the method involves shifting only, in all but the first component. In the first component, we apply the pencil expansion terms $\iota_\ell$ and solve a linear equation with matrix $\iota_0$.
Having precomputed expansions up to an order $M$, we can then perform $M$ iterates exactly. Predictions for the eigenvalue are obtained from the first component, $\lambda_\mathrm{p}=\langle u_1, u_1\rangle/\langle u_1, u_2\rangle$. Stopping criteria are formulated in terms of tolerances for the change in $\lambda_\mathrm{p}$ and for the first components, $\|\lambda_\mathrm{p}u_2-u_1\|$. After $M$ iterations or when initial tolerances are met, we restart the pencil iteration: we shift the symbol $\iota$ to the new predicted value $\lambda_\mathrm{p}$, shifting polynomials explicitly and recomputing eigenspaces using either continuation or a Newton method with predictor from the Taylor expansion, as described in \S\ref{s:inv}. For these subsequent iterations, we use a lower truncation order of the pencil, $M_\mathrm{fine}\ll M$, with frequent restarts until a fine tolerance is met. Shifts using step sizes of roughly $\tau(\lambda_\mathrm{p}-\lambda_\mathrm{old})$ with $\tau\sim 0.8\ldots 0.95$ turn out to be most robust, avoiding both the problem of non-invertibility of $\iota_0$ at the sought-after eigenvalue and problems with continuing and computing eigenspaces at branch points.
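The shift-and-solve structure just described can be realized, for instance, through a block companion matrix of the truncated pencil. The following sketch is generic and not necessarily identical to the operator $\mathcal{A}$ of \eqref{e:A}, but it returns the root of the truncated pencil closest to the current shift.
\begin{verbatim}
import numpy as np

def pencil_inverse_power(iotas, shift, iters=50, tol=1e-10, seed=0):
    """Inverse power iteration for the truncated pencil
    iota(lam) ~ sum_l iotas[l] (lam - shift)^l via a block companion
    linearization.  Generic sketch; iotas[0] must be invertible,
    which the slightly detuned shifts described above help ensure."""
    rng = np.random.default_rng(seed)
    M = len(iotas) - 1
    n = iotas[0].shape[0]
    C = np.zeros((M * n, M * n), dtype=complex)
    C[n:, :-n] = np.eye((M - 1) * n)        # pure shift in blocks 2..M
    solve0 = np.linalg.inv(iotas[0])        # in practice: LU factorization
    for l in range(1, M + 1):
        C[:n, (l - 1) * n:l * n] = -solve0 @ iotas[l]
    v = rng.standard_normal(M * n) + 1j * rng.standard_normal(M * n)
    mu_old = np.inf
    for _ in range(iters):
        w = C @ v
        mu = (v.conj() @ v) / (v.conj() @ w)   # estimate of lam - shift
        v = w / np.linalg.norm(w)
        if abs(mu - mu_old) < tol * abs(mu):
            break
        mu_old = mu
    return shift + mu                          # eigenvalue prediction
\end{verbatim}
One checks that eigenvectors of this companion matrix with eigenvalue $1/\mu$ correspond to null vectors of the truncated pencil at $\lambda=\lambda_0+\mu$, so that the power iteration indeed picks out the root closest to the shift.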
Since convergence near branch points is slow, algebraic, we also implemented a Newton method to find the exact location of branch points for constant coefficient problems. Branch points solve the system
\begin{align*}
\begin{array}{rrr} A(\lambda)u-\nu u=0,& \qquad\qquad \qquad &
\langle e_0,u\rangle -1=0,\\
A(\lambda) v-\nu v-u=0,&\qquad \qquad\qquad &
\langle e_0,v\rangle =0,\,
\end{array}
\end{align*}
where $e_0$ is an approximate element of the kernel of $A(\lambda)-\nu$ and the scalar products are understood as Hermitian (complex valued) forms. The inverse power iteration provides good initial guesses for $\lambda$. We find an initial guess for $u$ by computing the intersection of $E^\mathrm{s}_+$ and $E^\mathrm{u}_-$ at the initial guess and computing eigenvalues $\nu$ and eigenvectors $u$ for $A(\lambda)$ restricted to this intersection.
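A sketch of this Newton iteration, treating $(u,\lambda,\nu,v)$ as holomorphic unknowns and assuming that a routine for $A'(\lambda)$ is supplied (which is straightforward since $A$ is polynomial in $\lambda$), reads as follows; it is illustrative only, not the implementation used for the figures below.
\begin{verbatim}
import numpy as np

def branch_point_newton(A, dA, lam0, nu0, e0, steps=20, tol=1e-12):
    """Newton iteration for a double spatial root:
       A(lam) u = nu u,  <e0,u> = 1,  A(lam) v = nu v + u,  <e0,v> = 0.
    A, dA: callables returning A(lam) and dA/dlam(lam); initial guesses
    (lam0, nu0), e.g., from the inverse power iteration."""
    n = len(e0)
    w, V = np.linalg.eig(A(lam0))
    u = V[:, np.argmin(abs(w - nu0))].astype(complex)
    u = u / (e0.conj() @ u)
    v = np.zeros(n, dtype=complex)
    lam, nu = complex(lam0), complex(nu0)
    for _ in range(steps):
        Al, dAl = A(lam), dA(lam)
        F = np.concatenate([Al @ u - nu * u, [e0.conj() @ u - 1],
                            Al @ v - nu * v - u, [e0.conj() @ v]])
        if np.linalg.norm(F) < tol:
            break
        I = np.eye(n); Z = np.zeros((n, n), dtype=complex)
        z = np.zeros(n, dtype=complex)
        # Jacobian w.r.t. unknowns ordered (u, lam, nu, v)
        J = np.block([
            [Al - nu * I, (dAl @ u)[:, None], (-u)[:, None], Z],
            [e0.conj()[None, :], [[0]], [[0]], z[None, :]],
            [-I, (dAl @ v)[:, None], (-v)[:, None], Al - nu * I],
            [z[None, :], [[0]], [[0]], e0.conj()[None, :]],
        ])
        d = np.linalg.solve(J, -F)
        u = u + d[:n]; lam = lam + d[n]; nu = nu + d[n + 1]; v = v + d[n + 2:]
    return lam, nu, u, v
\end{verbatim}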
\section{Numerical examples}\label{s:5}
We demonstrate convergence and effectiveness of the algorithms in several examples.
\paragraph{Pointwise growth modes --- constant coefficients and branch points of the dispersion relation.}
In our first example, we compute the branch point $\lambda_\mathrm{dr}=0$ associated with the spatial eigenvalue $\nu_\mathrm{dr}=-1$ in
\begin{equation}\label{e:cd}
w_t=w_{xx}+2w_x+w,
\end{equation}
with unique double root $\lambda_\mathrm{dr}=0$ and associated $\nu_\mathrm{dr}=-1$, and starting guess $\lambda_0=1$. Convergence is, as expected, algebraic with rate $1/k$, and the iteration is stable for a very large number of iterations, $k\sim 10^4$; see Fig. \ref{f:1}. We find the algebraic convergence with rate $k^{-1}$ predicted by Proposition \ref{p:a3} up to $10^4$ iterates, demonstrating that high-order Taylor expansions can be effective in this context of analytic matrix pencils. Of course, one would in practice restart the computation once sufficient initial accuracy is achieved; see below and Fig. \ref{f:2}. We also confirmed this algebraic rate of convergence in the Swift-Hohenberg equation,
\begin{equation}\label{e:sh}
w_t=-(\partial_{xx}+1)^2 w,
\end{equation}
with double root $\lambda_\mathrm{dr}=0$ and associated $\nu_\mathrm{dr}=\mathrm{i}$ or $\nu_\mathrm{dr}=-\mathrm{i}$, starting value $\lambda_0=1+\mathrm{i}$. Convergence is at the predicted rate $k^{-1}$, although $\iota(0)$ has a two-dimensional kernel associated with the two spatial roots $\nu=\pm\mathrm{i}$; see Fig. \ref{f:1}, center panel. The Newton method described above indeed identifies both roots. The last example, also shown in Fig. \ref{f:1}, center panel, is the linearization at a constant state in the Cahn-Hilliard equation, exhibiting a spinodal decomposition instability. We consider the linearization in a comoving frame such that the double roots $\lambda_\mathrm{dr}=\mathrm{i}\omega_\mathrm{dr}$ have zero real part \cite{scheel2017spinodal},
\begin{equation}\label{e:ch}
w_t=-w_{xxxx}-w_{xx}+c_\mathrm{lin}w_x, \qquad
c_\mathrm{lin}=\frac{2}{3\sqrt{6}}\left(2+\sqrt 7 \right)\sqrt{\sqrt 7 -1}, \quad
\lambda_\mathrm{dr}=\pm\mathrm{i} \left(3+\sqrt 7 \right)\sqrt{\frac{2+\sqrt 7}{96}}.
\end{equation}
\begin{figure}
\includegraphics[width=0.33\textwidth]{conv_diff_no_restart}%
\includegraphics[width=0.33\textwidth]{sh_ch_no_restart}%
\includegraphics[width=0.33\textwidth]{kdv_wave_no_restart}%
\caption{\emph{Left:} Convergence to $\lambda_\mathrm{dr}=0$ in convection-diffusion \eqref{e:cd} with starting value $\lambda_\mathrm{0}=1$ and linear fit with slope $-1$ corresponding to algebraic convergence $k^{-1}$. \emph{Center:} Convergence to $\lambda_\mathrm{dr}=0$ in the Swift-Hohenberg (SH) equation \eqref{e:sh} and to $\lambda_\mathrm{dr}=\mathrm{i}\omega_\mathrm{dr}$ in the Cahn-Hilliard (CH) equation \eqref{e:ch} with starting values $1+\mathrm{i}$ and $0.5+\mathrm{i}$ (CH only). \emph{Right:} Algebraic convergence to $\lambda_\mathrm{dr}=0$ for multiple double roots in KdV \eqref{e:kdv} and beam equation \eqref{e:beam}, as well as exponential convergence in the coupled transport equation (CPW); see text for details. }\label{f:1}
\end{figure}
We also tested convergence for multiple double roots using the Korteweg-de Vries equation
\begin{equation}\label{e:kdv}
w_t=w_{xxx}, \qquad \lambda_\mathrm{dr}=0,\ \nu_\mathrm{dr}=0,
\end{equation}
and the beam equation,
\begin{equation}\label{e:beam}
w_{tt}=-w_{xxxx}, \qquad \lambda_\mathrm{dr}=0,\ \nu_\mathrm{dr}=0,
\end{equation}
finding the same algebraic convergence $k^{-1}$; see Fig. \ref{f:1}, right panel. Convergence to double roots in coupled transport equations from Example \ref{ex:cpw},
\begin{equation}\label{e:ct}
w^1_t=-w^1_x+{\varepsilon} w^2,\qquad w^2_t=w^2_x,
\end{equation}
is exponential as expected, since the dispersion relation does not have a branch point at $\lambda_\mathrm{dr}=0$ but rather the stable and unstable eigenspaces intersect nontrivially. For ${\varepsilon}=0$, the subspaces do not intersect and the double root disappears. The algorithm picks up this sensitivity through a long transient for small values of ${\varepsilon}$, before exponential convergence sets in.
Speed of convergence depends on the distance to the branch point. One would therefore usually first perform a global search for possible instabilities by identifying the branch point closest to an unstable $\lambda_0$. As a second step, one would then compute this branch point more precisely by restarting the algorithm with a nearby initial guess as described in \S\ref{s:4}, with restarts once increments in the predicted value of $\lambda_\mathrm{dr}$ are small. The result is exponential convergence, as demonstrated in Fig. \ref{f:2}, left panel. One would typically perform a minimum number of iterations, say 5, before the first restart; subsequent frequent restarts then yield faster convergence. With errors in $\lambda_\mathrm{dr}$ small enough, typically $10^{-3}$, one would switch to a Newton method, which gives machine accuracy within 3 steps.
It is interesting at this point to also return to Example \ref{ex:bp}, $\lambda w=w_{xx}$ on $x>0$ with boundary condition $n_1w+n_2 w_x=0$. Our algorithm identifies (correctly) $\lambda=0$ as a spectral value of $\iota$ regardless of the choice of $n_{1/2}$. Removing this branch point singularity through the substitution $\lambda=\gamma^2$, our algorithm finds the spectral values $\gamma=n_1/n_2$, regardless of whether they correspond to eigenvalues, $\gamma>0$, or resonances, $\gamma<0$.
\paragraph{Variable coefficients --- branch points, resonances, and eigenvalues.}
We illustrate the performance of our algorithm in the case of variable, asymptotically constant coefficients. We start with a 4th-order discretization with grid size $dx$ of the Allen-Cahn layer from Example \ref{ex:1ctd},
\begin{equation}\label{e:ac}
\lambda w=w_{xx}+(1-3\tanh^2(x/\sqrt{2}))w,
\end{equation}
with eigenvalues at $0$ and $-\frac{3}{2}$, and a branch point at $-2$. The center and right panels in Fig. \ref{f:2} demonstrate 4th order convergence of the computed eigenvalue $\lambda\sim \lambda_*=0$ as $dx$ is decreased in a domain of size $L=10$, and exponential convergence for $dx=0.005$ as $L$ increases.
\begin{figure}
\includegraphics[width=0.33\textwidth]{sh_w_restart}%
\includegraphics[width=0.33\textwidth]{ac_dx}%
\includegraphics[width=0.33\textwidth]{ac_L}%
\caption{\emph{Left:} Convergence to $\lambda_\mathrm{dr}=0$ in the Swift-Hohenberg (SH) equation \eqref{e:sh} with restarts after 20 initial iterations, $\lambda_0=1+\mathrm{i}$; restarts after additional 5 and 15 iterations, respectively, and Newton after just one restart, demonstrating exponential convergence with restarts and practically immediate convergence with Newton for good initial guesses. \emph{Center:} Fourth order convergence in the grid size to the eigenvalue $\lambda_*=0$ with $L=10$ for \eqref{e:ac}. \emph{Right: } Exponential convergence in the domain size $L$ for $dx=0.005$ for \eqref{e:ac}. }\label{f:2}
\end{figure}
Convergence to the eigenvalue is exponential with rate depending on the distance from the eigenvalue (more precisely, on the relative distance between the nearest and next-nearest eigenvalues, $|\lambda_0-\lambda_1|/|\lambda_0-\lambda_2|$, with $\lambda_1=0$, $\lambda_2=-1.75$), which we illustrate in Fig. \ref{f:3}, left panel, with $L=10$ and $dx=0.05$; compare also Proposition \ref{p:pm1} and its proof. Convergence to the branch point $\lambda_\mathrm{dr}=-2$ is algebraic as shown in Fig. \ref{f:3}, center panel; compare also Proposition \ref{p:a3}. However, an initial approach is fast, in particular for starting values close to the branch point, as reflected in \eqref{e:bplamasy}. In fact, restarting the algorithm yields exponential convergence. For starting values close to $-1.75$, branch point and eigenvalue at $\lambda=-1.5$ are at a similar distance and convergence only sets in after a long transient. We also computed the resonances at $\lambda=-1.5$ and $\lambda=0$ with the same convergence rates, simply exchanging stable and unstable subspaces at $\pm\infty$, confirming the convergence from Proposition \ref{p:pm2}.
Lastly, we present a computation of resonances in
\begin{equation}\label{e:sech}
\lambda w = w_{xx}+F_0 \mathrm{sech}^2(x) w,\qquad F_0=-0.1,\quad \gamma_\mathrm{res}=-\frac{1}{2}+ \sqrt{F_0+\frac{1}{4}},\quad \lambda=\gamma^2.
\end{equation}
Writing $\lambda=\gamma^2$ removes the branch point at $\lambda=0$ and allows for detection of the resonance closest to $\gamma_\mathrm{res}$. We use $F_0=-1/10$, which gives $\gamma_\mathrm{res}=(-1+\sqrt{3/5})/2 \sim -0.1127$. The stable subspace at $\gamma_0>0$ is given by $(1,\gamma_0)^T$. Writing eigenspaces as graphs over this subspace yields a pole at $\gamma=-1/\gamma_0$. In particular, for $\gamma_0=12$, the series expansion of the boundary condition has a pole at $\gamma=-1/12\sim -0.0833$, between $\gamma_0$ and $\gamma_\mathrm{res}$, so that $\gamma_\mathrm{res}$ is not located within the radius of convergence of $\iota$ when choosing this initial value. Fig. \ref{f:3}, right panel, demonstrates convergence in this situation as predicted by Proposition \ref{p:pm2}. Convergence is slow and can again be accelerated using restarts, as is clear from the rates of convergence for initial guesses closer to $\gamma_\mathrm{res}$.
Computation times are all less than 10 seconds, with the exception of the example in Fig. \ref{f:1}, left panel, where a very large number of iterations was performed and a very high order of the Taylor expansion needs to be precomputed, leading to computation times of roughly 3 minutes on a laptop.
\begin{figure}
\includegraphics[width=0.33\textwidth]{ac_lam_0}%
\includegraphics[width=0.33\textwidth]{ac_lam_0_bp}%
\includegraphics[width=0.33\textwidth]{sech_res}%
\caption{\emph{Left:} Exponential convergence to the eigenvalue $\lambda=0$ in \eqref{e:ac} with convergence rate increasing as $\lambda_0\to 0$. \emph{Center:} Convergence to the branch point $\lambda_\mathrm{dr}=-2$ for different starting values $\lambda_0$; see text for details. \emph{Right:} Convergence to a resonance in \eqref{e:sech} past the domain of analyticity of $\iota$; see text for details.}\label{f:3}
\end{figure}
\paragraph{Spreading speeds.} Localized disturbances of an unstable state grow temporally and spread spatially. The spatial spreading can be captured via the study of pointwise instabilities in comoving frames; see \cite{holzerscheel14} for background. Using the algorithms above, one would compute branch points in a constant-coefficient problem
\[
\lambda w=\mathcal{P}(\partial_x)w, \qquad \text{ or } \quad u_x=A(\lambda)u.
\]
One would then track double roots $\lambda_\mathrm{dr}$ with associated spatial exponent $\nu_\mathrm{dr}$ using numerical continuation as a function of $c$ in
\begin{equation}\label{e:comov}
\lambda w=\mathcal{P}(\partial_x)w+cw_x, \qquad \text{ or }\quad u_x=\tilde{A}(\lambda,c)u .
\end{equation}
Increasing $c$, one tracks $\lambda_\mathrm{dr}(c)$ and finds the largest value $c_\mathrm{lin}$ of $c$ so that $\Re\lambda_\mathrm{dr}(c)=0$. One would then, for this specific value of $c$, verify that there are no unstable double roots, leaving open, however, the possibility of instabilities for yet larger values of $c$.
We mention here a more direct method that directly yields critical values $c_\mathrm{lin}$ in the case where the associated branch point $\lambda_\mathrm{dr}$ is real. One therefore simply considers \eqref{e:comov} with $\lambda=0$,
\begin{equation}
u_x=\tilde{A}(0,c)u,
\end{equation}
as a nonlinear eigenvalue problem in $c$! ``Eigenvalues'' $c$ correspond to values of $c$ where pointwise growth is neutral, $\lambda_\mathrm{dr}=0$, and thus yield all candidates for linear spreading speeds, with the largest one typically being most relevant. We verified numerically that this algorithm performs very well in the extended Fisher-KPP equation,
\[
w_t=-{\varepsilon}^2 w_{xxxx}+w_{xx}+w-w^3, \text{ with linearization } w_t=-{\varepsilon}^2 w_{xxxx}+w_{xx}+w,
\]
and spreading speeds
\[
c_\mathrm{lin}=\frac{1}{9} \sqrt{\frac{6-6 \sqrt{1-12
{{\varepsilon}}^2}}{{{\varepsilon}}^2}} \left(\sqrt{1-12
{{\varepsilon}}^2}+2\right), \qquad \text{for } {\varepsilon}^2<\frac{1}{12}.
\]
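As a sanity check, the double-root conditions $\lambda=0$, $\partial_\nu\lambda=0$ for the dispersion relation of this linearization can be solved directly and compared with the closed-form expression above; a short sketch (choosing the root branch that connects to the classical KPP speed $c=2$ as ${\varepsilon}\to 0$; valid for ${\varepsilon}^2<1/12$):
\begin{verbatim}
import numpy as np

def c_lin_numeric(eps):
    """Speed from the double-root conditions for
    lam(nu) = -eps^2 nu^4 + nu^2 + c nu + 1 (sketch)."""
    s = np.sqrt(1.0 - 12.0 * eps**2)
    nu = -np.sqrt((1.0 - s) / (6.0 * eps**2))    # relevant double root
    c = 4.0 * eps**2 * nu**3 - 2.0 * nu          # from d lam / d nu = 0
    assert abs(-eps**2 * nu**4 + nu**2 + c * nu + 1.0) < 1e-12  # lam = 0
    return c

def c_lin_formula(eps):
    s = np.sqrt(1.0 - 12.0 * eps**2)
    return np.sqrt((6.0 - 6.0 * s) / eps**2) * (s + 2.0) / 9.0

for eps in (0.05, 0.1, 0.2):
    print(eps, c_lin_numeric(eps), c_lin_formula(eps))   # agree
\end{verbatim}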
We note that spreading speeds may be (and indeed are in this example for ${\varepsilon}^2>\frac{1}{12}$) associated with complex values $\lambda_\mathrm{dr}=\mathrm{i}\omega_\mathrm{dr}$, which are not detected by this procedure. The algorithm rather yields complex speeds $c_\mathrm{lin}$ which do not appear to be relevant to the stability problem.
\section{Summary and outlook}\label{s:6}
We proposed an inverse power method as a versatile tool to locate spectral values of differential operators on the real line. The method identifies all singularities of the pointwise Green's function, including eigenvalues, resonances, and branch points, finding in particular the closest singularity to a given reference point $\lambda_*$. Pointwise methods have been used mostly in connection with the Evans function, effectively taking determinants. We hope that our viewpoint provides a robust alternative to such determinant-based methods and will prove useful particularly in large systems.
In future work, we plan to investigate strategies for large systems, when bases for stable and unstable subspaces yield full matrices for $\iota$, and the case of periodic coefficients. On the other hand, it appears to be difficult to adapt this formalism to yield spreading speeds also in the oscillatory case $\omega_\mathrm{dr}\neq 0$, and to multi-dimensional problems. Similarly, the pointwise formulation adapted here relies strongly on a ``local'' formulation in $x$, excluding to some extent spatially nonlocal coupling that does not permit a formulation as a first-order spatial ODE through linearization of the matrix pencil in $\partial_x$; see however \cite{MR3283552,MR3803149,MR4309433} for techniques that recover ``pointwise'' descriptions in this nonlocal setting. Similarly, effective computational tools to analyze multi-dimensional problems in this pointwise context do not appear to be available; see for instance \cite{MR1140700} for a discussion of pointwise instabilities in constant-coefficient, multi-dimensional problems.
\paragraph{Acknowledgments.} The author acknowledges partial support through grant NSF DMS-1907391.
\paragraph{Code.} Code used for the computations in the examples is available at the repository \textsc{https://github.com/arnd-scheel/nonlinear-eigenvalue}
Topological defects appear everywhere in physics. Ranging from gravitation and cosmology to Bose-Einstein condensates they are associated to symmetry-breaking phase transitions. In particular, they appear in the isotropic-to-nematic phase transition of calamitic liquid crystals in the form of hedgehogs, disclinations, domain walls or more complicated textures \cite{kleman,repnik}. The deflection of light rays by these defects is an old issue. Grandjean \cite{grand}, already in 1919, calculated the light paths of extraordinary rays passing by disclination lines \cite{prob}. More recently, Joets and Ribotta \cite{joets} used a geometrical model to describe the light propagation in anisotropic and inhomogeneous media like liquid crystals. Inspired by their work we studied the propagation of light near disclination lines in nematics from a geometric point of view \cite{caio1,caio2,caio3}. In particular, in \cite{caio1} we observed lensing effects due the deflection of the beams in regions near the defect. There, we calculated the light trajectories from a geometry resulting from the application of Fermat's principle\footnote{Kline and Kay \cite{kline} were probably the first to prove that light rays in inhomogeneous anisotropic media are extremals of Fermat's functional.} associated to an effective refractive index $N$ \cite{born} given in terms of $n_o$ and $n_e$, the ordinary and extraordinary refractive index, respectively. In a few words, what we did was to consider the bent light rays as geodesics of a model space of unknown geometry. By identifying Fermat's principle with the geodesic variational principle we were able to find the effective geometries for each defect studied. Knowing the effective geometry, the geodesics were obtained numerically. Incidentally, a geometric model describing elastic properties of nematic liquid crystals which also leads to an effective geometry, appeared recently in the literature \cite{sim}.
Since the refractive indices of a nematic depend both on the temperature and on the wavelenght of the light, a more realistic model incorporating these effects is in order. Li, Gauza and Wu \cite{jun1} modeled the temperature effect on the nematic refractive index based on Vuks equation \cite{vuks} and, by fitting their final expression to experimental data of selected materials, found the unknown coefficients.The same group derived Cauchy formulae (see page 100 of \cite{born}) for the refractive indices of a nematic sample as a function of the wavelenth of the light \cite{jun2}. Again, by fitting their final expression to experimental data, all coeficients were determined. In this work, we incorporate their models to our geometric model for propagation of light in nematics with topological defects, in order to study temperature and wavelength effects. Although in \cite{caio1} we studied both the symmetric ($k=1$) and asymmetric ($k\neq 1$) defects, without loss of generality, we keep our analysis here mostly for the symmetric cases since their effective geometry is simpler and more intuitive than the asymmetric cases. Since, in all cases, the effective geometry depends only on the ratio $\alpha=n_e/n_o$ it is how the temperature and the wavelength affect this ratio what matters. In section II we review the geometric model and discuss the simplest effective geometry for light traveling by a disclination. In section III we use the results of \cite{jun1} and \cite{jun2} to show how $\alpha$ is affected by temperature and wavelength and study the effect of the variation of this ratio on the light paths near selected defects.
\section{Geometric Model}
Disclinations in nematics are classified according to the topological index (or strength) $k$ which gives a measure of how much the director rotates as one goes around the defect. That is, the director configurations, in the plane $x-y$, are given by \cite{kleman}
\begin{equation}
\varphi(\theta)=k\theta+c , \label{phi}
\end{equation}
where $\varphi$ is the angle between the molecular axis and the $x$-axis, $\theta$ is the angular polar coordinate and $c=\varphi(0)$. Selected director configurations can be seen on Figure 11.4 of \cite{kleman}. We assume the disclinations are straight and lie along the $z$-axis and the light rays propagate in the $x$-$y$ plane so, effectively, we have a two-dimensional problem.
We consider an optical medium constituted by a nematic liquid crystal with disclinations \cite{repnik}, where the effective geometry for the light is defined by the line element (equation (25) of \cite{caio1})
{\small
\begin{eqnarray}
& ds^2 & = \left\{n_o^2 \cos^{2}[(k-1)\phi+c]+n_e^2 \sin^{2}[(k-1)\phi+c]\right\}dr^{2} \nonumber\\
& + & \left\{n_o^2 \sin^{2}[(k-1)\phi+c]+n_e^2 \cos^{2}[(k-1)\phi+c]\right\}r^{2}d\phi^{2} \nonumber\\
& - & \left\{2(n_e^2-n_o^2)\sin[(k-1)\phi+c]\cos[(k-1)\phi+c]\right\}rdrd\phi. \nonumber\\
& & \label{kmetric}
\end{eqnarray}}
The metric (\ref{kmetric}) was obtained by identifying Fermat's principle with the variational principle that determines the geodesics in Riemannian geometry. Let
\begin{equation}
{\cal F}=\int_{A}^{B} N d \ell , \label{fermat}
\end{equation}
where, $d\ell$ is the element of arc length along the path between points $A$ and $B$ and the effective refractive index
\begin{equation}
N^2=n_o^2\cos^2\beta +n_e^2\sin^2\beta , \label{nr}
\end{equation}
where $\beta = (\widehat{\vec{n},\vec{S}})$ is the local angle between the director $\vec{n}$ and the Poynting vector $\vec{S}$. Then, among all possible paths between the generic points $A$ and $B$, Fermat's principle for the extraordinary rays grants us that the path actually followed by the energy is the one that minimizes ${\cal F}$.
In Riemannian geometry the line element $ds$ depends on the position coordinates $x^i$ of the point of the manifold under consideration. That is,
\begin{equation}
ds^2 = \sum_{i,j}g_{ij}dx^idx^j, \label{riemline}
\end{equation}
where $g_{ij}=g_{ij}(x^i)$ is the metric tensor. The geodesic joinning points $A$ and $B$ in such manifold is obtained by minimizing $\int_A^B ds$, just like Fermat's principle. This leads to a nice interpretation of the light paths as geodesics in an effective geometry \cite{born}. Thus, we may identify
\begin{equation}
N^{2}d\ell^2 = \sum_{i,j}g_{ij}dx^idx^j. \label{interp}
\end{equation}
The meaning of this equation is the following: the line element of the optical path, in an Euclidean space with refractive properties, is identified with the line element of an effective geometry characterized by $g_{ij}$.
In \cite{caio2} we showed that the effective geometry for the vortex-like $k=1$, $c=\frac{\pi}{2}$ disclination is that of a cone. The effective metric for this case is obtained by substituting these values in metric (\ref{kmetric}) and rescaling the coordinate $r$ to $\rho=n_e r$. That is, the two-dimensional line element for this effective geometry, in polar coordinates, is \cite{caio1}
\begin{equation}
ds^2 = d\rho^{2} + \alpha^2 \rho^{2}d\theta^{2}, \label{metr1}
\end{equation}
where $\alpha=n_o/n_e$ is the ratio between the refractive indices. The geodesic equation in a Riemannian space like the cone is \cite{man}
\begin{equation}
\frac{d^{2}x^i}{dt^2}+\sum_{j,k}\Gamma^{i}_{jk}\frac{dx^j}{dt}\frac{dx^k}{dt}=0,\label{georie}
\end{equation}
where $t$ is a parameter along the geodesic and $\Gamma^{i}_{jk}$ are the Christoffel symbols, given by
\begin{equation}
\Gamma^{i}_{jk}=\frac{1}{2}\sum_{m}g^{mi}\left\{ \frac{\partial g_{km}}{\partial x^j}+\frac{\partial g_{mj}}{\partial x^k}-\frac{\partial g_{jk}}{\partial x^m}\right\} . \label{chris}
\end{equation}
For metric (\ref{metr1}) equation (\ref{georie}) reduces to the coupled system of ordinary differential equations
\begin{equation}
\frac{d^2\rho}{dt^2}-\alpha^2\rho\left(\frac{d\theta}{dt}\right)^2=0 \label{eq1}
\end{equation}
and
\begin{equation}
\frac{d^2\theta}{dt^2}+\frac{2}{\rho}\frac{d\rho}{dt}\frac{d\theta}{dt}=0. \label{eq2}
\end{equation}
The solution to the coupled system (\ref{eq1}) and (\ref{eq2}) is easily obtained \cite{padua}:
\begin{equation}
\rho(t)=\sqrt{\frac{C^2}{E\alpha^2}+2E(t+D)^2}, \label{r(t)}
\end{equation}
\begin{equation}
\theta(t)=\frac{1}{\alpha}\arctan\left(\frac{2E\alpha(t+D)}{C}\right)+\frac{F}{\alpha}, \label{theta(t)}
\end{equation}
where $C$, $D$, $E$ and $F$ are integration constants.
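For readers who wish to reproduce such trajectories, the geodesic equations above can also be integrated numerically, which is the approach needed for defects without closed-form solutions (see Section III); a minimal \textsc{Python} sketch for the vortex-like defect, with illustrative initial data (for the radial defect, replace $\alpha$ by $1/\alpha$):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def cone_geodesic(alpha, rho0=5.0, theta0=0.0, drho0=-1.0, dtheta0=0.02,
                  t_max=60.0):
    """Integrate the geodesic equations above for the k=1, c=pi/2
    defect; returns the light path in Cartesian coordinates."""
    def rhs(t, y):
        rho, theta, drho, dtheta = y
        return [drho, dtheta,
                alpha**2 * rho * dtheta**2,      # radial geodesic equation
                -2.0 * drho * dtheta / rho]      # angular geodesic equation
    sol = solve_ivp(rhs, (0.0, t_max), [rho0, theta0, drho0, dtheta0],
                    max_step=0.05, rtol=1e-9)
    rho, theta = sol.y[0], sol.y[1]
    return rho * np.cos(theta), rho * np.sin(theta)

# e.g., the three alpha values used later in Figs. 6-8:
# for a in (0.8912, 0.9120, 0.9355): x, y = cone_geodesic(a)
\end{verbatim}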
In figure 1 we show the light paths in the nematic medium with the $k=1$, $c=\frac{\pi}{2}$ disclination as given by (\ref{r(t)}) and (\ref{theta(t)}). In figure 2 the geodesics on a cone are shown for comparison.
\begin{figure}[!h]
\begin{center}
\includegraphics[height=5cm]{1.eps}
\caption{Light trajectories in a nematic liquid crystal with a topological defect given by a disclination $k=1$ and $c=\pi/2$.}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[height=5cm]{geocone.eps}
\caption{Geodesics on the cone.}
\end{center}
\end{figure}
Metric (\ref{metr1}) describes a cone. Fig. 3 shows the making of a cone from a planar sheet from which an angular section was removed, with subsequent identification of the edges. If $\gamma$ is the angle that defines the removed section then the remaining surface corresponds to an angular sector of $2\pi\alpha=2\pi-\gamma$. This is exactly what metric (\ref{metr1}) describes. The incorporation of the factor $\alpha^2$ into the Euclidean metric in polar coordinates makes the total angle on the surface $\int_{0}^{2\pi} \alpha d\theta=2\pi\alpha <2\pi$, since $n_o<n_e$. It is clear then that $\alpha$ tells how ``pointed'' the cone is. The closer $\alpha$ gets to 1 the flatter is the cone. For $\alpha=1$ the cone turns into a plane.
\begin{figure}[!h]
\begin{center}
\includegraphics[height=1.5cm]{conec2.eps}
\caption{Conical surface of angular deficit $\gamma$.}
\end{center}
\end{figure}
The solutions (\ref{r(t)}) and (\ref{theta(t)}) carry over to the radial defect with $k=1$ and $c=0$ since, in this case, the line element (\ref{kmetric}) reduces to
\begin{equation}
ds^2 = d\rho^{2} + \frac{1}{\alpha^2} \rho^{2}d\theta^{2}, \label{metr2}
\end{equation}
where, $\alpha=n_o/n_e$ still. Consequently, equations (\ref{r(t)}) and (\ref{theta(t)}) become
\begin{equation}
\rho(t)=\sqrt{\frac{C^2 \alpha^2}{E}+2E(t+D)^2}, \label{r(t)2}
\end{equation}
\begin{equation}
\theta(t)=\alpha\arctan\left(\frac{2E(t+D)}{C\alpha}\right)+F\alpha, \label{theta(t)2}
\end{equation}
where, as before, $C$, $D$, $E$ and $F$ are integration constants.
\section{Refractive index variation}
The refractive indices $n_o$ and $n_e$ of a nematic liquid crystal depend both on the temperature ($T$) and on the wavelength ($\lambda$) of the light. In this section, based on \cite{jun1} and \cite{jun2}, we analyse how these parameters affect the ratio $\alpha=n_o/n_e$, which characterizes the effective geometry associated with disclinations. By changing either $T$ or
$\lambda$, $\alpha$ is changed and so is the effective geometry. This causes a deformation of the geodesics associated with the light rays in our model.
In \cite{jun1} we can find expressions for the ordinary and extraordinary refractive indices given in terms of the birefringence $\Delta n$ and the average refractive index $\left\langle n\right\rangle$, such that
\begin{equation}
n_o=\left\langle n\right\rangle-\frac{1}{3}\Delta n,\label{a1}
\end{equation}
\begin{equation}
n_e=\left\langle n\right\rangle+\frac{2}{3}\Delta n.\label{a2}
\end{equation}
In $(\ref{a1})$ and $(\ref{a2})$, the behavior of $\left\langle n\right\rangle$ as a function of the temperature \cite{jun1} is described by the linear dependence
\begin{equation}
\left\langle n\right\rangle=A-BT,\label{med}
\end{equation}
where the parameters $A$ and $B$ are obtained experimentally.
The birefringence can be written in terms of the approximated \cite{haller} order parameter $S=\left(1-\frac{T}{T_c}\right)^{\beta}$ as
\begin{equation}
\Delta n=(\Delta n)_0\left(1-\frac{T}{T_c}\right)^{\beta},\label{birre}
\end{equation}
where $(\Delta n)_0$ is the birefringence at $T=0\,K$, $\beta$ is a constant associated to the material and $T_c$ is the isotropic-nematic transition temperature.
Therefore, substituting the equations $(\ref{med})$ and $(\ref{birre})$ into $(\ref{a1})$ and $(\ref{a2})$, we have
\begin{equation}
n_o=A-BT-\frac{(\Delta n)_0}{3}\left(1-\frac{T}{T_c}\right)^{\beta},\label{ind1}
\end{equation}
\begin{equation}
n_e=A-BT+\frac{2(\Delta n)_0}{3}\left(1-\frac{T}{T_c}\right)^{\beta}.\label{ind2}
\end{equation}
The liquid crystal considered was 5CB (4-cyano-4$'$-n-pentylbiphenyl) and the wavelength of the incident beam was $589$ nm \cite{jun1}. For this material the parameters, obtained from \cite{jun1}, are given in the table below. The parameters A, $\beta$ and $(\Delta n)_0$ are dimensionless.
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
A & B & $\beta$ &$(\Delta n)_0$ &$T_c$\\ \hline
1.7546 & 0.0005360 K$^{-1}$ & 0.2391 & 0.3768 &306.6 K\\ \hline
\end{tabular}
\end{center}
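A short script evaluating $\alpha=n_o/n_e$ from the temperature model above with these 5CB parameters could read as follows (a sketch; printed values rounded):
\begin{verbatim}
# 5CB parameters from the table above (589 nm)
A, B, beta, dn0, Tc = 1.7546, 0.0005360, 0.2391, 0.3768, 306.6

def alpha_of_T(T):
    """Ratio n_o/n_e as a function of temperature T (in kelvin)."""
    n_avg = A - B * T
    dn = dn0 * (1.0 - T / Tc) ** beta
    return (n_avg - dn / 3.0) / (n_avg + 2.0 * dn / 3.0)

for T in (290.0, 300.0, 305.0):
    print(T, round(alpha_of_T(T), 4))   # approximately 0.89, 0.91, 0.94
\end{verbatim}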
In figure 4 we show the ratio $\alpha=n_o/n_e$ as a function of the temperature as obtained from equations (\ref{ind1}) and (\ref{ind2}). As shown in \cite{caio2}, these parameters are associated with the effective geometry perceived by the light traveling in the vicinity of $k=1$ disclinations. For these defects the geometry is conical, with the radial disclination ($k=1$, $c=0$) behaving as a negative curvature cone and the vortex-like disclination ($k=1$, $c=\pi/2$) like the ordinary cone. The value $\alpha=1$, reached at $T_c$, corresponds to the Euclidean geometry which describes the isotropic phase.
\begin{figure}[h]
\begin{center}
\includegraphics[height=7cm]{alphatemp2.eps}
\caption{Effective geometry parameter $\alpha$ as function of the temperature for 5CB in the nematic phase at 589 nm.}
\end{center}
\end{figure}
Next, we consider the wavelength dependence of the effective geometry at a fixed temperature. Li and Wu \cite{jun2} modeled the wavelength dependence of the ordinary and extraordinary refractive indices based on the extended Cauchy formulae. Their model is described by the following equations:
\begin{equation}
n_e=A_e+\frac{B_e}{\lambda^2}+\frac{C_e}{\lambda^4},\label{lambda1}
\end{equation}
\begin{equation}
n_o=A_o+\frac{B_o}{\lambda^2}+\frac{C_o}{\lambda^4}.\label{lambda2}
\end{equation}
The coefficients appearing in equations (\ref{lambda1}) and (\ref{lambda2}) were obtained \cite{jun2} by fitting experimental data. For 5CB at 25.1 $^o$C they are given in the tables below.
\begin{center}
\begin{tabular}{|c|c|c|} \hline
$A_e$ & $B_e$ & $C_e$ \\ \hline
1.6795 & 0.0048 $\mu m^2$& 0.0027 $\mu m^4$\\ \hline
\end{tabular}
\end{center}
\begin{center}
\begin{tabular}{|c|c|c|} \hline
$A_o$ & $B_o$ & $C_o$\\ \hline
1.5174 & 0.0022 $\mu m^2$ & 0.0011 $\mu m^4$\\ \hline
\end{tabular}
\end{center}
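Analogously, a sketch evaluating $\alpha$ as a function of the wavelength, using the standard extended Cauchy form $n=A+B/\lambda^2+C/\lambda^4$ with the tabulated 5CB coefficients (wavelength in micrometers):
\begin{verbatim}
def alpha_of_wavelength(lam_um):
    """Ratio n_o/n_e for 5CB at 25.1 C from the extended Cauchy fits."""
    n_e = 1.6795 + 0.0048 / lam_um**2 + 0.0027 / lam_um**4
    n_o = 1.5174 + 0.0022 / lam_um**2 + 0.0011 / lam_um**4
    return n_o / n_e

print(alpha_of_wavelength(0.589))   # roughly 0.89 at 589 nm
\end{verbatim}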
In figure 5 we show the ratio $\alpha=n_o/n_e$ as a function of the wavelength as obtained from equations (\ref{lambda1}) and (\ref{lambda2}).
\begin{figure}[!h]
\begin{center}
\includegraphics[height=7cm]{alphacomp2.eps}
\caption{Effective geometry parameter $\alpha$ as function of the wavelength for 5CB in the nematic phase at 25.1 $^o$C.}
\end{center}
\end{figure}
Since both temperature and wavelength cause $\alpha$ to change, we can summarize their effect on the light paths by studying the geodesics for different values of $\alpha$. Substituting the metric (\ref{kmetric}) in (\ref{chris}) and the result in (\ref{georie}), we can calculate the geodesics for different values of $\alpha$. As described in Section II, the geodesic equations (\ref{georie}) have exact solutions for the $k=1$ case. The remaining cases can be solved by a numerical method. In figures 6 and 7 we show the effects of the variation of the parameter $\alpha$ on the light paths near the $k=1$ defects, using the exact solution of Section II. In figure 8 we show the same effects for the $k=-1$ disclination, using the Runge-Kutta numerical method to solve the geodesic equation. In all cases, the solid line corresponds to $\alpha=0.8912$, the dotted line to $\alpha=0.9120$ and the dash-dotted line to $\alpha=0.9355$. These values, for 5CB probed by a 589 nm light beam, correspond to the temperatures of 290 K, 300 K and 305 K, respectively. Notice that as $\alpha$ approaches 1 the light paths straighten out, as they should.
\begin{figure}[!h]
\begin{center}
\includegraphics[height=5cm]{div.eps}
\caption{Influence of the parameter $\alpha$ on the light trajectories in a nematic liquid crystal with a disclination $k=1$ and $c=0$.}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[height=5cm]{vort.eps}
\caption{Influence of the parameter $\alpha$ on the light trajectories in a nematic liquid crystal with a disclination $k=1$ and $c=\pi/2$.}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[height=5cm]{assim.eps}
\caption{Influence of the parameter $\alpha$ in the light trajectories in a nematic liquid crystal with a disclination $k=-1$ and $c=\pi/2$.}
\end{center}
\end{figure}
\newpage
\section{Conclusion}
Topological defects in nematics cause light passing by to deflect, as shown in \cite{caio1}. In that article, we associated the light paths to geodesics in a curved space specified by the defect. The deflection is due to the particular orientation of the director field associated with the defect, which may be translated into curvature. The intensity of the deflection depends on the ratio $\alpha$ between the ordinary and extraordinary refractive indices, which, in turn, depend on the temperature of the liquid crystal and on the wavelength of the light. Taking as example 5CB, which has been extensively characterized with respect to the temperature and wavelength dependence of the refractive indices \cite{jun1,jun2}, we solved the geodesic equations for a realistic range of values of $\alpha$ corresponding to temperature and/or wavelength variation. The graphical result illustrates the influence of these parameters on the light deflection caused by the defect. The further $\alpha$ is from 1, the stronger the deflection. This can be achieved by either lowering the temperature or shortening the wavelength. In conclusion, the study of the influence of measurable physical parameters, like temperature and wavelength, helps us to better understand the behavior of light propagation in liquid crystals where topological defects are relevant.
\begin{acknowledgement}
This work has been supported by CNPq, CNPq/FACEPE, PRONEX/FAPESQ-PB and CAPES/PROCAD. We are indebted to Eduardo R. da Costa for helping with the graphs.
\end{acknowledgement}
One of the most essential characteristics of black holes is the
resonant frequencies of the response to external perturbations,
called quasinormal frequencies \cite{Kokkotas-99}. Being an analog
of normal modes for open systems, they do not depend on the way in
which the system is perturbed, but only on the parameters of a
system. Quasinormal modes are expected to be observed in the near
future with the help of a new generation of gravitational
antennas. It is well-known that a non-rotating astrophysical black
hole can be described by the Schwarzschild solution, implying the
importance of the quasinormal modes of this background
\cite{SQNMs}.
In addition to the still elusive possibility to observe
quasinormal modes (QNMs) of black holes with the help of a new
generation of gravitational antennas, there is a window for
observation of the acoustic analogue of a black hole in
laboratories. This is the well-known Unruh analogue of black holes
\cite{Unruh}, which are the apparent horizons appearing in a
fluid with a space-dependent velocity, in the presence of sonic
points. The supersonic waves cannot propagate back beyond the
sonic point, mimicking thereby, the effect of the horizon at sonic
points in a membrane paradigm. The Unruh discovery stimulated
active investigation of quasinormal modes of different analogue
black holes \cite{analogueQNMs}. First of all, the quasinormal
spectrum of the analogue black holes given by the metrics:
$$ d s^{2} = -(1 - C/r^{2}) d t^{2} + (1- C/r^{2})^{-1} d r^{2} - 2 B d \phi d t + r^{2} d \phi^{2}$$
and
$$ d s^{2} = - (1- r_{0}/r^{4}) d t^{2} + (1- r_{0}/r^{4})^{-1} d r^{2} + r^{2} d \Omega^{2}$$
were considered \cite{analogueQNMs}. These are the two models for
rotating analogue black holes in a ``draining bathtub'' and for
canonical non-rotating analogue black holes. Recently, interesting
acoustic analogues of brane-world black holes were suggested
\cite{Ge:2007tr,Ge:2007ts}.
We can see from the above formulas that these metrics, although very
useful as analogues with apparent horizons, do not represent true
solutions of the Einstein or other gravitational field equations. If
one had a complete analogy with some known solution of the Einstein
equations, say, the Schwarzschild solution, one could see in acoustic
experiments not only qualitative but, up to experimental accuracy,
exact numerical agreement with the characteristics of the prototype.
Namely, for quasinormal modes, which are governed by the form of the
wave equation, this numerical correspondence would mean that the
effective potential of the perturbations of some hydrodynamic system
coincides with an effective potential of a black hole. Fortunately,
the recent consideration of the perturbations of a gas in a de Laval
nozzle \cite{Sakagami} gives us such an opportunity: a system that is related to
the same effective potential as a Schwarzschild black hole.
The canonical de Laval nozzle is a convergent-divergent tube,
narrow in the middle. It allows one to accelerate the gas to the
sonic speed at its throat, reaching supersonic speeds after
passing the throat. The perturbations of the gas in a de Laval
nozzle can be considered as one-dimensional if the cross section of
the nozzle does not change too quickly. Here we show that the
corresponding effective potential of perturbations in a canonical
de Laval nozzle can be made equal to the potential for
perturbations of Schwarzschild black holes by choosing a
specific form of the nozzle. In addition, we suggest another,
approximate, way to obtain quasinormal modes of Schwarzschild black
holes in a de Laval nozzle. For this, one needs to mimic the form of
the effective potential for the Schwarzschild metric with the help of
a de Laval nozzle of the simple form suggested in
\cite{Sakagami}.
The paper is organized as follows: Sec \ref{sec:basic} gives all
basic equations of the one-dimensional motion in de Laval nozzle
we shall use. Sec \ref{sec:mimic} is devoted to reproducing the
exact expression for the form of the nozzle which corresponds to
the potential of the Schwarzschild black holes.
In the discussions, we sketch the open questions and possible
generalizations of the suggested technique.
\section{Calculation of the nozzle cross section $A$ in terms of $g$}\label{sec:basic}
We assume that a gas in the nozzle can be described by equations
of motion for perfect fluid and that the flow is
quasi-one-dimensional:
\begin{gather}
{\partial}_t(\rho A) + {\partial}_x(\rho vA) = 0 \,,
\label{eq:cont} \\
{\partial}_t(\rho vA) + {\partial}_x[(\rho v^2 + p)A] = 0 \,,
\label{eq:momentum} \\
{\partial}_t(\epsilon A) + {\partial}_x[(\epsilon+p)vA] = 0 \,,
\label{eq:energy}
\end{gather}
Here $\rho$ is the density, $v$ is the fluid velocity, $p$ is the
pressure, $A$ is the cross section of the nozzle, and
\begin{equation}
\epsilon = \frac{1}{2}\rho v^2 + \frac{p}{\gamma-1}
\end{equation}
is the energy density. The heat capacity ratio is
$\gamma=1+2/n=7/5=1.4$ for di-atomic molecules of air ($n=5$). We
shall assume that the flow has no entropy discontinuity, then the
fluid is isentropic
\begin{equation}
p\propto\rho^\gamma \,.
\label{eq:isentropic}
\end{equation}
Instead of Eq.~\eqref{eq:momentum}, we can use Euler's equation
\begin{gather}
\rho({\partial}_t + v {\partial}_x)v = -{\partial}_x p \,,
\label{eq:Euler}
\end{gather}
For isentropic fluid Eq.~\eqref{eq:Euler} is reduced to the
Bernoulli's equation
\begin{equation}
{\partial}_t\Phi + \frac{1}{2}({\partial}_x\Phi)^2 + h(\rho) = 0 \,,
\label{eq:Bernoulli}
\end{equation}
where $h(\rho) \equiv \int\rho^{-1}dp$ is the specific enthalpy
and $\Phi = \int v\,dx$ is the velocity potential.
According to \cite{Sakagami}, the perturbation equations in such a
nozzle can be reduced to:
\begin{gather}
\biggl[ \frac{d^2}{dx^{*2}} + \kappa^2 - V(x^*) \biggr] H_\omega = 0, \label{eq:Sch1}\\
\kappa = \frac{\omega}{c_{s0}}, \\
V(x^*) = \frac{1}{g^2}\biggl[\; \frac{g}{2}\frac{d^2g}{dx^{*2}}
- \frac{1}{4}\Bigl(\frac{dg}{dx^*}\Bigr)^2 \;\biggr].
\end{gather}
Here $c_{s0}$ is the stagnation sound speed, and $x^{*}$ is an
acoustic analogue of the tortoise coordinate which satisfies
$x^{*}(x=+\infty) = + \infty$, $x^{*}(x=0) = - \infty$, namely,
\begin{equation}
x^{*} = c_{s0} \int \frac{d x}{c_{s} (1-M(x)^{2})},
\end{equation}
where $M(x)$ is the Mach number \cite{Hydro}, which, by
definition, is the local flow speed divided by the sound speed.
In our notations $M = v/c_s$. The function $H_{\omega}$ represents
small perturbations of gas flow,
\begin{gather}
H_\omega(x) = g^{1/2}\int dt~e^{i\omega[t-f(x)]}\phi(t,x),
\end{gather}
\begin{gather}
g = \frac{\sigma}{c_s}, \\
f(x) = \int\frac{|v|\,dx}{c_s^2-v^2},
\end{gather}
Here, according to \cite{Sakagami}, the small perturbations are
defined as follows
\begin{align}
\rho &= \bar\rho + \delta\rho \,, \qquad \bar\rho \gg |\delta\rho| \,,
\label{eq:split_rho} \\
\Phi &= \bar\Phi + \phi \,, \qquad |{\partial}_x\bar\Phi| \gg |{\partial}_x\phi| \,,
\label{eq:split_Phi}
\end{align}
Our starting point is the calculation of the configuration of de
Laval nozzle, i. e. its cross section as a function of the
transversal nozzle coordinate. Since we know the effective
potential, we can calculate in some way the function $g(x)$. By
definition Eq. (15) of \cite{Sakagami}
$$g=\frac{\sigma}{c_s}=\frac{\rho A}{\sqrt{\gamma p/\rho}}.$$
Taking (5) into account we find
\begin{equation}\label{gdef}
g\propto\frac{\rho A}{\rho^{(\gamma-1)/2}}.
\end{equation}
We can choose dimensionless quantities for $\rho(x)$ and $A(x)$ by
measuring them in units of $\rho_0$ and $A^*$ respectively
\cite{Hydro}. Then equation (5.3) of \cite{Hydro} reads
\begin{equation}\label{Adef}
A^{-1}\propto\left(1-\rho^{(\gamma-1)}\right)^{1/2}\rho.
\end{equation}
Since (\ref{eq:Sch1}) is invariant with respect to re-scaling of
$g$, we can fix the coefficients in (\ref{gdef}) and (\ref{Adef})
arbitrarily:
\begin{equation}\label{constfix}
g=\frac{\rho A}{2\rho^{(\gamma-1)/2}}, \quad
A^{-1}=\left(1-\rho^{(\gamma-1)}\right)^{1/2}\rho.
\end{equation}
We find
\begin{equation}
g=\frac{\rho^{(1-\gamma)/2}}{2\left(1-\rho^{(\gamma-1)}\right)^{1/2}}
=\frac{\rho^{(1-\gamma)}}{2\left(\rho^{(1-\gamma)}-1\right)^{1/2}}
\end{equation}
Hence it follows that
\begin{equation}
\rho^{1-\gamma}=2g^2\left(1\pm
\sqrt{1-g^{-2}}\right).
\end{equation}
The sign should be chosen so that $\rho$ is a monotonic function of
the transverse coordinate. As we will
show later, the function $g$ for the Schwarzschild black hole can
also be chosen monotonic in the $R$ region, finite at the horizon
and infinite at spatial infinity. Therefore, we choose the
minus sign,
\begin{equation}
\rho^{1-\gamma}=2g^2\left(1-\sqrt{1-g^{-2}}\right).
\end{equation}
Note that $g$ must always be larger than unity in our
consideration.
In our notations, the Mach number is connected with $\rho$ as
\begin{equation}\label{rhosolution}
\rho^{1-\gamma}=1+\frac{\gamma-1}{2}M^2.
\end{equation}
We find
$$M^2=\frac{2}{\gamma-1}\left(\rho^{1-\gamma}-1\right)=$$
\begin{equation}
\frac{2}{\gamma-1}\left(2g^2\left(1-\sqrt{1-g^{-2}}\right)-1\right).
\end{equation}
Since $M=1$ at the event horizon, $g$ must be finite there, and
\begin{equation}\label{normal}
g\Biggr|_{e.h.}=\frac{\gamma+1}{2\sqrt{2}\sqrt{\gamma-1}}=\frac{3}{\sqrt{5}}>1.
\end{equation}
This requirement fixes both constants of integration.
Substituting (\ref{rhosolution}) in (\ref{constfix}) we find the
cross-section area as a function of $g$:
\begin{equation}\label{2}
A=\frac{\sqrt{2}\left(2g^2\left(1-\sqrt{1-g^{-2}}\right)\right)^{1/(\gamma-1)}}{\sqrt{1-\sqrt{1-g^{-2}}}}.
\end{equation}
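These relations are straightforward to evaluate; a small \textsc{Python} sketch (our notation, using the arbitrary normalization chosen above) returning the Mach number and the cross section for a given $g$:
\begin{verbatim}
import numpy as np

gam = 1.4   # heat capacity ratio of air

def mach_and_area(g):
    """Mach number M and cross section A from g, using the relations
    above (minus branch of the square root)."""
    root = 1.0 - np.sqrt(1.0 - g ** (-2.0))
    q = 2.0 * g**2 * root                     # this is rho^(1-gamma)
    M = np.sqrt(2.0 * (q - 1.0) / (gam - 1.0))
    A = np.sqrt(2.0) * q ** (1.0 / (gam - 1.0)) / np.sqrt(root)
    return M, A

g_throat = (gam + 1.0) / (2.0 * np.sqrt(2.0 * (gam - 1.0)))
print(mach_and_area(g_throat))   # M = 1 at the sonic point, as required
\end{verbatim}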
\section{The de Laval nozzle for the Schwarzschild black hole}\label{sec:mimic}
Having expressed the cross section $A$ in terms of $g$, we now
determine the function $g$, and thus the form of the nozzle, that
reproduces the Schwarzschild effective potential.
\begin{figure*}\label{1figure}
\caption{The form of de Laval nozzle and the effective potential for $s=\ell=0$.}
\resizebox{\linewidth}{!}{\includegraphics*{s_l_0.nozzle.eps}}
\end{figure*}
\begin{figure*}\label{2figure}
\caption{The form of de Laval nozzle and the effective potential for $s=\ell=1$.}
\resizebox{\linewidth}{!}{\includegraphics*{s_l_1.nozzle.eps}}
\end{figure*}
\begin{figure*}\label{3figure}
\caption{The form of de Laval nozzle and the effective potential for $s=\ell=2$.}
\resizebox{\linewidth}{!}{\includegraphics*{s_l_2.nozzle.eps}}
\end{figure*}
After separation of the angular and time variables, perturbations of
spin-$s$ fields in the Schwarzschild background can be reduced to
the wave-like equation
\begin{equation}
\left(\frac{d^2}{dr_*^2}+\omega^2-V(r^*)\right)\Psi(r^*)=0,
\end{equation}
where putting the event horizon to be unity we find,
$$f(r)=1-\frac{1}{r}, \quad dr^*=\frac{dr}{f(r)}$$
\begin{equation}
V(r)=f(r)\left(\frac{\ell(\ell+1)}{r^2}+\frac{1-s^2}{r^3}\right)
\end{equation}
To find $g$ that produces the same potential we identify the
"tortoise" coordinates of the black hole solution and of the laval
nozzle
$$dr^*=dx^*=\frac{c_{s0}dx}{c_s(1-M^2)}=\frac{\rho^{(1-\gamma)/2}dx}{1-M^2}=$$
\begin{equation}\label{identify}
\frac{\sqrt{2g^2\left(1-\sqrt{1-g^{-2}}\right)}dx}{1-\frac{2}{\gamma-1}
\left(2g^2\left(1-\sqrt{1-g^{-2}}\right)-1\right)}.
\end{equation}
Here $x$ is the real coordinate along de Laval nozzle. Then we can
find the equation for $g(r)$,
\begin{equation}\label{gequation}
\frac{f(r)f'(r)g'(r)+f(r)^2g''(r)}{2g(r)}-\frac{f(r)^2g'(r)^2}{4g(r)^2}=V(r).
\end{equation}
This implies that the form of the de Laval nozzle is parameterized by
the parameter $r$ and $x^{*}(r)$ = $r^{*}(r)$. Note that as we
chose the radius of the event horizon to be unity, the nozzle
coordinate $x$ is measured in the units of the radius of the event
horizon.
The general solution of the equation (\ref{gequation}) contains
two arbitrary constants. They can be fixed in a unique way by the
condition (\ref{normal}). Namely, the requirement that the
solution must be finite at $r=1$ fixes one of the constants. Then
the other constant re-scales the solution of (\ref{gequation}),
and must be fixed by its value at $r=1$. Finally, the solution of
(\ref{gequation}) for arbitrary $\ell$ and $s$, that satisfies
(\ref{normal}) is given by the following formula:
\begin{eqnarray}
g(r) &=& \frac{\gamma+1}{2\sqrt{2}\sqrt{\gamma-1}}\left(\sum_{n=s}^{\ell}
\frac{(-1)^{n+s} (\ell + n)!}{( n + s )! ( n - s )! (\ell - n)!}\,r^{n+1}\right)^2 \nonumber\\
&=&\frac{\gamma+1}{2\sqrt{2}\sqrt{\gamma-1}}\,r^{2s+2}
\left(\frac{\Gamma(1+\ell+s)\,{}_2F_1(s-\ell,s+\ell+1,1+2s,r)}
{\Gamma(1+\ell-s)\,\Gamma(1+2s)}\right)^2. \label{1}
\end{eqnarray}
One can easily check that the above solution indeed satisfies the
equation (\ref{normal}), for any fixed $\ell$ and $s$.
From (\ref{identify}) we find the dependance of the transversal
nozzle coordinate $x$ on the parameter $r$:
\begin{equation}
x=\intop_1^r\frac{\left(\gamma+1-4g(r)^2\left(1-\sqrt{1-g(r)^{-2}}\right)\right)dr}
{f(r)(\gamma-1)\sqrt{2g(r)^2\left(1-\sqrt{1-g(r)^{-2}}\right)}}.
\end{equation}
The integration constant is chosen so that $x$ is zero at the sonic point.
Now we are in a position to find the required form of the de Laval
nozzle, i.e. to find its cross-section $A(x)$. We just need to
insert $g(r)$ given in (\ref{1}) into (\ref{2}) and go over to the
transverse nozzle coordinate $x$. The function $A(x)$ is shown in
Figs. 1 - 4. Note that the canonical de Laval nozzle diverges
at the end of the flow trajectory, so that $A_{x=\infty} =\infty$
(see pages $53$ and $124$ in \cite{Hydro}). Indeed, our formula
(\ref{1}) implies divergence at least as $\sim r^2$. The divergence
of the nozzle nevertheless does not invalidate the
one-dimensional representation of the motion, because the function
$\sqrt{A(x)}$ is measured in units of the black hole mass, i.e. one
can ``pull'' the nozzle along the transverse coordinate $x$ in order
to make the area of the nozzle change as slowly as one wishes.
Such a ``pulling'' simply means that we obtain the
correspondence with a black hole of larger mass. Since the
quasinormal modes are inversely proportional to the mass of the
black hole, this amounts to a fixed multiplicative factor
when passing from the frequencies observed in
experiment to the QNMs of a black hole.
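For illustration, the construction can be carried out numerically; the following sketch (our notation) evaluates $g(r)$ from the closed form (\ref{1}), with the square applied to the whole sum, and then the cross section and the nozzle coordinate $x$ by quadrature:
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.integrate import cumulative_trapezoid

gam = 1.4

def g_schwarzschild(r, l, s):
    """Closed form for g(r); the event horizon is at r = 1."""
    pref = (gam + 1.0) / (2.0 * np.sqrt(2.0 * (gam - 1.0)))
    series = sum((-1) ** (n + s) * factorial(l + n)
                 / (factorial(n + s) * factorial(n - s) * factorial(l - n))
                 * r ** (n + 1) for n in range(s, l + 1))
    return pref * series**2

def nozzle_profile(l=0, s=0, r_max=6.0, npts=4000):
    """Cross section A and transverse coordinate x of the nozzle,
    parameterized by the Schwarzschild coordinate r (sketch)."""
    r = np.linspace(1.0 + 1e-4, r_max, npts)
    g = g_schwarzschild(r, l, s)
    root = 1.0 - np.sqrt(1.0 - g ** (-2.0))
    q = 2.0 * g**2 * root                           # rho^(1-gamma)
    A = np.sqrt(2.0) * q ** (1.0 / (gam - 1.0)) / np.sqrt(root)
    f = 1.0 - 1.0 / r
    dxdr = (gam + 1.0 - 2.0 * q) / (f * (gam - 1.0) * np.sqrt(q))
    x = cumulative_trapezoid(dxdr, r, initial=0.0)  # x ~ 0 at the throat
    return x, A
\end{verbatim}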
As can be seen from \cite{KZHPLB2}, the values of the quasinormal
modes are determined by the behavior of the effective potential in
some region near the black hole. The form of the effective potential
(and thereby of the de Laval nozzle) far from the black hole is less
significant for the QNM problem. Therefore we expect that such
experimental phenomena as surface friction and reflection of waves
from boundaries will not have considerable influence on the
observed picture.
\begin{figure*}
\caption{The form of the de Laval nozzle and the effective potential for the polar gravitational perturbation $\ell=2$.}
\label{figure4}
\resizebox{\linewidth}{!}{\includegraphics*{l_2_polar_.nozzle.eps}}
\end{figure*}
Now we discuss the isospectrality and the effective potential for
the polar and axial gravitational perturbations. We consider the
effective potential for the gravitational perturbation of the
polar type,
\begin{equation}\label{polar}
V(r)=f(r)\frac{9(1+\lambda r)+\lambda^3r^3+\lambda^2r^2(3+2r)}{r^3(3+\lambda r)^2},
\end{equation}
where $\lambda = (\ell+2)(\ell-1)$.
For the above polar type gravitational perturbations we can also
obtain the exact solution for the function $g(r)$, i.e. the form
of de Laval nozzle. The function $g(r)$ is given in the following
table for $\ell =2$, $3$, $4$, and $5$ and in the formula
(\ref{g}):
\begin{equation}\label{g}
g(r,\ell)=\frac{\gamma+1}{2\sqrt{2}\sqrt{\gamma-1}}\frac{r^2p(r,\ell)^2}{(3+(\ell+2)(\ell-1)r)^2}
\end{equation}
\begin{tabular}{|c|l|}
\hline
$\ell$&$p(r,\ell)$\\
\hline
$2$&$3-6r^2-4r^3$\\
$3$&$3-30r^2-20r^3+60r^4$\\
$4$&$3-90r^2-60r^3+630r^4-504r^5$\\
$5$&$3-210r^2-140r^3+3570r^4-6552r^5+3360r^6$\\
\hline
\end{tabular}
\vspace{3mm}
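For instance, for $\ell=2$, where $(\ell+2)(\ell-1)=4$, the formula (\ref{g}) together with the table gives explicitly
$$g(r,2)=\frac{\gamma+1}{2\sqrt{2}\sqrt{\gamma-1}}\,\frac{r^{2}\left(3-6r^{2}-4r^{3}\right)^{2}}{(3+4r)^{2}}.$$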
Apparently a solution exists for general $\ell$ as well, though it is
probably quite cumbersome. Analysis of Figs. 3-4 shows that
the forms of the nozzles for modeling polar and axial gravitational
perturbations are almost the same: the difference, although
nonvanishing, cannot be seen by eye. This is consistent with the fact
that the effective potentials for the axial and polar types also
differ only slightly.
\section{Discussion}
The suggested solution of the inverse problem for the
correspondence of the form of de Laval nozzle to the general form
of perturbations of the Schwarzschild black holes (i.e. for
perturbations with spin $s$ and multipole $\ell$) can be
generalized in many ways. First of all, it would be very
interesting to consider the massive vector \cite{vector} and
scalar \cite{scalar} field perturbations, because of the quite unusual
behavior of massive perturbations. Thus the so-called
quasi-resonances, i.e. infinitely long lived modes of the massive scalar
field \cite{quasiresonances}, could be observed in a de Laval
nozzle of an appropriate form as almost undamped sound waves. These waves
would certainly be reflected from the boundary of the nozzle and,
thereby, would break the quasinormal mode boundary conditions.
Even though in a real experiment one cannot obtain perfect QNM
boundary conditions (QNM b.c.), a considerable deviation from the QNM
b.c. should be observed when modeling the quasi-resonances.
Another possible generalization is to consider more general black
hole backgrounds: Reissner-Nordstr\"om, Schwarzschild-de Sitter,
or higher dimensional Schwarzschild black holes \cite{higherD}
with charge, $\Lambda$-term and Gauss-Bonnet-term \cite{GB},
including the brane-world black holes
\cite{braneQNMs}. The Gauss-Bonnet black hole
ringing \cite{GB} would be especially interesting to model in a
nozzle because an instability exists in some region of
values of the black hole parameters \cite{GB}. In the approach
considered in this paper we are limited only by spherical
symmetry, i.e. by the $\omega$-independence of the effective
potential. In addition, one could consider the flow of gas with a
time dependent initial speed at the compressor, which probably
could model the perturbations of the Vaidya evaporating black
holes \cite{Abdalla:2007hg}, \cite{Abdalla:2006vb}. We cannot be
sure that in all these cases the differential equation for the
form of the de Laval nozzle will be exactly integrable, yet one can
always find a numerical solution. We believe that further
research will address these interesting problems.
It should be recalled also that the \emph{precise} acoustic
analogy is only established for a scalar field. To be able to
reproduce the potential $V(r)$ for fields of different spins
certainly does not mean that one can reproduce all the
characteristics of those equations in an acoustic model.
Finally, let us note that the obtained acoustic analogue for the
perturbations of the Schwarzschild black holes is not limited to
quasinormal mode problems only, but allows a general investigation of the
propagation of classical and quantum fields, including such
processes as scattering and tunnelling of waves and particles.
\begin{acknowledgments}
This work was supported by \emph{Funda\c{c}\~{a}o de Amparo
\`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP)} and
\emph{Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico (CNPq)},
Brazil.
\end{acknowledgments}
\label{intro}
Language is an innate mechanism that humans develop \cite{Malmberg2012}. People with hearing problems also seek a way to communicate and need to develop a language that is directly accessible and effective for them. One such language is sign language \cite{Kushwah2017}. Sign languages are the only languages that Deaf people can use in order to communicate in a natural, effortless, easy, reciprocal and effective way. In Greece, Greek Deaf people use the Greek Sign Language (GSL), which is their natural language, as it is used not only by the majority of them but also by their hearing children, as well as by professionals and experts who work with deaf people.
Nowadays, the use of Information and Communication Technologies in everyday life has shown an increasing trend and has helped many people in their everyday life. The deaf/hard of hearing people could not have been unaffected by these rapid changes. The use of technology has the effect of reducing isolation, increasing independence, and offering educational, economic, and social opportunities to deaf/hard of hearing people \cite{MaioranaBasas2014}.
\subsection{Deaf and Sign Languages}
According to \cite{Woodward1972}, Deaf with ‘D’ are those deaf/hard of hearing people who belong to the Deaf community and use sign language in order to communicate, while deaf with ‘d’ are those who are hard of hearing and do not necessarily need sign language as a communication tool. Sign language is the natural language of Deaf people and not just an artificial communication system. Each country has its own Sign Language with structural features that differ from spoken languages. The gestures consist of regular structures and semantics that correspond to spoken languages \cite{Stokoe1980}. Deaf people of each country use their own Sign Language \cite{Panagiotakopoulos2003}.
\subsection{Greek Sign Language}
Greek Sign Language is the mother tongue of Greek Deaf people. Sign Language had been sidelined in many European countries, for many years. Greek Sign Language is a complete and independent language, recognized as "a non-written language with all the linguistic phenomena observed in the spoken languages” (grammar, syntax, dictionary, phonology). In addition, the natural language of Deaf people presents elements of morphology, syntax, semantics, and pragmatics, while the linguistic system of phonology is replaced by the corresponding italics \cite{Aarssen2018}.
In sign language, the combination of handshape with other elements, such as direction, position and movement, gives a specific meaning to a word. More specifically, direction has to do with the orientation that the palm takes, position shows the point where the hand is placed in relation to the body and movement shows other syntactic information such as the subject-object agreement \cite{Ackovska2012}, \cite{MaioranaBasas2014}, \cite{Papaspyrou2003}, \cite{Sandler2001}, \cite{Valli2000}. One or both hands are utilized in order to express the sign, while making the necessary movements. The signs that are rendered in this way are the main elements that distinguish sign language from spoken language \cite{Sapountzaki2015}.
Finally, there is a Finger Alphabet that is a morphological element of sign language \cite{Aarssen2018}. Finger alphabet represents Greek alphabet of spoken language and differs from signs. A Deaf person can use this alphabet in order to spell some Greek words as they are in a visualized way, or form names with his fingers \cite{Marschark2007}.
\subsection{Education and Information \& Communication Technologies (ICT)}
During the training process, the use of tools and software for educational purposes which utilize multimedia and internet technologies is proposed. In this way, students are enabled to develop and adapt the knowledge acquired at school to the modern educational environment, and have the opportunity to collect, represent, analyze, transfer, and utilize information. Mental processes and knowledge acquisition \cite{Jonassen2000} are utilized through an educational environment which results in the development of new skills and abilities. Therefore, a new learning culture is created and leads to a meaningful relationship between knowledge and its construction.
\section{Related Work - Applications of Sign language in Greece and Worldwide}
One of the most fundamental features of software applications is interaction. Interaction helps each user to be transformed from a passive recipient to an active member of learning process that keeps his interest undiminished.
Sign Language is a visual language, and with the contribution of video, it can be included in any application in order to transfer information and provide deaf/hard of hearing people with easy access to knowledge \cite{Kakoty2018},\cite{Kim2019}.
The majority of applications that have been developed to date are related to learning sign language and translating from signs to text or spoken language.
For example, the following are some applications from Greece and Worldwide:
\begin{enumerate}
\item Greek Sign Language: The web application has been operating with free access since 2016 and it was developed by the University of Patras. In this application, users can find signs of the basic vocabulary for everyday use. The application is aimed at children and adults who want to learn Greek sign language. However, there is no interaction between the user and the content \cite{Various2020a}.
\item Greek Sign Language Center: It is a free access web application. It has been developed by the Greek Sign Language Center and contains alphabetically ordered videos for sign learning. The platform is addressed to children and adults and provides quizzes for practice that contain videos with multiple-choice questions. Users can see a video and then choose the right answer.
\item DIOLKOS Software: Educational software for training in computers operation with terminology in Greek Sign Language, Greek, and English. Developed in 2006.
\item LEARNING MEANINGS Software: It is a teaching environment for Greek Sign Language (GSL) vocabulary developed in 2013. This software was addressed to students in the first grades of Primary School. The arrangement of its contents is based on the principles, characteristics and rules that regulate the vocabulary of the language.
\item CHILDREN'S DICTIONARY OF GREEK SIGN LANGUAGE Software: It included videos with Greek signs translated into the corresponding Greek words. It was addressed to children in kindergarten and the first grades of primary school. Developed in 2001.
\item Greek Sign Language Courses: This application contains words translated into GSL. It includes basic signs, complex signs, synonyms and antonyms, the finger alphabet and vocabulary groups. It is available for free and is addressed to all age groups \cite{Various2020}.
\item ASL-LEX: It is an online application that displays signs of American Sign Language. Users can see the frequency of use, lexical properties, the phonological coding, and other characteristics of each sign. Also, they have the ability to search for the written word of each sign in order to display it \cite{Various2020d}.
\item SpreadTheSign: It is an online application that gives signs in many different sign languages, for example, English of the United States, English of India, German, Greek, Japanese, etc. This application groups its content by subject and not alphabetically. Users can interact with the content by using 360-degree images, where there are points of interest that the user can click to see the corresponding signs \cite{Various2020e}.
\item Handspeak: It is an online application that displays the signs in English Sign Language but also in American Sign Language. This application gives the content in alphabetical order and is addressed to all age groups. The synonyms of each word are displayed and the user can see them by clicking on the corresponding word. Also, it is possible to display videos that show stories in sign language (storytelling in sign language) \cite{Lapiak2020}.
\end{enumerate}
\section{Application Description}
The proposed application (http://signlanguage.groupdvs.com/) has been developed aiming at the acquisition of a basic Greek Sign Language vocabulary that is used daily. In particular, it is addressed to children and adults who want to learn Greek Sign Language (GSL), deaf children who do not have prior knowledge of GSL, parents of deaf children and hearing children who want to learn GSL. It has to be mentioned that relevant applications with a purely educational character do not exist, either in Greece or internationally. As mentioned above, there are some dictionary-like applications which are used for Sign Language learning. Most of the aforementioned applications do not provide user interaction for practice and in-depth learning of sign language. The application was designed as an autonomous platform for tele-education and was created under the philosophy of open source software. In recent years our research team has developed a number of applications using open-source programming languages and tools such as PHP, MySQL and WordPress \cite{Fragulis2018}, \cite{Lazaridis2016}, \cite{Lazaridis2019}, \cite{Michailidi2020}, \cite{Papatsimouli2020}, \cite{Skordas2017}.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{Webserver.jpg}
\caption{Main Workflow}
\label{lab-f2}
\end{figure}
In the figures (\ref{lab-f2})-(\ref{lab-schema}), we give the main \& detailed Application workflow. The administrator can upload training material to the application and users have access to this material and can practice aiming at acquiring knowledge through their active participation (active learning).
\subsection{Requirements' Analysis}
The main points of the requirements analysis found for online educational applications are the following:
\begin{itemize}
\item Vocabulary categorization into semantic sections for the facilitation of users
\item Videos are the best format for use, except for the display of finger alphabets where images are appropriate, too
\item There should be a connection between each word and each meaning in a visual form (image/video) in order to support users who have a poor knowledge of Greek written language (e.g. children or adults with low educational level)
\item The interface should be as simple as possible and easy for users
\item In this application, the user will be able to see displays of signs/videos. In addition, practice will provide users with increased and appropriate knowledge through active learning
\item The repetition of videos should be possible
\item Audio integration should be available in order to support hard of hearing people
\end{itemize}
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{schema.jpg}
\caption{Detailed Application Workflow}
\label{lab-schema}
\end{figure}
\subsection{User Interface and Functionality}
User’s registration is not required in order to access the application.
During the operation of the application, there is immediate feedback for every action of the user.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{f1a.jpg}
\caption{Home page}
\label{fig-f1}
\end{figure}
\subsubsection{HomePage}
The Home page (http://signlanguage.groupdvs.com/) consists of 5 different menus: Home, Greek Sign Language, English Sign Language, GSL to ESL, Contact. The content is automatically translated depending on the language chosen by each user (English, Chinese, German, etc.) The Home page provides information about sign language and the Contact page allows the user to contact the administrator.
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=4cm,
keepaspectratio,]{f2.jpg}
\caption{Greek Sign Language submenu}
\label{fig-2}
\end{figure}
\subsubsection{Greek Sign Language Menu}
In this section, the user can learn the Greek Sign Language alphabet, search for signs which are categorized in alphabetical order and practice on the acquired knowledge. English Sign Language menu has the same structure as the Greek Sign Language one. Specifically, it contains alphabet presentation, search for signs by their first letter, and practice.
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=2cm,
keepaspectratio,]{f3.jpg}
\caption{Example of an alphabetical display of signs per letter}
\label{fig-3}
\end{figure}
The user selects the category that he wants, and all the words that start with the selected letter are displayed. As presented in Figure (\ref{fig-4}), for each selected word the written text, the sign (in video format), and the pronunciation are displayed, in order to support hard of hearing people who do not have complete hearing loss.
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=8cm,
keepaspectratio,]{f4.jpg}
\caption{Example of selected word display}
\label{fig-4}
\end{figure}
\subsubsection{Translation of Greek Sign Language to English Sign Language}
In this menu, the user can translate Greek sign language signs into English sign language and exercise in the translation of signs, as well.
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=3cm,
keepaspectratio,]{f5.jpg}
\caption{Greek sign language to English sign language}
\label{fig-5}
\end{figure}
The first option is translation of Greek sign language signs into English sign language and vice versa. The signs are categorized according to Greek Alphabet, and the user can select the word group to be displayed. At this point, the written text, the sign (in video format), and the pronunciation are displayed both in Greek and English (Figures \ref{fig-5}-\ref{fig-6}).
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=3cm,
keepaspectratio,]{f6.jpg}
\caption{Group content to view}
\label{fig-6}
\end{figure}
\subsubsection{Practice content task}
In this task, users can practice the Greek finger alphabet using Figures. Users see the letters and then enter the correct answer. At the end of the task, users can see the achieved results on the screen.
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=6cm,
keepaspectratio,]{f7.jpg}
\caption{Choose the right answer}
\label{fig-7}
\end{figure}
The user enters the answer in the box, and then presses the Check button (Figure \ref{fig-7}). The answer is corrected automatically depending on the result (Figures \ref{fig-8}-\ref{fig-9}). After completing the task, users can see the overall results (Figure \ref{fig-10}).
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=4cm,
keepaspectratio,]{f8.jpg}
\caption{Correct answer}
\label{fig-8}
\end{figure}
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=4cm,
keepaspectratio,]{f9.jpg}
\caption{Wrong answer}
\label{fig-9}
\end{figure}
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=8cm,
keepaspectratio,]{f10.jpg}
\caption{Final results presentation}
\label{fig-10}
\end{figure}
\subsubsection{Choose the correct answers for Greek Sign Language practice}
This task is about Greek Finger Alphabet practice. Various options are presented and the user chooses the right combination of a letter and a Figure. When the test is finished, the achieved results are displayed (Figure \ref{fig-11}).
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=5cm,
keepaspectratio,]{f11.jpg}
\caption{Select the correct match}
\label{fig-11}
\end{figure}
\subsubsection{Arrange in correct order task in the Greek finger alphabet}
In this task, the user places the Figures in the correct order so that the finger alphabet appears in alphabetical order (Figure \ref{fig-12}).
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=8cm,
keepaspectratio,]{f12.jpg}
\caption{Arrange in correct order}
\label{fig-12}
\end{figure}
The users can also see the time that they spent on the task and the total number of moves that were made. In addition, they can press either the Check button to complete the task or the Show Solution button to see the solution.
\subsubsection{English finger alphabet recognition task}
Here, the user recognizes the English finger alphabet (Figure \ref{fig-13}). The user sees figures depicting the English Sign Language letters and by moving the cursor over them, the English characters corresponding to them will appear.
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=10cm,
keepaspectratio,]{f13a.jpg}
\caption{Recognition of English Sign Language letters}
\label{fig-13}
\end{figure}
\subsubsection{Videos with Multiple Choice questions}
In this task, the user can see the signs in video format (Figure \ref{fig-14}), choose the correct answer from the available options that are shown and press the Check button to check the answer.
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{f14.jpg}
\caption{Videos with Multiple Choice questions task}
\label{fig-14}
\end{figure}
\subsubsection{Choose the first letter of the word}
In this task, the initial letters in English Sign Language and pictures that start with the corresponding letters are given (Figures \ref{fig-15} - \ref{fig-17a}). The user makes the right combinations, and feedback appears by pressing the Check button.
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=10cm,
keepaspectratio,]{f15a.jpg}
\caption{Choose the first letter of the word task}
\label{fig-15}
\end{figure}
\begin{figure}[b]
\includegraphics
[width=.5\textwidth,
height=10cm,
keepaspectratio,]{f16a.jpg}
\caption{Match the letters with the right Figure}
\label{fig-16}
\end{figure}
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=10cm,
keepaspectratio,]{f17a.jpg}
\caption{Feedback given}
\label{fig-17a}
\end{figure}
\subsubsection{Practice in storytelling}
In this task, the user watches some videos including signs and then uses these signs in order to write a short story (Figure \ref{fig-17}). It is one of the tasks that contribute to the development of imagination of users as everyone can write his/her own story without being right or wrong. It is a non-linear task that helps the development of written narrative speech.
\begin{figure}[b]
\includegraphics
[width=.5\textwidth,
height=10cm,
keepaspectratio,]{f17.jpg}
\caption{Practice in storytelling}
\label{fig-17}
\end{figure}
\subsubsection{Memory cards task}
In this task, the user matches the letters of the Greek Sign Language with the English Sign Language ones (Figure \ref{fig-18}). The user can see the elapsed time and the number of times the cards were turned.
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=5cm,
keepaspectratio,]{f18.jpg}
\caption{Memory Cards Task}
\label{fig-18}
\end{figure}
\subsubsection{Interactive Videos Task}
In this task, a sign is shown on a video and then it stops (Figure \ref{fig-19}). The user needs to answer what this sign means by clicking on an active point on the video, and choosing one of the available options that are shown. The user can check his answer by pressing the Check button.
\begin{figure}[h]
\includegraphics
[width=.5\textwidth,
height=10cm,
keepaspectratio,]{f19.jpg}
\caption{Interactive videos task}
\label{fig-19}
\end{figure}
\begin{figure}[b]
\includegraphics
[width=.5\textwidth,
height=10cm,
keepaspectratio,]{f20a.jpg}
\caption{Answer selection in interactive videos task}
\label{fig-20}
\end{figure}
\section{Conclusions}
Sign languages are similar to spoken languages and are the communication system used in deaf communities. They are acquired during childhood without being instructed, achieve the same social and mental functions as spoken languages and can be interpreted in real time \cite{Cormier2006}. The introduction of ICT in education can bring important results to the educational process. Moreover, the appropriate introduction of educational methods in online platforms can give the best results in knowledge acquisition and make the educational process more interesting. In the current application, users can learn both Greek and English sign language and translate between them as well. The educational content of all these categories has been grouped in alphabetical order in order to enable the user to find it easily. Each word is presented as written text both in Greek and English and as a sign in video format, and its pronunciation is provided as well in order to support hard of hearing people without total hearing loss. The current platform, using only open source software, is not just a simple dictionary of signs. Users can interact with the educational content and actively participate in the educational process through active learning. The user can practice on various types of tasks and receive feedback. The user's interaction is based on Figures depicting the finger alphabet and videos displaying signs. The most important and innovative tasks that were used for sign language learning are:
\begin{itemize}
\item Arrange in correct order, in which the user places the letters of the finger alphabet in right order
\item Memory Cards where users have to match each Greek finger alphabet letter with the English finger alphabet one
\item Interactive videos, which show a sign and the user chooses what this sign means by choosing one of the available options
\item Choose the first letter of the word, where initial letters in English sign language and pictures that start with the corresponding letters are given. The user recognizes the letters of the finger alphabet and makes the right combinations.
\item Storytelling where videos are displayed, and the user can write his own story based on the signs that he just saw.
\end{itemize}
A vector space $A$ is called a Poisson algebra provided that, beside addition, it has two $\mathbb{K}$-bilinear
operations which are related by derivation. First, with respect to multiplication, $A$
is a commutative associative algebra; denote the multiplication by $\mu(a,b)$ (or $a\cdot b$ or $ab$), where $a, b \in A$. Second, $A$ is a Lie algebra; traditionally here the Lie operation
is denoted by the Poisson bracket $\{a,b\}$, where $a, b \in A$. It is also assumed that
these two operations are connected by the Leibniz rule
$\{a\cdot b,c\} = a \cdot \{b,c\} +b \cdot \{a, c\}$, for $a$, $b$, $c \in A$ \cite{gr,kubo}. Poisson algebras are the key to recover Hamiltonian mechanics and are also central in the study of quantum groups. Manifolds with a Poisson algebra structure are known as Poisson manifolds, of which the symplectic manifolds and the Poisson-Lie groups are a special case. Their generalization is known as Nambu algebras \cite{n:generalizedmech,f:nliealgebras,GDito,GDitoF}, where the binary bracket is generalized to a ternary or $n$-ary bracket.
A Hom-algebra structure is a multiplication on a vector space where a usual structure is twisted by a homomorphism \cite{ms}. The first motivation to study nonassociative Hom-algebras comes from quasi-deformations of Lie algebras of
vector fields, in particular $q$-deformations of Witt and Virasoro algebras. The structure of (Non-Commutative) Hom-Poisson algebras is a twisted generalization of (Non-Commutative) Poisson algebras \cite{Yau:Noncomm}. A (Non-Commutative) Hom-Poisson algebra $A$ is defined by a linear self-map $\alpha$ and two binary operations $\{, \}$ (the Hom-Lie bracket) and $\mu$ (the Hom-associative product). The associativity, the Jacobi identity, and the Leibniz identity in a (Non-Commutative) Poisson algebra are replaced by their Hom-type (i.e. $\alpha$-twisted) identities. Motivated by a categorical study of Hom-algebras and new types of categories, generalized algebraic structures endowed with two commuting multiplicative linear maps, called BiHom-algebras, including BiHom-associative algebras, BiHom-Lie algebras and BiHom-bialgebras, were introduced in \cite{GrazianiMakhloufMeniniPanaite}. In particular, when the two linear maps coincide, BiHom-algebras reduce to Hom-algebras in some cases. Various studies deal with these new types of algebras, see \cite{RepBiHomLie,luimakhlouf,luimakhlouf2} and references therein.
The purpose of this paper is to study (Non-Commutative) BiHom-Poisson algebras. The paper is organized as follows. In Section 2, we review the definitions of BiHom-associative and BiHom-Lie algebras and then generalize the Poisson algebra notion to the BiHom case. This new structure is illustrated with some examples. In Section 3, we study the concept of a module over a BiHom-Poisson algebra, which is based on BiHom-modules of BiHom-associative and BiHom-Lie algebras. Then we define the semi-direct product of (Non-Commutative) BiHom-Poisson algebras. In Section 4, we describe BiHom-Poisson algebras using only one binary operation and the twisting maps via the polarization-depolarization process. We show that admissible BiHom-Poisson algebras, and only these BiHom-algebras, give rise to BiHom-Poisson algebras via polarization. In the last section, we give the classification of 2-dimensional BiHom-Poisson algebras.
\section{Definitions and Examples }
In this section, we recall some basic definitions about BiHom-associative and BiHom-Lie algebras \cite{GrazianiMakhloufMeniniPanaite} and then we generalize the Poisson algebra notion to the BiHom case. Throughout,
$\mathbb{K}$ denotes a commutative field of characteristic zero.
\begin{df}
A BiHom-associative algebra is a quadruple $(A,\mu,\alpha,\beta)$ consisting of vector space $A$, a bilinear mapping
$\mu:A\times A\rightarrow A$ and two homomorphisms $\alpha,\beta:A\rightarrow A$ such that for $x,y,z\in A$ we have
\begin{eqnarray}
&& \alpha \beta = \beta \alpha,\nonumber\\
&& \alpha \circ\mu=\mu\circ\alpha^{\otimes^2},~~\beta \circ\mu=\mu\circ\beta^{\otimes^2},\nonumber\\
&&\mu(\alpha(x),\mu(y,z))=\mu(\mu(x,y),\beta(z))~~(\textrm{BiHom-associative\ condition}),\label{Bihom associative}
\end{eqnarray}
where $\alpha\beta =\alpha\circ\beta$.
\end{df}
We recall that the BiHom-commutative condition is $\mu(\beta(x),\alpha(y))=\mu(\beta(y),\alpha(x))$, for all $x,y \in A$.
\begin{df} A BiHom-Lie algebra is a quadruple $(A,[\cdot,\cdot],\alpha,\beta)$ consisting of vector space $A$, a bilinear mapping
$[.,.]:A\times A\rightarrow A$ and two homomorphisms $\alpha,\beta:A\rightarrow A$ such that for $x,y,z\in A$ we have
\begin{eqnarray}
&& \alpha \beta = \beta \alpha,\nonumber\\
&& \alpha ([x,y])=[\alpha(x),\alpha(y)],~~\beta ([x,y])=[\beta(x),\beta(y)],\nonumber\\
&& [\beta(x),\alpha(y)]=-[\beta(y),\alpha(x)], \ (\textrm{BiHom-skew-symmetric})\label{anti-biho}\\
&& \circlearrowleft_{x,y,z}[\beta^2(x),[\beta(y),\alpha(z)]]= 0~~(\textrm{BiHom-Jacobi\ condition}),\label{Bihom jacobi}
\end{eqnarray}
where $\circlearrowleft_{x,y,z}$ denotes summation over the cyclic permutation on $x,y,z$.
\end{df}
If $\alpha$ is a bijective morphism, then the identity \eqref{Bihom jacobi} can be written \begin{equation}\label{Bihom jacobi1}
[\beta^2(x),[\beta(y),\alpha(z)]]=[[\alpha^{-1}\beta^2(x),\beta(y)],\alpha\beta(z)]+[\beta^2(y),[\beta(x),\alpha(z)]].
\end{equation}
\begin{df} A Poisson algebra is a triple $(A, \{\cdot,\cdot\}, \mu)$ consisting of a vector space $A$ and two bilinear maps $\{\cdot,\cdot\},\ \mu : A\times A \longrightarrow A$ satisfying
\begin{enumerate}
\item $(A, \{\cdot,\cdot\})$ is a Lie algebra,
\item $(A, \mu)$ is a commutative associative algebra,
\item for all $x, y \in A$ :
\begin{equation}\label{a}
\{\mu(x,y),z\} = \mu(\{x, z\},y)+\mu(x, \{y,z\})\ (\textrm{Compatibility\ identity}).
\end{equation}
\end{enumerate}
If $\mu$ is non-commutative then $(A, \{\cdot,\cdot\}, \mu)$ is a non-commutative Poisson algebra.
\end{df}
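A standard example, recalled here only for orientation, is the polynomial algebra $A=\mathbb{K}[x,y]$ with the usual multiplication and the bracket
$$\{f,g\}=\frac{\partial f}{\partial x}\frac{\partial g}{\partial y}-\frac{\partial f}{\partial y}\frac{\partial g}{\partial x},$$
which satisfies the Leibniz compatibility identity above.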
\begin{df} A BiHom-Poisson algebra is a 5-uple $(A, \{\cdot,\cdot\}, \mu, \alpha,\beta)$ consisting of a vector space $A$, two bilinear maps $\{\cdot,\cdot\},\ \mu : A\times A \longrightarrow A$ and two linear maps $\alpha,\ \beta:A \longrightarrow A$ satisfying
\begin{enumerate}
\item $(A, \{\cdot,\cdot\},\alpha,\beta)$ is a BiHom-Lie algebra,
\item $(A, \mu, \alpha,\beta)$ is a BiHom-commutative BiHom-associative algebra,
\item for all $x, y \in A$ :
\begin{equation}\label{a}
\{\mu(x,y),\alpha\beta(z)\} = \mu(\{x, \beta(z)\},\alpha(y))+\mu(\alpha(x), \{y,\alpha(z)\}).
\end{equation}
\end{enumerate}
If $\mu$ is non-BiHom-commutative then $(A, \{\cdot,\cdot\}, \mu, \alpha,\beta)$ is a non-BiHom-commutative BiHom-Poisson algebra.
\end{df}
We are using here a right-handed Leibniz rule; one may call such algebras right BiHom-Poisson algebras. We refer to \cite{LMMP20} for left BiHom-Poisson algebras.
\begin{rem}
Obviously, a BiHom-Poisson algebra $(A, \{\cdot,\cdot\}, \mu, \alpha,\beta)$ for which $\alpha=\beta$ and $\alpha$ injective is just a
Hom-Poisson algebra $(A, \{\cdot,\cdot\}, \mu, \alpha)$.
\end{rem}
\begin{prop} Let $(A, \mu, \alpha,\beta)$ be a BiHom-associative algebra where $\alpha$ and $\beta$ are two bijective homomorphisms. Then the $5$-uple $(A, \{\cdot,\cdot\},\mu,\alpha,\beta)$, where the bracket is defined by
$$\{x,y\}=\mu(x,y)-\mu(\alpha^{-1}\beta(y),\alpha\beta^{-1}(x)),$$
for $x, y \in A$
is a non-commutative BiHom-Poisson algebra.
\end{prop}
\begin{proof}We show that $\alpha$ and $\beta$ are compatible with the bracket $\{\cdot,\cdot\}$ . For all $x, y \in A$, we have
\begin{eqnarray*}
\{\alpha(x),\alpha(y)\}&=& \mu(\alpha(x),\alpha(y))-\mu(\alpha^{-1}\beta(\alpha(y)),\alpha\beta^{-1}(\alpha(x)))\\
&=& \mu(\alpha(x),\alpha(y))-\mu(\beta(y),\alpha^2\beta^{-1}(x))\\
&=& \alpha(\{x,y\}).
\end{eqnarray*}
The second equality holds since $\alpha$ and $\beta$ commute, $\alpha\circ \beta=\beta\circ \alpha$. In the same way, we check that
$\beta(\{x, y\})= \{\beta(x),\beta(y)\}$.\\
The BiHom-skew-symmetry $\{\beta(x),\alpha(y)\}=-\{\beta(y),\alpha(x)\}$ is obvious.\\
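Indeed, unwinding the definition of the bracket and using $\alpha\beta=\beta\alpha$, we get
$$\{\beta(x),\alpha(y)\}=\mu(\beta(x),\alpha(y))-\mu(\alpha^{-1}\beta(\alpha(y)),\alpha\beta^{-1}(\beta(x)))
=\mu(\beta(x),\alpha(y))-\mu(\beta(y),\alpha(x)),$$
which is manifestly antisymmetric under the exchange of $x$ and $y$.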
Therefore, it remains to prove the BiHom-Jacobi identity. For all $x, y,z \in A$, we have
\begin{eqnarray*}
\{\beta^2(x),\{\beta(y),\alpha(z)\}\}&=& \mu(\beta^2(x),\mu(\beta(y),\alpha(z)))-\mu(\mu(\alpha^{-1}\beta^2(y),\beta(z)),\alpha\beta(x)) \\
&-& \mu( \beta^2(x),\mu(\beta(z),\alpha(y)))+\mu(\mu(\alpha^{-1}\beta^2(z),\beta(y)),\alpha\beta(x)).
\end{eqnarray*}
And, we have
\begin{eqnarray*}
\{\beta^2(y),\{\beta(z),\alpha(x)\}\}
&=&\mu(\beta^2(y),\mu(\beta(z),\alpha(x)))-\mu(\mu(\alpha^{-1}\beta^2(z),\beta(x)),\alpha\beta(y)) \\
&-& \mu( \beta^2(y),\mu(\beta(x),\alpha(z)))+\mu(\mu(\alpha^{-1}\beta^2(x),\beta(z)),\alpha\beta(y)) .
\end{eqnarray*}
Similarly,
\begin{eqnarray*}
\{\beta^2(z),\{\beta(x),\alpha(y)\}\}
&=& \mu(\beta^2(z),\mu(\beta(x),\alpha(y)))-\mu(\mu(\alpha^{-1}\beta^2(x),\beta(y)),\alpha\beta(z)) \\
&-& \mu( \beta^2(z),\mu(\beta(y),\alpha(x)))+ \mu(\mu(\alpha^{-1}\beta^2(y),\beta(x)),\alpha\beta(z)).
\end{eqnarray*}
By BiHom-associativity, we find that
\begin{eqnarray*}
\circlearrowleft_{x,y,z}\{\beta^2(x),\{\beta(y),\alpha(z)\}\}&=& 0.
\end{eqnarray*}
Now, we show the compatibility condition. For $x,y,z\in A$, we have
\begin{align*}
& \{\mu(x,y),\alpha\beta(z)\} -\mu(\{x, \beta(z)\},\alpha(y))-\mu(\alpha(x), \{y,\alpha(z)\}) \\
= &\mu(\mu(x,y),\alpha\beta(z))-\mu(\beta^2(z),\mu(\alpha\beta^{-1}(x),\alpha\beta^{-1}(y))) -\mu(\mu(x, \beta(z)),\alpha(y))\\&+\mu(\mu(\alpha^{-1}\beta^2(z),\alpha\beta^{-1}(x)),\alpha(y)) -\mu(\alpha(x),\mu(y,\alpha(z)))+\mu(\alpha(x),\mu(\beta^2(z),\alpha\beta^{-1}(y)))=0.
\end{align*}
\end{proof}
\begin{df}
Let $(A,\mu,\{.,.\},\alpha,\beta)$ and $(A',\mu',\{.,.\}',\alpha',\beta')$ be two BiHom-Poisson algebras.
A linear map $f : A\rightarrow A'$ is a \emph{morphism} of BiHom-Poisson algebras if it satisfies for all $x_1,x_2\in A$:
\begin{eqnarray}
f(\{x_{1},x_{2}\}) &=& \{f(x_{1}),f(x_{2})\}' ,\\
f \circ \mu&=&\mu'\circ f^{\otimes 2},\\
f\circ \alpha &= & \alpha' \circ f.\\
f\circ \beta &= & \beta' \circ f.
\end{eqnarray}
It is said to be a \emph{weak morphism} if only the first two conditions hold.
\end{df}
\begin{df}
Let $(A,\mu,\{.,.\},\alpha,\beta)$ be a BiHom-Poisson algebra.
It is said to be \emph{multiplicative} if
\begin{eqnarray*}
\alpha(\{x_{1},x_{2}\})&=&\{\alpha(x_{1}),\alpha(x_{2})\},\\
\beta(\{x_{1},x_{2}\})&=&\{\beta(x_{1}),\beta(x_{2})\},\\
\alpha \circ \mu&=&\mu \circ \alpha^{\otimes 2}.\\
\beta \circ \mu&=&\mu \circ \beta^{\otimes 2}.
\end{eqnarray*}
It is said to be \emph{regular} if $\alpha$ and $\beta$ are bijective.
\end{df}
\begin{prop}\label{twist}
Let $(A, \{\cdot,\cdot\}, \mu)$ be an ordinary Poisson algebra over a field $\mathbb{K}$ and let
$\alpha,\beta: A\rightarrow A$ be two commuting morphisms. Define the two linear maps $\{\cdot,\cdot\}_{ \alpha,\beta},\mu_{ \alpha,\beta}:A\otimes A\longrightarrow A$ by
$$\{x,y\}_{ \alpha,\beta}=\{ \alpha(x),\beta(y)\}\ \textrm{and} \ \mu_{ \alpha,\beta}(x,y)=\mu (\alpha(x),\beta(y)),$$ for all $x,y\in A$.\\
Then $A_{ \alpha,\beta}
:=(A, \{\cdot,\cdot\}_{ \alpha,\beta}, \mu_{ \alpha,\beta} ,\alpha,\beta)$ is a BiHom-Poisson algebra.
\end{prop}
\begin{proof}
We already know that $(A, \mu_{\alpha,\beta}, \alpha,\beta)$ is a BiHom-commutative BiHom-associative algebra and that $(A, \{\cdot ,\cdot \}_{\alpha,\beta}, \alpha,\beta)$ is a BiHom-Lie algebra. It remains to check the BiHom-Leibniz identity. For $x,\ y,\ z\in A$, we have
\begin{align*}
& \{\mu_{ \alpha,\beta}(x,y),\alpha\beta(z)\}_{ \alpha,\beta} - \mu_{ \alpha,\beta}(\{x,\beta(z)\}_{ \alpha,\beta},\alpha(y))-\mu_{ \alpha,\beta}(\alpha(x), \{y,\alpha(z)\}_{ \alpha,\beta})\\
&= \{\mu(\alpha^2(x),\alpha\beta(y)),\alpha\beta^2(z)\} - \mu(\{\alpha^2(x), \alpha\beta^2(z)\},\alpha\beta(y))-\mu(\alpha^2(x), \{\alpha\beta(y),\alpha\beta^2(z)\}) \\
& = \{\mu(X,Y),Z)\} - \mu(\{X, Z\},Y)-\mu(X, \{Y,Z\})=0,
\end{align*}
where $X=\alpha^2(x),\ Y=\alpha\beta(y),\ Z=\alpha\beta^2(z).$
\end{proof}
\begin{rem}
Let $(A, \{\cdot,\cdot\}, \mu,\alpha,\beta)$ be a BiHom-Poisson algebra and $\alpha', \beta': A \to A$ two BiHom-Poisson algebra morphisms such that any two of the maps $\alpha, \beta, \alpha', \beta'$ commute. Define new multiplications on $A$ by:
\begin{align*}
& \{x,y\}'= \{\alpha'(x) , \beta'(y)\}, \quad
\mu'(x ,y)=\mu( \alpha'(x) , \beta'(y)).
\end{align*}
Then, $(A, \{\cdot,\cdot\}', \mu',\alpha'\alpha,\beta'\beta)$ is a BiHom-Poisson algebra.
\end{rem}
\begin{exa}\label{example1HomPoisson}
Let $\{e_1,e_2,e_3\}$ be a basis of a $3$-dimensional vector space
$A$ over $\mathbb{K}$. Consider the following multiplication $\mu$, skew-symmetric
bracket and linear map $\alpha$ on $A=\mathbb{K}^3${\rm :}
\[
\begin{array}{ll}
\begin{array}{lll}
\mu(e_1,e_1) &=& e_1, \ \\
\mu(e_1,e_2) &=& \mu(e_2,e_1)=e_3,\\
\end{array}
& \quad
\begin{array}{lll}
\{ e_1,e_2 \}&=& a e_2+ b e_3, \ \\
\{e_1, e_3 \}&=& c e_2+ d e_3, \ \\
\end{array}
\end{array}
\]
\[
\alpha (e_1)= \lambda_1 e_2+\lambda_2 e_3 , \quad
\alpha (e_2) =\lambda_3 e_2+\lambda_4 e_3 , \quad
\alpha (e_3)=\lambda_5 e_2+\lambda_6 e_3,
\]
where $a,b,c,d,\lambda_1,\lambda_2,\lambda_3,\lambda_4,\lambda_5,\lambda_6$ are parameters
in $\mathbb{K}$.
Assume that $\beta=Id$, hence $\alpha\beta=\beta\alpha$. Using Proposition \ref{twist}, we construct the following multiplicative BiHom-Poisson algebra defined by
\[
\begin{array}{ll}
\begin{array}{lll}
\mu_{\alpha\beta}(e_1,e_1) &=& \lambda_{1} e_3, \ \\
\mu_{\alpha\beta}(e_2,e_1) &=& \lambda_{3} e_3,\\
\mu_{\alpha\beta}(e_3,e_1) &=& \lambda_{5} e_3,\\
\end{array}
& \quad
\begin{array}{lll}
\{ e_1,e_1 \}_{\alpha\beta}&=& -(\lambda_{1} a+\lambda_{2} c) e_2 -(\lambda_{1} b+\lambda_{2} d) e_3, \ \\
\{ e_2,e_1 \}_{\alpha\beta}&=& -(\lambda_{3} a+\lambda_{4} c) e_2 -(\lambda_{3} b+\lambda_{4} d) e_3, \ \\
\{ e_3,e_1 \}_{\alpha\beta}&=& -(\lambda_{5} a+\lambda_{6} c) e_2 -(\lambda_{5} b+\lambda_{6} d) e_3. \ \\
\end{array}
\end{array}
\]
\end{exa}
Then we give an example of a BiHom-Poisson algebra where $\alpha$ and $\beta$ are different, and where $\{e_1, e_2, e_3\}$ is a basis of a 3-dimensional vector space $A$ over $\mathbb{K}$.
\begin{exa}\label{example2biHomPoisson}
\[
\alpha (e_1)= e_2 , \quad
\alpha (e_2) = e_2 , \quad
\alpha (e_3)= e_3.
\]
\[
\beta (e_1)= e_1 , \quad
\beta (e_2) = e_2 .
\]
\[
\begin{array}{ll}
\begin{array}{lll}
\mu(e_1,e_2) &=& \lambda_{1} e_2, \ \\
\mu(e_2,e_1) &=& \lambda_{1} e_1,\\
\end{array}
& \quad
\begin{array}{lll}
\{ e_1,e_2 \}= a e_3, \ \\
\end{array}
\end{array}
\]
where $a,\lambda_1$ are parameters
in $\mathbb{K}$.
\end{exa}
Another example of BiHom-Poisson algebras of dimension 3 with basis $\{e_1, e_2, e_3\}$ is given where $\alpha$ and $\beta$ are diagonal.
\begin{exa}\label{example3biHomPoisson}
\[
\alpha (e_1)= a e_1 , \quad
\alpha (e_2) = b e_2 .
\]
\[
\beta (e_1)= c e_1 , \quad
\beta (e_2) = d e_2 .
\]
\[
\begin{array}{ll}
\begin{array}{lll}
\mu(e_3,e_3) = \lambda_{1} e_3, \ \\
\end{array}
& \quad
\begin{array}{lll}
\{ e_3,e_3 \}= \lambda_{2} e_3, \ \\
\end{array}
\end{array}
\]
where $a,b,c,d,\lambda_1,\lambda_2$ are parameters in $\mathbb{K}$.
\end{exa}
In the sequel we define the direct sum of two BiHom-Poisson algebras and the tensor product of a BiHom-Poisson algebra with a BiHom-associative symmetric algebra.
\begin{thm}
Let $(A_{1},\mu_{1},\{.,.\}_{1},\alpha_{1},\beta_{1})$ and $(A_{2},\mu_{2},\{.,.\}_{2},\alpha_{2},\beta_{2})$ be two (non-BiHom-commutative) BiHom-Poisson algebras. Let $\mu_{A_{1}\oplus A_{2}}$ be a bilinear map on $A_{1}\oplus A_{2}$ defined for $x_1,y_1\in A_1$ and $x_2,y_2\in A_2$ by
$$\mu_{A_{1}\oplus A_{2}}(x_{1}+x_{2},y_{1}+y_{2})=\mu_{1}(x_{1},y_{1})+\mu_{2}(x_{2},y_{2}),$$ $\{.,.\}_{A_{1}\oplus A_{2}}$ a bilinear map defined by $$\{x_{1}+x_{2},y_{1}+y_{2}\}_{A_{1}\oplus A_{2}}=\{x_1,y_1\}_1+\{x_2,y_2\}_2$$ and $\alpha_{A_{1}\oplus A_{2}}$ a linear map defined by $$\alpha_{A_{1}\oplus A_{2}}(x_1+x_2)=\alpha_1(x_1)+\alpha_2(x_2),$$
and $\beta_{A_{1}\oplus A_{2}}$ a linear map defined by $$\beta_{A_{1}\oplus A_{2}}(x_1+x_2)=\beta_1(x_1)+\beta_2(x_2).$$ Then
$$(A_{1}\oplus A_{2},\mu_{A_{1}\oplus A_{2}},\{.,.\}_{A_{1}\oplus A_{2}},\alpha_{A_{1}\oplus A_{2}},\beta_{A_{1}\oplus A_{2}})$$
is a BiHom-Poisson algebra.
\end{thm}
\begin{thm}
Let $(A,\mu,\{.,.\},\alpha,\beta)$ be a BiHom-Poisson algebra and $(B,\mu',\alpha',\beta')$ be a BiHom-associative symmetric algebra, then
$$(A\otimes B,\{.,.\}_{A\otimes B}, \mu \otimes \mu',\alpha\otimes\alpha',\beta\otimes\beta'),$$
is a BiHom-Poisson algebra, where $\{.,.\}_{A\otimes B}=\{.,.\}\otimes\mu'. $
\end{thm}\begin{proof}
Since $\mu$ and $\mu'$ are both BiHom-associative multiplications, the tensor product $\mu \otimes \mu'$ is BiHom-associative. Also, the BiHom-commutativity of
$\mu \otimes \mu'$, the BiHom-skewsymmetry of $\{.,.\}$ and the BiHom-commutativity of $\mu$ imply the BiHom-skewsymmetry of $\{.,.\}_{A\otimes B}$. Similarly, since the BiHom-Jacobi identity for $\{.,.\}$ and the BiHom-associativity of $\mu'$ are satisfied, $\{.,.\}_{A\otimes B}$ is a BiHom-Lie bracket on $A\otimes B$. Therefore, it remains
to check the BiHom-Leibniz identity.
We have
\begin{align*}
LHS=&\{\mu \otimes\mu'(a_{1}\otimes b_{1},a_{2}\otimes b_{2}),\alpha\beta\otimes\alpha'\beta'(a_{3}\otimes b_{3})\}_{A\otimes B}\\
&=\{\mu(a_{1},b_{1})\otimes \mu'(a_{2},b_{2}),\alpha\beta(a_{3})\otimes\alpha'\beta'(b_{3})\}_{A\otimes B}\\
&=\underbrace{\{\mu(a_{1},b_{1}),\alpha\beta(a_{3})\}_{A}}_{a'}\otimes \underbrace{\mu'(\mu'(a_{2},b_{2}),\alpha'\beta'(b_{3}))}_{b'}\\
\end{align*}
and
\begin{align*}
RHS=&\mu \otimes\mu'(a_{1}\otimes b_{1},\{\beta(a_{2})\otimes \beta'(b_{2}),\alpha(a_{3})\otimes\alpha'( b_{3})\}_{{A\otimes B}})\\
&+\mu \otimes\mu'(\{\alpha(a_{1})\otimes\alpha'( b_{1}),a_{3}\otimes b_{3}\}_{A\otimes B},\alpha \otimes \beta'(a_{2}\otimes b_{2}))\\
&=\mu \otimes\mu'(\alpha (a_{1})\otimes \alpha'(b_{1}),\{a_{2},\alpha(a_{3})\}\otimes \mu'(b_{2},\alpha'(b_{3})))\\
&+\mu \otimes\mu'(\{a_{1},\beta(a_{3})\}\otimes \mu'(b_{1},\beta'(b_{3})),\alpha (a_{2})\otimes \alpha'(b_{2}))\\
&=\underbrace{\mu(\alpha (a_{1}),\{a_{2},\alpha(a_{3})\})}_{c'}\otimes \underbrace{\mu'(\alpha'(b_{1}),\mu'(b_{2},\alpha'(b_{3})))}_{d'}\\
&+\underbrace{\mu(\{a_{1},\beta(a_{3})\},\alpha (a_{2}))}_{e'}\otimes \underbrace{\mu'(\mu'(b_{1},\beta'(b_{3})),\alpha'(b_{2})}_{f'}.
\end{align*}
With BiHom-Leibniz identity we have $a'=c'+e'$, and using the BiHom-associativity condition
we have $b'=d'=f'$. Therefore the left hand side is equal to the right hand side and the BiHom-Leibniz identity is proved. Then
\begin{center}
$(A\otimes B, \mu \otimes \mu',\{.,.\}_{A\otimes B},(\alpha\otimes\alpha',\beta \otimes\beta'))$
\end{center}
is a BiHom-Poisson algebra.
\end{proof}
\section{Modules and semi-direct product of BiHom-Poisson algebras}
In this section we introduce a representation theory of BiHom-Poisson algebras and provide a semi-direct product construction.
\begin{df} A representation of a BiHom-Lie algebra $(A, \{\cdot,\cdot\}, \alpha,\beta)$ on a vector space $V$ with respect to two commuting maps $\gamma,\nu\in End(V)$ is a linear map $\rho_{\{\cdot,\cdot\}}:A\longrightarrow End(V)$, such that for any $x, y\in A$, the following equalities are satisfied:
\begin{eqnarray}
&&\rho_{\{\cdot,\cdot\}}(\alpha(x))\circ \gamma=\gamma\circ\rho_{\{\cdot,\cdot\}}(x),\\ &&\ \rho_{\{\cdot,\cdot\}}(\beta(x))\circ \nu=\nu\circ\rho_{\{\cdot,\cdot\}}(x),\\
&&\rho_{\{\cdot,\cdot\}}(\{\beta(x), y\})\circ \nu=\rho_{\{\cdot,\cdot\}}(\alpha\beta(x))\circ\rho_{\{\cdot,\cdot\}}(y)
-\rho_{\{\cdot,\cdot\}}(\beta(y))\circ\rho_{\{\cdot,\cdot\}}(\alpha(x)).
\end{eqnarray}
\end{df}
\begin{prop}
Let $(A,\{\cdot,\cdot\})$ be a Lie algebra and $\rho_{\{\cdot,\cdot\}}:A\rightarrow End(V)$ be a representation of the Lie algebra on $V$. Let $\alpha,\beta:A\rightarrow A$ be two commuting morphisms and let $\gamma,\nu:V\rightarrow V$ be two commuting linear maps such that $\rho_{\{\cdot,\cdot\}}(\alpha(x))\circ \gamma=\gamma\circ\rho_{\{\cdot,\cdot\}}(x)$, $ \rho_{\{\cdot,\cdot\}}(\beta(x))\circ \nu=\nu\circ\rho_{\{\cdot,\cdot\}}(x)$ and $ \rho_{\{\cdot,\cdot\}}(\alpha(x))\circ \nu=-\rho_{\{\cdot,\cdot\}}(\beta(x))\circ \gamma$. Define $\widetilde{\rho}_{\{\cdot,\cdot\}}(x)=\rho_{\{\cdot,\cdot\}}(\alpha(x))\circ\gamma.$ Then $(V,\widetilde{\rho}_{\{\cdot,\cdot\}},\gamma,\nu)$ is a representation of the BiHom-Lie algebra $(A,\{\cdot,\cdot\}_{\alpha,\beta},\alpha,\beta)$.
\end{prop}
\begin{proof}
Let $x,y \in A$,
\small{\begin{eqnarray*}
&&\widetilde{\rho}_{\{\cdot,\cdot\}}(\{\beta(x), y\}_{\alpha,\beta})\circ \nu-\widetilde{\rho}_{\{\cdot,\cdot\}}(\alpha\beta(x))\circ\widetilde{\rho}_{\{\cdot,\cdot\}}(y)
+\widetilde{\rho}_{\{\cdot,\cdot\}}(\beta(y))\circ\widetilde{\rho}_{\{\cdot,\cdot\}}(\alpha(x))\\&&
=\widetilde{\rho}_{\{\cdot,\cdot\}}(\{\alpha\beta(x), \beta(y)\})\circ \nu-\widetilde{\rho}_{\{\cdot,\cdot\}}(\alpha\beta(x))\circ\rho_{\{\cdot,\cdot\}}(\alpha(y))\circ \nu
+\widetilde{\rho}_{\{\cdot,\cdot\}}(\beta(y))\circ\rho_{\{\cdot,\cdot\}}(\alpha^2(x))\circ \nu=\\&&\rho_{\{\cdot,\cdot\}}(\{\alpha^2\beta(x), \alpha\beta(y)\})\circ \nu^2-\rho_{\{\cdot,\cdot\}}(\alpha^2\beta(x))\circ\rho_{\{\cdot,\cdot\}}(\alpha\beta(y))\circ \nu^2
+\rho_{\{\cdot,\cdot\}}(\alpha\beta(y))\circ\rho_{\{\cdot,\cdot\}}(\alpha^2\beta(x))\circ \nu^2\\&&=
\big(\rho_{\{\cdot,\cdot\}}(\{\alpha^2\beta(x), \alpha\beta(y)\})-\rho_{\{\cdot,\cdot\}}(\alpha^2\beta(x))\circ\rho_{\{\cdot,\cdot\}}(\alpha\beta(y))
+\rho_{\{\cdot,\cdot\}}(\alpha\beta(y))\circ\rho_{\{\cdot,\cdot\}}(\alpha^2\beta(x))\big)\circ \nu^2=0.
\end{eqnarray*}}
\end{proof}
Let $(A,\{\cdot,\cdot\},\alpha,\beta)$ be a BiHom-Lie algebra. Let $\rho_{\{\cdot,\cdot\}}:A\rightarrow End(V)$ be a representation of the BiHom-Lie algebra on $V$ with respect to $\gamma$ and $\nu$. Assume that the maps $\alpha$ and $\nu$ are bijective. On the direct sum of the underlying vector spaces $A\oplus V$, define $\widetilde{\alpha},\widetilde{\beta}:A\oplus V\longrightarrow A\oplus V$ by
\begin{eqnarray*}
\widetilde{\alpha}(x+a) &=& \alpha(x)+\gamma(a),\\
\widetilde{\beta}(x+a) &=& \beta(x)+\nu(a),
\end{eqnarray*}
and define a skewsymmetric bilinear map $[\cdot,\cdot]_{A\oplus V}:A\oplus V \times A\oplus V\longrightarrow A\oplus V$ by
\begin{eqnarray}
[(x+a),(y+b)]_{A\oplus V} &=& \{x,y\}+\rho_{\{\cdot,\cdot\}}(x)(b)-\rho_{\{\cdot,\cdot\}}(\alpha^{-1}\beta(y))(\gamma\nu^{-1}(a)).
\end{eqnarray}
\begin{thm}\label{LieProduitSDirect}\cite{RepBiHomLie} With the above notations, $(A\oplus V,[\cdot,\cdot]_{A\oplus V},\widetilde{\alpha},\widetilde{\beta})$ is a BiHom-Lie algebra.
\end{thm}
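As a quick consistency check, not spelled out in the statement above, the bracket $[\cdot,\cdot]_{A\oplus V}$ is BiHom-skew-symmetric with respect to $\widetilde{\alpha},\widetilde{\beta}$: using only $\alpha\beta=\beta\alpha$ and the BiHom-skew-symmetry of $\{\cdot,\cdot\}$,
$$[\widetilde{\beta}(x+a),\widetilde{\alpha}(y+b)]_{A\oplus V}
=\{\beta(x),\alpha(y)\}+\rho_{\{\cdot,\cdot\}}(\beta(x))(\gamma(b))-\rho_{\{\cdot,\cdot\}}(\beta(y))(\gamma(a))
=-[\widetilde{\beta}(y+b),\widetilde{\alpha}(x+a)]_{A\oplus V}.$$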
\begin{df}
Let $(A,\mu,\alpha,\beta)$ be a commutative BiHom-associative algebra. A representation (or a BiHom-module) on a vector space $V$ with respect to $\gamma,\nu\in End(V)$ is a linear map $\rho_{\mu}:A\longrightarrow End(V)$, such that for any $x, y\in A$, the following equalities are satisfied:
\begin{eqnarray}
& \rho_{\mu}(\alpha(x))\circ \gamma=\gamma\circ\rho_{\mu}(x),\ \rho_{\mu}(\beta(x))\circ \nu=\nu\circ\rho_{\mu}(x),
\\
& \rho_{\mu}(\mu(x, y))\circ \nu=\rho_{\mu}(\alpha(x))\rho_{\mu}(y).
\end{eqnarray}
\end{df}
Let $(A,\mu,\alpha,\beta)$ be a commutative BiHom-associative algebra and $(V,\rho_\mu,\gamma,\nu)$ be a representation of $A$. On the direct sum of the underlying vector spaces $A\oplus V$, define $\widetilde{\alpha},\widetilde{\beta}:A\oplus V\longrightarrow A\oplus V$ by $$\widetilde{\alpha}(x+a)=\alpha(x)+\gamma(a)\ \textrm{and}\ \widetilde{\beta}(x+a)=\beta(x)+\nu(a) $$ and define a bilinear map $\mu_{A\oplus V}:A\oplus V \times A\oplus V\longrightarrow A\oplus V$ by
\begin{eqnarray}
\mu_{A\oplus V}(x+a, y+b)&=& \mu(x, y)+\rho_\mu(x)(b)+\rho_\mu(\alpha^{-1}\beta(y))(\gamma\nu^{-1}(a)).
\end{eqnarray}
\begin{thm}\label{AssProduitSDirect} With the above notations, $(A\oplus V,\mu_{A\oplus V},\widetilde{\alpha},\widetilde{\beta})$ is a commutative BiHom-associative algebra.
\end{thm}
\begin{proof} By the fact that $\alpha,\beta$ are algebra homomorphisms with respect to $\mu$, for $x,y\in A,\ a,b\in V$, we have
\begin{align*}
\widetilde{\alpha}( \mu_{A\oplus V}(x+a, y+b)) & = \widetilde{\alpha}(\mu(x, y)+\rho_\mu(x)(b)+\rho_\mu(\alpha^{-1}\beta(y))(\gamma\nu^{-1}(a))) \\
& = \alpha(\mu(x, y))+\gamma(\rho_\mu(x)(b))+\gamma(\rho_\mu(\alpha^{-1}\beta(y))(\gamma\nu^{-1}(a))) \\
& = \mu(\alpha(x),\alpha(y))+\rho_\mu(\alpha(x))(\gamma(b))+\rho_\mu(\alpha^{-1}\beta(\alpha(y)))(\gamma\nu^{-1}(\gamma(a)))\\
&=\mu_{A\oplus V}(\alpha(x)+\gamma(a), \alpha(y)+\gamma(b))\\
&=\mu_{A\oplus V}(\widetilde{\alpha}(x+a), \widetilde{\alpha}(y+b)). \end{align*}
To prove that $\mu_{A\oplus V}$ is BiHom-associative, we check that
\begin{eqnarray}\label{198a}
\mu_{A\oplus V}\big(\widetilde{\alpha}(x+a),\mu_{A\oplus V}(y+b,z+c)\big)=\mu_{A\oplus V}\big(\mu_{A\oplus V}(x+a,y+b),\widetilde{\beta}(z+c)\big),
\end{eqnarray}
for $x, y, z\in A$ and $a, b, c\in V$. Developing (\ref{198a}), we have
\begin{align*}
&\mu_{A\oplus V}(\widetilde{\alpha}(x+a),\mu_{A\oplus V}\textbf{(}(y+b,z+c)\textbf{)})\\&=
\mu_{A\oplus V}(\widetilde{\alpha}(x+a),\mu(y,z)+\rho_\mu(y)(c)+\rho_\mu(\alpha^{-1}\beta(z))(\gamma\nu^{-1}(b)))\\
&=\mu(\alpha(x),\mu(y,z))+
\rho_\mu(\alpha(x))\circ\rho_\mu(y)(c))
+\rho_\mu(\alpha(x))\circ\rho_\mu(\alpha^{-1}\beta(z))(\gamma\nu^{-1}(b)))\\
&+\rho_\mu(\mu(\alpha^{-1}\beta(y),\alpha^{-1}\beta(z)))(\gamma^2\nu^{-1}(a)).
\end{align*} Similarly
\begin{align*}
&\mu_{A\oplus V}\textbf{(}\mu_{A\oplus V}(x+a,y+b),\widetilde{\beta}(z+c)\textbf{)}\\&=\mu_{A\oplus V}\textbf{(}\mu(x,y)+\rho_\mu(x)(b)+
\rho_\mu(\alpha^{-1}\beta(y))(\gamma\nu^{-1}(a)),\widetilde{\beta}(z+c)\textbf{)}
\\
&=\mu(\mu(x,y),\beta(z))+\rho_\mu(\mu(x,y))\circ \nu(c)+\rho_\mu(\alpha^{-1}\beta^2(z))\circ\rho_\mu(\alpha\beta^{-1}(x))(\gamma\nu^{-1}(b))+\\&
\rho_\mu(\alpha^{-1}\beta^2(z))\circ\rho_\mu(y)(\gamma^2\nu^{-2}(a)).
\end{align*}
Comparing the two expansions term by term, the module conditions together with the BiHom-associativity and BiHom-commutativity of $\mu$ show that both sides of (\ref{198a}) coincide, which proves the BiHom-associativity of $\mu_{A\oplus V}$.
\end{proof}
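Note also that the BiHom-commutativity of $\mu_{A\oplus V}$ can be seen directly: since $\alpha\beta=\beta\alpha$ and $\mu$ is BiHom-commutative,
$$\mu_{A\oplus V}(\widetilde{\beta}(x+a),\widetilde{\alpha}(y+b))
=\mu(\beta(x),\alpha(y))+\rho_\mu(\beta(x))(\gamma(b))+\rho_\mu(\beta(y))(\gamma(a))
=\mu_{A\oplus V}(\widetilde{\beta}(y+b),\widetilde{\alpha}(x+a)).$$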
\begin{df}
Let $(A,\{\cdot,\cdot\},\mu,\alpha,\beta)$ be a BiHom-Poisson algebra, $V$ be a vector space and $ \rho_{\{\cdot,\cdot\}},\rho_{\mu}:A\longrightarrow End(V)$ be two linear maps and also $\gamma,\nu: V \longrightarrow V$ be two linear maps. Then $(V,\rho_{\{\cdot,\cdot\}},\rho_{\mu},\gamma,\nu)$ is called a representation of $A$ if $(V,\rho_{\{\cdot,\cdot\}},\gamma,\nu)$ is a representation of $(A,\{\cdot,\cdot\},\alpha,\beta)$ and $(V,\rho_{\mu},\gamma,\nu)$ is a representation of $(A,\mu,\alpha,\beta)$ and they are compatible in the sense that for any $x,y\in A$
\begin{eqnarray}
\label{RepComp1}\rho_{\{\cdot,\cdot\}}(\mu(x, y))\nu &=& \rho_{\mu}(\beta(y))\rho_{\{\cdot,\cdot\}}(x) +\rho_{\mu}(\alpha(x))\rho_{\{\cdot,\cdot\}}(y),\\
\label{RepComp2}\rho_{\mu}(\{\beta(x),y\})\nu &=&-\rho_{\mu}(\alpha\beta(x))\rho_{\{\cdot,\cdot\}}(y)
-\rho_{\{\cdot,\cdot\}}(\beta(y))\rho_{\mu}(\alpha(x)).
\end{eqnarray}
\end{df}
\begin{thm}
Let $(A,\{\cdot,\cdot\},\mu,\alpha,\beta)$ be a BiHom-Poisson algebra and $(V,\rho_{\{\cdot,\cdot\}},\rho_{\mu},\gamma,\nu)$ be a representation of $A$.
Then $(A\oplus V,\mu_{A\oplus V},\{\cdot,\cdot\}_{A\oplus V},\widetilde{\alpha},\widetilde{\beta})$ is a commutative BiHom-Poisson algebra, where the maps $\mu_{A\oplus V},\{\cdot,\cdot\}_{A\oplus V},\widetilde{\alpha}$ and $\widetilde{\beta}$ are defined in Theorem \ref{AssProduitSDirect} and Theorem \ref{LieProduitSDirect}.
\end{thm}
\begin{proof}We need only to show that the Leibniz identity is satisfied.
Let $x,y,z\in A$ and $a,b,c\in V$, we have
\begin{align*}
& \{\mu_{A\oplus V}(x+a,y+b),\widetilde{\alpha}\widetilde{\beta}(z+c)\}_{A\oplus V}-\mu_{A\oplus V}(\{x+a, \widetilde{\beta}(z+c)\}_{A\oplus V},\widetilde{\alpha}(y+b))\\&-\mu_{A\oplus V}(\widetilde{\alpha}(x+a), \{y+b,\widetilde{\alpha}(z+c)\}_{A\oplus V}) \\
=& \{\mu(x,y)+\rho_{\mu}(x)(b)+\rho_{\mu}(\alpha^{-1}\beta(y))
\gamma\nu^{-1}(a),\widetilde{\alpha}\widetilde{\beta}(z+c)\}_{A\oplus V} \\&-\mu_{A\oplus V}(\{x,\beta(z)\}+\rho_{\{\cdot,\cdot\}}(\alpha^{-1}\beta(x))\nu(c) -\rho_{\{\cdot,\cdot\}}(\alpha^{-1}\beta^2(z))\gamma\nu^{-1}(a),\widetilde{\alpha}(y+b))\\&-\mu_{A\oplus V}(\widetilde{\alpha}(x+a),\{y,\alpha(z)\}+\rho_{\{\cdot,\cdot\}}(y)\gamma(c)-\rho_{\{\cdot,\cdot\}}(\beta(z))\gamma\nu^{-1}(b)) \\
=&\{\mu(x,y),\alpha\beta(z)\}+\rho_{\{\cdot,\cdot\}}(\mu(x,y))\gamma\nu(c)
-\rho_{\{\cdot,\cdot\}}(\beta^2(z))\gamma\nu^{-1}(\rho_{\mu}(x)(b)+\rho_{\mu}(\alpha^{-1}\beta(y))
\gamma\nu^{-1}(a))\\
&-\mu(\{x,\beta(z)\},\alpha(y))-\rho_{\mu}(\{x,\beta(z)\})\gamma(b)
-\rho_{\mu}(\beta(y))\gamma\nu^{-1}(\rho_{\{\cdot,\cdot\}}(\alpha^{-1}\beta(x))\nu(c) -\rho_{\{\cdot,\cdot\}}(\alpha^{-1}\beta^2(z))\gamma\nu^{-1}(a))
\\&-\mu(\alpha(x),\{y,\alpha(z)\})-\rho_{\mu}(\alpha(x))(\rho_{\{\cdot,\cdot\}}(y)\gamma(c)-\rho_{\{\cdot,\cdot\}}(\alpha(z))(b))
-\rho_{\mu}(\{\alpha^{-1}\beta(y),\beta(z)\})\gamma^2\nu^{-1}(a)\\
=&\{\mu(x,y),\alpha\beta(z)\}+\rho_{\{\cdot,\cdot\}}(\mu(x,y))\gamma\nu(c)
-\rho_{\{\cdot,\cdot\}}(\beta^2(z))(\rho_{\mu}(\alpha\beta^{-1}(x))\gamma\nu^{-1}(b))\\
&-\rho_{\{\cdot,\cdot\}}(\beta^2(z))(\rho_{\mu}(y)
\gamma^2\nu^{-2}(a)) -\mu(\{x,\beta(z)\},\alpha(y))-\rho_{\mu}(\{x,\beta(z)\})\gamma(b)
\\& -\rho_{\mu}(\beta(y))(\rho_{\{\cdot,\cdot\}}(x)\gamma(c)) -\rho_{\mu}(\beta(y))(\rho_{\{\cdot,\cdot\}}(\beta(z))\gamma^2\nu^{-2}(a))
-\mu(\alpha(x),\{y,\alpha(z)\})\\&-\rho_{\mu}(\alpha(x))(\rho_{\{\cdot,\cdot\}}(y)\gamma(c)
) +\rho_{\mu}(\alpha(x))(\rho_{\{\cdot,\cdot\}}(\alpha(z))(b))
-\rho_{\mu}(\{\alpha^{-1}\beta(y),\beta(z)\})\gamma^2\nu^{-1}(a)=0.
\end{align*}
\end{proof}
\section{Admissible BiHom-Poisson algebras}
\label{sec:admissible}
A Poisson algebra has two binary operations, the Lie bracket and the commutative associative product. In this section we describe BiHom-Poisson algebra using only one binary operation and the twisting maps via the polarization-depolarization procedure.
\begin{df}
\label{def:admissible}
Let $(A,\mu,\alpha,\beta)$ be a BiHom-algebra. Then $A$ is called an \textbf{admissible BiHom-Poisson algebra} if it satisfies
\begin{align}
\label{admissibility}
as_{\alpha,\beta}(\beta(x),\alpha(y),\alpha^{2}(z)) &= \frac{1}{3}\{\mu(\mu(\beta(x),\alpha\beta(z)),\alpha^2(y)) - \mu(\mu(\beta^{2}(z),\alpha(x)),\alpha^{2}(y))\nonumber\\& + \mu(\mu(\beta(y),\alpha\beta(z)),\alpha^2(x)) - \mu(\mu(\beta(y),\alpha(x)),\alpha^{2}\beta(z))\},
\end{align}
for all $x,y,z \in A$, where $as_{\alpha,\beta}$ is the BiHom-associator of $A$ defined by
\begin{equation}
\label{associator}
as_{\alpha,\beta}(x,y,z) = \mu(\mu(x,y),\beta(z)) - \mu(\alpha(x),\mu(y,z)). \end{equation}
\end{df}
If the BiHom-algebra $(A,\mu,\alpha,\beta)$ is regular then the identity \eqref{admissibility} is equivalent to
\begin{align}
\label{admissibility2}
as_{\alpha,\beta}(x,y,z) &= \frac{1}{3}\{\mu(\mu(x,\alpha^{-1}\beta(z)),\alpha(y)) - \mu(\mu(\alpha^{-2}\beta^{2}(z),\alpha\beta^{-1}(x)),\alpha(y))\nonumber \\&+ \mu(\mu(\alpha^{-1}\beta(y),\alpha^{-1}\beta(z)),\alpha^{2}\beta^{-1}(x)) - \mu(\mu(\alpha^{-1}\beta(y),\alpha\beta^{-1}(x)),\beta(z))\}.
\end{align}
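For instance, setting $\alpha=\beta=Id$ in \eqref{admissibility}, so that $as_{\alpha,\beta}$ becomes the ordinary associator $as(x,y,z)=\mu(\mu(x,y),z)-\mu(x,\mu(y,z))$, the identity reduces to
\[
as(x,y,z)=\frac{1}{3}\{\mu(\mu(x,z),y)-\mu(\mu(z,x),y)+\mu(\mu(y,z),x)-\mu(\mu(y,x),z)\},
\]
which is the defining identity of an admissible Poisson algebra in the sense of \cite{gr}.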
\begin{prop}
Let $(A,\mu)$ be an admissible Poisson algebra and $\alpha,\beta:A\rightarrow A$ two commuting Poisson algebra morphisms. Then $(A,\mu_{\alpha,\beta}=\mu\circ(\alpha\otimes\beta),\alpha,\beta)$ is an admissible BiHom-Poisson algebra.
\end{prop}
\begin{proof}
Let $x,y,z\in A$. Then
\small{\begin{align*}
&\mu_{\alpha,\beta}(\mu_{\alpha,\beta}(\beta(x),\alpha(y)),\alpha^{2}\beta(z))
-\mu_{\alpha,\beta}(\alpha\beta(x),\mu_{\alpha,\beta}(\alpha(y),\alpha^{2}(z)))- \frac{1}{3}\{\mu_{\alpha,\beta}(\mu_{\alpha,\beta}(\beta(x),\alpha\beta(z)),\alpha^2(y))\\& - \mu_{\alpha,\beta}(\mu_{\alpha,\beta}(\beta^{2}(z),\alpha(x)),\alpha^{2}(y)) + \mu_{\alpha,\beta}(\mu_{\alpha,\beta}(\beta(y),\alpha\beta(z)),\alpha^2(x)) - \mu_{\alpha,\beta}(\mu_{\alpha,\beta}(\beta(y),\alpha(x)),\alpha^{2}\beta(z))\}\\
=&\mu(\mu(\alpha^2\beta(x),\alpha^2\beta(y)),\alpha^{2}\beta^2(z))
-\mu(\alpha^2\beta(x),\mu(\alpha^2\beta(y),\alpha^{2}\beta^2(z)))- \frac{1}{3}\{\mu(\mu(\alpha^2\beta(x),\alpha^2\beta^2(z)),\alpha^2\beta(y))\\& - \mu(\mu(\alpha^2\beta^{2}(z),\alpha^2\beta(x)),\alpha^{2}\beta(y)) + \mu(\mu(\alpha^2\beta(y),\alpha^2\beta^2(z)),\alpha^2\beta(x)) - \mu(\mu(\alpha^2\beta(y),\alpha^2\beta(x)),\alpha^{2}\beta^2(z))\}\\
=0
\end{align*}}
by the admissibility of the Poisson algebra $(A,\mu)$ applied to the triple $(\alpha^{2}\beta(x),\alpha^{2}\beta(y),\alpha^{2}\beta^{2}(z))$.
\end{proof}
As usual, the product $\mu$ is also denoted simply by juxtaposition of elements of $A$. An admissible BiHom-Poisson algebra with $\alpha=\beta = Id$ is exactly an \textbf{admissible Poisson algebra} as defined in \cite{gr}.
To compare BiHom-Poisson algebras and admissible BiHom-Poisson algebras, we need the following function, which generalizes a similar function in \cite{mr}.
\begin{df}
Let $(A,\mu,\alpha,\beta)$ be a regular BiHom-algebra. Define the quadruple
\begin{equation}
\label{pa}
P(A) = \left(A, \{\cdot,\cdot\} , \bullet , \alpha, \beta\right),
\end{equation}
where $\{x,y\} = \frac{1}{2}(\mu(x,y) - \mu(\alpha^{-1}\beta(y),\alpha\beta^{-1}(x)))$ and $x\bullet y = \frac{1}{2}(\mu(x,y) +\mu(\alpha^{-1}\beta(y),\alpha\beta^{-1}(x))),$
called the \textbf{polarization} of $A$. We call $P$ the \textbf{polarization function}.
\end{df}
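Observe that, directly from the definitions, the two operations of $P(A)$ recover the original product,
\[
\{x,y\}+x\bullet y=\mu(x,y),\qquad x,y\in A,
\]
and that, since $\alpha$ and $\beta$ commute, $\{\cdot,\cdot\}$ changes sign while $\bullet$ is unchanged under the substitution $(x,y)\mapsto(\alpha^{-1}\beta(y),\alpha\beta^{-1}(x))$.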
The following result says that admissible BiHom-Poisson algebras, and only these BiHom-algebras, give rise to BiHom-Poisson algebras via polarization.
\begin{thm}
\label{thm:polar}
Let $(A,\mu,\alpha,\beta)$ be a regular BiHom-algebra. Then the polarization $P(A)$ is a regular BiHom-Poisson algebra if and only if $A$ is an admissible BiHom-Poisson algebra.
\end{thm}
\begin{proof}
First we check that $(A, \{\cdot,\cdot\},\alpha,\beta)$ is a BiHom-Lie algebra. Indeed, for any $x, y, z\in A$, we have
\begin{align*}
\{\beta(x),\alpha(y)\}=\frac{1}{2}\big(\beta(x)\alpha(y)-\beta(y)\alpha(x)\big)=-\{\beta(y),\alpha(x)\},
\end{align*}
so the BiHom-skew-symmetry of $\{\cdot,\cdot\}$ is satisfied. Now, we verify the BiHom-Jacobi identity
\begin{align*}
&\{\beta^2(x), \{\beta(y), \alpha(z)\}\}+\{\beta^2(y), \{\beta(z), \alpha(x)\}\}+\{\beta^2(z), \{\beta(x), \alpha(y)\}\}\\
&=\{\beta^2(x), \frac{1}{2}(\beta(y)\cdot \alpha(z)-\beta(z)\cdot \alpha(y))\}+
\{\beta^2(y), \frac{1}{2}(\beta(z)\cdot \alpha(x)-\beta(x)\cdot \alpha(z))\}\\
&+\{\beta^2(z), \frac{1}{2}(\beta(x)\cdot \alpha(y)-\beta(y)\cdot \alpha(x))\}\\
&=\frac{1}{4}\Big(-as_{\alpha\beta}(\beta(x), \beta(y), \alpha(z))+as_{\alpha\beta}(\alpha^{-1}\beta^{2}(x), \beta(z), \alpha(y))\\
&-as_{\alpha\beta}(\alpha^{-1}\beta^{2}(y), \beta(z), \alpha(x))+as_{\alpha\beta}(\alpha^{-1}\beta^{2}(z), \beta(y), \alpha(x))\\
&+as_{\alpha\beta}(\alpha^{-1}\beta^{2}(y),\beta(x), \alpha(z))-as_{\alpha\beta}(\alpha^{-1}\beta^{2}(z), \beta(x), \alpha(y))\Big)=0.
\end{align*}
Next, we check that $(A, \bullet,\alpha,\beta)$ is a BiHom-commutative BiHom-associative algebra. For any $x, y, z\in A$, the proof of the BiHom-commutativity of $\bullet$ is similar to that of the BiHom-skew-symmetry of $\{\cdot,\cdot\}$ checked above. For the BiHom-associativity we compute
\begin{align*}
&(x\bullet y)\bullet \beta(z)-\alpha(x)\bullet(y\bullet z)=\frac{1}{2}(\mu(x,y)
-\mu(\alpha^{-1}\beta(y), \alpha\beta^{-1}(x)))\bullet \beta(z)
-\alpha(x)\bullet\frac{1}{2}(\mu(y,z)\\&-\mu(\alpha^{-1}\beta(z), \alpha\beta^{-1}(y)))=\frac{1}{4}\Big(as_{\alpha\beta}(x, y, z)-as_{\alpha\beta}(\alpha^{-2}\beta^{2}(z), y, \alpha^{2}\beta^{-2}(x))\\
&+as_{\alpha\beta}(\alpha^{-2}\beta^{2}(z), \alpha\beta^{-1}(x), \alpha\beta^{-1}(y))+\mu(\mu(\alpha^{-2}\beta^{2}(z),\alpha\beta^{-1}(x)),\alpha(y))\\
&-\mu(\mu(\alpha^{-1}\beta(y),\alpha\beta^{-1}(x)),\beta(z))+\mu(\mu(\alpha^{-1}\beta(y),\alpha^{-1}\beta(z)),\alpha^{2}\beta^{-1}(x))\\
&-as_{\alpha\beta}(x,\alpha^{-1}\beta(y), \alpha\beta^{-1}(z))-\mu(\mu(x,\alpha^{-1}\beta(y)),\alpha(z))\Big)=0.
\end{align*}
Finally, we check the compatibility condition
$\{x\bullet y,\alpha\beta (z)\}-\{x, \beta(z)\}\bullet \alpha(y)-\alpha(x)\bullet\{y,\alpha(z)\}=0.$ Indeed, we have
\begin{align*}
&\{x\bullet y,\alpha\beta (z)\}-\{x, \beta(z)\}\bullet \alpha(y)-\alpha(x)\bullet\{y,\alpha(z)\}\\
&=\frac{1}{4}\Big(as_{\alpha\beta}(x, y, \alpha(z))-as_{\alpha\beta}(x, \beta(z), \alpha\beta^{-1}(y))-as_{\alpha\beta}(\alpha^{-1}\beta(y),\beta( z), \alpha^{2}\beta^{-2}(x))+as_{\alpha\beta}(\alpha^{-1}\beta^{2}(z), y, \alpha^{2}\beta^{-2}(x))\\
&+as_{\alpha\beta}(\alpha^{-1}\beta^{2}(z), \alpha\beta^{-1}( x), \alpha\beta^{-1}(y))+as_{\alpha\beta}(\alpha^{-1}\beta(y),\alpha\beta^{-1} (x), \alpha(z))\Big)=0.
\end{align*}
The proof is finished.
\end{proof}
The following result says that there is a bijective correspondence between admissible BiHom-Poisson algebras and BiHom-Poisson algebras via polarization and depolarization.
\begin{cor}\label{cor:polar}
Let $(A,\{\cdot,\cdot\},\bullet,\alpha,\beta)$ be a BiHom-Poisson algebra. Define the BiHom-algebra
\begin{equation}
\label{pminusa}
P^-(A) = \left(A,\mu = \{\cdot,\cdot\}+\bullet, \alpha,\beta\right),
\end{equation}
then $P^-(A)$ is an admissible BiHom-Poisson algebra
called the \textbf{depolarization} of $A$. We call $P^-$ the \textbf{depolarization function}.
\end{cor}
\begin{proof}
If $(A,\mu,\alpha,\beta)$ is a regular admissible BiHom-Poisson algebra, then $P(A)$ is a BiHom-Poisson algebra by Theorem \ref{thm:polar}. We have $P^-(P(A)) = A$ because
$$
\mu(x,y) = \frac{1}{2}(\mu(x,y) - \mu(\alpha^{-1}\beta(y),\alpha\beta^{-1}(x))) + \frac{1}{2}(\mu(x,y) + \mu(\alpha^{-1}\beta(y),\alpha\beta^{-1}(x))), \ \forall \ x,\ y\in A.
$$\end{proof}
\section{Classification of BiHom-Poisson algebras}
Let $(A,\{\cdot,\cdot\},\mu,\alpha,\beta)$ be a BiHom-Poisson algebra. In this section we provide a list of $2$-dimensional BiHom-Poisson algebras for which the morphisms $\alpha$ and $\beta$ are diagonal.
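Here $e_{1},e_{2}$ is a basis of $A$, and the entries of the tables specify the structure constants through
\[
\mu(e_{i},e_{j})=\sum_{k=1}^{2}c_{ij}^{k}\,e_{k},\qquad\qquad \{e_{i},e_{j}\}=\sum_{k=1}^{2}d_{ij}^{k}\,e_{k},
\]
with the understanding that the products and brackets of basis elements that are not listed are zero.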
\begin{table}[h]
\begin{tabular}{|l||p{4cm}||p{5cm}||p{3cm}|}
\hline
Algebras & Multiplications & Brackets & Morphisms \\ \hline
$Alg_{1} $ & $\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$,\par $\mu(e_{2},e_{2})=c_{22}^{2}e_{2}$, & $\{ e_{1},e_{1}\}=d_{11}^{1}e_{1}$,\par $\{ e_{2},e_{1}\}=d_{21}^{1}e_{1}$,
& $\alpha = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right),$ \par$\beta = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{2}$ & $
\mu(e_{1},e_{1})=c_{11}^{1}e_{1},$ & $\{e_{1},e_{1}\}=d_{11}^{1}e_{1},$ \par $\{e_{2},e_{1}\}=e_{1}$, & $\alpha =
\left(
\begin{array}{cc}
0 & 0 \\
0 & a_{22}%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
0 & 0 \\
0 & b_{22}%
\end{array}
\right).$ \\ \hline
$Alg_{3}$ & $%
\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$,\par $\mu(e_{2},e_{2})=c_{22}^{2}e_{2},$ & $\{ e_{1},e_{2}\}=d_{12}^{2}e_{2}$,\par $\{ e_{2},e_{2}\}=d_{22}^{2}e_{2},$ & $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0%
\end{array}
\right),$\par $\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0%
\end{array}
\right).$ \\ \hline
$Alg_{4}$ & $\mu(e_{2},e_{2})=c_{22}^{2}e_{2}$, & $\{e_{1},e_{2}\}=e_{2}$, \par$\{e_{2},e_{2}\}=d_{22}^{2}e_{2}$, & $\alpha = \left(
\begin{array}{cc}
a_{11} & 0 \\
0 & 0%
\end{array}
\right),$ \par$\beta = \left(
\begin{array}{cc}
b_{11} & 0 \\
0 & 0%
\end{array}
\right).$ \\ \hline
$Alg_{5}$ & $%
\mu(e_{2},e_{2})=c_{22}^{2}e_{2},$ & $\{e_{1},e_{1}\}=e_{1}$,\par$\{e_{1},e_{2}\}=d_{12}^{1}e_{1}$,\par$\{e_{2},e_{1}\}=d_{21}^{1}e_{1}$, & $\alpha = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right),$\par $\beta = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{6}$ & $\mu(e_{2},e_{1})= c_{21}^{1}e_{1}$, \par $\mu(e_{2},e_{2})= \frac{c_{21}^{1}}{b_{11}}e_{2}$, & $\{ e_{2},e_{1}\}=d_{21}^{1}e_{1}$, & $\alpha = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right),$ \par$\beta = \left(
\begin{array}{cc}
b_{11} & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{7}$ & $\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$, & $\{ e_{1},e_{2}\}=d_{12}^{2}e_{2}$,\par $\{ e_{2},e_{1}\}=d_{21}^{2}e_{2}$, \par$\{ e_{2},e_{2}\}=e_{2}$, & $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0%
\end{array}
\right),$\par $\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0%
\end{array}
\right).$ \\ \hline
$Alg_{8}$ & $\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$, & $\{e_{1},e_{2}\}=d_{12}^{2}e_{2}$, & $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0%
\end{array}
\right),$\par $\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & b_{22}%
\end{array}
\right).$ \\ \hline
$Alg_{9}$ & $\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$, \par $\mu(e_{1}, e_{2})=c_{11}^{1}b_{22}e_{2}$,& $\{e_{1},e_{2}\}=d_{12}^{2}e_{2}$, & $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0%
\end{array}
\right),$\par $\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & b_{22} %
\end{array}
\right).$ \\ \hline
$Alg_{10}$ & $\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$, & $\{e_{2},e_{1}\}=d_{21}^{2}e_{2}$, & $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & a_{22}%
\end{array}
\right),$ \par$\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0 %
\end{array}
\right).$ \\ \hline
\end{tabular}%
\end{table}
\clearpage
\begin{table}[h]
\begin{tabular}{|l||p{4cm}||p{5cm}||p{3cm}|}
\hline
Algebras & Multiplications & Brackets & Morphisms \\ \hline
$Alg_{11} $ & $\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$, \par $\mu(e_{2},e_{1})=c_{11}^{1}a_{22}e_{2}$, & $\{ e_{2},e_{1}\}=d_{21}^{2}e_{2}$,
& $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & a_{22}%
\end{array}
\right),$ \par$\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0%
\end{array}
\right).$ \\ \hline
$Alg_{12}$ & $%
\mu(e_{2},e_{2})=c_{22}^{2}e_{2},$ & $\{e_{1},e_{2}\}=d_{12}^{1}e_{1}$, & $\alpha =
\left(
\begin{array}{cc}
a_{11} & 0 \\
0 & 1%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{13}$ & $%
\mu(e_{1},e_{2})=c_{12}^{1}e_{1}$,\par $\mu(e_{2},e_{2})=\frac{c_{12}^{1}}{a_{11}}e_{2},$ & $\{ e_{1},e_{2}\}=d_{12}^{1}e_{1}$, & $\alpha = \left(
\begin{array}{cc}
a_{11} & 0 \\
0 & 1%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{14}$ & $\mu(e_{2},e_{2})=c_{22}^{2}e_{2}$, & $\{e_{1},e_{1}\}=e_{1}$, \par$\{e_{2},e_{1}\}=d_{21}^{1}e_{1}$, & $\alpha = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{15}$ & $\mu(e_{2},e_{1})=c_{21}^{1}e_{1}$,\par $\mu(e_{2},e_{2})=c_{21}^{1}e_{2},$ & $\{e_{1},e_{1}\}=e_{1}$,\par $\{e_{2},e_{1}\}=d_{21}^{1}e_{1}$, & $\alpha = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{16}$ & $\mu(e_{2},e_{2})= c_{22}^{2}e_{2}$, & $\{ e_{2},e_{1}\}=d_{21}^{1}e_{1}$, & $\alpha = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
b_{11} & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{17}$ & $\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$, & $\{ e_{1},e_{2}\}=d_{12}^{2}e_{2}$,\par $\{ e_{2},e_{2}\}=e_{2}$, & $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{18}$ & $\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$,\par $\mu(e_{1},e_{2})=c_{11}^{1}e_{2}$, & $\{e_{1},e_{2}\}=d_{12}^{2}e_{2}$,\par $\{e_{2},e_{2}\}=e_{2}$, & $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & 1%
\end{array}
\right).$ \\ \hline
$Alg_{19}$ & $\mu(e_{2},e_{2})=c_{22}^{2}e_{2}$, & $\{e_{1},e_{1}\}=e_{1}$, \par $\{e_{1},e_{2}\}=d_{12}^{1}e_{1}$, & $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & 1%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
0 & 0 \\
0 & 1 %
\end{array}
\right).$ \\ \hline
$Alg_{20}$ & $\mu(e_{1},e_{1})=c_{11}^{1}e_{1}$, & $\{e_{2},e_{1}\}=d_{21}^{2}e_{2}$,\par $\{e_{2},e_{2}\}=e_{2}$, & $\alpha = \left(
\begin{array}{cc}
1 & 0 \\
0 & 1%
\end{array}
\right),$ \par $\beta = \left(
\begin{array}{cc}
1 & 0 \\
0 & 0 %
\end{array}
\right).$ \\ \hline
\end{tabular}%
\end{table}
\clearpage
\section{Introduction}
Hankel operators form an important class of operators on spaces of holomorphic functions. Initially there were two descriptions of Hankel operators: one considered them as operators from the one-sided sequence space $l^2$ into itself, and the other as operators from the Hardy space $H^2$ of the unit disk into its orthogonal complement in $L^2$. These operators are closely connected to problems in approximation theory as shown by the now famous work of Nehari \cite{Nehari57} on the one hand, and of Adamjan, Arov and Krein \cite{AdamjanEtall71} on the other. These operators also have a close connection to Toeplitz operators, and to the commutators of projections and multiplication operators on $L^2$. More about Hankel operators and related topics can be found in \cite{Peller03}.
Let $\Omega$ be a bounded domain in $\mathbb{C}^n$ and $dV$ denote the Lebesgue volume measure. The Bergman space $A^2(\Omega)$ is the closed subspace of $L^2(\Omega)$ consisting of holomorphic functions on $\Omega$. The Bergman projection $P$ is the orthogonal projection from $L^2(\Omega)$ onto $A^2(\Omega)$ and can be written explicitly as $Pf(z) = \int_{\Omega}K(z, w)f(w) dV(w),$ where $K(z, w)$ is the Bergman kernel of $\Omega$. For $\beta\in L^2(\Omega)$ we can define the Hankel operator $H_{\beta}$ from $A^2(\Omega)$ into $L^2(\Omega)$ by $H_{\beta}(g) = (Id - P)(\beta g) .$ In general, $H_{\beta}$ is only densely defined on $A^2(\Omega)$. When $\Omega$ is a bounded pseudoconvex domain, Kohn's formula $P=Id-\overline{\partial}^*N\overline{\partial}$ ($N$ is the (bounded) inverse of complex Laplacian, $\overline{\partial}\dbar^*+\overline{\partial}^*\overline{\partial},$ and $\overline{\partial}^*$ is the Hilbert space adjoint of $\overline{\partial}$ on the square integrable $(0,1)$-forms on $\Omega$) implies that $H_{\beta}(f)=\overline{\partial}^*N\overline{\partial}(\beta f)=\overline{\partial}^{*}N(f\overline{\partial}\beta) $ for $f\in A^{2}(\Omega)$ and $\beta \in C^{1}(\overline{\Omega}).$ This will be the main tool in this paper as it will allow us to use several complex variables techniques to study Hankel operators. We refer the reader to \cite{ChenShawBook} for more information on the $\overline{\partial}$-Neumann operator.
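In particular, since $\overline{\partial} f=0$ for $f\in A^{2}(\Omega)$, one has $\overline{\partial}(\beta f)=f\,\overline{\partial}\beta$, and $H_{\beta}(f)=\overline{\partial}^{*}N(f\,\overline{\partial}\beta)$ is precisely the canonical solution (the solution orthogonal to $A^{2}(\Omega)$) of the equation
\[
\overline{\partial} u=f\,\overline{\partial}\beta .
\]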
The study of the size estimates of Hankel operators on Bergman spaces has inspired a lot of work in the last 20 years. The first result in the study of boundedness and compactness of Hankel operators was done by Axler \cite{Axler86} on the Bergman space of the open unit disk $\Delta$. He showed that, for $\beta$ holomorphic on $\Delta$, $H_{\overline \beta}$ is bounded if and only if $\beta$ is in the Bloch space, and $H_{\overline \beta}$ is compact if and only if $\beta$ is in the little Bloch space. In the case of a general symbol, Zhu \cite{Zhu87} showed the connection between size estimates of a Hankel operator and the mean oscillation of the symbol in the Bergman metric. In \cite{BBCZ90}, Bekolle, Berger, Coburn and Zhu studied the same problem in the setting of bounded symmetric domains in $\mathbb{C}^n$ with the restriction that $H_{\beta}$ and $H_{\overline \beta}$ are simultaneously bounded and compact with $\beta\in L^2(\Omega)$. Stroethoff and Zheng \cite{Stroethoff90IJM,Zheng89} independently gave a characterization for compactness of Hankel operators with bounded symbols on $\Delta.$ Later Stroethoff \cite{Stroethoff90JOT} generalized these results to the case of the open unit ball and polydisc in $\mathbb{C}^n.$ Luecking \cite{Luecking92} gave different criteria for boundedness and compactness of $H_{\beta}$ on $A^p(\Omega)$ with $1 < p < \infty$. Peloso \cite{Peloso94} extended Axler's result to Bergman spaces on smooth bounded strongly pseudoconvex domains. For the same domains, Li \cite{Li94} characterized bounded and compact Hankel operators $H_{\beta}$ with symbols $\beta\in L^2(\Omega)$. Beatrous and Li \cite{BeatrousLi93} obtained related results for the commutators of multiplication operators and $P$ on more general domains, that include smooth bounded strongly pseudoconvex domains.
The novelty of our approach is that we put an emphasis on the interplay between the geometry of the domain and the symbols of Hankel operators. Although our symbols are more restricted, the domains we consider are much more general and allow rich geometric structures.
In several complex variables, compactness of the $\overline{\partial}$-Neumann operator has been an active research area for the last couple of decades. We refer the reader to a very nice survey \cite{FuStraube01} for more information about compactness of the $\overline{\partial}$-Neumann operator. Compactness of the canonical solution operators for $\overline{\partial}$ on the unit disk has been discussed in \cite{Haslinger01}, where it was in fact shown that this operator restricted to $(0, 1)$-forms with holomorphic coefficients is a Hilbert-Schmidt operator. Fu and Straube \cite{FuStraube98} showed that presence of analytic discs in the boundary of a bounded convex domain in $\mathbb{C}^n$ is equivalent to the non-compactness of the $\overline\partial$-Neumann operator. The second author and Straube \cite{SahutogluStraube06} used their techniques to prove that analytic discs are obstructions for compactness of the $\overline{\partial}$-Neumann operator on smooth bounded pseudoconvex domains in $\mathbb{C}^n$ whose Levi form has maximal rank. In $\mathbb{C}^2$ their result reduces to a folklore result of Catlin \cite{FuStraube98}.
Given Kohn's formula it is natural to expect a strong relationship between Hankel operators and the $\overline{\partial}$-Neumann operator. The following fact confirms this expectation. Compactness of the $\overline{\partial}$-Neumann operator implies compactness of Hankel operators with symbols that are smooth on the closure \cite{FuStraube01}. Actually, the statement in \cite{FuStraube01} requires the symbol to have bounded first order derivatives. But any symbol that is continuous on the closure can be approximated uniformly by symbols that are smooth on the closure of the domain. Hence the resulting Hankel operators converge in norm, preserving compactness. In this paper we show that the theory for compactness of Hankel operators is somewhat parallel to the theory of compactness of the $\overline{\partial}$-Neumann operator in terms of analytic structures in the boundary. Previous work in this direction was done by Knirsch and Schneider \cite{KnirschSchneider07}.
Throughout the paper $b\Omega$ denotes the boundary of $\Omega.$ Our first result concerns smooth bounded pseudoconvex domains in $\mathbb{C}^{n}.$
\begin{theorem}\label{ThmCn}
Let $\Omega$ be a smooth bounded pseudoconvex domain in $\mathbb{C}^n$ for $n\geq 2$ and $\beta\in C^{\infty}(\overline{\Omega}).$ Assume that the Levi form of $b\Omega$ is of rank at least $n-2.$ If $H_{\beta}$ is compact on $A^2(\Omega),$ then $\beta \circ f$ is holomorphic for any holomorphic function $f:\Delta \to b\Omega .$
\end{theorem}
\begin{remark}\label{RemarkAlong}
We note that the statement ``$\beta \circ f$ is holomorphic'' can be interpreted as meaning that $\beta$ is holomorphic ``along'' $M=f( \Delta).$ However it may not be holomorphic in the transversal directions.
\end{remark}
\begin{remark}
One can check that the proof of Theorem \ref{ThmCn} shows that compactness of $H_{\beta}$ on $A^2(\Omega)$ for $\beta \in C^{\infty}(\overline{\Omega})$ still implies that $\beta \circ f$ is holomorphic for any holomorphic function $f:\Delta \to b\Omega $ when $\Omega$ satisfies the following property: If the Levi form of $b\Omega$ is of rank $k$ for $0\leq k\leq n-1$ at $p,$ then there exists a $n-k-1$ dimensional complex manifold in $b\Omega$ through $p.$
\end{remark}
Since in $\mathbb{C}^2$ the Levi form has only one eigenvalue the condition on the Levi form in Theorem \ref{ThmCn} is always satisfied. Therefore, for $n=2$ we have the following corollary.
\begin{corollary}\label{ThmC2}
Let $\Omega$ be a smooth bounded pseudoconvex domain in $\mathbb{C}^{2}$ and $\beta\in C^{\infty}(\overline{\Omega}).$ If $H_{\beta}$ is compact on $A^{2}(\Omega)$ then $\beta \circ f$ is holomorphic for any holomorphic function $f:\Delta \to b\Omega .$
\end{corollary}
For convex domains in $\mathbb{C}^{n}$ we prove the same result without any restriction on the Levi form.
\begin{theorem}\label{ThmConvexCn}
Let $\Omega$ be a smooth bounded convex domain in $\mathbb{C}^{n}$ for $n\geq 2$ and $\beta\in C^{\infty}(\overline{\Omega}).$ Assume that $H_{\beta}$ is compact on $A^{2}(\Omega).$ Then $\beta \circ f$ is holomorphic for any holomorphic function $f:\Delta \to b\Omega .$
\end{theorem}
In the following theorem we show that, when $\Omega$ is convex in $\mathbb{C}^2,$ the converse of Theorem \ref{ThmCn} is true.
\begin{theorem} \label{ThmConvex}
Let $\Omega$ be a smooth bounded convex domain in $\mathbb{C}^{2}$ and $\beta\in C^{\infty}(\overline{\Omega}).$ If $\beta \circ f$ is holomorphic for any holomorphic $f:\Delta\to b\Omega,$ then $H_{\beta}$ is compact.
\end{theorem}
Combining Corollary \ref{ThmC2} (or Theorem \ref{ThmConvexCn}) and Theorem \ref{ThmConvex} we get a necessary and sufficient condition for compactness of $H_{\beta}$ for convex domains in $\mathbb{C}^2.$
\begin{corollary}\label{CorConvex}
Let $\Omega$ be a smooth bounded convex domain in $\mathbb{C}^{2}$ and $\beta\in C^{\infty}(\overline{\Omega})$. Then $H_{\beta}$ is compact if and only if $\beta \circ f$ is holomorphic for any holomorphic $f:\Delta\to b\Omega.$
\end{corollary}
\begin{remark}
We note that in \cite{MatheosThesis} a smooth bounded pseudoconvex complete Hartogs domain $\Omega$ in $\mathbb{C}^{2}$ was constructed that has no analytic disk in its boundary, yet does not have a compact $\overline{\partial}$-Neumann operator. It would be interesting to know whether there exists a symbol $\beta\in C^{\infty}(\overline{\Omega})$ such that the Hankel operator $H^{\Omega}_{\beta}$ is not compact on $A^2(\Omega).$
\end{remark}
\begin{remark}
We would like to take this opportunity to point out an inaccuracy.
Knirsch and Schneider \cite[Proposition 1]{KnirschSchneider07} claim that if there is an affine disk in the boundary of a bounded convex domain in $\mathbb{C}^{n},$ then the Hankel operator $H_{\bar z_i^m}$ is not compact for $i=1,2,\ldots,n$ and any positive integer $m$ where $z_i$ is the ith coordinate function. They correctly prove the result when the disk lies in $z_{1}$-coordinate and claim that the proof for $i=2,3,\ldots, n$ is similar. However, Theorem \ref{ThmConvex} implies that if $\Omega$ is a smooth bounded convex domain in $\mathbb{C}^2$ and the set of weakly pseudoconvex points form a disc in $z_{1}$-coordinate, then $H_{\bar z_2}$ is compact.
\end{remark}
\begin{remark}
For simplicity we assume that the domains have $C^{\infty}$-smooth boundary and the symbols are smooth up to the boundary. However, one can check that the proofs work under weaker but reasonable smoothness assumptions.
\end{remark}
\begin{remark}
Recently, \c{C}elik and Straube \cite{CelikStraube} studied compactness multipliers for the $\overline{\partial}$-Neumann problem (we refer the reader to \cite{CelikStraube} for the definition and some properties of compactness multipliers of the $\overline{\partial}$-Neumann problem). This notion is related to that of a symbol of a compact Hankel operator, but there are differences.
First of all, the $\overline{\partial}$-Neumann operator $N$ is applied to square integrable forms and compactness multipliers are applied after $N$. In case of Hankel operators, however, one can think of the $(0,1)$-form $\overline{\partial}\beta$ as acting as a ``pre-multiplier'' (acting before the canonical solution operator $\overline{\partial}^*N$) on the Bergman space which is more rigid than the space of $L^2$ forms. Secondly, \c{C}elik and Straube proved that on a bounded convex domain, a function that is continuous on the closure is a compactness multiplier if and only if the function vanishes on all the (nontrivial) analytic discs in the boundary. One can show that such symbols produce compact Hankel operators. However, for smooth bounded convex domains in $\mathbb{C}^2$, a symbol smooth on the closure produces a compact Hankel operator if and only if the symbol is holomorphic along (see Remark \ref{RemarkAlong}) analytic discs in boundary. (That is, the complex tangential component of the pre-multiplier on any analytic disc in the boundary vanishes). In general, these connections are not well understood. For example, the following question is still open:
\end{remark}
\begin{question}
Assume that $\Omega$ is a smooth bounded pseudoconvex domain in $\mathbb{C}^n$ and $\beta\in C(\overline{\Omega})$ is a compactness multiplier for the $\overline{\partial}$-Neumann operator on $L^2_{(0,1)}(\Omega).$ Is $H_{\beta}$ compact on the Bergman space on $\Omega?$
\end{question}
\section{Proof of Theorem \ref{ThmCn} and Theorem \ref{ThmConvexCn}}\label{ProofThmCn}
Let $\Delta=\Delta_{1}$ denote the unit open disc in $\mathbb{C}, \Delta_{r}$ denote the disc in $\mathbb{C}$ centered at the origin with radius $r,$ and $\Delta^{k}_{r}$ denote the polydisc in $\mathbb{C}^{k}$ of multiradius $(r,\cdots,r).$ We will be using Hankel operators defined on different domains. So to be more precise, let $H^{\Omega}_{\phi}$ denote the Hankel operator on $\Omega$ with symbol $\phi$ and $R_{U}$ be the restriction operator onto $U.$ Furthermore, the Bergman projection on $U$ will be denoted by $P_{U}.$ First we will start with a proposition that will allow us to ``localize'' the proofs.
In the proofs below we will use generalized constants. That is $A\lesssim B$ will mean that there exists a constant $c>0$ that is independent of the quantities of interest such that $A\leq cB .$ At each step the constant $c$ may change but it will stay independent of the quantities of interest.
\begin{proposition}\label{prop1}
Let $\Omega$ be a bounded pseudoconvex domain in $\mathbb{C}^{n}$ and $\phi \in L^{\infty}(\Omega) .$ Then
\begin{itemize}
\item[i)] If $H^{\Omega}_{\phi}$ is compact on $A^{2}(\Omega)$ then for every $p\in b\Omega$ and $U$ an open neighborhood of $p$ such that $U\cap \Omega$ is a domain, $H^{U\cap \Omega}_{R_{U\cap \Omega}(\phi)}R_{U\cap \Omega}$ is compact on $A^{2}(\Omega).$
\item[ii)] If for every $p\in b\Omega$ there exists an open neighborhood $U$ of $p$ such that $U\cap \Omega$ is a domain, and $H^{U\cap \Omega}_{R_{U\cap \Omega}(\phi)}R_{U\cap \Omega}$ is compact on $A^{2}(\Omega),$ then $H^{\Omega}_{\phi}$ is compact on $A^{2}(\Omega).$
\end{itemize}
\end{proposition}
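Written out, the localized operator appearing in the proposition acts on $f\in A^{2}(\Omega)$ by
\[
H^{U\cap \Omega}_{R_{U\cap \Omega}(\phi)}R_{U\cap \Omega}(f) =\left(Id_{U\cap\Omega}-P_{U\cap\Omega}\right)\left((\phi f)|_{U\cap\Omega}\right),
\]
regarded as an operator from $A^{2}(\Omega)$ into $L^{2}(U\cap\Omega)$.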
\begin{proof}
Let us prove i) first. For $f\in A^{2}(\Omega)$ we have
\begin{eqnarray*}
(Id_{U\cap\Omega}-P_{U\cap\Omega})R_{U\cap \Omega} H^{\Omega}_{\phi}(f)&=& (Id_{U\cap\Omega}-P_{U\cap\Omega})R_{U\cap \Omega}(\phi f-P_{\Omega}(\phi f))\\
&=&H^{U\cap \Omega}_{R_{U\cap \Omega}(\phi)}R_{U\cap \Omega}(f)+P_{U\cap\Omega}R_{U\cap \Omega}P_{\Omega}(\phi f)-R_{U\cap \Omega}P_{\Omega}(\phi f)\\
&=&H^{U\cap \Omega}_{R_{U\cap \Omega}(\phi)}R_{U\cap \Omega}(f).
\end{eqnarray*}
In the last equality we used the fact that $P_{U\cap\Omega}R_{U\cap \Omega}P_{\Omega}=R_{U\cap \Omega}P_{\Omega}$ on $L^{2}(\Omega)$, which holds because for any $g\in L^{2}(\Omega)$ the restriction of $P_{\Omega}g$ to $U\cap\Omega$ belongs to $A^{2}(U\cap\Omega)$ and is therefore fixed by $P_{U\cap\Omega}$. Hence
\[ (Id_{U\cap\Omega}-P_{U\cap\Omega})R_{U\cap \Omega} H^{\Omega}_{\phi}(f)= H^{U\cap \Omega}_{R_{U\cap \Omega}(\phi)}R_{U\cap \Omega}(f).\]
Therefore, if $H^{\Omega}_{\phi}$ is compact, then $H^{U\cap \Omega}_{R_{U\cap \Omega}(\phi)}R_{U\cap \Omega}$ is also compact.
To prove ii) let us choose $\{p_{1},\ldots,p_{m}\}\subset b\Omega$ and open sets $U_{1},\ldots,U_{m}$ such that
\begin{itemize}
\item[i)] $U_j$ is a neighborhood of $p_j$ and $U_{j}\cap \Omega$ is a domain for $j=1,\ldots,m,$
\item[ii)] $b\Omega \subset \cup_{j=1}^{m} U_{j},$
\item[iii)] $S_{j}=H^{U_{j}\cap \Omega}_{R_{U_{j}\cap \Omega}(\phi)}R_{U_{j}\cap \Omega}$ is compact for $j=1,\ldots,m.$
\end{itemize}
Let $U_0=\Omega, S_0=H^{\Omega}_{\phi},$ and $\{\chi_{j}:j=0,\ldots,m\}$ be a $C^{\infty}$-smooth partition of unity subject to $\{U_{j}:j=0,\ldots,m\} .$ Then for $f\in A^2(\Omega)$
\begin{eqnarray*}
\overline{\partial} \left(\sum_{j=0}^{m} \chi_{j}S_{j}(f)\right)&=& \sum_{j=0}^{m} (\overline{\partial}\chi_{j})S_{j}(f)+\sum_{j=0}^{m} \chi_{j}\overline{\partial} S_{j}(f)\\
&=&\sum_{j=0}^{m} (\overline{\partial}\chi_{j})S_{j}(f)+\sum_{j=0}^{m} \chi_{j}(\overline{\partial} \phi) f\\
&=&\sum_{j=0}^{m} (\overline{\partial}\chi_{j})S_{j}(f)+(\overline{\partial} \phi) f.
\end{eqnarray*}
Hence, since $\overline{\partial} \left(\sum_{j=0}^{m} \chi_{j}S_{j}(f)\right)$ and $(\overline{\partial} \phi) f$ are $\overline{\partial}$-closed we conclude that $\sum_{j=0}^{m} (\overline{\partial}\chi_{j})S_{j}(f)$ is $\overline{\partial}$-closed. Let
\begin{equation} \label{eqnprop}
S=\sum_{j=0}^{m} \chi_{j}S_{j}-\overline{\partial}^{*}N^{\Omega} \sum_{j=0}^{m}(\overline{\partial}\chi_{j})S_{j}.
\end{equation}
We write $\chi_0S_0(f)$ as $\chi_0\phi f-\chi_0P_{\Omega}(\phi f) $ and choose a bounded sequence $\{f_j\}$ in $A^2(\Omega).$ Let $K$ be a compact set in $\Omega$ that contains a neighborhood of the support of $\chi_0.$ Cauchy integral formula and Montel's theorem imply that $\{f_j\}$ and $\{P_{\Omega}(\phi f_j)\}$ have uniformly convergent subsequences on $K.$ Then $\{\chi_0\phi f_ j\}$ and $\{\chi_0P_{\Omega}(\phi f_j)\} $ have convergent subsequences in $L^2(\Omega).$ That is, the operator $\chi_0S_0$ is compact on $ A^2(\Omega).$ Similarly, $(\overline{\partial} \chi_0)S_0$ is compact as well. We remind the reader that we assumed that $S_{j}$ is compact for $j=1,\ldots,m$ and $\overline{\partial}^{*}N^{\Omega}$ is continuous on bounded pseudoconvex domains. Therefore, \eqref{eqnprop} implies that $S$ is a compact operator and $\overline{\partial} S(f) =(\overline{\partial}\phi) f.$ To get the Hankel operator we project onto the complement of $A^{2}(\Omega).$ Hence using $H_{\phi}^{\Omega} =(Id_{\Omega}-P_{\Omega})S$ we conclude that $H_{\phi}^{\Omega}$ is compact on $A^2(\Omega).$
\end{proof}
\begin{lemma}\label{lem2}
Let $\Omega_{1}$ and $\Omega_{2}$ be two bounded pseudoconvex domains in $\mathbb{C}^{n},$
$\phi\in C^{\infty}(\overline{\Omega}_{2}),$ and $F:\Omega_{1}\to \Omega_{2}$ be a biholomorphism
that has a smooth extension up to the boundary. Assume that $H_{\phi}^{\Omega_{2}}$ is compact on $A^2(\Omega_2)$. Then $H_{\phi\circ F}^{\Omega_{1}}$ is compact on
$A^2(\Omega_1) .$
\end{lemma}
\begin{proof}
Let $g\in A^2(\Omega_1), f=g\circ F^{-1}, u=\overline{\partial}^*N^{\Omega_2}\overline{\partial} \phi f ,$ and $w=u\circ F=F^{*}(u).$ Then $ f\in A^2(\Omega_2), u=H^{\Omega_2}_{\phi}(f),$ and
\[\overline{\partial} w= \overline{\partial} F^{*}(u)=F^{*}(\overline{\partial} u)=F^{*}(f \overline{\partial} \phi)=(f\circ F)\overline{\partial}(\phi\circ F).\]
So $\overline{\partial} (u\circ F)=(f\circ F)\overline{\partial} (\phi\circ F)$ on $\Omega_1$ and $\overline{\partial}^{*} N^{\Omega_1}\overline{\partial} (u\circ F)$ is the canonical solution for $\overline{\partial} w=(f\circ F)\overline{\partial} (\phi\circ F)$ on $\Omega_1.$ Then
\[H^{\Omega_1}_{\phi\circ F}(g)=H^{\Omega_1}_{\phi\circ F}(f\circ F)=\overline{\partial}^{*}
N^{\Omega_1}\overline{\partial} (u\circ F)=\overline{\partial}^{*} N^{\Omega_1}\overline{\partial} (F^* H_{\phi}^{\Omega_2}((F^{-1})^{*}(g))).\]
Therefore, $H^{\Omega_1}_{\phi\circ F}$ is a composition of $H^{\Omega_2}_{\phi}$ with continuous operators $\overline{\partial}^{*}N^{\Omega_1}\overline{\partial}, F^* ,$ and $(F^{-1})^{*}.$ Then since $H_{\phi}^{\Omega_2}$ is assumed to be compact on $A^2(\Omega_2)$ we conclude that $H_{\phi\circ F}^{\Omega_{1}}$ is compact on $A^2(\Omega_1).$
\end{proof}
Let $d_{b\Omega}(z)$ be the function defined on $\Omega$ that measures the (minimal) distance from $z$ to $b\Omega.$ The Bergman kernel function of $\Omega$ satisfies the following relation on the diagonal of $\Omega\times \Omega$
\[ K_{\Omega} (z, z) = \sup\{ |f (z)|^2: f \in A^2 (\Omega), \| f \|_{L^{2}(\Omega)} \leq 1\}.\]
The following proposition appeared in \cite{Fu94} for general pseudoconvex domains in $\mathbb{C}^n$ and in \cite{SahutogluThesis} in the following form.
\begin{proposition}\label{PropFu}
Let $\Omega$ be a bounded pseudoconvex domain in $\mathbb{C}^n$ with $C^2$-boundary
near $p\in b\Omega.$ If the Levi form is of rank $k$ at $p,$ then there exist a constant $C > 0$ and a neighborhood $U$ of $p$ such that
\[ K_{\Omega} (z, z) \geq \frac{C}{(d_{b\Omega}(z))^{k+2} }\text{ for } z \in U \cap \Omega. \]
\end{proposition}
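For orientation, on the unit ball $\mathbb{B}^{n}$, where the Levi form has rank $n-1$ at every boundary point, the explicit formula
\[
K_{\mathbb{B}^{n}}(z,z)=\frac{n!}{\pi^{n}\,\big(1-|z|^{2}\big)^{n+1}}
\]
shows that $K_{\mathbb{B}^{n}}(z,z)$ is comparable to $(d_{b\mathbb{B}^{n}}(z))^{-(n+1)}$ near $b\mathbb{B}^{n}$, which is the rate $(d_{b\Omega}(z))^{-(k+2)}$ of Proposition \ref{PropFu} with $k=n-1$.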
\begin{proof}[Proof of Theorem \ref{ThmC2}]
We will prove a stronger result. The proof will go along the lines of the proof of Theorem 1 in \cite{SahutogluStraube06} and the proof of $(1)\Rightarrow (2)$ in \cite{FuStraube98} with some additional work. The same strategy has appeared in \cite{Catlin81,DiederichPflug81,SahutogluThesis}. Let us assume that
\begin{itemize}
\item[i.] $\Omega$ is a smooth bounded pseudoconvex domain in $\mathbb{C}^n$ and $p\in b\Omega,$
\item[ii.] the Levi form of $b\Omega$ is of rank $k$ at $p$ through which there exists a $n-k-1$ dimensional complex manifold in $b\Omega,$
\item[iii.] there exists non-constant holomorphic mapping $f:\Delta^{n-k-1} \to b\Omega$ and $q\in \Delta$ such that $f(q)=p,$ $Df(q)$ is full rank ($Df$ is the Jacobian of $f$), and $\overline{\partial}(\beta\circ f)(q)\neq 0,$
\item[iv.] $H_{\beta}$ is compact.
\end{itemize}
Lemma 1 in \cite{SahutogluStraube06} implies that there exist a neighborhood $V$ of $p$ and a local holomorphic change of coordinates $G$ on $V$ so that $G(p)=0,$ positive $y_n$-axis is the outward normal direction to the boundary of $\Omega_1=G(V\cap \Omega)$ at every point of $M =\{z\in \Delta^n: z_{n-k}=\cdots=z_n=0 \}\subset b\Omega_{1} .$
Let $z=(z',z'')$ where $z'=(z_1,\ldots, z_{n-k-1})$ and $z''=(z_{n-k},\ldots,z_n).$ We define $L$ to be the $k+1$ (complex) dimensional slice of $\Omega_1$ that passes through the origin and is orthogonal to $M.$ That is, $L=\{z''\in \mathbb{C}^{k+1}:(0,z'')\in \Omega_{1}\}.$ So $L$ is strongly pseudoconvex at the origin when $k\geq 1$ and is a domain in $\mathbb{C}$ when $k=0.$ Then there exists $0<\lambda<1$ such that $ M_{1}\times L_{1} \subset \Omega_{1}, $ where $ L_{1}$ is a ball in $\mathbb{C}^{k+1}$ centered at $(0,\ldots,0,-\lambda)$ with radius $\lambda$ and $M_{1}=\frac{1}{2} M.$ For every $j$ we choose $p_j=(0,\ldots,0,-1/j)\in M_{1}\times L_{1}.$ We take the liberty to abuse the notation and consider $p_j=(0,\ldots,0,-1/j)\in L_{1}.$ Now we define $q_{j}=G^{-1}(p_{j})\in V\cap \Omega$ and
\[f_j(z)=\frac{K_{\Omega}(z,q_j)}{\sqrt{K_{\Omega}(q_j,q_j)}}.\]
One can check that $\{f_j\}$ is a bounded sequence of square integrable functions on $\Omega$ that converges to zero locally uniformly. Let us define
$\alpha_{j}=f_j\circ G^{-1}$ and $\beta_{1}=\beta\circ G^{-1}.$ Then i) in Proposition \ref{prop1} implies that $H^{V\cap \Omega}_{R_{V\cap \Omega}(\beta)}R_{V\cap \Omega}$ is compact. In turn, Lemma \ref{lem2} implies that $H^{\Omega_{1}}_{\beta_{1}}$ is compact. Hence $\{H^{\Omega_{1}}_{\beta_{1}}(\alpha_{j})\}$ has a convergent subsequence. The strategy for the rest of the proof is to show that $\{H^{\Omega_{1}}_{\beta_{1}}(\alpha_{j})\}$ has no convergent subsequence, thus reaching a contradiction.
Since $\overline{\partial} (\beta \circ f )(q) \neq 0$ without loss of generality we may assume that $\left|\frac{\partial \beta_{1}}{\partial \bar z_{1}}\right|>0$ at the origin. Then there exist $0<r<1$ and a smooth function $0\leq \chi\leq 1$ on real numbers such that
\begin{itemize}
\item[i.] $\Delta^{n-k-1}_r\subset M_1,$
\item[ii.] $\chi(t)= 1$ for $|t|\leq r/2, \chi (t)= 0$ for $|t|\geq 3r/5,$
\item[iii.] $\left|\frac{\partial \beta_{1}}{\partial \bar z_{1}}\right|>0$ on $\Delta_{r}^{n}.$
\end{itemize}
Then $C=\int_{|z_1|<3r/4}\chi(|z_1|)dV(z_1) > 0.$ Let us define $\gamma$ on $\Omega_{1}$ so that
\[\gamma(z)\frac{\partial\beta_{1}(z)}{\partial
\bar z_{1}}=\chi(|z_1|)\cdots\chi(|z_n|),\]
and let $\langle \cdot,\cdot \rangle$ denote the standard pointwise inner product on forms in $\mathbb{C}^{n}.$ Furthermore, let $z=(z_{1},w)$ where $w=(z_{2},\ldots,z_{n})$ and $\alpha\in A^{2}(\Omega_{1}).$ Then using the mean value property for a holomorphic function $\alpha$ and for fixed $w\in \Delta^{n-1}_{3r/4}$ so that $\Delta_{r}\times \{w\} \subset M_{1}\times L_{1}$ we get
\begin{eqnarray*}
C\alpha(0,w)&=&\int_{|z_1|<3r/4}
\chi(|z_1|)\alpha(z_1,w)dV(z_1)\\
&=&\int_{|z_1|<3r/4}\gamma(z_1,w)\frac{\partial\beta_{1}
(z_1,w)}{\partial\bar
z_{1}}\alpha(z_1,w)dV(z_1)
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
\int_{|z_1|<3r/4}\gamma(z_1,w)\frac{\partial\beta_{1}(z_1,w)}{\partial\bar z_{1}}\alpha(z_1,w)dV(z_1)
&=&\int_{|z_1|<3r/4}\langle\alpha\overline{\partial} \beta_{1} , \bar \gamma d\bar z_1\rangle dV(z_1)\\
&=&\int_{|z_1|<3r/4} \langle \overline{\partial}\dbar^{*}N^{\Omega_{1}}(\alpha\overline{\partial} \beta_{1}),\bar \gamma d\bar z_1\rangle dV(z_1)\\
&=&\int_{|z_1|<3r/4}\frac{\partial H^{\Omega_{1}}_{\beta_{1}}(\alpha)}{\partial \bar z_{1}} \gamma dV(z_1)\\
&=&-\int_{|z_1|<3r/4}H^{\Omega_{1}}_{\beta_{1}}(\alpha) \frac{\partial \gamma}{\partial \bar z_{1}} dV(z_1).
\end{eqnarray*}
Therefore, we have
\begin{eqnarray*}
|\alpha(0,w)|
&\lesssim&
\left(\int_{|z_1|<3r/4}|H^{\Omega_{1}}_{\beta_{1}}(\alpha
)|^{2}dV(z_1)\right)^{1/2}.
\end{eqnarray*}
If we square both sides we get
\begin{eqnarray*}
\left| \alpha(0,w) \right|^2\lesssim
\int_{|z_1|<3r/4}|H^{\Omega_{1}}_{\beta_{1}}(\alpha
)(z_{1},w)|^{2}dV(z_1).
\end{eqnarray*}
Since $\left| \alpha(0,w) \right|^2$ is subharmonic, when we integrate over $(z_{2},\cdots, z_{n-k-1})\in \Delta^{n-k-2}_{3r/4}$ we get
\begin{eqnarray}\label{EqnSlice}
\left| \alpha(0,z'') \right|^2\lesssim
\int_{z'\in \Delta^{n-k-1}_{3r/4}}|H^{\Omega_{1}}_{\beta_{1}}(\alpha
)(z',z'')|^{2}dV(z').
\end{eqnarray}
The above inequality applied to $\alpha_{j}$ implies that $\alpha_{j}|_{L_{1}}\in L^{2}(L_{1})$. Now we use the reproducing property of $K_{L_{1}}$ on $\alpha_{j}|_{L_{1}}$ to get
\[\alpha_{j}(p_{j})=\int_{L_{1}}K_{L_{1}}(p_{j},z)\alpha_{j}|_{L_1}(z)dV(z) .\]
The Cauchy-Schwarz inequality implies that $| \alpha_{j}(p_{j})|\leq \|\alpha_{j}|_{L_{1}}\|_{L^{2}(L_{1})}\| K_{L_{1}}(p_{j},\cdot)\|_{L^{2}(L_{1})} $. On the other hand, $\| K_{L_{1}}(p_{j},\cdot)\|_{L^{2}(L_{1})}=\sqrt{ K_{L_{1}}(p_{j},p_{j})}. $ So we have
\[ \|\alpha_{j}|_{L_{1}}\|_{L^{2}(L_{1})} \geq \frac{| \alpha_{j}(p_{j})|}{ \sqrt{K_{L_{1}}(p_{j},p_{j})}}=\sqrt{\frac{K_{\Omega}(q_{j},q_{j})}{K_{L_{1}}(p_{j},p_{j})}}.\]
Since $L_{1}$ is a ball in $\mathbb{C}^{k+1}$ and the rank of the Levi form for $\Omega$ (and hence for $\Omega_{1}$) is at least $k,$ the asymptotics of the Bergman kernel on balls and Proposition \ref{PropFu} imply the following inequalities:
\[ \frac{1}{(d_{bL_{1}}(p_{j}))^{k+2}}\lesssim K_{L_{1}}(p_{j},p_{j}) \lesssim \frac{1}{(d_{bL_{1}}(p_{j}))^{k+2}},\]
\[\frac{1}{(d_{b\Omega}(q_{j}))^{k+2}}\lesssim K_{\Omega}(q_{j},q_{j}).\]
We note that $p_{j}$ and $q_{j}$ are related by a diffeomorphism. So for large enough $j$ $d_{b\Omega_{1}}(p_{j})=d_{bL_{1}}(p_{j})$ and they are comparable to $d_{b\Omega}(q_{j}).$ Therefore, there exists $\tilde \xi>0$ such that $\tilde \xi <\|\alpha_{j}|_{L_{1}}\|_{L^{2}(L_{1})}$ for all $j$. Since $\{\alpha_{j}\}$ converges to 0 locally uniformly this implies that $\{\alpha_{j}|_{L_{1}}\}$ has no convergent subsequence in $L^{2}(L_{1}).$ Also \eqref{EqnSlice} applied to $\alpha_j-\alpha_k$ implies that
\[\|\alpha_{j}|_{L_{1}}-\alpha_{k}|_{L_{1}}\|_{L^{2}(L_{1})} \lesssim \|H^{\Omega_{1}}_{\beta_{1}}(\alpha_ {j}-\alpha_{k})\|_{L^{2}(\Omega_{1})}.\]
Hence $\{H^{\Omega_{1}}_{\beta_{1}}(\alpha_ {j})\} $ has no convergent subsequence in $L^{2}(\Omega_{1}).$ Therefore, we have reached a contradiction completing the first proof of Theorem \ref{ThmC2}.
\end{proof}
A weaker version of the following lemma appeared in \cite{FuStraube98}.
\begin{lemma}\label{LemAffine}
Let $\Omega$ be a convex domain in $\mathbb{C}^{n}$ and $f:\Delta\to b\Omega$ be a non-constant holomorphic map. Then the convex hull of $f(\Delta)$ is an affine analytic variety contained in $b\Omega.$
\end{lemma}
\begin{proof}
Let $K$ be the convex hull of $f(\Delta)$ in $\mathbb{C}^n.$ First we will show that $K$ is an analytic affine variety. By definition $K$ is an affine set in $\mathbb{C}^n.$ Let $F(z,w,t)=tf(z)+(1-t)f(w)$ for $(z,w)\in \Delta^2$ and $0<t<1.$ One can check that
\[K=\{F(z,w,t):(z,w,t)\in \Delta^2\times (0,1)\}.\]
If $K$ is open in $\mathbb{C}^n$ then we are done. Otherwise, there exists $p\in K$ which is a boundary point and, by convexity, there exists $(z_0,w_0,t_0)\in\Delta^2\times (0,1)$ such that after possible rotation and translation $p=F(z_0,w_0,t_0)$ is the origin and $K\subset \{x_n\leq 0\}.$ Let us define $g=Re(z_{n}\circ F):\Delta^2\times(0,1)\to \mathbb{R}.$ Then $g(z_0,w_0,t_0)=0$ and $g(\Delta^2\times(0,1))\subset \{x\in \mathbb{R}: x\leq 0\}.$ Maximum principle applied to the harmonic function $g$ implies that $g\equiv 0.$ Hence $K \subset \{x_n=0\}.$ Since $f$ is holomorphic, $f'$ must stay in the complex tangent subspace of $\{x_n=0\}.$ That is,
\begin{equation} \label{EqnConvex}
f'(p)\subset \text{span}\left\{\frac{\partial}{\partial z_1},\ldots, \frac{\partial}{\partial z_{n-1}}\right\} \text{ for every } p\in \Delta.
\end{equation}
Now it is easy to see that \eqref{EqnConvex} implies that $K\subset \{z_n=0\}.$ So we have demonstrated that if $K$ is not an $n$ dimensional analytic affine variety then it is contained in an $n-1$ dimensional analytic affine variety. We use the above argument multiple times if necessary to show that $K$ is open in an analytic affine variety. Hence $K$ is an analytic affine variety.
Now we will show that $K$ is contained in $b\Omega.$ Since $K$ and $\Omega$ are convex after some possible rotation and translation, we can assume that $f(0)$ is the origin and $f(\Delta)\subset \overline{\Omega} \subset \{x_{n}\leq 0\} .$ Since $\emptyset \neq f(\Delta)\subset K\cap b\Omega$ the set $K$ is not an open set in $\mathbb{C}^{n}$. Then, as in the above paragraph, one can show that $K\subset \{x_n=0\}\cap \overline{\Omega} \subset b\Omega.$ This completes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{ThmConvexCn}]
The proof will be very similar to the first part and the proof of $(1) \Rightarrow (2)$ in \cite{FuStraube98}. So we will just sketch the proof and point out differences. Let us assume that $H^{\Omega}_{\beta}$ is compact and that there exists a nonconstant holomorphic map $ f:\Delta \to b\Omega$ for which $\beta\circ f$ is not holomorphic. We can choose $p\in \Delta$ such that $|\overline{\partial} (\beta\circ f) (p)|>0.$ By applying translation and rotation, if necessary, we may assume that $f(p)=0, f'(p)=(1,0,\ldots,0),$ and the positive $x_{n}$-axis is the outward normal for $b\Omega$ at $0.$ Using Lemma \ref{LemAffine} with scaling, if necessary, we may assume that $\{(z,0,\ldots,0)\in \mathbb{C}^n:|z|\leq 1\} \subset b\Omega$ and $|\frac{\partial \beta(0)}{\partial \bar z_{1}}|>0 .$ We define $$L=\{(z_2,\ldots,z_{n})\in \mathbb{C}^{n-1}:(0,z_{2},\ldots,z_{n})\in \Omega\}, $$ $p_j=(0,\ldots,-1/j)\in L,$ and $f_j(z)=\frac{K_{L}(z,p_j)}{\sqrt{K_{L}(p_j,p_j)}}.$ Using the proof of $(1) \Rightarrow (2)$ in \cite{FuStraube98} one can easily prove that $\{f_j\}$ is a bounded sequence in $A^2(L)$ such that $\{R_{\lambda L}(f_j)\},$ the restricted sequence of $\{f_j\}$ to $\lambda L,$ has no convergent subsequence in $A^2(\lambda L)$ for any $0<\lambda<1$. Then for each $j$ we extend $f_j$ to $\Omega$ using the Ohsawa-Takegoshi theorem \cite{OhsawaTakegoshi87} to get a bounded sequence $\{\alpha_j\}$ on $A^2(\Omega).$ Using arguments similar to those in the proof of Theorem \ref{ThmC2} and the fact that $\Delta_{1/2}\times \frac{1}{2}L \subset \Omega$ (this follows from convexity of $\Omega$) one can show that
\[\|f_j-f_k\|_{L^{2}(\frac{1}{2}L)} \lesssim \|H^{\Omega}_{\beta}(\alpha_j-\alpha_k)\|_{L^{2}(\Omega)}.\]
This contradicts the assumption that $H^{\Omega}_{\beta}$ is compact.
\end{proof}
\section{Proof of Theorem \ref{ThmConvex}}\label{ProofThmConvex}
We refer the reader to \cite[Proposition V.2.3]{D`AngeloIneqBook} for a proof of the following standard lemma.
\begin{lemma}\label{CompactEst}
Let $T:X\to Y$ be a linear operator between two Hilbert spaces $X$ and $Y$. Then $T$ is compact if and only if for every $\epsilon>0$ there exist a compact operator $K_{\epsilon}:X\to Y$ and $C_{\epsilon}>0$ so that
\[\|T(h)\|_Y\leq \epsilon\|h\|_X+C_{\epsilon}\|K_{\epsilon}(h)\|_Y \textrm{ for } h\in X.\]
\end{lemma}
\begin{proof}[Proof of Theorem \ref{ThmConvex}] Let $K$ denote the closure of the union of all analytic discs in $b\Omega.$ Let us choose a defining function $\rho$ for $\Omega$ so that $\|\nabla \rho\|=1$. Let $\beta=\beta_1+i\beta_2,$
\begin{eqnarray*}
\nu=\sum_{j=1}^{2}\frac{\partial \rho}{\partial x_j}\frac{\partial }{\partial x_j}+ \frac{\partial \rho}{\partial y_j} \frac{\partial}{\partial y_j}, \text{ and }
T=\sum_{j=1}^{2}\frac{\partial \rho}{\partial x_j}\frac{\partial }{\partial y_j}-\frac{\partial \rho}{\partial y_j} \frac{\partial}{\partial x_j}.
\end{eqnarray*}
For sufficiently small $\epsilon$ and $\xi\in b\Omega,$ let us define
\begin{eqnarray*}
\widetilde{\beta_1}(\xi+\epsilon\nu(\xi))=\beta_1(\xi)+\epsilon T(\beta_2)(\xi)\text{ and }
\widetilde{\beta_2}(\xi+\epsilon\nu(\xi))=\beta_2(\xi)-\epsilon T(\beta_1)(\xi).
\end{eqnarray*}
Then $\widetilde \beta=\widetilde{\beta_1}+i\widetilde{\beta_2}$ is a smooth function in a neighborhood of $b\Omega$ and it is equal to $\beta$ on the boundary of $\Omega.$ Let us extend $\widetilde\beta$ as a smooth function on $\overline{\Omega}$ and still call it $\widetilde{\beta}.$ One can check that $(\nu+iT)(\widetilde{\beta})=0$ on $b\Omega.$ That is, in some sense $\widetilde \beta$ is holomorphic along complex normal direction on the boundary. Let us define $\widehat{\beta}=\beta-\widetilde{\beta}$ on $\overline{\Omega}.$ Then $\widetilde{\beta}$ and $\widehat{\beta}$ are smooth functions on $\overline{\Omega}$ such that $\widehat{\beta}=0$ on $b\Omega$ and $\widetilde{\beta}$ is holomorphic on $K.$ Montel's theorem together with the fact that $\widehat{\beta}$ can be approximated by smooth functions supported away from the boundary imply that $H^{\Omega}_{\widehat{\beta}}$ is compact on $A^2(\Omega).$ In the rest of the proof we will show that $H^{\Omega}_{\widetilde{\beta}}$ is compact on $A^2(\Omega).$ Let $\{\psi_j\}$ be a sequence in $C^{\infty}_{(0,1)}(\overline{\Omega})$ such that $\psi_j=0$ in a neighborhood of $K$ for all $j$ and $\psi_j$ converges to $\overline{\partial} \widetilde{\beta}$ uniformly on $\overline{\Omega}.$ On the boundary, $\psi_j$'s are supported on sets that satisfy property $(P)$ (see \cite{FuStraube98} when $\Omega$ is convex).
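We indicate the computation behind the equality $(\nu+iT)(\widetilde{\beta})=0$ on $b\Omega$ used above. Since $T$ is tangential and $\widetilde{\beta_{1}}=\beta_{1},\widetilde{\beta_{2}}=\beta_{2}$ on $b\Omega$, we have $T\widetilde{\beta_{1}}=T\beta_{1}$ and $T\widetilde{\beta_{2}}=T\beta_{2}$ there, while differentiating the defining relations with respect to $\epsilon$ gives $\nu\widetilde{\beta_{1}}=T\beta_{2}$ and $\nu\widetilde{\beta_{2}}=-T\beta_{1}$ on $b\Omega$. Hence
\[
(\nu+iT)(\widetilde{\beta}) =\big(\nu\widetilde{\beta_{1}}-T\widetilde{\beta_{2}}\big) +i\big(\nu\widetilde{\beta_{2}}+T\widetilde{\beta_{1}}\big)=0 \quad \text{on } b\Omega.
\]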
In the following calculation $\langle.,.\rangle_{L^{2}(\Omega)}$ denotes the $L^2$ inner product on $\Omega$ and $N=N^{\Omega}.$ Now we will show that $H^{\Omega}_{\widetilde{\beta}}$ is compact. Let $g\in A^{2}(\Omega).$ Then we have
\begin{eqnarray*}
\langle \overline{\partial}^{*}N(g \overline{\partial} \widetilde{\beta}),\overline{\partial}^{*}N(g\overline{\partial}\widetilde{\beta}) \rangle_{L^{2}(\Omega)}&=& \langle N(g \overline{\partial} \widetilde{\beta}), g\overline{\partial} \widetilde{\beta} \rangle_{L^{2}(\Omega)}\\
&=& \langle N(g \overline{\partial} \widetilde{\beta}), g(\overline{\partial} \widetilde{\beta} -\psi_j)\rangle_{L^{2}(\Omega)} +\langle N(g \overline{\partial} \widetilde{\beta}), g\psi_j \rangle_{L^{2}(\Omega)}.
\end{eqnarray*}
Let us fix $\psi_j$. We choose $\psi\in C^{\infty}(\overline{\Omega})$ such that $0\leq \psi\leq 1,\psi\equiv 1$ on the support of $\psi_j$ and $\psi$ is supported away from $K.$ Then for $g\in A^{2}(\Omega)$ we have
\begin{equation} \label{EqnComp1}
|\langle N(g \overline{\partial} \widetilde{\beta}), g\psi_j \rangle_{L^{2}(\Omega)}| = |\langle \psi N(g \overline{\partial} \widetilde{\beta}), g\psi_j \rangle_{L^{2}(\Omega)}| \leq \|\psi N(g \overline{\partial} \widetilde{\beta})\|_{L^{2}(\Omega)} \|g\|_{L^{2}(\Omega)}.
\end{equation}
Let us choose finitely many balls $B_1,\ldots,B_m$ and $\phi_{j}\in C^{\infty}_{0}(B_{j})$ for $j=0,1,\ldots,m$ (we take $B_{0}=\Omega$ here) such that
\begin{itemize}
\item[i.] $\sum_{j=0}^{m}\phi_{j}=\psi$ on $\overline{\Omega},$
\item[ii.] $\Omega\cap B_j$ is a domain for $j=1,2,\ldots,m,$
\item[iii.] $\cup_{j=1}^mB_{j}$ covers the closure of the set $\{z\in b\Omega: \psi(z) \neq 0\},$
\item[iv.] $\Omega\cap B_j$ has a compact $\overline{\partial}$-Neumann operator for $j=1,2,\ldots,m.$
\end{itemize}
We note that multiplication with smooth functions preserves the domain of $\overline{\partial}^{*}$ and the $\overline{\partial}$-Neumann operator is compact on $B_{j}\cap \Omega$ for $j=1,\ldots, m.$ Compactness of $N$ implies the so-called compactness estimates (see for example \cite{FuStraube01}). Let $W^{-1}(\Omega)$ denote the Sobolev -1 norm for functions and forms. Then for every $\varepsilon>0$ there exists $ C_{\varepsilon}>0$ such that for $h\in L^{2}_{(0,1)}(\Omega)$ in the domains of $\overline{\partial}$ and $\overline{\partial}^{*}$ we have
\begin{eqnarray*}
\|\psi h \|_{L^{2}(\Omega)} &\leq &\sum_{j=0}^{m} \|\phi_{j}h\|_{L^{2}(\Omega)} \\
&\lesssim& \sum_{j=0}^{m} \varepsilon\Big( \|\overline{\partial} (\phi_{j}h)\|_{L^{2}(\Omega)}+ \|\overline{\partial}^{*} (\phi_{j}h)\|_{L^{2}(\Omega)} \Big)+ C_{\varepsilon}\|\phi_{j}h\|_{W^{-1}(\Omega)}\\
&\lesssim& \varepsilon\Big( \|\overline{\partial} h\|_{L^{2}(\Omega)}+ \|\overline{\partial}^{*} h \|_{L^{2}(\Omega)}+\|h\|_{L^{2}(\Omega)} \Big)+ C_{\varepsilon}\|h\|_{W^{-1}(\Omega)}.
\end{eqnarray*}
In the calculations above we used interior ellipticity for $j=0$ and the fact that multiplication by a smooth function is a continuous operator on Sobolev spaces. Now if we replace $h$ by $Nh$ and use the fact that $\|Nh\|_{L^{2}(\Omega)}+\|\overline{\partial} Nh\|_{L^{2}(\Omega)}+\|\overline{\partial}^{*}Nh\|_{L^{2}(\Omega)} \lesssim \|h\|_{L^{2}(\Omega)}$ we get
\begin{eqnarray*}
\|\psi Nh \|_{L^{2}(\Omega)}
&\lesssim& \varepsilon \|h \|_{L^{2}(\Omega)}+ C_{\varepsilon}\|Nh\|_{W^{-1}(\Omega)} \text{ for } h\in L^{2}_{(0,1)}(\Omega).
\end{eqnarray*}
Then Lemma \ref{CompactEst} implies that $\psi N$ is compact on $L^{2}_{(0,1)}(\Omega).$ Then using the small constant-large constant inequality $(2ab\leq \epsilon a^2+b^2/\epsilon)$ combined with the inequality above and \eqref{EqnComp1} we get that for any $\varepsilon >0$ there exists $C_{\varepsilon}>0$ such that
\begin{equation}\label{Eqn5}
|\langle N(g \overline{\partial} \widetilde{\beta}), g\psi_j \rangle_{L^{2}(\Omega)}| \leq \varepsilon \|g\|_{L^{2}(\Omega)}^2+C_{\varepsilon}\|N(g\overline{\partial} \widetilde{\beta})\|^2_{W^{-1}(\Omega)} \text{ for } g\in A^{2}(\Omega).
\end{equation}
Since $\psi_j$ converges to $\overline{\partial} \widetilde{\beta}$ uniformly on $\overline{\Omega}$ for every $\varepsilon>0$ there exists $\psi_j$ such that $|\langle N(g \overline{\partial} \widetilde{\beta}), g(\overline{\partial} \widetilde{\beta} -\psi_j)\rangle_{L^{2}(\Omega)}|\leq \varepsilon \|g\|_{L^{2}(\Omega)}^2.$ Furthermore, the last inequality together with \eqref{Eqn5} imply that there exists $C_{\varepsilon}>0$ such that
\[\|\overline{\partial}^{*}N(g \overline{\partial} \widetilde{\beta})\|_{L^{2}(\Omega)}^2= \| H^{\Omega}_{\widetilde{\beta}}(g)\|_{L^{2}(\Omega)}^2 \lesssim \varepsilon \|g\|_{L^{2}(\Omega)}^2+C_{\varepsilon} \|N(g\overline{\partial} \widetilde{\beta})\|^2_{W^{-1}(\Omega)} \text{ for } g\in A^{2}(\Omega).\]
The above inequality combined with Lemma \ref{CompactEst} and the fact that $W^{-1}(\Omega)$ imbeds compactly into $L^{2}(\Omega)$ imply that $H^{\Omega}_{\widetilde{\beta}}$ is compact on $A^2(\Omega).$ Therefore, $H^{\Omega}_{\beta}$ is compact.
\end{proof}
\section{Acknowledgement}
We would like to thank the referee and Emil Straube for helpful comments.
\singlespace
\def\mysection#1{\refstepcounter{section}\subsection{#1}}
\def\mysubsection#1{\subsubsection{#1}}
\def\arabic{equation}{\arabic{equation}}
\font\fourteenbf=cmbx10 scaled\magstep2
\def\normalsize{\normalsize}
\def\normalsize{\normalsize}
\def\note#1{}
\def\goodbreak\noindent{\bf Proof\quad}{\goodbreak\noindent{\bf Proof\quad}}
\def{\ $\hbox{$\sqcup$}\llap{\hbox{$\sqcap$}}$}\bigskip {{\ $\hbox{$\sqcup$}\llap{\hbox{$\sqcap$}}$}\bigskip }
\def\hbox{$\sqcup$}\llap{\hbox{$\sqcap$}}{\hbox{$\sqcup$}\llap{\hbox{$\sqcap$}}}
\def\displaystyle{\displaystyle}
\def\textstyle{\textstyle}
\def\scriptstyle{\scriptstyle}
\def\und#1{{\underline{#1}}}
\def\til#1{{\tilde{#1}}}
\newcommand{\nonumber}{\nonumber}
\def\alpha{\alpha}
\def\rho{\rho}
\def\delta{\delta}
\def\lambda{\lambda}
\def\mu{\mu}
\def\tau{\tau}
\def\left{\left}
\def\right{\right}
\def\otimes{\otimes}
\def{1\kern-.25em{\rm l}}{{1\kern-.25em{\rm l}}}
\def\triangle{\triangle}
\begin{document}
\begin{titlepage}
\rightline{NIKHEF 95-059}
\vskip 1.8 true cm
\begin{center}
\Large{\bf Link invariants from $N$-state vertex models: an
alternative construction independent of statistical models}
\\
\vspace{0.45in}
\normalsize\sc
M.J. Rodr\'\i guez-Plaza
\\
\vspace{0.3in}
\normalsize\em
NIKHEF,
Postbus 41882,
1009 DB Amsterdam, The Netherlands\\
\vspace{0.04in}
and\\
\vspace{0.04in}
\normalsize\sc
\normalsize\em
Institut f\"ur Theoretische Physik,
Universit\"at Heidelberg,
D-69120 Heidelberg, Germany
\footnote{current address}\\
\end{center}
\vspace{0.8in}
{\leftskip=1.5 true cm \rightskip=1.5 true cm
\noindent
We reproduce the hierarchy of link invariants associated to the series
of $N$-state vertex models with a method different from the original
construction due to Akutsu, Deguchi and Wadati. The alternative method
replaces the `crossing symmetry' property exhibited by the
Boltzmann weights of the vertex models with a similar property which,
for the purpose of constructing link invariants, encodes the same
information but requires only the limit of the Boltzmann
weights when the spectral parameter is sent to infinity. \par}
\vskip .8 true cm
\end{titlepage}
\setcounter{page}{2}
\mysection{Introduction}
\label{intro}
Starting from the $N$-state vertex models first introduced in
\cite{SAA}, Akutsu, Deguchi and Wadati show in \cite{ADW} that there
is a polynomial link invariant associated to each vertex model of the
series. The invariant corresponds to a Markov trace and is
therefore a link invariant of ambient isotopy for oriented links.
In particular for $N=2$ (the 6-vertex model) the {\em skein relation}
of the polynomial link invariant is given by
\begin{equation}
\alpha\left({{}\atop\epsfbox{sigma1.eps}}\right)=
(1-t)\,t^{1/2}\,\alpha\left({{}\atop\epsfbox{sigma0.eps}}\right)
+t^2\,\alpha\left({{}\atop\epsfbox{sigma-1.eps}}\right)
\label{polN2}
\end{equation}
that corresponds to the Jones's polynomial \cite{Jo}. For $N=3$ (the
19-vertex model) it is given by
\begin{eqnarray}
\alpha\left({{}\atop\epsfbox{sigma2.eps}}\right)&=&
t\,(1-t^2+t^3)\,\alpha\left({{}\atop\epsfbox{sigma1.eps}}\right)\nonumber\\
&&+t^2\,(t^2-t^3+t^5)\,\alpha\left({{}\atop\epsfbox{sigma0.eps}}\right)-
t^8\,\alpha\left({{}\atop\epsfbox{sigma-1.eps}}\right),
\label{polN3}
\end{eqnarray}
which is a one-variable specialization of the Kauffman polynomial
\cite{Kau1}; for $N=4$ (the 44-vertex model) the
relation is
\begin{eqnarray}
\alpha\left({{}\atop\epsfbox{sigma3.eps}}\right)&=&
t^{3/2}\,(1-t^3+t^5-t^6)\,\alpha\left({{}\atop\epsfbox{sigma2.eps}}\right)\nonumber\\
&&+ t^6\,(1-t^2+t^3+t^5-t^6+t^8)\,\alpha\left({{}\atop\epsfbox{sigma1.eps}}\right)
\nonumber\\
&&- t^{9/2}\,t^8\,(1-t+t^3-t^6)\,\alpha\left({{}\atop\epsfbox{sigma0.eps}}\right)-
t^{20}\,\alpha\left({{}\atop\epsfbox{sigma-1.eps}}\right),
\label{polN4}
\end{eqnarray}
that is again a one-variable polynomial. This sequence generalizes
for arbitrary $N$ to a polynomial defined by an $N$-th order skein
relation.
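Note that, dividing \eqref{polN2} by $t$, the $N=2$ relation can also be written in the two-term form
\[
t^{-1}\,\alpha\left({{}\atop\epsfbox{sigma1.eps}}\right)-
t\,\alpha\left({{}\atop\epsfbox{sigma-1.eps}}\right)=
(t^{-1/2}-t^{1/2})\,\alpha\left({{}\atop\epsfbox{sigma0.eps}}\right),
\]
which makes its identification with a skein relation of Jones type in the variable $t$ apparent.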
The object of this paper is to prove that the same $N=2, 3, 4$
polynomials (\ref{polN2})-(\ref{polN4}) are obtained with the link
invariant that we recall next. The case of generic $N$ can be
worked out by induction and its associated link invariant again
reproduces the result obtained in \cite{ADW}. The link invariant is
the following. Consider the plane projection of any classical link so
that the projected link diagram consists only of double crossings,
maxima, minima and vertical arcs and associate to each of these pieces
the following objects indexed by a finite index set $I$
\[
{{}\atop\epsfbox{pr.eps}}\longleftrightarrow\,\,R^a{}_c{}^b{}_d,
\qquad\qquad{{}\atop\epsfbox{pri.eps}}\longleftrightarrow
\,\,{R^{-1}}^a{}_c{}^b{}_d,
\]
\[{{}\atop\epsfbox{md.eps}}\longleftrightarrow\,\,M_{a\,b},
\qquad\qquad{{}\atop\epsfbox{mu.eps}}\longleftrightarrow\,\,M^{a\,b},
\qquad\qquad
{{}\atop\epsfbox{delta.eps}}\longleftrightarrow\,\,\delta^a{}_b.
\]
With this convention any link diagram $L$ is translated into its
corresponding expression $<L>$ in terms of the previous elements. Thus
we have, for example, that for the trefoil
\[
{{}\atop\epsfbox{trefoil1.eps}}\qquad\longleftrightarrow\qquad
{{}\atop\epsfbox{trefoil2.eps}}
\]
\[ <\, {\rm trefoil}\,>= M_{a\,b}\,M_{c\,d}\,R^b{}_e{}^c{}_f\,
{R^{-1}}^a{}_g{}^e{}_h\,{R^{-1}}^f{}_i{}^d{}_j\,M^{h\,i}\,M^{g\,j}
\]
and for the unknotted circle
${{}\atop\epsfbox{zero.eps}}=M_{a\,b}\,M^{a\,b}$, where sum over
repeated indices is always assumed.
It is well-known \cite{Kau2} \cite{KR} that if the objects
$R^a{}_c{}^b{}_d$, ${R^{-1}}^a{}_c{}^b{}_d$, $M_{a\,b}$, $M^{a\,b}$
have been chosen so that they satisfy the conditions
\begin{eqnarray}
&&M_{a\,b}\,M^{b\,c}=\delta_a{}^c=M^{c\,b}\, M_{b\,a},\qquad\quad
{{}\atop\epsfbox{inv1.eps}}\label{m}\\
&&R^a{}_c{}^b{}_d\, {R^{-1}}^c{}_e{}^d{}_f=\delta_e{}^a\,\delta_f{}^b=
{R^{-1}}^a{}_c{}^b{}_d\,R^c{}_e{}^d{}_f,\qquad\quad
{{}\atop\epsfbox{inv2.eps}}\label{r}\\
&&R^a{}_i{}^b{}_j\, R^j{}_k{}^c{}_f\, R^i{}_d{}^k{}_e=
R^b{}_i{}^c{}_j\, R^a{}_d{}^i{}_k\, R^k{}_e{}^j{}_f,
\qquad\quad{{}\atop\epsfbox{inv3.eps}}\label{braid}\\
&&{R^{-1}}^a{}_c{}^b{}_d=M^{a\,e}\,R^b{}_e{}^f{}_c\,M_{f\,d},
\qquad\quad{{}\atop\epsfbox{inv5.eps}}\label{twist1}\\
&&{R^{-1}}^a{}_c{}^b{}_d=M_{c\,e}\,R^e{}_d{}^a{}_f\,M^{f\,b},
\qquad\quad{{}\atop\epsfbox{inv4.eps}}\label{twist2}
\end{eqnarray}
then $<L>$ is an invariant of regular isotopy for unoriented link
diagrams. Equations (\ref{m}) and (\ref{r}) require that the matrices
$M_u=(M^{a\,b})$ and $M_d=(M_{a\,b})$ be inverse to each other, and
likewise that $R$ and $R^{-1}$ be mutually inverse (as the notation already
anticipates); (\ref{braid}) is the {\em Yang-Baxter relation} of which
$R$ has to be a solution (we will refer to any solution of this
equation as an $R$-matrix). The mixed conditions (\ref{twist1}) and (\ref{twist2})
are `crossing symmetry'-like properties demanded of $R$.
The paper is organized as follows. First we solve eqs
(\ref{twist1})-(\ref{twist2}) using as initial input the $R$-matrices
written in (\ref{R2}), (\ref{R3}) and (\ref{R4}) respectively, where
$Z$ is a constant to be determined. The solution to these equations
gives the matrices $M_u$ and $M_d$ corresponding to each input $R$ and
constitutes the main result of Section~\ref{two}. Section~\ref{three}
lists sufficient conditions for the link invariant $<L>$ to behave as
a Markov trace. In this and the following section $L$, the link
diagram, is represented as the closure of a braid since braids are
algebraically more manageable than projections of links.
Section~\ref{four} checks that for the three sets of matrices $R$,
$M_u$ and $M_d$ under consideration, the corresponding link invariant
$<L>$ behaves as a Markov trace and constructs the ambient isotopy
invariant that derives from such a Markov trace. The results are
precisely the link invariants (\ref{polN2}), (\ref{polN3}) and
(\ref{polN4}) obtained in \cite{ADW} through the $N=2,3,4$ state
vertex models. Our conclusions are presented in
Section~\ref{conclusions}. There are other methods known to provide
(part or the whole series of) $N$-state invariants, such as the Kauffman
bracket approach and state models \cite{Kau1} \cite{Kau3} and the
work of Kirillov and Reshetikhin \cite{KR}. Section~\ref{six} explains
the relation between $<L>$ and these methods. The last section
contains some remarks that we find of interest.
The way in which Akutsu {\em et al} derived their result is different
from the method that we follow here. The construction of a Markov
trace in \cite{ADW} relies on the crossing-symmetry property exhibited
by the matrix $R(u)$ of Boltzmann weights of the solvable $N$-state
vertex model, and on the non-trivial expression of the crossing
multipliers. To deduce these crossing multipliers, reference
\cite{ADW} uses the explicit dependence of $R(u)$ on the spectral
parameter $u$. In the method followed here it is not essential to know
how $R(u)$ depends on $u$, because the information contained in the
crossing multipliers that is useful to write a Markov trace is replaced by
the information encoded in eqs (\ref{twist1}) and (\ref{twist2}). It
was precisely the similarity of the latter two equations to the
crossing symmetry property (\ref{crossing}) that originally motivated
this work. There exist, then, methods other than vertex models
(via $<L>$ for instance) to obtain the link polynomials
(\ref{polN2})-(\ref{polN4}), methods which reproduce
independently the results originally obtained with statistical models.
The origin of matrices written in (\ref{R2}), (\ref{R3}) and
(\ref{R4}) is this: they are the limit $R=Z\,\lim_{u\to\infty}
R(u)/\rho(u)$ taken on the matrix $R(u)$ constructed, as we have said,
with the Boltzmann weights of the $N=2,3,4$ vertex model. The
denominator $\rho(u)$ is a function evaluated with each particular
$R(u)$. At the risk of being repetitive we emphasize once more that
in this paper we work not with $R(u)$ but with its limit, and
that the limit is enough to write the invariant $<L>$.
\mysection{Solving the $N=2,3,4$ cases}
\label{two}
\mysubsection{$N=2$ case}
Let us consider the matrix $R(u)$
constructed with the Boltzmann weights of the $N=2$ vertex model
as in \cite{ADW}
\begin{eqnarray}
R(u)=\left(\begin{array}{cccc}
\sinh(\lambda-u)&0&0&0\\
0&e^{\displaystyle{2\,\mu\,u}}\sinh\lambda&\sinh u&0\\
0&\sinh u&e^{\displaystyle{-2\,\mu\,u}}\sinh\lambda&0\\
0&0&0&\sinh(\lambda-u)
\label{spectN2}
\end{array}\right)
\end{eqnarray}
(throughout this paper we use $R^i{}_l{}^j{}_k(u)$ to denote what in
\cite{ADW} is denoted by $S^i{}_j{}^k{}_l(u)$). Here $u$ represents
the spectral parameter of the vertex model and $\lambda$ and $\mu$ are
arbitrary constants. This matrix is a solution of the Yang-Baxter
equation with spectral parameter
\begin{eqnarray}
R^a{}_i{}^b{}_j(u)\, R^j{}_k{}^c{}_f(u+v)\, R^i{}_d{}^k{}_e(v)=
R^b{}_i{}^c{}_j(v)\, R^a{}_d{}^i{}_k(u+v)\, R^k{}_e{}^j{}_f(u)
\label{YBE}
\end{eqnarray}
where $a,b,c,\ldots$ are indices in the set $I=\{-1/2,\,1/2\}$, and
it satisfies the relation $R(u)\,R(-u)=\rho(u)\,\rho(-u)$, with
$\rho(u)\equiv\sinh(\lambda-u)$. It also satisfies the crossing symmetry
property
\begin{equation}
R^i{}_k{}^j{}_l(u)=\left({r(i)\,r(k)\over r(j)\,r(l)}\right)^{1/2}
R^j{}_{-i}{}^{-l}{}_k(\lambda-u)
\label{crossing}
\end{equation}
with crossing multipliers given by
$r(p)=e^{\displaystyle{-2\,\mu\,\lambda\,p}}$ for every $p$ in $I$.
The limit $R=Z\,\lim_{u\to\infty} R(u)/\rho(u)$ is well defined and
given by the invertible matrix with expression
\begin{eqnarray}
R=Z\pmatrix{
1&0&0&0\cr
0&1-q^2&q&0\cr
0&q&0&0\cr
0&0&0&1\cr}
\label{R2}
\end{eqnarray}
in terms of the parameter $q\equiv -e^\lambda$. This matrix here is
obtained with the assumption $\mu=1/2$ in (\ref{spectN2}) and satisfies
the Yang-Baxter relation (\ref{braid}).
With the $R$-matrix (\ref{R2}) and its inverse we now determine
the value of the constant $Z$ with the condition that there exist
non-singular matrices $M_u$ and $M_d$, inverse to each other, such
that they satisfy conditions (\ref{twist1}) and (\ref{twist2}). It is
not difficult to see that for generic $q$ these equations have a
unique solution given by
\begin{eqnarray}
M_u=\left(\begin{array}{cc}
0 & q^{1/2} \\
-q^{-1/2} & 0
\end{array}\right),
\qquad M_d=- M_u,\qquad Z=\pm\, q^{-1/2},
\label{M2}
\end{eqnarray}
where we have used the freedom to choose a multiplicative constant in
$M_u$, say, to make both matrices have determinant equal to one. This is
done for simplicity merely because the value of the multiplicative
constant is not relevant since it does not affect the link invariant
$<L>$ derived from the solutions of (\ref{twist1})-(\ref{twist2}).
This solution yields for the zero knot the value of the invariant
${{}\atop\epsfbox{zero.eps}}=M_{a\,b}\,M^{a\,b}={\rm
tr}\,(M_u\,M_d^t)=-(q+q^{-1})$ where the superscript $t$ indicates
transpose matrix. We postpone until Section~\ref{four} the
final expression of the link invariant associated to (\ref{R2}) and
(\ref{M2}).
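As a small explicit check, which we spell out here because the same product
enters all the trace computations below, the solution (\ref{M2}) gives
\[
M_u\,M_d^t=-M_u\,M_u^t=
-\left(\begin{array}{cc}
q & 0 \\
0 & q^{-1}
\end{array}\right),
\]
whose trace indeed reproduces the value $-(q+q^{-1})$ quoted above.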
\mysubsection{$N=3$ case}
In this case the $R$-matrix considered as input
to solve eqs (\ref{twist1}) and (\ref{twist2}) is
given by the invertible matrix
\begin{eqnarray}
R=Z\left(\begin{array}{ccccccccc}
1&0&0&0&0&0&0&0&0\\
0&1-q^4&0&-q^2&0&0&0&0&0\\
0&0&(1-q^2)\,(1-q^4)&0&q\,(1-q^4)&0&q^4&0&0\\
0&-q^2&0&0&0&0&0&0&0\\
0&0&q\,(1-q^4)&0&q^2&0&0&0&0\\
0&0&0&0&0&1-q^4&0&-q^2&0\\
0&0&q^4&0&0&0&0&0&0\\
0&0&0&0&0&-q^2&0&0&0\\
0&0&0&0&0&0&0&0&1
\end{array}\right).
\,\label{R3}
\end{eqnarray}
This matrix is $Z$ times the $\lim_{u\to\infty} R(u)/\rho(u)$ where now
$R(u)$ is the spectral parameter dependent solution of (\ref{YBE})
with entries the Boltzmann weights of the $N=3$ vertex model given
by \cite{ADW}
\begin{eqnarray}
&&R^1{}_1{}^1{}_1(u)=R^{-1}{}_{-1}{}^{-1}{}_{-1}(u)=
\sinh(\lambda-u)\,\sinh(2\,\lambda-u),\nonumber\\
&&R^1{}_{-1}{}^{-1}{}_1(u)=R^{-1}{}_1{}^1{}_{-1}(u)=
\sinh u\,\sinh(\lambda+u),\nonumber\\
&&e^{\displaystyle{4\,\mu\,u}}\,R^1{}_1{}^{-1}{}_{-1}(u)=
e^{\displaystyle{-4\,\mu\,u}}\,R^{-1}{}_{-1}{}^1{}_1(u)=
\sinh \lambda\,\sinh 2\,\lambda,\nonumber\\
&&R^1{}_0{}^0{}_1(u)=R^{-1}{}_0{}^0{}_{-1}(u)=
R^0{}_1{}^1{}_0(u)=R^0{}_{-1}{}^{-1}{}_0(u)=
\sinh u\,\sinh(\lambda-u),\nonumber\\
&&e^{\displaystyle{2\,\mu\,u}}\,R^1{}_1{}^0{}_0(u)=
e^{\displaystyle{-2\,\mu\,u}}\,R^{-1}{}_{-1}{}^0{}_0(u)=
e^{\displaystyle{-2\,\mu\,u}}\,R^0{}_0{}^1{}_1(u)
\label{N3}\\
&&\qquad\qquad\qquad =e^{\displaystyle{2\,\mu\,u}}\,R^0{}_0{}^{-1}{}_{-1}(u)=
\sinh 2\,\lambda\,\sinh(\lambda-u),\nonumber\\
&&e^{\displaystyle{-2\,\mu\,u}}\,R^0{}_{-1}{}^0{}_1(u)=
e^{\displaystyle{2\,\mu\,u}}\,R^0{}_1{}^0{}_{-1}(u)=
e^{\displaystyle{2\,\mu\,u}}\,R^1{}_0{}^{-1}{}_0(u)\nonumber\\
&&\qquad\qquad\qquad = e^{\displaystyle{-2\,\mu\,u}}\,R^{-1}{}_0{}^1{}_0(u)
=\sinh 2\,\lambda\,\sinh u,\nonumber\\
&&R^0{}_0{}^0{}_0(u)=\sinh \lambda\,\sinh 2\,\lambda-\sinh u\,\sinh(\lambda-u)\nonumber
\end{eqnarray}
and zero the rest of the entries. In this case the index set is fixed
as $I=\{-1,\,0,\, 1\}$; the quantities $R^a{}_c{}^b{}_d$ are arranged
in matrix form so that the left indices label the block and the right
ones the block entries. Solution (\ref{N3}) satisfies
$R(u)\,R(-u)=\rho(u)\,\rho(-u)$ where now
$\rho(u)\equiv\sinh(\lambda-u)\,\sinh(2\,\lambda-u)$ is different from the
$N=2$ case but obtained with the same relation, and also satisfies
property (\ref{crossing}) from above with the same crossing
multipliers. As in the case $N=2$, the matrix in (\ref{R3}) is
obtained from (\ref{N3}) with the choice $\mu=1/2$ and the
substitution $q=-e^\lambda$.
For generic values of $q$ there is a unique solution (again unique up
to a multiplicative constant) to eqs
(\ref{twist1}) and (\ref{twist2}) corresponding to the matrix
in (\ref{R3}) and its inverse. The solution is given by
\begin{eqnarray}
M_u=\left(\begin{array}{ccc}
0 & 0 & q \\
0 & -1 & 0 \\
q^{-1} & 0 & 0
\end{array}\right),\qquad M_d=M_u, \qquad Z=\pm\, q^{-2}.
\label{M3}
\end{eqnarray}
The invariant associated to the unknot is in this case
equal to ${\rm tr}\,(M_u\,M_d^t)=q^2+1+q^{-2}$.
\mysubsection{$N=4$ case}
The non-zero entries of the $R$-matrix
$R=Z\,\lim_{u\to\infty} R(u)/\rho(u)$, where the entries of $R(u)$ are the
Boltzmann weights of the $N=4$ vertex model \cite{ADW}, are given by (now
$\rho(u)\equiv\sinh(\lambda-u)\,\sinh(2\,\lambda-u)\,\sinh(3\,\lambda-u)$ and
$\mu=1/2$, $q=-e^\lambda$ as usual)
\begin{eqnarray}
&&R^{3/2}{}_{3/2}{}^{3/2}{}_{3/2}=
R^{-3/2}{}_{-3/2}{}^{-3/2}{}_{-3/2}=Z\nonumber\\
&&R^{-3/2}{}_{3/2}{}^{3/2}{}_{-3/2}=
R^{3/2}{}_{-3/2}{}^{-3/2}{}_{3/2}=Z\,q^9\nonumber\\
&&R^{-3/2}{}_{-3/2}{}^{3/2}{}_{3/2}=Z\,(1-q^2)\,(1-q^4)\,(1-q^6)\nonumber\\
&&R^{3/2}{}_{1/2}{}^{1/2}{}_{3/2}=
R^{-3/2}{}_{-1/2}{}^{-1/2}{}_{-3/2}=
R^{1/2}{}_{3/2}{}^{3/2}{}_{1/2}=
R^{-1/2}{}_{-3/2}{}^{-3/2}{}_{-1/2}=Z\,q^3\nonumber\\
&&R^{1/2}{}_{-3/2}{}^{-3/2}{}_{1/2}=
R^{-1/2}{}_{3/2}{}^{3/2}{}_{-1/2}=
R^{-3/2}{}_{1/2}{}^{1/2}{}_{-3/2}=
R^{3/2}{}_{-1/2}{}^{-1/2}{}_{3/2}=Z\,q^6\nonumber\\
&&R^{-3/2}{}_{-3/2}{}^{-1/2}{}_{-1/2}=
R^{1/2}{}_{1/2}{}^{3/2}{}_{3/2}=Z\,(1-q^6)\nonumber\\
&&R^{1/2}{}_{-3/2}{}^{-1/2}{}_{3/2}=
R^{-3/2}{}_{1/2}{}^{3/2}{}_{-1/2}=Z\,q^4\,(1-q^6)\label{R4}\\
&&R^{-3/2}{}_{-1/2}{}^{1/2}{}_{-1/2}=
R^{1/2}{}_{-1/2}{}^{1/2}{}_{3/2}=
R^{-1/2}{}_{1/2}{}^{3/2}{}_{1/2}=
R^{-1/2}{}_{-3/2}{}^{-1/2}{}_{1/2}\nonumber\\
&&\qquad\qquad\qquad=Z\,q^3\,(1-q^4)\,(q^2+1+q^{-2})^{1/2}\nonumber\\
&&R^{-3/2}{}_{-3/2}{}^{1/2}{}_{1/2}=
R^{-1/2}{}_{-1/2}{}^{3/2}{}_{3/2}=Z\,(1-q^4)\,(1-q^6)\nonumber\\
&&R^{-1/2}{}_{-3/2}{}^{1/2}{}_{3/2}=
R^{-3/2}{}_{-1/2}{}^{3/2}{}_{1/2}=Z\,q\,(1-q^4)\,(1-q^6)\nonumber\\
&&R^{1/2}{}_{1/2}{}^{1/2}{}_{1/2}=
R^{-1/2}{}_{-1/2}{}^{-1/2}{}_{-1/2}=Z\,q^4\nonumber\\
&&R^{1/2}{}_{-1/2}{}^{-1/2}{}_{1/2}=
R^{-1/2}{}_{1/2}{}^{1/2}{}_{-1/2}=Z\,q^5\nonumber\\
&&R^{-1/2}{}_{-1/2}{}^{1/2}{}_{1/2}=Z\,q^2\,(1-q^4)\,(1+q^2),\nonumber
\end{eqnarray}
where now $I=\{-3/2,\,-1/2,\,1/2,\,3/2\}$. Using these matrix elements
as input data in (\ref{twist1}) and (\ref{twist2}) the unique solution
(up to a multiplicative constant) to these equations for generic $q$
is
\begin{eqnarray}
M_u=\left(\begin{array}{cccc}
0 & 0 & 0 & q^{3/2} \\
0 & 0 & -q^{1/2} & 0 \\
0 &q^{-1/2} & 0 & 0 \\
-q^{-3/2} & 0 & 0 & 0
\end{array}\right),\qquad M_d=-M_u, \qquad Z=\pm\, q^{-9/2}
\label{M4}
\end{eqnarray}
which gives for the unknot the invariant ${{}\atop\epsfbox{zero.eps}}=
{\rm tr}\,(M_u\,M_d^t)=-(q^3+q+q^{-1}+q^{-3})$.
\mysection{Conditions for the link invariant $<L>$ to be a Markov trace}
\label{three}
We study in this section under which conditions $<L>$ is an invariant
of ambient isotopy in addition to regular isotopy. Just for
convenience we will regard every link diagram $L$ as the closure of a
certain braid $A$ in the $n$-string braid group $B_n$. {}From the
picture
\[
{{}\atop\epsfbox{markov.eps}}
\]
we easily see that the invariant associated to $L$ is
\begin{eqnarray*}
<L>&=&A^{a_1}{}_{b_1}^{\ldots}{}_{\ldots}^{\,a_n}{}_{b_n}
\,(M^{{b_1}\,{c_1}}\,M_{{a_1}\,{c_1}})
\cdots (M^{{b_n}\,{c_n}}\,M_{{a_n}\,{c_n}})\\
&=&A^{a_1}{}_{b_1}^{\ldots}{}_{\ldots}^{\,a_n}{}_{b_n}
\,(M_u\,M_d^t)^{b_1}{}_{a_1}\cdots (M_u\,M_d^t)^{b_n}{}_{a_n}\\
&=&{\rm tr}\, (A\,(M_u\,M_d^t)^{\otimes^n}).
\end{eqnarray*}
As mentioned $A$ represents an element of the $n$-string braid
group $B_n$ generated by $\{1,\, b_i\}$, $i=1,\ldots, n-1$. The matrix
representation of the elementary braids
\[
{{}\atop\epsfbox{generator.eps}}
\]
that we shall be using to write each $A$ is given in terms of
$R$-matrices by
\begin{equation}
b_i=\underbrace{
{1\kern-.25em{\rm l}}\otimes\cdots\otimes R\otimes\cdots{1\kern-.25em{\rm l}}}_{n\; {\rm times}}\,,\qquad\quad
b_i^{-1}={1\kern-.25em{\rm l}}\otimes\cdots\otimes R^{-1}\otimes\cdots{1\kern-.25em{\rm l}}
\label{bi}
\end{equation}
where $R$ or its inverse $R^{-1}$ are placed in the $(i,i+1)$ entry of
the tensor product and are given by (\ref{R2}), (\ref{R3}) or
(\ref{R4}), respectively. In this formula ${1\kern-.25em{\rm l}}$ denotes the unit
matrix of dimensions $N\times N$. The trace `tr' in $<L>={\rm tr}\,
(A\,(M_u\,M_d^t)^{\otimes^n})$ is the ordinary trace of matrices taken
in this representation. The fact that $R$ satisfies the Yang-Baxter
relation (\ref{braid}) is equivalent to saying that the generators
$\{b_i\}$ satisfy $b_i\,b_{i+1}\,b_i=b_{i+1}\,b_i\,b_{i+1}$, $1\le
i\le n-2$ that together with $b_i\,b_j=b_j\,b_i$, $|i-j|\ge 2$ coming
from (\ref{bi}) guarantee the topological equivalence between
different expressions of a braid in terms of the braid generators.
We now define on $B_n$ the functional $\phi:\,B_n\longrightarrow \,
{\cal C}$ by
\begin{equation}
\phi(A)\equiv {{\rm tr}\, (A\,(M_u\,M_d^t)^{\otimes^n})\over
({\rm tr}\, (M_u\,M_d^t))^n}
\label{markov}
\end{equation}
so that $\phi$ is basically $<L>$ but satisfies $\phi(1)=1$. We
investigate now under which conditions this functional $\phi$ written
as in (\ref{markov}) behaves as a {\em Markov trace}. For $\phi$ to
be called a Markov trace it must satisfy the following properties
\begin{description}
\item{(p1)} $\phi(A\,B)=\phi(B\,A)$ for all $A,\, B$ in $B_n$ and
\item{(p2)} $\phi(A\,b_n)=\tau\,\phi(A)$ and
$\phi(A\,b_n^{-1})=\bar\tau\,\phi(A)$ for $A$ in $B_n$ and
$b_n,\,b_n^{-1}$ in $B_{n+1}$ with $\tau$ and $\bar\tau$ constants
independent of $n$ and given by
\[
\tau=\phi(b_i),\qquad\bar\tau=\phi(b_i^{-1})\qquad {\rm for}\quad
{\rm all}\quad i.
\]
\end{description}
If $\phi$ is a Markov trace then it is possible to associate an
invariant of ambient isotopy for oriented links $\alpha'$ to it
given by the renormalization formula
\begin{equation}
\alpha'(A)=\left({1\over \tau\,\bar\tau}\right)^{(n-1)/2}\,
\left({\bar\tau\over\tau}\right)^{e(A)/2}\,\phi(A).
\label{ambientp}
\end{equation}
In this formula $e(A)$ is the writhe of the braid $A$ which is given
by the exponent sum of the generators $\{b_i\}$ in the braid so that
if, for instance, $A=b_2^3\, b_1^{-2}$ then $e(A)=3-2=1$.
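Note also, as a trivial example of (\ref{ambientp}), that the unknot represented
as the closure of the identity braid $1\in B_1$ has $e(1)=0$ and $\phi(1)=1$, so
that $\alpha'(1)=1$: the ambient isotopy invariant is normalized to unity on the
unknot.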
It is not difficult to prove that properties (p1)-(p2) above hold for
$\phi$ given by (\ref{markov}) when the objects $R,\,M_u$ and
$M_d$, subject already to restrictions (\ref{m})-(\ref{twist2}),
satisfy as well the following conditions
\begin{description}
\item{(c1)} \begin{equation}
R\,(M_u\,M_d^t)^{\otimes^2}=(M_u\,M_d^t)^{\otimes^2}\,R,
\label{c1}
\end{equation}
\end{description}
which in its pictorial form means that the crossings can be
pulled through the closure strands and
\begin{description}
\item{(c2)} \begin{equation}
(R\,(M_u\,M_d^t)^{\otimes^2})^a{}_b{}^c{}_c
\cdot\,{\rm tr}\,(M_u\,M_d^t)=
(M_u\,M_d^t)^a{}_b\cdot\,{\rm
tr}\,(R\,(M_u\,M_d^t)^{\otimes^2}),\label{c2}
\end{equation}
\end{description}
together with the similar relations that result from replacing $R$
with $R^{-1}$ in (\ref{c1}) and (\ref{c2}). As above $a,b,c$ are
elements of the index set $I$.
\begin{figure}
\[{{}\atop\epsfbox{c1.eps}}, \qquad\qquad {{}\atop\epsfbox{c2.eps}}\]
\caption{Conditions c1 and c2}
\end{figure}
The proof of conditions (c1), (c2) is as follows.
For ${\rm tr}\,(A\,B\,(M_u\,M_d^t)^{\otimes^n})= {\rm
tr}\,(B\,A\,(M_u\,M_d^t)^{\otimes^n})$ to hold it is enough to demand
that $A\,(M_u\,M_d^t)^{\otimes^n}=(M_u\,M_d^t)^{\otimes^n}\,A$ for any
$A$, which turns out to be equivalent to condition (\ref{c1}) since
any braid $A$ decomposes into a product of generators of $B_n$ and these
are expressed in terms of $R$. Condition (\ref{c2})
merely comes from $\phi$ written for the link
\[
{\epsfbox{Abn.eps}}
\]
and made equal to the product $\phi(A)\,\phi(b_i)$ where
$\phi(b_i)={\rm tr}\,(R\,(M_u\,M_d^t)^{\otimes^2})/({\rm tr}\,
(M_u\,M_d^t))^2$ according to definition (\ref{markov}).
\mysection{Ambient isotopy $N=2,3,4\ldots$ link invariant}
\label{four}
In this section we check that the link invariant $<L>$ behaves as a
Markov trace for the $N=2,3,4$ matrices $R$, $M_u$ and $M_d$, and
calculate the ambient isotopy invariant $\alpha'(\cdot)$ associated. We
compare the result with the ambient isotopy invariant $\alpha(\cdot)$
obtained in \cite{ADW} using vertex models.
\mysubsection{$N=2$ case}
It is simple to check that the matrices $R$, $M_u,\,M_d$ given in
(\ref{R2}) and (\ref{M2}) satisfy conditions (\ref{c1}) and (\ref{c2})
which indicates that $\phi$ defined as in (\ref{markov}) is indeed a
Markov trace. This allows us to construct the invariant (\ref{ambientp})
as follows. With the defining formulae
\[
\phi(b_i)={{\rm tr}\,(R\,(M_u\,M_d^t)^{\otimes^2})\over ({\rm tr}\,
(M_u\,M_d^t))^2},\qquad
\phi(b_i^{-1})={{\rm tr}\,(R^{-1}\,(M_u\,M_d^t)^{\otimes^2})\over ({\rm tr}\,
(M_u\,M_d^t))^2}
\]
and (\ref{R2}), (\ref{M2}) we obtain that
\[
\tau=\pm\, {q^{-1/2}\over q^2+1},\qquad
\bar\tau=\pm\, {q^{5/2}\over q^2+1}
\]
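(As an explicit intermediate step, included here as a check: with
$M_u\,M_d^t=-\,{\rm diag}\,(q,\,q^{-1})$ one has
$(M_u\,M_d^t)^{\otimes^2}={\rm diag}\,(q^2,\,1,\,1,\,q^{-2})$, so that
${\rm tr}\,(R\,(M_u\,M_d^t)^{\otimes^2})=Z\,(q^2+(1-q^2)+0+q^{-2})=Z\,(1+q^{-2})$;
dividing by $({\rm tr}\,(M_u\,M_d^t))^2=(q+q^{-1})^2$ gives the value of $\tau$
written above, and $\bar\tau$ is obtained in the same way with $R^{-1}$.)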
Substituting these values in $\alpha'(\cdot)$ gives
\[
\alpha'(A)=(q+q^{-1})^{n-1}\,q^{3\,e(A)/2}\,\phi(A).
\]
A consequence of the minimal polynomial of $R$ in (\ref{R2})
\[
(R-Z)\,(R+q^2\,Z)=0,\qquad Z=\pm\, q^{-1/2}
\]
and the linearity of the trace function is the existence of a relation
between the numbers $\phi(b_i),\,\phi(1)$ and $\phi(b_i^{-1})$ that,
due to (\ref{ambientp}), implies a relation between
$\alpha'(b_i),\,\alpha'(1)$ and $\alpha'(b_i^{-1})$ given by
\[
\alpha'(b_i)=\mp\, q\,(q^2-1)\,\alpha'(1)+q^{4}\,\alpha'(b_i^{-1}).
\]
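Explicitly, an intermediate step left implicit above is the following: dividing the
minimal polynomial by $R$ gives $R=(1-q^2)\,Z\,{1\kern-.25em{\rm l}}+q^2\,Z^2\,R^{-1}$, hence
$\phi(A\,b_i)=(1-q^2)\,Z\,\phi(A)+q^2\,Z^2\,\phi(A\,b_i^{-1})$ for any braid $A$;
inserting this into (\ref{ambientp}) and using $e(A\,b_i)=e(A)+1$,
$e(A\,b_i^{-1})=e(A)-1$ together with $\bar\tau/\tau=q^3$ yields the relation above.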
With the substitution $t=q^2$ this equality transforms into
\begin{equation}
\alpha'\left({{}\atop\epsfbox{sigma1.eps}}\right)=\pm\,
(1-t)\,t^{1/2}\,\alpha'\left({{}\atop\epsfbox{sigma0.eps}}\right)
+t^2\,\alpha'\left({{}\atop\epsfbox{sigma-1.eps}}\right)
\label{polpN2}
\end{equation}
which reproduces the result obtained in \cite{ADW} and displayed in
(\ref{polN2}). The two link invariants calculated with this formula
(note that there is a $(\pm)$ in it) are in fact the same one because
when they are computed for a given knot or link they either
coincide or differ by a global sign. It can be said then that
(\ref{polpN2}) defines a unique link invariant. This
is, of course, the Jones polynomial.
\mysubsection{$N=3$ case}
In this case $\phi$ given by (\ref{markov}) is a Markov trace
too since $R$, $M_u$ and $M_d$ in (\ref{R3}) and (\ref{M3})
also satisfy (\ref{c1}) and (\ref{c2}). It only remains to proceed as in
the case $N=2$ and compute $\tau$ and $\bar\tau$, which turn out to be
\[
\tau=\pm\, {q^{-2}\over q^4+q^2+1},\qquad
\bar\tau=\pm\, {q^{6}\over q^4+q^2+1}
\]
thus providing the invariant
\begin{equation}
\alpha'(A)=(q^2+1+q^{-2})^{n-1}\,q^{4\,e(A)}\,\phi(A).
\label{ambient3}
\end{equation}
The minimal polynomial of $R$ in (\ref{R3}) given now by
\[
(R-Z)\,(R+q^4\,Z)\,(R-q^6\,Z)=0,\qquad Z=\pm\, q^{-2}
\]
fixes a relation between $\phi(b_i^2),\,\phi(b_i),\,\phi(1)$ and
$\phi(b_i^{-1})$ that translates into a relation among the $\alpha'$ on
the same arguments. After some elementary algebra this relation
reads as ($t=q^2$)
\begin{eqnarray}
\alpha'\left({{}\atop\epsfbox{sigma2.eps}}\right)&=&
\pm\,t\,(1-t^2+t^3)\,\alpha'\left({{}\atop\epsfbox{sigma1.eps}}\right)\nonumber\\
\label{polpN3}\\
&&+t^2\,(t^2-t^3+t^5)\,\alpha'\left({{}\atop\epsfbox{sigma0.eps}}\right)
\mp\,t^8\,\alpha'\left({{}\atop\epsfbox{sigma-1.eps}}\right),\nonumber
\end{eqnarray}
that also coincides with formula (\ref{polN3}) as obtained in the work of Akutsu {\em
et al}. Here again the two ambient isotopy link invariants in
(\ref{polpN3}) correspond to the same invariant.
\mysubsection{$N=4$ case}
For $N=4$ and its associated matrices $R$, $M_u$ and $M_d$ given by
(\ref{R4}) and (\ref{M4}), $\phi$ is also a Markov trace.
The constants $\tau$ and $\bar\tau$ are now
\[
\tau=\pm\, {q^{-9/2}\over q^6+q^4+q^2+1},\qquad
\bar\tau=\pm\, {q^{21/2}\over q^6+q^4+q^2+1}
\]
that together with the minimal polynomial of $R$ in (\ref{R4})
\[
(R-Z)\,(R+q^6\,Z)\,(R-q^{10}\,Z)\,(R+q^{12}\,Z)=0,\qquad Z=\pm\, q^{-9/2}
\]
give for the link invariant (\ref{ambientp}) the expression
\begin{eqnarray}
\alpha'(b_i^3)&=&
\pm\,t^{3/2}\,(1-t^3+t^5-t^6)\,
\alpha'(b_i^2)\nonumber\\
&&+t^6\,(1-t^2+t^3+t^5-t^6+t^8)\,
\alpha'(b_i)
\label{polpN4}\\
&&\mp\,t^{9/2}\,t^8\,(1-t+t^3-t^6)\,
\alpha'(1)-
t^{20}\,\alpha'(b_i^{-1})\nonumber
\end{eqnarray}
that coincides with (\ref{polN4}).
\mysection{Conclusion and generalization to $N$ arbitrary}
\label{conclusions}
The collection of these results obtained for $N=2,3,4$ indicates
that the equality
\[
\alpha'=\alpha_{\textstyle{\;\rm vertex}\; {\rm model}}\,,
\]
where $\alpha'$ is calculated with the link invariant recalled in the
introduction and $\alpha$ via vertex models as used in \cite{ADW} is also
true for arbitrary $N$. In the case of generic $N$ it is possible to
write the corresponding ambient isotopy link invariant $\alpha'$ with the
following formulae deduced by induction
\[
\tau=\pm\, {q^{{-(N-1)^2/2}}\over
q^{2\,(N-1)}+q^{2\,(N-2)}+\cdots +1},\qquad
\bar\tau=\pm\, {q^{{(N-1)\,(N+3)/2}}\over
q^{2\,(N-1)}+q^{2\,(N-2)}+\cdots +1},
\]
in addition to the $N$-th degree minimal polynomial of $R$, given by
\[
\prod^N_{i=1}\left(R-(-1)^{i+1}q^{{N\,(N-1)-(N-i+1)\,(N-i)}}\,Z\right),
\quad {\rm where}\quad Z=\pm\, q^{{-(N-1)^2/2}}.
\]
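As a consistency check of these general expressions we note that for $N=2$ they
reduce to $\tau=\pm\,q^{-1/2}/(q^2+1)$, $\bar\tau=\pm\,q^{5/2}/(q^2+1)$ and to the
factors $(R-Z)\,(R+q^2\,Z)$, for $N=3$ to $(R-Z)\,(R+q^4\,Z)\,(R-q^6\,Z)$ and for
$N=4$ to $(R-Z)\,(R+q^6\,Z)\,(R-q^{10}\,Z)\,(R+q^{12}\,Z)$, in agreement with the
formulae of Section~\ref{four}.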
\mysection{Relation between $<L>$ and other methods of obtaining
$N$-state link invariants}
\label{six}
We discuss briefly at this point the connection between the invariant
$<L>$ and other methods existing in the literature which are known to
provide the Jones and Kauffman polynomials (i.e. the lowest $N$ link
invariants of the series) such as the Kauffman bracket approach \cite{Kau3}
\cite{Kau2}, or the work of Kirillov and Reshetikhin \cite{KR} which
reproduces the entire hierarchy of $N$-state link invariants.
\mysubsection{Kauffman bracket approach and state models}
To discuss how the invariant $<L>$ is connected with the
Kauffman bracket approach we need to note first the following
fact: for any $R$-matrix with associated matrices $M_u$ and $M_d$ it
is always possible to define a Temperley-Lieb (TL) algebra or,
viewed in an equivalent manner, any regular link invariant
$<L>$ constructed out of conditions (\ref{m})-(\ref{twist2}) comes
naturally equipped with a Temperley-Lieb algebra. The algebra is
formulated as follows: let $e=(e^a{}_c{}^b{}_d)$ with $a,b,c,d$ in the
index set $I$ be the matrix whose matrix elements are defined by
$e^a{}_c{}^b{}_d=M^{a\,b}\, M_{c\,d}$; then $e$ provides a
Temperley-Lieb algebra with generators $e_1,\ldots, e_{n-1}$ given by
\[
e_i={1\kern-.25em{\rm l}}\otimes\cdots\otimes\underbrace{e}_{i,\, i+1}\otimes\cdots{1\kern-.25em{\rm l}}
\]
and relations
\begin{equation}
e_i^2=k\,e_i,\qquad e_i\,e_j=e_j\,e_i,\quad |i-j|\ge 2, \qquad
e_i\,e_{i\pm 1}\,e_i=e_i
\label{TL}
\end{equation}
where the constant $k={{}\atop\epsfbox{zero.eps}}= M_{a\,b}\,M^{a\,b}$
and the indices $1\,\le i,\, j\,\le n-1$ are chosen so that all
relations above make sense. The proof of this statement is very
simple. The first relation comes from
$e^2={{}\atop\epsfbox{zero.eps}}\,e$, which obviously holds from the
definition of $e$ and condition (\ref{m}). The second relation is
clear when the indices $i,\,j$ are sufficiently far apart from each other. In
the case of the third relation with the upper sign, its lhs written in components
is $e^a{}_i{}^b{}_j\,e^j{}_q{}^c{}_f\,e^i{}_d{}^q{}_e= M^{a\,b}\,
M_{i\,j}\,M^{j\,c}\, M_{q\,f}\,M^{i\,q}\, M_{d\,e}=
M^{a\,b}\,M_{d\,e}\,\delta^c{}_f=e^a{}_d{}^b{}_e\,\delta^c{}_f$, which
equals the rhs when this is expressed in components too. The lower
relation is proved in a similar manner. (We note that there exists
another possible TL algebra defined not with $e$ but with the matrix $f$
given by $f^a{}_c{}^b{}_d=M^{b\,a}\, M_{d\,c}$, which provides similar
relations to those in (\ref{TL}) but with $f$ instead of $e$ in the
definition of the generators $e_i$. However, the matrix representation
of this second TL algebra is related by a similarity transformation
to the one constructed with $e$ alone since $f=P\,e\,P$, where $P$ is
the permutation matrix, $P^a{}_c{}^b{}_d= \delta^a{}_d\,\delta^b{}_c$
(Kronecker deltas)).
We use now this Temperley-Lieb algebra to formulate a state model
for the $N=2$ $R$-matrix given in (\ref{R2})
(from the two choices that we have for the constant $Z$ we take
$Z=q^{-1/2}$ to work out the example) given by the expression
\[
R^a{}_c{}^b{}_d=A\,\delta^a{}_c\,\delta^b{}_d+ B\,M^{a\,b}\, M_{c\,d},
\]
where the indices are taken in the set $I=\{-1/2,\,1/2\}$, $M_u,\,M_d$
are the matrices written in (\ref{M2}) and $A,\,B$ are constants to be
determined. In this case, the minimal polynomial of $R$ (remember from
Section~\ref{four} that this is given by
$q^{-1/2}\,R-q^{1/2}\,R^{-1}=q^{-1}-q$) fixes
without ambiguity the constants $A,\,B$ and gives for $R$ above the
formula $R=q^{-1/2}\,{1\kern-.25em{\rm l}}+q^{1/2}\,e$ where $e$ is, according to the
definition, the matrix
\begin{eqnarray*}
e=\pmatrix{
0&0&0&0\cr
0&-q&1&0\cr
0&1&-q^{-1}&0\cr
0&0&0&0\cr}.
\end{eqnarray*}
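One readily verifies that this matrix satisfies $e^2=-(q+q^{-1})\,e$, in agreement
with (\ref{TL}) and with the value of the unknot for $N=2$, and that
$q^{-1/2}\,{1\kern-.25em{\rm l}}+q^{1/2}\,e$ indeed reproduces the matrix (\ref{R2}) with
$Z=q^{-1/2}$; we include this check for completeness.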
In pictorial form $R$ can be represented by the bracket
identity
\begin{equation}
{{}\atop\epsfbox{stateR.eps}}=
q^{-1/2}\,{{}\atop\epsfbox{stateD.eps}}
+q^{1/2}\,{{}\atop\epsfbox{stateTL.eps}}
\label{bracket}
\end{equation}
that together with equations
\begin{equation}
<{{}\epsfbox{zeroL.eps}}\,L>=
-(q+q^{-1})\,<L>,\qquad
<{{}\epsfbox{zeroL.eps}}>=-(q+q^{-1})
\label{bracketcond}
\end{equation}
constitutes the bracket model for the matrix in (\ref{R2}). The first
equation in (\ref{bracketcond}) is obtained just by applying the
property $<{{}\epsfbox{zeroL.eps}}\,L>=M_{a\,b}\,M^{a\,b}\,<L>$
particularized to the case $N=2$. The Jones polynomial can now be
obtained from the bracket in the usual manner \cite{Kau2}: from
(\ref{bracket}) and (\ref{bracketcond}) one derives the following two relations
\begin{eqnarray*}
&&<{{}\atop\epsfbox{loop1.eps}}>=-q^{-3/2}\,<{{}\atop\epsfbox{line.eps}}>\\
&&<{{}\atop\epsfbox{loop2.eps}}>=-q^{3/2}\,<{{}\atop\epsfbox{line.eps}}>
\end{eqnarray*}
which indicate that the normalization of the bracket given by
$V_L=(-q^{3/2})^{w(L)}<L>$
is an ambient isotopy invariant for oriented links. Here
$w(L)$, the writhe of the link $L$, is the regular isotopy invariant
defined by the equation
$w(L)=\sum_p \epsilon(p)$ where $p$ runs over all crossings of $L$ and
$\epsilon(p)$ is the sign of the crossing
$\epsilon\left({{}\atop\epsfbox{sigma1.eps}}\right)=1$,
$\epsilon\left({{}\atop\epsfbox{sigma-1.eps}}\right)=-1$. Also from
(\ref{bracket}) one derives the identity
\[
q^{-1/2}\,{{}\atop\epsfbox{stateR.eps}}
-q^{1/2}\,{{}\atop\epsfbox{stateRi.eps}}=
(q^{-1}-q)\,{{}\atop\epsfbox{stateD.eps}}
\]
that written in terms of $V_L$ defined by the previous normalization
gives the Jones polynomial skein relation ($t=q^2$ as usual)
displayed in formula (\ref{polN2}), i.e.
\[
t^{-1}\,V{}_{{}_{{}\atop\epsfbox{sigma1.eps}}}-
t\,V{}_{{}_{{}\atop\epsfbox{sigma-1.eps}}}=
(t^{1/2}-t^{-1/2})\,V{}_{{}_{{}\atop\epsfbox{sigma0.eps}}}.
\]
This shows that matrices $M_u$ and $M_d$ are of relevance to associate
bracket identities to $R$-matrices thus making a direct link between
the invariant $<L>$ and the bracket approach introduced by
Kauffman. We have seen also with (\ref{bracket}) that they help to
formulate a state model for the matrix (\ref{R2}). The first fact, the
definition of bracket identities through $M_u$ and $M_d$, is common to the
whole series of $N$-state link invariants and therefore can be done
for generic $N$, but the definition of a state model when $N\ge 3$ is
a rather different subject. Let us see it by examining the case
$N=3$. With the expression of $M_u$ and $M_d$ given in (\ref{M3}) we
calculate the TL element $e$ which now results in the matrix
\begin{eqnarray}
e=\left(\begin{array}{ccccccccc}
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&q^2&0&-q&0&1&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&-q&0&1&0&-q^{-1}&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&1&0&-q^{-1}&0&q^{-2}&0&0\\
0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0
\end{array}\right)
\,\label{e3}
\end{eqnarray}
and that verifies $e^2=(q^2+1+q^{-2})\,e$. This element is useful to
write the following relation satisfied by matrix (\ref{R3}) with the
choice $Z=q^{-2}$
\[
R-R^{-1}=(q^{-2}-q^2)\,({1\kern-.25em{\rm l}}-e),
\]
or equivalently
\begin{equation}
{{}\atop\epsfbox{stateR.eps}}-{{}\atop\epsfbox{stateRi.eps}}=
(q^{-2}-q^2)\,\left({{}\atop\epsfbox{stateD.eps}}-
{{}\atop\epsfbox{stateTL.eps}}\right).
\label{Dubrovnik1}
\end{equation}
This identity, together with
\begin{eqnarray}
&&<{{}\epsfbox{zeroL.eps}}>=q^2+1+q^{-2}\nonumber\\
&&<{{}\atop\epsfbox{loop1.eps}}>=q^{-4}\,<{{}\atop\epsfbox{line.eps}}>
\label{Dubrovnik2}\\
&&<{{}\atop\epsfbox{loop2.eps}}>=q^{4}\,<{{}\atop\epsfbox{line.eps}}>\nonumber
\end{eqnarray}
also obtained from (\ref{R3}) and (\ref{M3}), defines (a specialization
of) the Dubrovnik version of the Kauffman polynomial for unoriented
links. Yet again we see that matrices $M_u$ and $M_d$ are helpful to
formulate bracket identities but, unlike the $N=2$ case, the
$R$-matrix in (\ref{R3}) cannot be expanded in terms of the unit
matrix and $e$ only but requires the addition of extra terms. It is
clear then that there is an infinite set of state models that satisfy
the Dubrovnik version (\ref{Dubrovnik1}), (\ref{Dubrovnik2}) of the
Kauffman polynomial, but it is an open (and interesting) question to
decide whether there exists a state model which only involves
combinations of Kronecker deltas, $M_{a\,b}$ and $M^{c\,d}$. If such a
state model does exist, the extra terms to add to the matrices ${1\kern-.25em{\rm l}}$ and
$e$ should be of third order or higher in the matrix elements of $M_u$
and $M_d$. This feature is common to all $N\ge 3$.
\mysubsection{Invariants of links from representations of $U_qsl(2)$}
Kirillov and Reshetikhin have shown in ref. \cite{KR} that the link
invariants constructed through $N$-state vertex models do coincide with
the invariants derived from the spin $j$ representation of the quantum
enveloping algebra of $sl(2)$. Since this paper reproduces the
$N$-hierarchy via $<L>$ we dedicate this section to explain the
connection between the invariant $<L>$ and the formalism of the
Russian authors.
Let us recall some properties of the quantum enveloping algebra
$U_qsl(2)$ that we need first. $U_qsl(2)$ is the Hopf
algebra generated by elements $H,\,X^\pm$ with algebra relations
\begin{equation}
[H,X^\pm]=\pm 2\,X^\pm,\qquad [X^+,X^-]={q^H-q^{-H}\over q-q^{-1}}
\label{algebra}
\end{equation}
and coalgebra relations given by
\begin{eqnarray*}
&&\triangle H=H\otimes{1\kern-.25em{\rm l}}+{1\kern-.25em{\rm l}}\otimes H,\qquad \triangle X^\pm=q^{H/2}\otimes X^\pm+
X^\pm\otimes q^{-H/2}\\
&&\epsilon(H)=\epsilon(X^\pm)=0
\end{eqnarray*}
in addition to the antipode map $S(\cdot)$ with action
\begin{equation}
S(H)=-H,\qquad S(X^\pm)=-q^{\mp 1}\,X^\pm
\label{antipode}
\end{equation}
Here $q$ is considered as a generic real constant. This algebra
can be extended by adding an invertible element $w$ (which
obviously is not in $U_qsl(2)$) such that the element performs the
transformation \cite{KR}
\begin{equation}
w\,a\,w^{-1}=\tau\,S(a),\qquad{\rm for\; all}\; a\;{\rm in}\quad
U_q sl(2)
\label{weyl}
\end{equation}
where $\tau$ denotes now the linear anti-automorphism of action
\begin{equation}
\tau(H)=H,\qquad \tau(X^\pm)=X^\mp
\label{transposition}
\end{equation}
on the generators of $U_q sl(2)$. A consequence of the existence of
$w$ is the `crossing symmetry' property exhibited by the universal
${\cal R}$-matrix of $U_q sl(2)$ (we do not review any property of this
universal object here. Nevertheless, details about it can be found in
\cite{KR} as well)
\begin{equation}
{\cal R}=q^{-(H\otimes H)/2}\sum_{n=0}^\infty {(1-q^2)^n\over [n]!}\,
q^{-n(n-1)/2}(q^{-H/2}X^+)^n\otimes (q^{H/2}X^-)^n
\label{universal}
\end{equation}
(here $[x]=(q^x-q^{-x})/(q-q^{-1})$) which manifests in the formulae
\begin{eqnarray}
&&(\tau\otimes {\rm id}) {\cal R}^{-1}=(w\otimes {\rm id})\,{\cal R}\,
(w^{-1}\otimes {\rm id})
\label{uni1}\\
&&({\rm id}\otimes\tau){\cal R}=({\rm id}\otimes w)\,{\cal R}^{-1}\,
({\rm id}\otimes w^{-1})
\label{uni2}
\end{eqnarray}
These two formulae hold because the universal
${\cal R}$-matrix of $U_q sl(2)$ is an
invertible operator which satisfies ${\cal R}^{-1}=
(S\otimes {\rm id})\,{\cal R}$ and ${\cal R}=({\rm id}\otimes S)\,{\cal R}^{-1}$.
Lowered to a representation $\pi^{j_1}\otimes\pi^{j_2}$ of
$U_q sl(2)^{\otimes^2}$, equations (\ref{uni1}) and (\ref{uni2})
can be written as
\begin{eqnarray}
&&((R^{{j_1}{j_2}})^{-1})^{t_1}=(w^{j_1}\otimes {1\kern-.25em{\rm l}})\,R^{{j_1}{j_2}}
\,((w^{j_1})^{-1}\otimes {1\kern-.25em{\rm l}})
\label{cs1}\\
&&(R^{{j_1}{j_2}})^{t_2}=({1\kern-.25em{\rm l}}\otimes w^{j_2})\,(R^{{j_1}{j_2}})^{-1}
\,({1\kern-.25em{\rm l}}\otimes (w^{j_2})^{-1})
\label{cs2}
\end{eqnarray}
where we are using the notation $R^{{j_1}{j_2}}=\pi^{j_1}\otimes\pi^{j_2}({\cal
R})$ and $w^j=\pi^j(w)$. Transpositions in the first and second space
in $V^{j_1}\otimes V^{j_2}$ are indicated by $t_1$ and $t_2$. Equation
(\ref{cs1}) is precisely relation (2.10) in \cite{KR}.
We prove now that when $j_1=j_2=j$ the equalities (\ref{cs1}),
(\ref{cs2}) are exactly (\ref{twist1}), (\ref{twist2}) provided that
we make the identification $M_d=({w^j})^t$, i.e. that $M_d$ is the
transpose of the matrix associated to the element $w$ in the $j$
representation of $U_q sl(2)$. The proof of this statement is very
simple: relation (\ref{cs2}) is written in components as
\[
(R^{j\,j})^a{}_c{}^b{}_d=w^j{}{}^d{}_e\,{(R^{j\,j})^{-1}}^a{}_c{}^e{}_f\,
{(w^j)^{-1}}^f{}_b
\]
that expressed in terms of the $R$-matrix
$R^a{}_c{}^b{}_d=(R^{j\,j})^b{}_c{}^a{}_d$ looks as
\[
R^b{}_c{}^a{}_d=w^j{}{}^d{}_e\,{R^{-1}}^a{}_f{}^e{}_c\,
{(w^j)^{-1}}^f{}_b
\]
After reversing this relation it reads finally as
\[
{R^{-1}}^a{}_c{}^b{}_d={(w^j)^{-1}}^b{}_f\,
R^e{}_d{}^a{}_f\,
w^j{}^e{}_c
\]
Notice that this is precisely relation (\ref{twist2}) after the
substitution $M_d=({w^j})^t$ (remember the convention
adopted in this paper, namely $M_{a\,b}=(M_d)^a{}_b$ and
$M^{a\,b}=(M_u)^a{}_b$). The proof that
(\ref{cs1}) is (\ref{twist1}) can be done in a similar fashion.
So much for the connection between $<L>$ and the formalism developed
in \cite{KR} to construct link invariants from $U_q sl(2)$.
Let us see now as a practical example that indeed the
transpose of the matrix $\pi^j(w)$ when $j=1/2,1,3/2$ is the matrix
$M_d$ when $N=2,3,4$ as displayed in (\ref{M2}), (\ref{M3}) and
(\ref{M4}), respectively (the correspondence is
given by $j=(N-1)/2$ for generic $N$). {}From the algebra
relations (\ref{algebra}) and the Casimir operator of $U_q sl(2)$
given by
\[
C=\left({q^{(H+1)/2}-q^{-(H+1)/2}\over q-q^{-1}}\right)^2+X^-X^+=
\left({q^{(H-1)/2}-q^{-(H-1)/2}\over q-q^{-1}}\right)^2+X^+X^-
\]
one deduces the action of $H,\,X^\pm$ on the basis vectors
$\{e^j_m\}$, $m=-j,\ldots,j$ that span the $2\,j+1$ dimensional
representation $\pi^j$ of $U_q sl(2)$, i.e.
\begin{equation}
\pi^j(H)\,e^j_m=2\,m\,e^j_m, \qquad
\pi^j(X^\pm)\,e^j_m=([j\mp m]\,[j\pm m+1])^{1/2}\, e^j_{m\pm 1}
\label{j}
\end{equation}
This action is sufficient to calculate $\pi^j(w)$ up to a constant
$\gamma_j$ that depends on the representation $j$ and that is of no
relevance for link invariance purposes. Indeed from (\ref{weyl}),
(\ref{transposition}) and the antipode action (\ref{antipode})
it follows that
\[
w\,H\,w^{-1}=-H,\qquad w\,X^\pm\,w^{-1}=-q^{\mp 1}\,X^\mp
\]
which together with the action (\ref{j}) allows us to find the matrix
elements of $\pi^j(w)$
\[
w^j_{m\,m'}=<e^j_m|\pi^j(w)|\,e^j_{m'}>=(-1)^{j+m}\, q^{j+m}\,\gamma_j
\,\delta_{m,-m'}
\]
In obtaining these matrix elements it is important to notice that
$\pi^j(X^+)=(\pi^j(X^-))^t$, i.e. that we are truly under the
hypothesis expressed in the transposition (\ref{transposition}). Now
in the basis $\{e^j_{-j},\ldots,e^j_j\}$ and for $j=1/2,1,3/2$
we have that
\begin{eqnarray*}
&&(\pi^{1/2}(w))^t=q^{1/2}\,\gamma_{1/2}\left(\begin{array}{cc}
0 & -q^{1/2} \\
q^{-1/2} & 0
\end{array}\right),
\qquad
(\pi^1(w))^t=q\,\gamma_{1}\left(\begin{array}{ccc}
0 & 0 & q \\
0 & -1 & 0 \\
q^{-1} & 0 & 0
\end{array}\right)\\
&&(\pi^{3/2}(w))^t=q^{3/2}\,\gamma_{3/2}\left(\begin{array}{cccc}
0 & 0 & 0 & q^{3/2} \\
0 & 0 & -q^{1/2} & 0 \\
0 & q^{-1/2} & 0 & 0 \\
-q^{-3/2} & 0 & 0 & 0
\end{array}\right)
\end{eqnarray*}
a result to be compared with the matrices $M_d$ in (\ref{M2}), (\ref{M3}) and
(\ref{M4}). They differ by an extra factor $q^j\,\gamma_j$, but we
recall again that $M_d$ can be determined only up to an arbitrary
multiplicative constant and that the invariant $<L>$ is completely
independent of the value of this constant. This means
that the extra factor can be incorporated into $M_d$ and then
it is correct to identify
$M_d=(\pi^j(w))^t$, as we wanted to prove.
We mention that matrices (\ref{R2}), (\ref{R3}) and (\ref{R4}) are the
intertwining operator of the representation $V^j\otimes V^j$ of
$U_qsl(2)$ in the cases $j=1/2$, $j=1$ and $j=3/2$, respectively. More
precisely they are $R=P^{j\,j}\,R^{j\,j}$, $P^{j\,j}$ the permutation
operator in $V^j\otimes V^j$ and $R^{j\,j}=\pi^{j}\otimes\pi^{j}({\cal R})$,
${\cal R}$ as in (\ref{universal}).
We conclude this section with a remark: the element $w$ defined by
relations (\ref{weyl}) and (\ref{transposition}) does exist for
$U_qsl(2)$, as we know. It also exists for other (though not all) quantized
enveloping algebras after suitable modifications of
(\ref{transposition}) to allow transposition on the various generators
$H_i,\,X_i^\pm$. It follows then that the link
invariant construction introduced in \cite{KR} and its generalization
apply to (some) quantized enveloping algebras. The
case of the invariant $<L>$ is rather different. As explained in
Section~\ref{intro}, its construction merely requires starting with an
$R$-matrix (whether there exist matrices $M_u$ and $M_d$ for a given
$R$-matrix is a different question), and this matrix does not necessarily
need to come from a quantized enveloping algebra.
\mysection{Final remarks}
\label{seven}
\begin{enumerate}
\item For the link invariant $<L>$ to exist it is not necessary that
the $R$-matrix entries satisfy the {\em charge conservation}
condition, i.e. that $R^a{}_c{}^b{}_d=0$ unless $a+b=c+d$. In the case
of the $R$-matrices (\ref{R2}), (\ref{R3}) and (\ref{R4}) considered
here this is the case since they all have this property, but the
existence of a non-trivial link invariant $<L>$ does not require such
condition.
\item Every $N$-state vertex model affords an $R$-matrix when the limit
$\lim_{u\to\infty} R(u)/\rho(u)$ is well-defined. This limit is
well-defined for all values of $\mu$ in the interval
$-1/2\,\le\,\mu\,\le\,1/2$. Throughout the paper we have worked out the case
$\mu=1/2$; let us now discuss the link invariant $\alpha'$ when
$\mu=-1/2$. In this case and for all $N$ the corresponding $R$-matrix
is $P\,R\,P$, with $R$ the $R$-matrix corresponding to $\mu=1/2$ and
$P$ again the permutation matrix, $P^a{}_c{}^b{}_d=
\delta^a{}_d\,\delta^b{}_c$. If ($R$, $M_u$) is a solution of eqs
(\ref{m})-(\ref{twist2}) with associated link invariant $<L>$, then
($P\,R\,P$, $M_u^t$) is also a solution with associated link invariant
$<L'>$, where $L'$ is as $L$ but with the braid strands closed on the
opposite side of the plane. Now $<L>=<L'>$ and consequently
$\alpha'|_{\mu=1/2}=\alpha'|_{\mu=-1/2}$, i.e. the link invariants for
$\mu=1/2, -1/2$ are the same. Regarding the case in which
$-1/2\,<\,\mu\,<\,1/2$, there exist matrices $M_u$ and $M_d$ for each
$N$ if and only if $q$ is a root of unity satisfying
$q^{2(N-1)}=1$. For these roots the minimal polynomial of $R$
collapses to $R^2=Z^2$ ($Z$ the values listed in the conclusions
Section~\ref{conclusions} restricted to the mentioned roots of unity)
and the associated link invariant skein relation is
$\alpha'\left({{}\atop\epsfbox{sigma1.eps}}\right)=
\alpha'\left({{}\atop\epsfbox{sigma-1.eps}}\right)$. This is an invariant
related to the number of components of the link and depends on $N$
simply because $\alpha'{{}\atop\epsfbox{zerozero.eps}}$ of an arbitrary
number of unknots depends on $N$.
\end{enumerate}
\begin{figure}
\[{{}\atop\epsfbox{l.eps}}\qquad\qquad {{}\atop\epsfbox{lp.eps}}\]
\caption{An example of $L$ and $L'$}
\end{figure}
A chiral helimagnet is an attractive class of magnetic material.
In this material, magnetic moments form either a left-handed or a right-handed helical rotation.
The schematic magnetic structure of the chiral helimagnet is shown in Fig.~\ref{chm_csl}(a).
This helical configuration of magnetic moments comes from the competition between two interactions: the ferromagnetic exchange interaction between nearest neighbor magnetic moments and the Dzyaloshinsky-Moriya (DM) interaction.
The ferromagnetic exchange interaction causes nearest neighbor magnetic moments to be parallel, while the DM interaction causes nearest neighbor magnetic moments to be perpendicular to each other\cite{DM_1,DM_2}.
The DM interaction determines the direction of the rotation, left-handed or right-handed.
The magnetic structure in the chiral helimagnet has been observed in CrNb$_3$S$_6$\cite{CHM_Cr_1, CHM_Cr_2, CHM_Cr_3}, CsCuCl$_3$\cite{CHM_Cs_1, CHM_Cs_2}, and Yb(Ni$_{1-x}$Cu$_x$)$_3$Al$_9$\cite{CHM_Yb_1, CHM_Yb_2} experimentally.
Then, properties of chiral helimagnets have been investigated experimentally\cite{CHM_Cr_1, CHM_Cr_2, CHM_Cr_3,CHM_Cs_1, CHM_Cs_2,CHM_Yb_1,CHM_Yb_2} and theoretically\cite{CHM_theory_1, Kishine_theory}.
For example, under an external magnetic field perpendicular to the helical axis, the helical configuration of magnetic moments changes into an incommensurate magnetic structure\cite{Kishine_1}.
This magnetic structure is called a chiral soliton lattice, which is shown in Fig.~\ref{chm_csl}(b).
The chiral soliton lattice consists of ferromagnetic domains periodically partitioned by $360^\circ$ domain walls.
The chiral soliton lattice has been also observed experimentally with the Lorentz microscopy and the small-angle electron diffraction\cite{Togawa_PRL_1}.
Other interesting phenomena are the giant magnetoresistance\cite{Togawa_PRL_2}, the magneto-chiral dichroism\cite{MCh_Dh}, the creation of the spin current\cite{spin_current}, and the Berezinskii-Kosterlitz-Thouless (BKT) transition\cite{BKT}.
From another point of view, we may expect that this peculiar magnetic structure affects other materials.
Therefore, in this paper, we focus on effects on superconductors, in particular, vortex structures in type-II superconductors.
\begin{figure}[t]
\centering
\includegraphics[scale=0.2]{Fig1.eps}
\caption{(a) Magnetic structure in the chiral helimagnet, and (b) the chiral soliton lattice under the applied magnetic field.}
\label{chm_csl}
\end{figure}
These vortex structures in type-II superconductors are important for the critical magnetic field and the critical current.
In general, under a uniform magnetic field, vortices appear and form a triangular lattice, which is called the Abrikosov lattice \cite{Abrikosov, Hess}.
When an external current flows in the superconductor and vortices move, an electric resistance occurs, which destroys superconductivity.
So, controlling vortex states is a key factor for applications of superconductors.
One plausible vortex-controlling method is to use a ferromagnet.
In a ferromagnet / superconductor bilayer system, vortices appear spontaneously \cite{FM_SC_Hybrid, FM_SC_FSB_ex, FM_SC_FSB_theory}.
This phenomenon comes from the interaction between magnetic domains in the ferromagnet and the magnetic fluxes of vortices.
This spontaneous vortex state enhances superconductivity, in particular its critical current, through the pinning of vortices in the superconductor by the magnetic domains in the ferromagnet.
From this interference between the ferromagnet and the superconductor, we expect novel effects from peculiar magnetic materials such as the chiral helimagnet.
Therefore, we investigate effects of the chiral helimagnet on the superconductor theoretically.
In our previous study, we have investigated vortex structures in two-dimensional superconductors under the magnetic field from the chiral helimagnet\cite{Fukui_SUST, Fukui_JPSJ}.
Although the magnetic field is created by the helically structured magnetic moments in the chiral helimagnet, in a two-dimensional superconductor only the component of the magnetic field perpendicular to the superconducting surface is effective, and the other components are neglected.
In this paper, we consider three-dimensional superconductors and, taking all components of the magnetic field into account, fully investigate vortex structures under the helical magnetic field from the chiral helimagnet.
In section II, we introduce numerical methods in order to obtain vortex structures in three-dimensional superconductors.
In section III, we show vortex structures under the helical magnetic field from the chiral helimagnet, and discuss the origin of these vortex structures.
Finally, in section IV, we summarize our results.
\section{Method}
We consider a three-dimensional superconductor under the helical magnetic field.
The distribution of the helical magnetic field is assumed to be proportional to the distribution of the magnetic moments in the chiral helimagnet.
We obtain distributions of the order parameter in superconductors by solving the Ginzburg-Landau equations.
We start from the Ginzburg-Landau free energy,
\begin{eqnarray}
& &\mathcal{F}(\psi,\mbox{\boldmath $A$}) = \int_V \left( f_n + \alpha(T) |\psi|^2 + \frac{\beta}{2} |\psi|^4 \right) dV \nonumber \\
& & + \int_V \left\{ \frac{1}{2m_s} \left| \left( -i\hbar \nabla - \frac{e_s \mbox{\boldmath $A$}}{c} \right) \psi \right|^2
+ \frac{|\mbox{\boldmath $h$}|^2}{8\pi} - \frac{\mbox{\boldmath $h$} \cdot \mbox{\boldmath $H$}_{\rm ext}}{4\pi} \right\} dV, \nonumber \\ \label{gl_free_3d}
\end{eqnarray}
where $\psi$ is a superconducting order parameter and $f_n$ is a free energy density of the normal state.
$\alpha(T)$ is a coefficient which depends on the temperature $T$, $\alpha(T) = \alpha'(T-T_c)$.
$\alpha'$ and $\beta$ are positive constants and $T_c$ is the critical temperature of the superconductor.
$m_s$ is the effective mass of the superconducting electrons and $e_s$ is their effective charge.
$\mbox{\boldmath $h$} = \nabla \times \mbox{\boldmath $A$}$ is a local magnetic field and $\mbox{\boldmath $A$}$ is a magnetic vector potential.
$\mbox{\boldmath $H$}_{\rm ext}$ is an external magnetic field, which is included the magnetic field from the chiral helimagnet.
The order parameter and the vector potential are normalized as
\begin{equation}
\tilde{\psi} = \frac{\psi}{\sqrt{\alpha}/\beta},~~~\tilde{\mbox{\boldmath $A$}} = \frac{2\pi}{\Phi_0} \mbox{\boldmath $A$}. \label{normalize}
\end{equation}
$\Phi_0 = ch/2e$ is the flux quantum and $e$ is the elementary charge.
Using the normalized order parameter and the vector potential, we obtain following equations from Eq.~(\ref{gl_free_3d}),
\begin{eqnarray}
& & \int_{V} \left[ \left( i\nabla \tilde{\psi} - \tilde{\mbox{\boldmath $A$}}\tilde{\psi} \right) \left( -i \nabla (\delta \tilde{\psi}^\ast) - \tilde{\mbox{\boldmath $A$}} (\delta \tilde{\psi}^\ast) \right) \right. \nonumber \\
& & \left. + \left( i\nabla (\delta\tilde{\psi}) - \tilde{\mbox{\boldmath $A$}} (\delta \tilde{\psi}) \right) \left( -i\nabla \tilde{\psi}^\ast - \tilde{\mbox{\boldmath $A$}} \tilde{\psi}^\ast \right) \right. \nonumber \\
& & \left. + \frac{1}{\xi^2} \left( |\tilde{\psi}|^2 - 1 \right) \left( \tilde{\psi} (\delta \tilde{\psi}^\ast) + \tilde{\psi}^\ast (\delta \tilde{\psi}) \right) \right] d\Omega = 0, \label{gl_3d_1} \\ \nonumber \\
& & \int_{V} \left[ \kappa^2 \xi^2 \left( {\rm div}~\tilde{\mbox{\boldmath $A$}} \cdot {\rm div}~(\delta \tilde{\mbox{\boldmath $A$}}) + {\rm rot}~\tilde{\mbox{\boldmath $A$}} \cdot {\rm rot}~(\delta \tilde{\mbox{\boldmath $A$}}) \right) \right. \nonumber \\
& & \left. + |\tilde{\psi}|^2 \tilde{\mbox{\boldmath $A$}} \cdot (\delta \tilde{\mbox{\boldmath $A$}}) - \frac{i}{2} \left\{ \tilde{\psi}^\ast (\nabla \tilde{\psi}) - \tilde{\psi} (\nabla \tilde{\psi}^\ast) \right\} \cdot (\delta \tilde{\mbox{\boldmath $A$}}) \right] d\Omega \nonumber \\
& & = \kappa^2 \xi^2 \int_V \frac{2\pi}{\Phi_0} \mbox{\boldmath $H$}_{\rm ext} \cdot {\rm rot}~(\delta \tilde{\mbox{\boldmath $A$}}) d\Omega, \label{gl_3d_2}
\end{eqnarray}
where $\delta \tilde{\psi}$ and $\delta\tilde{\mbox{\boldmath $A$}}$ are variations, or test functions of the order parameter and the vector potential, respectively.
$\kappa = \lambda/\xi$ is the Ginzburg-Landau parameter, and $\lambda$ and $\xi$ are the penetration length and the coherence length, respectively.
In order to solve Eqs.~(\ref{gl_3d_1}) and (\ref{gl_3d_2}), we use the three-dimensional finite element method (FEM).
In the three-dimensional FEM, we divide the system into tetrahedron elements (see Fig.~\ref{system_finite_element}).
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{Fig2.eps}
\caption{(a) Schematic three-dimensional system, (b) a tetrahedron finite element. Coordinates of four nodes $i=1,~2,~3,~4$ of the $e$-th tetrahedron denote $1(x_1,~y_1,~z_1)$, $2(x_2,~y_2,~z_2)$, $3(x_3,~y_3,~z_3)$, and $4(x_4,~y_4,~z_4)$. }
\label{system_finite_element}
\end{figure}
In a tetrahedron, there are four volume coordinates, which are given by
\begin{equation}
N_i^e = \left( a_i + b_ix + c_iy + d_iz \right)/6V~~~~(i=1,~2,~3,~4), \label{finite_element_3d}
\end{equation}
where $V$ is a volume of the tetrahedron.
$a_i,~b_i,~c_i,$ and $d_i$ are given in the Appendix.
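Although the closed-form coefficients are deferred to the Appendix, the volume
coordinates can equivalently be evaluated numerically by solving a small linear
system. The following Python sketch is only an illustration of ours (it is not
part of the solver described in this paper) and shows one standard way to obtain
$N_1^e,\ldots,N_4^e$ at a given point of a tetrahedron.
\begin{verbatim}
import numpy as np

def volume_coordinates(nodes, p):
    """Volume (barycentric) coordinates N_1..N_4 of the point p in the
    tetrahedron whose four vertices are the rows of 'nodes' (shape (4, 3)).
    They satisfy sum_i N_i = 1 and sum_i N_i (x_i, y_i, z_i) = p, which is
    the defining property of the linear shape functions N_i^e of the text."""
    C = np.hstack([np.ones((4, 1)), nodes])   # row i = (1, x_i, y_i, z_i)
    rhs = np.array([1.0, *p])
    return np.linalg.solve(C.T, rhs)

# Example: unit tetrahedron; at its centroid all N_i equal 1/4.
nodes = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
print(volume_coordinates(nodes, nodes.mean(axis=0)))
\end{verbatim}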
The order parameter $\tilde{\psi}$ and the vector potential $\tilde{\mbox{\boldmath $A$}}$ are expanded using these volume coordinates,
\begin{eqnarray}
\tilde{\psi}(\mbox{\boldmath $x$}) &=& \sum_{i,e}^{N_e} \tilde{\psi}_i^e N_i^e, \label{psi_expand} \\
\tilde{\mbox{\boldmath $A$}} (\mbox{\boldmath $x$}) &=& \sum_{i,e}^{N_e} \tilde{\mbox{\boldmath $A$}}_i^e N_i^e, \label{a_expand}
\end{eqnarray}
where $\tilde{\psi}_i^e$ and $\tilde{\mbox{\boldmath $A$}}_i^e$ are the order parameter and the vector potential at the $i$-th node of the $e$-th element, respectively.
$N_e$ is the number of finite elements.
We consider the following test functions $\delta \tilde{\psi}$ and $\delta \tilde{\mbox{\boldmath $A$}}$,
\begin{eqnarray}
\delta \tilde{\psi}_{8(e-1) + 2j-1} &=& \begin{cases} N_j^e~~~~(x \in e\text{-th~element}) \\ 0 ~~~~~~~(\text{otherwise}), \end{cases} \label{test_psi_1} \\
\delta \tilde{\psi}_{8(e-1) + 2j} &=& \begin{cases} iN_j^e~~~(x \in e\text{-th~element}) \\ 0 ~~~~~~~(\text{otherwise}), \end{cases} \label{test_psi_2} \\
\delta \tilde{\mbox{\boldmath $A$}}_{12(e-1) + 3(j-1)+i} &=& \begin{cases} \mbox{\boldmath $e$}_i N_j^e~(x \in e\text{-th~element}) \\ 0 ~~~~~~~(\text{otherwise}), \end{cases} \label{test_a}
\end{eqnarray}
where $i=1,2,3$ and $j=1,2,3,4$.
$\mbox{\boldmath $e$}_1,~\mbox{\boldmath $e$}_2, $ and $\mbox{\boldmath $e$}_3$ are basis vectors in the three-dimensional space.
Substituting Eqs.~(\ref{psi_expand})-(\ref{test_a}) into Eqs.~(\ref{gl_3d_1}) and (\ref{gl_3d_2}), we obtain the following equations,
\begin{eqnarray}
& & \sum_j \left[ P_{ij}(\{\tilde{A} \}) + P_{ij}^{2R}(\{ \tilde{\psi} \}) \right] {\rm Re} \tilde{\psi}_j^e \nonumber \\
& & ~ + \sum_j \left[ Q_{ij} (\{ \tilde{\mbox{\boldmath $A$}} \}) + Q_{ij}^{2}(\{ \tilde{\psi} \}) \right] {\rm Im} \tilde{\psi}_j^e = V_i^{R}(\{ \tilde{\psi} \}), \label{e1} \\
& & \sum_j \left[ -Q_{ij}(\{\tilde{\mbox{\boldmath $A$}}\}) + Q_{ij}^{2}(\{\tilde{\psi}\}) \right] {\rm Re} \tilde{\psi}_j^e \nonumber \\
& & ~ + \sum_j \left[ P_{ij}(\{\tilde{\mbox{\boldmath $A$}}\}) + P_{ij}^{2I}(\{\tilde{\psi}\}) \right] {\rm Im} \tilde{\psi}_j^e = V_{i}^{I}(\{\tilde{\psi}\}), \label{e2} \\
& & \sum_j R_{ij} (\{\tilde{\psi}\}) \tilde{A}_{jx} + \sum_j S_{ij}^{xy} \tilde{A}_{jy} + \sum_j S_{ij}^{xz} \tilde{A}_{jz} \nonumber \\
& & ~ = T_i^{x} - U_i^{x}, \label{e3} \\
& & \sum_j R_{ij} (\{\tilde{\psi}\}) \tilde{A}_{jy} + \sum_j S_{ij}^{yx} \tilde{A}_{jx} + \sum_j S_{ij}^{yz} \tilde{A}_{jz} \nonumber \\
& & ~ = T_i^{y} - U_i^{y}, \label{e4} \\
& & \sum_j R_{ij} (\{\tilde{\psi}\}) \tilde{A}_{jz} + \sum_j S_{ij}^{zx} \tilde{A}_{jx} + \sum_j S_{ij}^{zy} \tilde{A}_{jy} \nonumber \\
& & ~ = T_j^{z} - U_i^{z}. \label{e5}
\end{eqnarray}
The coefficients are given in the Appendix and in Ref.~\cite{Fukui_3d_proc}.
Solving Eqs.~(\ref{e1})-(\ref{e5}) self-consistently, we obtain the real and imaginary parts of the order parameter, ${\rm Re}~\tilde{\psi}$ and ${\rm Im}~\tilde{\psi}$, and the three components of the vector potential, $A_x,~A_y,$ and $A_z$.
The magnetic field $\mbox{\boldmath $H$}_{\rm ext}$ includes the helical magnetic field from the chiral helimagnet,
\begin{equation}
\mbox{\boldmath $H$}_{\rm ext} = \mbox{\boldmath $H$}_{\rm CHM} + \mbox{\boldmath $H$}_{\rm appl}, \label{external_h}
\end{equation}
where $\mbox{\boldmath $H$}_{\rm CHM}$ is the magnetic field from the chiral helimagnet and $\mbox{\boldmath $H$}_{\rm appl}$ is the homogeneous applied magnetic field.
We consider that a distribution of $\mbox{\boldmath $H$}_{\rm CHM}$ is proportional to the configuration of magnetic moments in the chiral helimagnet.
The configuration of magnetic moments is obtained by the Hamiltonian of the chiral helimagnet \cite{Kishine_1, Fukui_SUST, Fukui_JPSJ, Fukui_3d_proc},
\begin{eqnarray}
\mathcal{H} &=& -2J \sum_{n} \mbox{\boldmath $S$}_n \cdot \mbox{\boldmath $S$}_{n+1} + \mbox{\boldmath $D$} \cdot \sum_n \mbox{\boldmath $S$}_n \times \mbox{\boldmath $S$}_{n+1} \nonumber \\
& & - 2\mu_B \mbox{\boldmath $H$}_{\rm appl} \cdot \sum_n \mbox{\boldmath $S$}_n, \label{hamiltonian_chm}
\end{eqnarray}
where $\mbox{\boldmath $S$}_n$ is the spin at the $n$-th site and $\mu_B$ is the Bohr magneton.
This Hamiltonian consists of three terms.
The first term is the ferromagnetic exchange interaction with magnitude $J~(>0)$.
The second term is the DM interaction with the DM vector $\mbox{\boldmath $D$}$.
The last term is the Zeeman energy.
We assume that the helical axis is the $x$-axis.
In the monoaxial chiral helimagnet, the DM vector is parallel to the helical axis, $\mbox{\boldmath $D$} = (D,0,0)$.
We express the $n$-th spin as $\mbox{\boldmath $S$}_n = S(\sin{\theta_n} \cos{\varphi},~\sin{\theta_n} \sin{\varphi},~\cos{\theta_n})$ and assume that the spins are perpendicular to the helical axis, so $\varphi = \pi/2$.
We set $\mbox{\boldmath $H$}_{\rm appl} = (0,0,H_{\rm appl})$.
In the typical chiral helimagnet CrNb$_3$S$_6$, the helical period $L=48$~nm is much longer than the lattice constant.
Therefore, we take the continuum limit.
Minimizing Eq.~(\ref{hamiltonian_chm}) in the continuum limit with respect to $\theta(x)$,
we obtain the sine-Gordon equation,
\begin{equation}
\frac{d^2\theta(x)}{dx^2} - H^\ast \sin{\theta(x)} = 0, \label{sine_gordon}
\end{equation}
where $H^\ast = 2\mu_B H_{\rm appl}/(a^2S^2\sqrt{J^2 + |\mbox{\boldmath $D$}|^2})$ is a normalized applied magnetic field and $a$ is the lattice constant.
The solution of Eq.(\ref{sine_gordon}) is,
\begin{equation}
\sin{\left( \frac{\theta - \phi}{2} \right)} = {\rm sn} \left( \frac{\sqrt{H^\ast}}{k}x~|~k \right), \label{theta_1}
\end{equation}
or,
\begin{equation}
\theta(x) = 2\sin^{-1} \left[ {\rm sn} \left( \frac{\sqrt{H^\ast}}{k}x~|~k \right) \right] + \phi. \label{theta_2}
\end{equation}
Here ${\rm sn}(x|k)$ is the Jacobi elliptic function, $k$ is the modulus, and $\phi$ is the initial angle at $x=0$.
$k$ is determined by the relation,
\begin{equation}
\frac{\pi \tan^{-1}(|\mbox{\boldmath $D$}|/J)}{4\sqrt{H^\ast}} = \frac{E(k)}{k}, \label{k_det}
\end{equation}
where $E(k)$ is the complete elliptic integral of the second kind.
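For a nonzero applied field, Eq.~(\ref{k_det}) can be solved for the modulus $k$ by simple numerical root finding. A minimal Python sketch (the parameter values are illustrative assumptions, not those used in our calculations):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from scipy.special import ellipe   # complete elliptic integral E(m), with m = k**2

D_over_J = 0.16     # |D|/J  (illustrative)
H_star   = 0.005    # normalized applied field H^*  (illustrative; must not be too large)

lhs = np.pi * np.arctan(D_over_J) / (4.0 * np.sqrt(H_star))

# Eq. (k_det): lhs = E(k)/k; note that scipy's ellipe takes the parameter m = k^2.
k = brentq(lambda k: ellipe(k**2) / k - lhs, 1e-8, 1.0 - 1e-12)
print("modulus k =", k)
\end{verbatim}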
Using Eq. (\ref{theta_2}), the external magnetic field in Eq. (\ref{external_h}) is,
\begin{eqnarray}
(\mbox{\boldmath $H$}_{\rm ext})_x(x) &=& 0, \label{h_x} \\
(\mbox{\boldmath $H$}_{\rm ext})_y(x) &=& H_0 \sin{\theta(x)}, \label{h_y} \\
(\mbox{\boldmath $H$}_{\rm ext})_z(x) &=& H_0 \cos{\theta(x)} + H_{\rm appl}. \label{h_z}
\end{eqnarray}
$H_0$ is the magnitude of the magnetic field from the chiral helimagnet.
We solve the Ginzburg-Landau equations (\ref{e1})-(\ref{e5}) using the magnetic field in Eqs. (\ref{h_x})-(\ref{h_z}) numerically.
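Given the modulus $k$, the texture $\theta(x)$ of Eq.~(\ref{theta_2}) and the field components of Eqs.~(\ref{h_x})-(\ref{h_z}) can be evaluated with Jacobi elliptic functions. The sketch below uses the Jacobi amplitude ${\rm am}(u|k)$ returned by \texttt{ellipj}, which satisfies $\sin({\rm am})={\rm sn}$ and selects the monotonically winding branch of Eq.~(\ref{theta_2}); all numerical values are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.special import ellipj

# Illustrative values only; H_star and H_appl are not independent in the model,
# but are treated as separate inputs here for simplicity.
H_star = 0.005      # normalized applied field entering Eq. (sine_gordon)
k      = 0.75       # modulus from Eq. (k_det), e.g. obtained as in the sketch above
phi    = np.pi      # initial angle at x = 0
H0     = 0.15       # amplitude of the helical field
H_appl = 0.0        # uniform applied field added to the z component

x = np.linspace(0.0, 40.0, 401)                  # position along the helical axis
sn, cn, dn, am = ellipj(np.sqrt(H_star) / k * x, k**2)
theta = 2.0 * am + phi   # Eq. (theta_2); am(u|k) obeys sin(am) = sn and winds monotonically

H_y = H0 * np.sin(theta)                         # Eq. (h_y)
H_z = H0 * np.cos(theta) + H_appl                # Eq. (h_z); (H_ext)_x = 0 by Eq. (h_x)
\end{verbatim}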
\section{Result}
Solving the Ginzburg-Landau equations (\ref{e1})-(\ref{e5}) self-consistently, we obtain distributions of the order parameter in the superconductor under the chiral helimagnet.
We set the Ginzburg-Landau parameter $\kappa = \lambda/\xi = 5$ and the temperature $T=0.3T_c$, where $T_c$ is the critical temperature of the superconductor.
The ratio between the ferromagnetic exchange interaction and the Dzyaloshinsky-Moriya interaction is taken from the experimental data\cite{DM_Cr}, $|\mbox{\boldmath $D$}|/J = 0.16$.
We consider a parallelepiped as our model, as shown in Fig.~\ref{Fig_system}.
The system size is $1.0L'\xi_0 \times 15\xi_0 \times 13\xi_0$.
Here $\xi_0$ is the coherence length at $T=0$ and $L'\xi_0$ is the helical period of the chiral helimagnet, which is given by
\begin{equation}
L' = \frac{2\pi}{\tan^{-1}(D/J)}. \label{period}
\end{equation}
Here, the uniform applied magnetic field is $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.00$.
For $|\mbox{\boldmath $D$}|/J=0.16$, $L'$ is approximately $39.2699$.
We assume the superconducting region is surrounded by the vacuum region.
The distance between the superconducting region and the vacuum region is $1.5\xi_0$, and the size of the superconducting region is $(1.0L' - 3.0)\xi_0 \times 12\xi_0 \times 10\xi_0$.
When we solve the Ginzburg-Landau equations, we impose the following boundary conditions,
\begin{equation}
\mbox{\boldmath $A$} \cdot \mbox{\boldmath $n$} = 0,~~\left[ \left( -i\hbar \nabla + \frac{e\mbox{\boldmath $A$}}{c} \right) \psi \right] \cdot \mbox{\boldmath $n$} = 0. \label{boundary}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[scale=0.3]{Fig3.eps}
\caption{(a) Three-dimensional superconductor with tetrahedral finite elements, (b) $xy$-plane of the system, (c) $yz$-plane of the system, (d) $zx$-plane of the system. The system size in these figures is $1.0L'\xi_0 \times 15\xi_0 \times 13\xi_0$. The superconducting region (green) is surrounded by the vacuum region (black). The distance between the superconducting region and the vacuum region is $1.5\xi_0$.}
\label{Fig_system}
\end{figure}
The external magnetic field $\mbox{\boldmath $H$}_{\rm ext}$ is given in Eqs.~(\ref{h_x})-(\ref{h_z}).
We set magnitudes of the helical magnetic field and the applied magnetic field in Eqs.~(\ref{h_x})-(\ref{h_z}) as $H_0/(\Phi_0/\xi_0^2) = 0.15$ and $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.00$, respectively.
First, we take the angle at $x=0$ in Eqs. (\ref{theta_1}) or (\ref{theta_2}) as $\phi = -\pi/2$.
Then, the distribution of the helical magnetic field is shown in Fig.~\ref{Fig_field_1}.
In Fig.~\ref{Fig_field_1}, we show the distribution of the helical magnetic field (Fig.~\ref{Fig_field_1}(a)) and each component of the magnetic field, $(\mbox{\boldmath $H$}_{\rm ext})_x$, $(\mbox{\boldmath $H$}_{\rm ext})_y$, and $(\mbox{\boldmath $H$}_{\rm ext})_z$ (Figs.~\ref{Fig_field_1}(b)-(d)).
Under the magnetic field in Fig.~\ref{Fig_field_1}, we obtain the distribution of the order parameter shown in Fig.~\ref{op_1_2}.
In Fig.~\ref{op_1_2}, we show distributions of the order parameter in the $xy$-plane (Fig.~\ref{op_1_2}(a)) and the $zx$-plane (Fig.~\ref{op_1_2}(b)).
The cross sections parallel to the $xy$-plane at $z=1.5\xi_0,~11.5\xi_0$ and the $zx$-plane at $y=1.5\xi_0,~13.5\xi_0$ are interfaces between the superconducting region and the vacuum region.
In Fig.~\ref{op_1_2}(a), two vortices appear in the regions around $(x/\xi_0,~y/\xi_0) \sim (10,~7.5)$ and $(30,~7.5)$, where $(\mbox{\boldmath $H$}_{\rm ext})_y/(\Phi_0/\xi_0^2) \sim 0.00$ and $(\mbox{\boldmath $H$}_{\rm ext})_z/(\Phi_0/\xi_0^2) \sim \pm 0.15$ in Fig.~\ref{Fig_field_1}.
In these regions, the magnetic field is parallel or antiparallel to the $z$-axis, so these two vortices have quantum fluxes antiparallel to each other.
On the other hand, in Fig.~\ref{op_1_2}(b), one vortex appears in the region around $(x/\xi_0,~y/\xi_0) \sim (20,~7.5)$, where $(\mbox{\boldmath $H$}_{\rm ext})_y/(\Phi_0/\xi_0^2) \sim 0.15$ and $(\mbox{\boldmath $H$}_{\rm ext})_z/(\Phi_0/\xi_0^2) \sim 0.00$.
In this region, the magnetic field is parallel to the $y$-axis, so the vortex has a quantum flux parallel to the $y$-axis.
In total, three vortices appear.
They are separated by $0.25\xi_0$, and the angle between nearest-neighbor vortices is $\pi/2$.
\begin{figure}[t]
\centering
\includegraphics[scale=0.27]{Fig4.eps}
\caption{(a) Distributions of the magnetic field from the chiral helimagnet, (b) $x$-component of the magnetic field, (c) $y$-component of the magnetic field, (d) $z$-component of the magnetic field. The amplitude of the helical magnetic field is $H_0/(\Phi_0/\xi_0^2) = 0.15$ and the applied magnetic field is $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.00$. }
\label{Fig_field_1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{Fig5.eps}
\caption{Distributions of the order parameter in cross sections parallel to (a) $xy$-plane and (b) $zx$-plane. The amplitude of the helical magnetic field $H_0/(\Phi_0/\xi_0^2) = 0.15$ and the applied magnetic field $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.00$. }
\label{op_1_2}
\end{figure}
The vortex structure is different from that in our previous work\cite{Fukui_3d_proc}, although the model and the numerical parameters are the same.
The only difference is the initial random state of our iteration method for solving Eqs.~(\ref{e1})-(\ref{e5}).
To determine which state is more stable, we should calculate the free energies of both states; this is left for future work.
Next, we investigate vortex structures under another distribution of the helical magnetic field, with $\phi = \pi$ in Eq.~(\ref{theta_1}), which is shown in Fig.~\ref{Fig_field_2}.
Under this magnetic field, we obtain distributions of the order parameter shown in Fig.~\ref{op_2_2}.
In Fig.~\ref{op_2_2}, we show the distributions of the order parameter and the phases of the order parameter.
In Fig.~\ref{op_2_2}(a), two vortices appear in the region of the magnetic field with $(\mbox{\boldmath $H$}_{\rm ext})_z/(\Phi_0/\xi_0^2) > 0$.
The positions of these vortices in the cross sections from $z=1.5\xi_0$ to $11.5\xi_0$ shift along the $x$-axis.
Thus, the two vortices tilt toward the $x$-axis, even though the $x$-component of the magnetic field is zero, $(\mbox{\boldmath $H$}_{\rm ext})_x/(\Phi_0/\xi_0^2) = 0$.
Vortices parallel to the $y$-axis do not appear in Fig.~\ref{op_2_2}(b).
This result comes from the screening currents associated with the demagnetization factor of the superconductor.
Next, in order to avoid the difference between the shielding of fields parallel to the $y$- and $z$-axes, we investigate the vortex structure in a larger system.
The system size is $1.0L'\xi_0 \times 15\xi_0 \times 15\xi_0$.
In this system, the cross section parallel to the $yz$-plane is a square.
We take the same numerical parameters, $\kappa = 5,~T=0.3T_c,$ and $|\mbox{\boldmath $D$}|/J = 0.16$, and the same distribution of the helical magnetic field, $H_0/(\Phi_0/\xi_0^2) = 0.15,~H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.0$, and $\phi = \pi$ in Eq.~(\ref{theta_2}).
Under these numerical parameters, we obtain vortex structures shown in Figs.~\ref{op_3_2}(a) and \ref{op_3_2}(b).
We show vortex structures at top and bottom surface and the center cross section in the superconductor.
In Fig.~\ref{op_3_2}(a), we see that the two vortices around $x \sim 20\xi_0$ tilt toward the $x$-axis around $x \sim 10\xi_0$ and $30\xi_0$, while the two vortices around $x \sim 10\xi_0$ and $30\xi_0$, where the magnetic field is $(\mbox{\boldmath $H$}_{\rm ext})_y/(\Phi_0/\xi_0^2)= \pm 0.15$, are parallel or antiparallel to the $y$-axis.
So, these vortices have quantum fluxes parallel to the $y$-axis.
Compared to the previous model, shielding the field along the $y$-axis costs the same energy as shielding the field along the $z$-axis.
Thus, vortices parallel to the $y$-axis also appear in this system.
\begin{figure}[t]
\centering
\includegraphics[scale=0.27]{Fig6.eps}
\caption{(a) Distributions of the magnetic field from the chiral helimagnet, (b) $x$-component of the magnetic field, (c) $y$-component of the magnetic field, (d) $z$-component of the magnetic field. The amplitude of the helical magnetic field is $H_0/(\Phi_0/\xi_0^2) = 0.15$ and the applied magnetic field is $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.00$ for $\phi = \pi$ in Eq.~(\ref{theta_2}). }
\label{Fig_field_2}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[scale=0.32]{Fig7.eps}
\caption{Distributions of the order parameter and phases of the order parameter in the cross sections parallel to (a) $xy$-planes and (b) $zx$-planes. The amplitude of the helical magnetic field is $H_0/(\Phi_0/\xi_0^2) = 0.15$ and the applied magnetic field is $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.00$. }
\label{op_2_2}
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[scale=0.3]{Fig8.eps}
\caption{Distributions of the order parameter and phases of the order parameter in the cross sections parallel to (a) $xy$-planes and (b) $zx$-planes. The amplitude of the helical magnetic field $H_0/(\Phi_0/\xi_0^2) = 0.15$ and the applied magnetic field $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.00$. }
\label{op_3_2}
\end{figure*}
Next, we examine how the chirality of the helical rotation of the magnetic field affects the vortex structures, i.e., we examine the difference between vortex structures under right- and left-handed helical magnetic fields.
In order to change the direction of the rotation, we take the opposite DM vector: $|\mbox{\boldmath $D$}|/J=0.16$ with $\mbox{\boldmath $D$}$ antiparallel to the $x$-axis.
The helical magnetic field for this DM vector is shown in Fig.~\ref{Fig_field_3}.
The rotation of the helical magnetic field in Fig.~\ref{Fig_field_3} is opposite to that in Fig.~\ref{Fig_field_2}.
We solve the Ginzburg-Landau equations using this helical magnetic field.
We take the numerical parameter, $\kappa = 5,~T=0.3T_c$, and the magnetic field, $H_0/(\Phi_0/\xi_0^2) = 0.15$ and $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.0$.
The system size is $1.0L'\xi_0 \times 15\xi_0 \times 15\xi_0$, which is the same system size as Fig.~\ref{op_3_2}.
Under these conditions, we obtain vortex structures shown in Figs.~\ref{op_4_2}(a) and \ref{op_4_2}(b).
Comparing Figs.~\ref{op_3_2}(a) and \ref{op_4_2}(a), the directions of the vortices are completely opposite.
Thus, the direction of the vortex tilt depends on the chirality of the helical magnetic field.
Experimentally, these vortex structures may appear in superconductor / chiral helimagnet hybrid structures.
For example, our model may be equivalent to the system in which a small superconductor is surrounded by a large chiral helimagnet.
On the other hand, in the chiral helimagnet / superconductor bilayer system, when the superconductor is thin, only the perpendicular component of the magnetic structure is effective.
Then, our previous work on the two-dimensional superconductor is applicable to such bilayer systems\cite{Fukui_SUST,Fukui_JPSJ}.
Finally, we discuss the movement of vortices under the external current.
When the external current is applied along the helical axis to the vortex structures in Figs.~\ref{op_1_2}, \ref{op_2_2}, \ref{op_3_2}, and \ref{op_4_2}, the vortices easily move perpendicular to the $x$-axis.
On the other hand, when the external current flows along the $y$-axis, the vortices move in planes parallel to the $zx$-plane.
If the $z$-component of the vortex direction is not zero, the vortex moves in the $x$-direction.
However, the distribution of the helical magnetic field varies spatially along the helical axis ($x$-axis).
So, the interaction between the vortex and the magnetic field changes as the vortex moves.
The motion of the vortices is therefore obstructed by this interaction, which leads to an increase of the critical current.
\begin{figure}[t]
\centering
\includegraphics[scale=0.27]{Fig9.eps}
\caption{(a) Distributions of the magnetic field from the chiral helimagnet, (b) $x$-component of the magnetic field, (c) $y$-component of the magnetic field, (d) $z$-component of the magnetic field. The amplitude of the helical magnetic field $H_0/(\Phi_0/\xi_0^2) = 0.15$ and the applied magnetic field $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.00$ for $\phi = \pi$ in Eq.~(\ref{theta_2}). The ratio between the Dzyaloshinsky-Moriya interaction and the ferromagnetic interaction is $|\mbox{\boldmath $D$}|/J=0.16$, with $\mbox{\boldmath $D$}$ antiparallel to the $x$-axis.}
\label{Fig_field_3}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[scale=0.3]{Fig10.eps}
\caption{Distributions of the order parameter and phases of the order parameter in the cross sections parallel to (a) $xy$-planes and (b) $zx$-planes. The amplitude of the helical magnetic field $H_0/(\Phi_0/\xi_0^2) = 0.15$ and the applied magnetic field $H_{\rm appl}/(\Phi_0/\xi_0^2) = 0.00$. }
\label{op_4_2}
\end{figure*}
\section{Summary}
We have numerically investigated vortex structures in a three-dimensional superconductor under the helical magnetic field from the chiral helimagnet.
We have obtained distributions of the order parameter using the three-dimensional Ginzburg-Landau equations.
When two vortices appear in one magnetic-field region with $(\mbox{\boldmath $H$}_{\rm ext})_z > 0$, the vortices tilt toward the $x$-axis (the helical axis) in spite of $(\mbox{\boldmath $H$}_{\rm ext})_x = 0$.
This configuration may come from the interaction between the vortices and between the vortices and the boundary of the system.
It is confirmed that when the rotation of the helical magnetic field is reversed, the directions of the vortex tilt are also reversed.
These vortex structures do not occur under the uniform magnetic field.
A detailed discussion of the stability of these vortex states requires more simulations.
We have only considered superconductors under the helical magnetic field, without a uniform applied magnetic field or an external current.
Under a uniform magnetic field, the magnetic structure in the chiral helimagnet changes into the chiral soliton lattice.
In a microscopic system, the uniform magnetic field decreases the number of solitons in the chiral soliton lattice discretely\cite{CHM_Cr_3}.
We therefore expect that this discrete change of the soliton number affects the vortex structures.
If an external current flows in this system, the two vortices that tilt toward the helical axis may move in a distinctive manner because of the complicated distribution of currents in the superconductor.
Investigations and discussions of these phenomena are left for future work.
|
1,108,101,564,223 | arxiv | \section{Introduction}
The Yang-Baxter equation (YBE) originated from the solution of the
one-dimensional problem of $N$ particles with a repulsive $\delta$
interaction \cite{th1,th1-1},
and from problems of statistical models on lattices \cite{th2,th2-2,th3}.
Today, the Yang-Baxter equation has become an important tool in physics, with applications in a variety of areas,
for instance in quantum field theory, statistical mechanics, and group theory
\cite{th2,th2-2,th3,thpnas1,thpnas2,th4,thpnas3,ybe11,ybe12,ybe13,ybe14,ybe15,ap1}. It can be
applied to completely integrable statistical models to find their
solutions by means of the nested Bethe ansatz \cite{ap1}. Recently
it has gradually become clear that the Yang-Baxter equation is naturally linked
to a very active area of frontier research, quantum information and
computing \cite{nielsen,long}.
It has been found that the Yang-Baxter equation is closely
related to quantum entangled states \cite{ap6,ap7}, and that the braiding operations in the Yang-Baxter equation are universal quantum
gates \cite{ap2-2,ap3,ap4,ap5,ap8}. The Yang-Baxter equation has attracted much attention in recent years and is being studied intensively in the context of quantum correlation and entanglement, and
topological quantum computing \cite{ap2,ap2-1,ap9,ap9-1,ap9-2,ap9-3,ap10,panjw}.
Due to its importance, the experimental verification of the Yang-Baxter
equation has been pursued all along. Notably, an experimental verification
was carried out by Tennant et al.\ in 1995 \cite{intest0,intest1}. Tennant
et al.\ measured the spectrum of a Heisenberg spin-half chain, and the
experimental result appeared to agree with the calculation based on
the Yang-Baxter equation. Recently, the density profile of one-dimensional wires was measured
and it agreed well with the theoretical calculations
based on Yang's solvable model \cite{intest2}. However, these experiments
are indirect verifications of the Yang-Baxter equation: the
Yang-Baxter equation provides only a sufficient condition for
the spectrum or profile, or, equivalently, the observed profile is only a necessary condition for the Yang-Baxter equation and does not guarantee its validity. Thus the direct verification of the Yang-Baxter equation is still an open question \cite{opttest}.
Direct experimental verification of the Yang-Baxter equation requires
not only the verification of the equality of the left-hand and the right-hand
sides of the equation, but also the transformation relation between
the spectral parameters in the Yang-Baxter equation, the Lorentz-like
transformation.
In this paper, we report the first direct experimental simulation
of the Yang-Baxter equation using quantum optics. The fundamental principles of the present simulation were established in 2008 by Hu, Xue and Ge \cite{opttest}, who gave an
explicit optical realization of the Yang-Baxter equation. By the use of the Temperley-Lieb algebra, they made a remarkable reduction and obtained a Yang-Baxter equation of dimension 2, the minimum dimensional Yang-Baxter equation so far. This makes it possible to implement the equation in quantum
optics with current technology. In our experiment, we implemented the Hu-Xue-Ge scheme and demonstrated the validity of the Yang-Baxter equation using linear quantum optical
components such as beamsplitters, half-wave plates, and quarter-wave
plates. The equality of the two sides of the Yang-Baxter
equation is directly verified. In particular, the Lorentz-like
transformation in the spectral parameters of the Yang-Baxter
equation is experimentally demonstrated for the first time. The
present experiment completes the first direct experimental
simulation of the Yang-Baxter equation.
\section{Theoretical framework}
The Yang-Baxter equation reads,
\begin{eqnarray}
\breve{R}_{12}(u)\breve{R}_{23}(u_{23})\breve{R}_{12}(v)=\breve{R}_{23}(v)\breve{R}_{12}(u_{23})\breve{R}_{23}(u),\label{e0}
\end{eqnarray}
where $u$ and $v$ are spectral parameters, and $\beta^{-1}=ic$
($c$ is the light speed in vacuum), and
\begin{eqnarray}
u_{23}=\frac{u+v}{1+\beta^{2} uv},
\end{eqnarray}
is the Lorentz-like transformation relation of the spectral parameters.
The $N^{2}\times N^{2}$-dimensional matrix $\breve{R}$ acts on the
tensor product space $V\otimes V$ of two $N$-dimensional spaces, and is the two-particle scattering matrix
depending on the relative rapidity $\tanh^{-1}(\beta u)$. When $\beta u=1$, $\breve{R}=b$, which is a braid matrix, and the Yang-Baxter equation reduces
to the braid relation $b_{12}b_{23}b_{12}= b_{23} b_{12} b_{23}$. The Yang-Baxter equation states that the scattering of particles 1 and 2, followed by the scattering of particles 2 and 3 and then of particles 1 and 2 again, is equal to the scattering of particles 2 and 3, followed by the scattering of particles 1 and 2 and then of particles 2 and 3 again, provided the spectral parameters are suitably related.
The Yang-Baxter equation is an abstract equation, and the quantities in it may have different meanings in different problems. For instance, it has been found recently that the braid matrix and the Yang-Baxter equation are connected to entangled quantum states \cite{ap3}. The Bell-basis entangled states in four dimensions can be obtained by applying a braid operation that satisfies the Yang-Baxter equation to the computational basis. Here the matrix in the Yang-Baxter equation becomes a transformation that maps the computational basis to the Bell-basis states. There have been active studies in this direction; interested readers can refer to Ref. \cite{opttest} and references therein for more details.
The scheme used in our experiment was proposed by Hu, Xue and Ge
recently \cite{opttest}. The Yang-Baxter equation with the minimum
nontrivial dimension is in four dimensions. In this case, $\breve{R}$
becomes a $4\times4$ matrix. In principle, it can be simulated
directly by means of quantum optics, and Hu, Xue and Ge gave an
explicit optical realization. However, such a realization requires
many controlled-NOT gates, whose realization in linear quantum optics is of
very low efficiency \cite{cnot1,cnot2}, so that its
feasibility with current technology is elusive. A further
simplification was made by the use of the Temperley-Lieb algebra
\cite{tla}, and the minimal dimension of the Yang-Baxter equation was
reduced further to 2. This makes it feasible to implement in quantum
optics with current technology.
The 2-dimensional Yang-Baxter equation is expressed as,
\begin{equation}
A(u)B(\frac{u+v}{1+\beta^{2}uv})A(v)=B(v)A(\frac{u+v}{1+\beta^{2}uv})B(u),\label{e1}
\end{equation}
where
\begin{equation}
A(u)=\rho(u)\left(\begin{array}{cc}
\frac{1+\beta^{2}u^{2}+2i\epsilon\beta u}{1+\beta^{2}u^{2}-2i\epsilon\beta u} & 0\\
0 & 1
\end{array}\right),\label{e2}
\end{equation}
and
\begin{equation}
B(u)=\frac{\rho(u)}{1+\beta^{2}u^{2}-2i\epsilon\beta
u}\left(\begin{array}{cc}
1+\beta^{2}u^{2} & 2i\epsilon\beta u\\
2i\epsilon\beta u & 1+\beta^{2}u^{2}
\end{array}\right),\label{e3}
\end{equation}
and $\rho(u)$ is a normalization factor and $\epsilon=\pm1$.
For convenience in the optical realization, $A$ and $B$ are represented
as functions of the optical parameter $\theta$, the angle between the
optical axis of an optical device and the vertical direction. The
two sets of parameters are related by the following
transformation
\begin{equation}
\frac{1+\beta^{2}u^{2}+2i\epsilon\beta
u}{1+\beta^{2}u^{2}-2i\epsilon\beta u}\equiv e^{-2i\theta}\label{e4}
\end{equation}
and
\begin{equation}
\rho(u)\equiv e^{i\theta}.\label{e5}
\end{equation}
$A(u)$ and $B(u)$ then become simple matrices in two dimensions
\begin{equation}
A(\theta)=\left(\begin{array}{cc}
e^{-i\theta} & 0\\
0 & e^{i\theta}
\end{array}\right),\label{e6}
\end{equation}
and
\begin{equation}
B(\theta)=\left(\begin{array}{cc}
\cos\theta & -i\,\sin\theta\\
-i\,\sin\theta & \cos\theta
\end{array}\right).\label{e7}
\end{equation}
The Yang-Baxter equation in Eq. (\ref{e1}) can be re-written as
\begin{equation}
A(\theta_{1})B(\theta_{2})A(\theta_{3})=B(\theta_{3})A(\theta_{2})B(\theta_{1}).\label{e8}
\end{equation}
The three parameters in the Yang-Baxter equation $\theta_{1}$, $\theta_{2}$
and $\theta_{3}$ are not independent, and they are related through
the following equation
\begin{equation}
(e^{-2i\theta_{2}}+1)[i-\sec(\theta_{1}-\theta_{3})\sin(\theta_{1}+\theta_{3})]=2i.\label{e9}
\end{equation}
Using this relation, we can transform the Lorentz-like relation in
spectral parameters into a relation in the optical angle parameters,
\begin{equation}
\theta_{2}=\arctan\left(\frac{\sin(\theta_{1}+\theta_{3})}{\cos(\theta_{1}-\theta_{3})}\right).\label{e10}
\end{equation}
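The two-dimensional Yang-Baxter relation (\ref{e8}) together with the angle relation (\ref{e10}) can be checked directly by multiplying the $2\times2$ matrices (\ref{e6}) and (\ref{e7}). A minimal numerical check in Python, shown here only as a consistency test of Eqs.~(\ref{e6})-(\ref{e10}) and not as part of the experiment:
\begin{verbatim}
import numpy as np

def A(t):                       # Eq. (e6)
    return np.diag([np.exp(-1j * t), np.exp(1j * t)])

def B(t):                       # Eq. (e7)
    return np.array([[np.cos(t), -1j * np.sin(t)],
                     [-1j * np.sin(t), np.cos(t)]])

rng = np.random.default_rng(1)
for _ in range(10):
    t1, t3 = rng.uniform(0.0, np.pi, size=2)
    t2 = np.arctan2(np.sin(t1 + t3), np.cos(t1 - t3))   # Eq. (e10), one branch mod pi
    lhs = A(t1) @ B(t2) @ A(t3)
    rhs = B(t3) @ A(t2) @ B(t1)
    assert np.allclose(lhs, rhs)
print("Eq. (e8) holds whenever theta_2 satisfies Eq. (e10)")
\end{verbatim}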
We use the photon polarization qubit in our experiment. A general
elliptically polarized photon state $|\psi\rangle$ can be written as
\begin{equation}
|\psi\rangle=\alpha|\updownarrow\rangle+i\beta|\leftrightarrow\rangle,\label{e11}
\end{equation}
where $\alpha$ and $\beta$ are real and satisfy $|\alpha|^{2}+|\beta|^{2}=1$.
Without loss of generality, we assume that $\alpha$ is positive.
The sign of the $\beta$ specifies the handedness of the circular
polarized photon. $|\updownarrow\rangle$ and
$|\leftrightarrow\rangle$ are basis states of the vertical and
horizontal polarization respectively. State $|\psi\rangle$ is
measured directly in the experiment.
The operations $A(\theta)$ and $B(\theta)$ can be realized by
series of quarter-wave plates (QWPs) and half-wave plates (HWPs),
whose effects are equivalent to elements of the SU(2)
transformation group \cite{wp}, as shown in Fig. \ref{f1},
\begin{figure}[h]
\centerline{\includegraphics[scale=0.5]{Fig1}} \caption{(Color online) Realization of the operations (a) $A(\theta)$ and (b) $B(\theta)$ by optical
elements. $U_{Q}(\theta)$ and $U_{H}(\theta)$ are the matrices of the QWP
and HWP, respectively, and $\theta$ is the angle between the optical
device axis and the vertical direction.}\label{f1}
\end{figure}
\begin{figure}[h]
\centerline{ \includegraphics[scale=0.51]{Fig2}} \caption{(Color online) Optical
realization of (a) the LHS and (b) the RHS of the Yang-Baxter equation. The
angles of the QWPs (filled) and HWPs (empty) must satisfy the
Lorentz-like relation given in Eq. (\ref{e10}).}\label{f2}
\end{figure}
\begin{equation}
U_{Q}(\theta)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}
1-i\,\cos(2\theta) & -i\,\sin(2\theta)\\
-i\,\sin(2\theta) & 1+i\,\cos(2\theta)
\end{array}\right)\label{e14}
\end{equation}
and
\begin{equation}
U_{H}(\theta)=U_{Q}^{2}(\theta)=-i\left(\begin{array}{cc}
\cos(2\theta) & \sin(2\theta)\\
\sin(2\theta) & -\cos(2\theta)
\end{array}\right),\label{e15}
\end{equation}
where $\theta$ is the angle between the optical axis of the QWP or HWP
and the vertical direction. Thus, $A(\theta)$ and $B(\theta)$ can be
simulated by series of QWPs and HWPs, i.e.
\begin{equation}
A(\theta)=U_{Q}\left(\frac{\pi}{4}\right)U_{H}\left(-\frac{\pi}{4}+\frac{\theta}{2}\right)U_{Q}\left(\frac{\pi}{4}\right)\label{e16}
\end{equation}
and
\begin{equation}
B(\theta)=U_{Q}\left(\frac{\pi}{2}\right)U_{H}\left(\frac{\theta}{2}\right)U_{Q}\left(\frac{\pi}{2}\right),\label{e17}
\end{equation}
respectively. The two sides of the Yang-Baxter equation are simulated
by two series of wave plates as illustrated in Fig. \ref{f2}.
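The wave-plate decompositions (\ref{e16}) and (\ref{e17}) can likewise be verified by direct matrix multiplication of Eqs.~(\ref{e14}) and (\ref{e15}); the short check below is a numerical sketch, not a description of the optical hardware.
\begin{verbatim}
import numpy as np

def U_Q(t):                     # quarter-wave plate, Eq. (e14)
    return np.array([[1 - 1j * np.cos(2 * t), -1j * np.sin(2 * t)],
                     [-1j * np.sin(2 * t), 1 + 1j * np.cos(2 * t)]]) / np.sqrt(2)

def U_H(t):                     # half-wave plate, Eq. (e15)
    return -1j * np.array([[np.cos(2 * t),  np.sin(2 * t)],
                           [np.sin(2 * t), -np.cos(2 * t)]])

def A(t):
    return np.diag([np.exp(-1j * t), np.exp(1j * t)])

def B(t):
    return np.array([[np.cos(t), -1j * np.sin(t)], [-1j * np.sin(t), np.cos(t)]])

for t in np.linspace(0.0, np.pi, 13):
    assert np.allclose(U_Q(np.pi/4) @ U_H(-np.pi/4 + t/2) @ U_Q(np.pi/4), A(t))  # Eq. (e16)
    assert np.allclose(U_Q(np.pi/2) @ U_H(t/2) @ U_Q(np.pi/2), B(t))             # Eq. (e17)
print("QWP-HWP-QWP sequences reproduce A(theta) and B(theta)")
\end{verbatim}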
The qubit state after the transformation of the LHS of the
Yang-Baxter equation is denoted as,
\begin{equation}
|\psi_{{\rm
out}}\rangle_{L}=\alpha_{L}|\updownarrow\rangle_{L}+i\beta_{L}|\leftrightarrow\rangle_{L}\label{e18}
\end{equation}
and the qubit state after the processing of the RHS of the Yang-Baxter
equation can be expressed as
\begin{equation}
|\psi_{{\rm
out}}\rangle_{R}=\alpha_{R}|\updownarrow\rangle_{R}+i\beta_{R}|\leftrightarrow\rangle_{R}.\label{e19}
\end{equation}
To check the equality of these two final output states, we define
the fidelity
\begin{equation}
{\rm C}_{{\rm YBE}}=|{}_{L}\langle\psi_{{\rm out}}|\psi_{{\rm
out}}\rangle_{R}|,\label{e21}
\end{equation}
which is the absolute value of the overlap of the two states. The fidelity
${\rm C}_{{\rm YBE}}$ is a good measure of the validity of the
Yang-Baxter equation. If ${\rm C}_{{\rm YBE}}$ equals 1, the two
sides of the Yang-Baxter equation are equal; otherwise the equation is not
valid. In the real experiment, it should be 1 within statistical
errors and independent of the input state.
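The theoretical curve of ${\rm C}_{{\rm YBE}}$ versus $\theta_{2}$, against which the measurements below are compared, can be generated directly from Eqs.~(\ref{e6})-(\ref{e8}) and (\ref{e21}). A brief sketch for the vertical input state, using the particular angles of the next section:
\begin{verbatim}
import numpy as np

def A(t): return np.diag([np.exp(-1j * t), np.exp(1j * t)])
def B(t): return np.array([[np.cos(t), -1j * np.sin(t)],
                           [-1j * np.sin(t), np.cos(t)]])

t1, t3 = np.radians(56.0), np.radians(23.0)
psi_in = np.array([1.0, 0.0])                      # vertical polarization

theta2 = np.radians(np.linspace(0.0, 180.0, 721))
C_YBE = np.array([abs(np.vdot(A(t1) @ B(t2) @ A(t3) @ psi_in,
                              B(t3) @ A(t2) @ B(t1) @ psi_in)) for t2 in theta2])

t2_star = np.degrees(np.arctan2(np.sin(t1 + t3), np.cos(t1 - t3)))
print("C_YBE = 1 at theta_2 = %.2f degrees" % t2_star)   # about 49.49 degrees
\end{verbatim}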
\section{Experimental method and results}
The experimental setup is shown in Fig. \ref{f3}. A He-Ne laser
with center wavelength 632.8~nm, drawn in the left part of Fig.
\ref{f3}, is used to generate sequences of photons with a certain
polarization state. The input state can be prepared in an arbitrary
linearly or elliptically polarized photon state conveniently by
using either a HWP or a QWP following the PBS and an attenuator. In
the experiments, the intensity of the light source is attenuated to
a weak level that well approximates a single-photon source. Then
the photons go through a series of optical components which simulate
either the left-hand side or the right-hand side of the Yang-Baxter
equation, as indicated in the middle part of Fig. \ref{f3}.
The right part of Fig. \ref{f3} is used to measure the polarization
state of the photon output state after going through the corresponding
Yang-Baxter transformation.
\begin{figure}[h]
\centerline{\includegraphics[scale=0.3]{Fig3}}
\caption{(Color online) Experimental setup. The left part generates a horizontally
polarized state, which is then transformed into a desired input
state with arbitrary linear or elliptical polarization by using
a HWP or QWP, respectively, following the PBS. The middle part is
the left-hand side or right-hand side of the Yang-Baxter equation,
consisting of a series of wave plates. The right part, containing a
Glan-Laser prism and a single-photon detector, is used to detect the
polarization state of the output state. A QWP may be inserted to
determine the handedness of an elliptically polarized photon. }
\label{f3}
\end{figure}
\subsection{Verification of Equality of the Yang-Baxter equation}
Groups of angles $\theta_{1}$, $\theta_{2}$ and $\theta_{3}$ are
selected and tested experimentally. The fidelity ${\rm C}_{{\rm YBE}}$
is measured for each group of angles. The experimental
results confirm the Yang-Baxter equation very well. For illustration
purposes, we set the two angles $\theta_{1}$ and $\theta_{3}$ at the values
$56{}^{\textrm{o}}$ and
$23{}^{\textrm{o}}$, respectively, in the following. We then vary
$\theta_{2}$ from $0{}^{\textrm{o}}$ to $180{}^{\textrm{o}}$ and
obtain ${\rm C}_{{\rm YBE}}$ for each $\theta_{2}$.
The curve of ${\rm C}_{{\rm YBE}}$ versus $\theta_{2}$ for the input
state $|\updownarrow\rangle$ is presented in Fig. \ref{f4}. ${\rm
C}_{{\rm YBE}}$ reaches its maximum value $0.9997\pm0.0237$ when
$\theta_{2}=49.49^{\textrm{o}}$, which is equal to 1 within statistical
error. From this one can see that for arbitrary values of the angles
$\theta_{1}$, $\theta_{2}$ and $\theta_{3}$,
$A(\theta_{3})B(\theta_{2})A(\theta_{1})$ and
$B(\theta_{1})A(\theta_{2})B(\theta_{3})$ are usually not equal.
Only when the three angles satisfy the Lorentz-like relation, Eq.
(\ref{e10}), namely, when they satisfy the Yang-Baxter equation, are the
operations equal. This clearly verifies the validity of the
Yang-Baxter equation.
For the input state $|\leftrightarrow\rangle$, results identical,
within statistical error, to those for the input state
$|\updownarrow\rangle$ are obtained.
If an input state is an arbitrarily polarized state, say,
$(0.7071-0.5417i)|\updownarrow\rangle$
$-0.4545i|\leftrightarrow\rangle$, the fidelity ${\rm C}_{{\rm
YBE}}$ versus $\theta_{2}$ curve is depicted in Fig. \ref{f5}. One
can see that ${\rm C}_{{\rm YBE}}$ reaches the maximum value $1$
within statistical error when $\theta_{2}=49.49{}^{\textrm{o}}$,
which accords with the theoretical value determined by Eq.
(\ref{e10}); for other $\theta_{2}$ values ${\rm
C}_{{\rm YBE}}$ does not equal 1, which implies that
$A(\theta_{3})B(\theta_{2})A(\theta_{1})$ and
$B(\theta_{1})A(\theta_{2})B(\theta_{3})$ are not equal when the angles do
not satisfy Eq. (\ref{e10}), i.e., when the Yang-Baxter equation
is not satisfied. Therefore, the two operations are equal only when
the Yang-Baxter equation condition is met. This firmly demonstrates
the validity of the Yang-Baxter equation.
As already seen above, $A(\theta_{3})B(\theta_{2})A(\theta_{1})$ and
$B(\theta_{1})A(\theta_{2})B(\theta_{3})$ are not identical for
arbitrary sets of $\theta_{i}$, $i=1,2,3$. This is clearly manifested
again in the difference between the two curves in Fig. \ref{f4} and
Fig. \ref{f5}: the fidelity ${\rm C}_{{\rm YBE}}$ between the outputs of
$A(\theta_{3})B(\theta_{2})A(\theta_{1})$ and
$B(\theta_{1})A(\theta_{2})B(\theta_{3})$ depends on the input state, because the two operations transform the same
input state to different output states for general values of
$\theta_{2}$. The Yang-Baxter equation corresponds to a specific
setting, that is, ${\rm C}_{{\rm YBE}}$ is 1 if the three angles
satisfy Eq. (\ref{e10}). For the particular values of $\theta_{1}$
and $\theta_{3}$ displayed in Figs. \ref{f4} and \ref{f5},
$\theta_{2}=49.49^{\textrm{o}}$ satisfies the Yang-Baxter equation,
as clearly demonstrated in Fig. \ref{f4} and Fig. \ref{f5}.
The transformations representing the two sides of the Yang-Baxter
equation transform any input state into the same output state,
clearly validating the equality of the Yang-Baxter equation in the
experimental results.
\subsection{Verification for the necessity of Lorentz-like transformation}
So far we have verified the sufficiency of the Lorentz-like transformation
of the parameters $u$ and $v$ in Eq. (\ref{e1}). In terms of the
parameters $\theta_{1}$, $\theta_{2}$ and $\theta_{3}$, the transformation
is expressed in Eq. (\ref{e10}). However, on the basis of the above experimental verification alone, it cannot be excluded that
other groups of $\theta_{1}$, $\theta_{2}$ and $\theta_{3}$,
which do not satisfy the Lorentz-like transformation, also validate
the Yang-Baxter equation.
It is therefore essential to verify that the Lorentz-like transformation of the parameters
$u$ and $v$ in Eq. (\ref{e1}) is also a necessary condition for the validity of the Yang-Baxter equation.
In other words, we need to find all the relations among $\theta_{1}$,
$\theta_{2}$ and $\theta_{3}$ that make the Yang-Baxter equation valid.
In the experiment, this means recording all groups of
$\theta_{1}$, $\theta_{2}$ and $\theta_{3}$ for which the fidelity
${\rm C}_{{\rm YBE}}$ reaches 1 within statistical error. To achieve this,
we fixed $\theta_{3}$ first. For each fixed $\theta_{3}$, we kept
$\theta_{2}$ fixed and tuned $\theta_{1}$ from 0 to $\pi$ continuously to
find all pairs of $\theta_{1}$ and $\theta_{2}$ that make the
fidelity ${\rm C}_{{\rm YBE}}$ almost 1 (here we choose
the lower bound of ${\rm C}_{{\rm YBE}}$ as 0.9995). The experimental
results show that all groups of $\theta_{1}$, $\theta_{2}$ and $\theta_{3}$
satisfy Eq. (\ref{e10}) when ${\rm C}_{{\rm YBE}}$ attains 1 within statistical error,
i.e., the original spectral parameters $u$ and $v$ satisfy the Lorentz-like transformation.
To illustrate this, we show three series of our experimental results,
i.e., three curves of $\theta_{2}$ versus $\theta_{1}$ for the corresponding
three fixed values of $\theta_{3}$ when ${\rm C}_{{\rm YBE}}$ equals 1 within the
statistical error. We draw the relation curves of $\theta_{2}$ versus
$\theta_{1}$ for different values of $\theta_{3}$ while keeping
${\rm C}_{{\rm YBE}}>0.9995$ in Fig. \ref{f6}, where
$\theta_{3}$ is fixed at $32^{\textrm{o}}$ (blue dots),
$56^{\textrm{o}}$ (green squares), and $146^{\textrm{o}}$ (red
triangles), respectively. The theoretical curves are also shown in Fig. \ref{f6}
(lines), from which one can see that the experimental results agree
with the theoretical prediction very well. The rich structure of the
Lorentz-like relation is well confirmed in the experiment.
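The theoretical curves in Fig.~\ref{f6} follow directly from Eq.~(\ref{e10}); a short sketch of their computation, taking the branch of $\theta_{2}$ in $[0{}^{\textrm{o}},180{}^{\textrm{o}})$, is:
\begin{verbatim}
import numpy as np

theta1 = np.linspace(0.0, 180.0, 361)                       # degrees
curves = {}
for theta3 in (32.0, 56.0, 146.0):
    t1, t3 = np.radians(theta1), np.radians(theta3)
    theta2 = np.degrees(np.arctan2(np.sin(t1 + t3), np.cos(t1 - t3))) % 180.0
    curves[theta3] = theta2   # theta_2 versus theta_1 for this fixed theta_3 (cf. Fig. 6)
\end{verbatim}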
\begin{figure}[h]
\centerline{ \includegraphics[scale=0.55]{Fig4}} \caption{(Color online) The curve
of ${\rm C}_{{\rm YBE}}$ versus $\theta_{2}$ where $\theta_{1}$ and
$\theta_{3}$ are kept fixed at $56{}^{\textrm{o}}$ and
$23{}^{\textrm{o}}$, respectively. The dots are the experimental
data while the line is the theoretical curve. The input state is
$|\updownarrow\rangle$, the vertical polarization state. When the
input state is chosen as $|\leftrightarrow\rangle$, the horizontally
polarized state, both experimental and theoretical results are
identical to those for the vertical polarization input state. }
\label{f4}
\end{figure}
\begin{figure}[h]
\centerline{ \includegraphics[scale=0.55]{Fig5}} \caption{(Color online) The curve
of ${\rm C}_{{\rm YBE}}$ versus $\theta_{2}$ where $\theta_{1}$ and
$\theta_{3}$ are kept fixed at $56{}^{\textrm{o}}$ and
$23{}^{\textrm{o}}$, respectively. The dots are the experimental
data while the line is the theoretical curve. The input state is an
elliptically polarized state with the form
$(0.7071-0.5417i)|\updownarrow\rangle-0.4545i|\leftrightarrow\rangle$.
${\rm C}_{{\rm YBE}}$ reaches $0.9999\pm0.0356$ when
$\theta_{2}=49.49^{\textrm{o}}$. }
\label{f5}
\end{figure}
\begin{figure}[h]
\centerline{\includegraphics[scale=0.8]{Fig6}} \caption{(Color online) $\theta_{2}$
versus $\theta_{1}$ curve while fixing $\theta_{3}$. The blue dots,
the green squares and the red triangles are the experimental data
for $\theta_{3}=32{}^{\textrm{o}}$, $56{}^{\textrm{o}}$, and
$146{}^{\textrm{o}}$, while the lines are the corresponding
theoretical results.}
\label{f6}
\end{figure}
\section{Summary, review and outlook}
The Yang-Baxter equation has been directly verified experimentally using
linear quantum optics devices for the first time, in the following two respects.
On the one hand, the experiment proved the equality between the two sides of the
Yang-Baxter equation when the parameters $\theta_{1}$,
$\theta_{2}$ and $\theta_{3}$ satisfy Eq. (\ref{e10}), which is to the $\theta$'s
what the Lorentz-like transformation is to the original spectral parameters
$u$ and $v$. This means that the validity of the Yang-Baxter equation is guaranteed
when the spectral parameters satisfy the Lorentz-like transformation.
On the other hand, we verified that the Lorentz-like transformation of the spectral parameters is also a necessary condition for the
validity of the Yang-Baxter equation. We recorded all groups of the parameters $\theta_{1}$,
$\theta_{2}$ and $\theta_{3}$ that make the fidelity 1 within the statistical error,
and found that each group satisfies Eq. (\ref{e10}). In this process, the beautiful structure of the
Lorentz-like transformation of the spectral parameters is fully displayed once again.
Two issues remain open for further
study of the higher-dimensional Yang-Baxter equation. One important
issue is the role of entanglement in the Yang-Baxter equation. In
the present experiment, no entanglement is involved. In higher
dimensions, the operations in the Yang-Baxter equation will inevitably bring in
quantum entanglement. This will be an interesting and significant
subject for future study, and consequently the entangling power of
the operations in the Yang-Baxter equation, namely the operations on either
side of the equation, also emerges naturally as an important
topic. The effect of the Lorentz-like transformation in the
Yang-Baxter equation is remarkable: it drives two independent
operators to become identical, as seen in this 2-dimensional
quantum system. In higher dimensions, the operations will be more
complex and entanglement also comes into play. The role of the
transformation relation will be tested and studied in more detail
and in wider aspects.
Discovered in the solution of problems in many-body systems and statistical models
in the middle of the last century, the Yang-Baxter equation has been revealed in a variety of contexts
and has been applied to many different areas, such as quantum field theory,
statistical mechanics, and group theory. Now, the Yang-Baxter equation
is playing an important role in quantum information science, which is a thriving area of frontier research.
The relation between the Bell basis and the Yang-Baxter equation enables the investigation of quantum entanglement,
and the relation between anyons and the Yang-Baxter equation entails exploring topological
quantum computing. Many interesting applications of the Yang-Baxter equation lie ahead. The Yang-Baxter equation not only deserves a direct verification,
like this work, but also merits continued investigation.
\begin{acknowledgments}
This work was supported by the National Natural Science Foundation
of China (Grants No. 10874098 and No. 11175094), and the National Basic Research
Program of China (Grants No. 2009CB929402 and No. 2011CB9216002).
\end{acknowledgments}
|
1,108,101,564,224 | arxiv | \section{Introduction}
\hspace{\parindent}
The standard Einstein gravity theory corresponds to an
open region in the real section of the phase space of Ashtekar's theory.
The boundary\footnote{Meaning here just the closure of the region minus
the region itself.} of that region is set up by degenerate data.
There are several motivations to study the degenerate sector.
First, a natural question which arises is whether or not
the evolution could throw some data out of the Einstein
theory region. If it did, then, since reality is preserved,
the evolving data would have to cross the degenerate sector.
Second, according to the loop quantization, quantum excitations of the
gravitational field are lower dimensional and define degenerate,
non-invertible metric tensors.
The degenerate data can be classified with respect to the rank of the
densitized triad, and the rank of the squared triad (see next section).
It should be noted that all the considerations in this paper are local.
Our classification of the degeneracy, in particular, applies
only to open regions of the surface of initial data,
whereas in the general case the type can vary from one region to
another.
All the solutions of the Einstein-Ashtekar equations
of the types (1,1) and (2,2) were derived in
\cite{Jacobson,Lewandowski}. In the first case \cite{Jacobson},
a general solution is the congruence of the integral curves
defined by the triad and foliating $\Sigma$ which behave like 1+1
dimensional vacuum space-times with a pair of massless complex valued
fields propagating along them.
In the (2,2) case \cite{Lewandowski}, it was shown that the
preservation of the reality by the evolution
implies the existence of a foliation of $\Sigma$ into the integral
2-surfaces
tangent to a given triad. Analogously to Jacobson's case, the
equations
of the 3+1 gravity make the 2-surfaces behave like 2+1 dimensional empty
space-times with an extra massless complex field assigned to each surface
and
propagating along it.
An important observation was, that the conditions defining
each of the sectors Poisson commute with the Hamiltonian
modulo themselves and the constraints.
In the present paper, the Einstein-Ashtekar equations
will be solved for the remaining two types of the degenerate data.
In the first, (1,0), case the solution is a space-time which is a
`set of independently evolving points'. In the second (2,1) case,
the general solution is such that the surface of initial data $\Sigma$
is foliated by integral curves of the vector field from the triad.
Nine complex fields evolve along these curves.
As in the previously studied cases, it is shown that
the conditions defining each degeneracy sector
weakly (in the same sense as above) Poisson commute with the
Hamiltonian\footnote{Another interesting derivation of our result on the possibility of the evolution of a non-degenerate data into a degenerate one was given in \cite{Ma}.}.
Before the systematic study of the Ashtekar equations in the
degenerate sector, which was started by Jacobson \cite{Jacobson},
various aspects of the degenerate sector had been discussed, for instance, by
Jacobson and Romano \cite{Romano}, Bengtsson \cite{Bengtsson},
Reisenberger \cite{Reisenberger} and Matschull \cite{Matschull}. See also more recent work \cite{recent}.
\section{Ashtekar's theory.}
\hspace{\parindent}
For reader's convenience we shall briefly review the Ashtekar's theory.
It is a canonical theory on a space-time manifold $ \Sigma \times {\mbox {\boldmath $ R $}}$, where $\Sigma$ is a three-real-surface of initial data (the 'space') and {\boldmath $ R $} is the one dimensional space of values for a time parameter. The phase space consists of the pairs of fields $(A,E)$, where $A$ is an algebra $ sl(2,{\mbox {\boldmath $C$}}) $-valued one-form on $\Sigma$ and $E$ is an $ sl(2,{\mbox {\boldmath $C$}}) $-valued vector density field of weight 1 defined on $\Sigma$. Using local
coordinates $ (x^{a})=(x^{1},x^{2},x^{3}) $ on $\Sigma$ and a basis $ (\tau_{i})=(\tau_{1},\tau_{2},\tau_{3}) $ of $ sl(2,{\mbox {\boldmath $C$}}) $ we write
\begin{equation}
A= A^{i}_{a} \tau_{i} \otimes {\rm d}x^{a}, \; \; \; \; E= E^{ia} \tau_{i} \otimes \partial_{a},
\end{equation}
where $A^{i}_{a}$, $E^{ia}$ are complex valued functions on $\Sigma$. We fix the standard bilinear complex valued inner product in $ sl(2,{\mbox {\boldmath $C$}}) $ by
\begin{equation}
k(v,w) := -2{\rm tr}(vw)
\end{equation}
for any $ v,w \in sl(2,{\mbox {\boldmath $C$}}) $. The variables $(A,E)$ are canonically conjugate, the only non-vanishing Poisson bracket is
\begin{equation}
\{ A^{i}_{a}(x), E^{jb}(y) \} = {\rm i} k^{ij} \delta^{b}_{a} \delta(x,y).
\end{equation}
A data $(A,E)$ is accompanied by Lagrange multipliers, a -1 weight density $N$ (the densitized laps function), a vector field $N^{a}$ (the familiar shift) and an $sl(2,{\mbox {\boldmath $C$}})$ valued function $\Lambda$, all defined on $\Sigma$. The Hamiltonian is given by
\begin{equation}
H = {\cal C}_{N} + {\cal C}_{\vec{N}} + {\cal G}_{\Lambda},
\end{equation}
\begin{equation}
{\cal C}_{N} := \int_{\Sigma} {\rm d}^{3}x N {\cal C}(A,E) := - \frac{1}{2} \int_{\Sigma} {\rm d}^{3}x N F^{i}_{ab} E^{ja} E^{kb} c_{ijk},
\end{equation}
\begin{equation}
{\cal C}_{\vec{N}} := \int_{\Sigma} {\rm d}^{3}x N^{a} {\cal C}_{a}(A,E) := -{\rm i} \int_{\Sigma} {\rm d}^{3}x N^{a} F^{i}_{ab} E^{b}_{i},
\end{equation}
\begin{equation}
{\cal G}_{\Lambda} := \int_{\Sigma} {\rm d}^{3}x \Lambda_{i} {\cal G}^{i}(A,E) := {\rm i} \int_{\Sigma} {\rm d}^{3}x \Lambda_{i} D_{a} E^{ia},
\end{equation}
where
\begin{equation}
F := \frac{1}{2} F^{i}_{ab} \tau_{i} \otimes {\rm d}x^{a} \wedge {\rm d}x^{b} := {\rm d}A + A \wedge A
\end{equation}
is the curvature of $A$, and
\begin{equation}
D_{a} w^{i} := \partial_{a} w^{i} + c^{i}_{\; jk} A^{j}_{a} w^{k}
\end{equation}
is the covariant derivative ($w^{i}$ is a function on $\Sigma$). $ c^{i}_{\; jk}$ are the structure constants of $sl(2,{\mbox {\boldmath $C$}})$ defined by
\begin{equation}
[ \tau_{i} , \tau_{j} ] = c_{\; ij}^{k} \tau_{k} .
\end{equation}
The constraints ${\cal C}_{N}$, ${\cal C}_{\vec{N}}$, ${\cal G}_{\Lambda}$ generate respectively the time evolution, diffeomorphisms of $\Sigma$ and the Yang-Mills gauge transformations
\begin{equation}
A \longmapsto g^{-1} A g + g^{-1} {\rm d}g,
\end{equation}
\begin{equation}
E \longmapsto g^{-1} E g,
\end{equation}
where $g$ is any $ SL ( 2, {\mbox {\boldmath $C$}} ) $-valued function on $\Sigma$.
Apart from the resulting constraint equations, the data $(A,E)$ is subject to the following reality conditions
\begin{equation}
{\rm Im} ( E^{ia} E_{i}^{b} ) = 0 ,
\label{real1}
\end{equation}
\begin{equation}
{\rm Im} ( \{ E^{ia} E_{i}^{b} , {\cal C}_{N} \} ) = 0 .
\label{real2}
\end{equation}
As long as the matrix $(E^{ia})_{i,a=1,2,3}$ is of rank 3 and the signature of the symmetric matrix $(E^{ia}E_{i}^{b})_{a,b=1,2,3}$ is (+,+,+), one constructs ADM data from $(A,E)$ and the Ashtekar theory is equivalent to Einstein gravity with
the Lorentzian signature. However, the theory naturally extends to degenerate cases, in which the ranks are lower than 3.
\subsection*{Classification of degeneracies.}
\hspace{\parindent}
Since the $E$ field is complex valued, in general the rank of the '2-area matrix' (see e.g. \cite{Lewandowski}) $(E^{ia}E_{i}^{b})$ is lower than or equal to the rank of the matrix $(E^{ia})$. If we restrict ourselves to the positive semi-definite case of the 2-area
matrix, the possible cases are (0,0), (1,0), (1,1), (2,1), (2,2) and (3,3), where the numbers indicate the ranks of the triad matrix and the 2-area matrix, respectively.
The examples of triad vector fields falling into specific sectors could be as follows: (0,0) - $E=0$, (1,0) - $E=(\tau_{1}+{\rm i}\tau_{2})\otimes(\frac{\partial}{\partial x^{1}})$, (1,1) - $E=\tau_{1} \otimes (\frac{\partial}{\partial x^{1}})$, (2,1) - $E=(\tau_{1}+{\rm i}\tau_{2})\otimes(\frac{\partial}{\partial x^{1}}) + \tau_{3} \otimes (\frac{\partial}{\partial x^{3}})$, (2,2) - $E=\tau_{1} \otimes (\frac{\partial}{\partial x^{1}}) + \tau_{2} \otimes (\frac{\partial}{\partial x^{2}})$, (3,3) - $E=\tau_{1} \otimes (\frac{\partial}{\partial x^{1}}) + \tau_{2} \otimes (\frac{\partial}{\partial x^{2}}) + \tau_{3} \otimes (\frac{\partial}{\partial x^{3}})$.
\section{Sector (1,0)}
\hspace{\parindent}
Sector (1,0) is defined as the one for which ${\rm rank} \left( E^{ia} \right) = 1$, ${\rm sign} \left( E^{ia} E_{i}^{b} \right) = (0,0,0)$ on the surface of initial data $\Sigma$. In this section the Ashtekar equations for the sector (1,0) will be solved. At the beginning, it is useful to choose a convenient gauge. One may show the following
\begin{lemat}
\begin{displaymath}
\begin{array}{c}
\left[ \left( E^{ia} E_{i}^{b} = 0 \right) \: \wedge \: \left( {\rm rank} \left( E^{ia} \right) = 1 \right) \right] \; \Rightarrow \\
\Rightarrow \left[ \exists g\in SL(2,{\mbox {\boldmath $C$} } ) \: : \: g^{-1}Eg = \left( \tau_{1} + i\tau_{2} \right) \otimes \left( E^{1a} \partial_{a} \right) \right]
\end{array}
\end{displaymath}
\label{lemat1}
\end{lemat}
{\bf Proof:} \hspace{\parindent}
Let us assume that
\begin{equation}
{\rm rank} (E^{ia}) = 1,
\label{rank1}
\end{equation}
\begin{equation}
E^{ia} E_{i}^{b} = 0.
\label{rank2}
\end{equation}
Equality (\ref{rank1}) implies that
\begin{equation}
E = \lambda \tau_{1} \otimes E^{3} + \mu \tau_{2} \otimes E^{3} + \tau_{3} \otimes E^{3},
\label{postac}
\end{equation}
where $\lambda,\; \mu$ are functions on $ \Sigma $ and $ E^{3} := E^{3a} \partial_{a} \neq 0 $.
From (\ref{postac}) and (\ref{rank2}) we conclude that
\begin{equation}
1 + \lambda^{2} + \mu^{2} = 0.
\label{rownanko}
\end{equation}
By the fact from the Appendix we can make such a gauge transformation that $ {\rm Im} \lambda = 0 $.
It can be easily shown that we can transform $E$ with real $\lambda$ to
\begin{equation}
E = \lambda^{'} \tau_{1} \otimes E^{3} + \mu \tau_{2} \otimes E^{3},
\end{equation}
with some new real function $\lambda^{'}$. It can be done by $ g = \left(
\begin{array}{cc}
\cos \phi & -\sin \phi \\
\sin \phi & \cos \phi
\end{array}
\right) $ with a suitably chosen $ \phi \in $ {\boldmath $R$} (see Appendix).
From Fact 1 it follows that
\begin{equation}
\lambda^{'2} + \mu^{2} = 0 ,
\end{equation}
hence $ \mu = \pm {\rm i} \lambda^{'} $.
Our field variable takes now simple form
\begin{equation}
E = \lambda^{'} ( \tau_{1} \pm {\rm i} \tau_{2} ) \otimes E^{3}.
\label{minus}
\end{equation}
By another gauge (with $ g = \left(
\begin{array}{cc}
{\rm i} & 0 \\
0 & -{\rm i}
\end{array}
\right) $ ) we obtain the required form
\begin{equation}
E = ( \tau_{1} + {\rm i} \tau_{2} ) \otimes E^{+},
\end{equation}
which ends the proof.
Now, let us change the basis in $sl(2,{\mbox {\boldmath $C$}})$ to $( \tau_{+}, \tau_{-}, \tau_{0} )$, where $\tau_{+}:=\tau_{1}+i\tau_{2} ,\; \tau_{-}:=\tau_{1}-i\tau_{2} ,\; \tau_{0}:=\tau_{3}$. The expression for the field $E$ takes the simple form
\begin{equation}
E = \tau_{+} \otimes E^{+} ,
\end{equation}
where $E^{+} := E^{+a} \partial_{a} = E^{1a} \partial_{a}$. It is easy to calculate that in the new basis
\begin{equation}
c_{+-0}=2{\rm i}=c_{[+-0]},\; {\rm and}
\end{equation}
\begin{equation}
( k_{ij} ) =
\left(
\begin{array}{ccc}
0 & 2 & 0 \\
2 & 0 & 0 \\
0 & 0 & 1
\end{array}
\right),
\end{equation}
where $i,j=+,-,0$.
\subsection*{Constraints}
\hspace{\parindent}
The constraint equations now read as follows
\begin{displaymath}
{\cal C} \equiv 0, \; \; {\cal G}^{-} \equiv 0,
\end{displaymath}
\begin{equation}
{\cal C}_{a} = -2{\rm i} \left( i (E^{+}) F^{-} \right)_{a} = 0,
\label{ca}
\end{equation}
\begin{equation}
{\cal G}^{0} = -2 i(E^{+}) A^{-} = 0,
\label{g0}
\end{equation}
\begin{equation}
{\cal G}^{+} = {\rm i} \partial_{a} E^{+a} + i(E^{+}) A^{0} = 0,
\end{equation}
where $i$ denotes the interior product (contraction), and $A^{-}$, $A^{0}$ are defined analogously to $E^{+}$, with $F^{-}:={\rm d}A^{-}+(A\wedge A)^{-}$. We will use this convention also for the other components of the field variables.
Since $F^{-} = {\rm d}A^{-} - {\rm i}A^{-}\wedge A^{0}$, the following equality is true, provided the constraint equations are fulfilled,
\begin{displaymath}
\begin{array}{c}
i(E^{+}) \left( {\rm d}A^{-} \wedge A^{-} \right) = i (E^{+}) \left( F^{-} \wedge A^{-} \right) = \\
= \left( i(E^{+}) F^{-} \right) \wedge A^{-} + F^{-} \left( i(E^{+}) A^{-} \right) = 0.
\end{array}
\end{displaymath}
Hence the three-form ${\rm d}A^{-} \wedge A^{-} = 0$. Therefore there exist coordinates on $\Sigma$ such that $A^{-} = \alpha {\rm d}\bar{z}$, where $\alpha$ is a function on $\Sigma$ and $\bar{z} = x - {\rm i}y$ ($x,y$ are two of the three real coordinates
on $\Sigma$) or $\bar{z}\in${\boldmath $R$} (in this case $(x,y,\bar{z})$ are the real coordinates on $\Sigma$).
If $\alpha \neq 0$ we can make gauge transformation with $g={\rm e}^{{\rm i}\lambda \tau_{3}}$, where $\lambda = - \log{\alpha}$. This gives $A^{-} = {\rm d}\bar{z}$ and leaves the form of $E$ unchanged. Indeed, let $g={\rm e}^{\lambda \tau_{0}}$, with $\lambda$ - any complex function on $\Sigma$. We know that $g^{-1}={\rm e}^{-\lambda \tau_{0}}$. Therefore
\begin{eqnarray*}
g^{-1} \tau_{\pm} g = {\rm e}^{-\lambda \tau_{0}} \tau_{\pm} {\rm e}^{\lambda \tau_{0}} = {\rm e}^{-\lambda \tau_{0}} \tau_{\pm} (1+ \lambda \tau_{0} + \frac{1}{2} \lambda^{2} \tau_{0}^{2} + \ldots ) = {\rm e}^{-\lambda \tau_{0}} ( \tau_{\pm} + \\
+ \lambda \tau_{\pm} \tau_{0} + \frac{1}{2} \lambda^{2} \tau_{\pm} \tau_{0}^{2} + \ldots ) = {\rm e}^{-\lambda \tau_{0}} ( \tau_{\pm} + \lambda \tau_{0} \tau_{\pm} \mp {\rm i} \lambda \tau_{\pm} + \frac{1}{2} \lambda^{2} \tau_{0} \tau_{\pm} \tau_{0} \mp \\
\mp \frac{1}{2} {\rm i} \lambda^{2} \tau_{\pm} \tau_{0} + \ldots ) = {\rm e}^{-\lambda \tau_{0}} ( \tau_{\pm} + \lambda \tau_{0} \tau_{\pm} \mp {\rm i} \lambda \tau_{\pm} + \frac{1}{2} \lambda^{2} \tau_{0}^{2} \tau_{\pm} \mp {\rm i} \lambda^{2} \tau_{0} \tau_{\pm} + \\
+ \frac{1}{2} ({\rm i}\lambda)^{2} \tau_{\pm} + \ldots ) = {\rm e}^{-\lambda \tau_{0}} {\rm e}^{\lambda (\tau_{0} \mp {\rm i})} \tau_{\pm} = {\rm e}^{\mp \lambda {\rm i}} \tau_{\pm},
\end{eqnarray*}
\begin{displaymath}
g^{-1}\tau_{0}g=\tau_{0},
\end{displaymath}
\begin{eqnarray*}
g^{-1}{\rm d}g= {\rm e}^{-\lambda \tau_{0}} {\rm d} ({\rm e}^{\lambda \tau_{0}}) = {\rm e}^{-\lambda \tau_{0}} \left( ({\rm d}\lambda) \tau_{0} + \frac{1}{2} ({\rm d}\lambda^{2}) \tau_{0}^{2} + \ldots \right)= \\
= {\rm e}^{-\lambda \tau_{0}} ({\rm d}\lambda) \tau_{0} {\rm e}^{\lambda \tau_{0}} = ({\rm d}\lambda) \tau_{0}.
\end{eqnarray*}
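These transformation rules can also be checked in an explicit two-dimensional representation. The short symbolic computation below assumes the representation $\tau_{j}=\frac{{\rm i}}{2}\sigma_{j}$ (with $\sigma_{j}$ the Pauli matrices), which is one choice consistent with the conventions $c_{+-0}=2{\rm i}$ and $k_{ij}$ used above; it is only a consistency check, not part of the argument.
\begin{verbatim}
import sympy as sp

lam = sp.Symbol('lambda')
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])
tau1, tau2, tau0 = sp.I/2*s1, sp.I/2*s2, sp.I/2*s3      # assumed representation
tau_p, tau_m = tau1 + sp.I*tau2, tau1 - sp.I*tau2

# conventions: [tau_+, tau_-] = 2i tau_0  and  k(tau_+, tau_-) = -2 tr(tau_+ tau_-) = 2
assert sp.simplify(tau_p*tau_m - tau_m*tau_p - 2*sp.I*tau0) == sp.zeros(2, 2)
assert sp.simplify(-2*(tau_p*tau_m).trace()) == 2

# g = exp(lambda tau_0) is diagonal in this representation
g    = sp.diag(sp.exp(sp.I*lam/2), sp.exp(-sp.I*lam/2))
ginv = sp.diag(sp.exp(-sp.I*lam/2), sp.exp(sp.I*lam/2))

assert sp.simplify(ginv*tau_p*g - sp.exp(-sp.I*lam)*tau_p) == sp.zeros(2, 2)
assert sp.simplify(ginv*tau_m*g - sp.exp(sp.I*lam)*tau_m) == sp.zeros(2, 2)
assert sp.simplify(ginv*g.diff(lam) - tau0) == sp.zeros(2, 2)   # g^{-1} dg = (d lambda) tau_0
\end{verbatim}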
We will now solve the constraint equations separately for three possible cases.
\begin{enumerate}
\item $A^{-} = {\rm d}\bar{z}, \; \bar{z} = x - {\rm i}y, \; x,y \in${\boldmath $R$}.\\
It follows from (\ref{g0}) that
\begin{displaymath}
E^{+} = E^{+z}\frac{\partial}{\partial z} + E^{+u}\frac{\partial}{\partial u},
\end{displaymath}
where $u\in${\boldmath $R$}, $z=x+{\rm i}y$. Since ${\rm d}A^{-}=0$, from (\ref{ca}) and (\ref{g0}) we get
\begin{displaymath}
i (E^{+}) A^{0} = 0 = {\cal G}^{+} - {\rm i}\partial_{a}E^{+a},
\end{displaymath}
hence we need to solve the equation
\begin{equation}
\partial_{a}E^{+a} = 0.
\label{div}
\end{equation}
The general solution of this equation is
\begin{equation}
E^{+a} = \varepsilon^{abc} \Psi_{b,c} ,
\label{Marysia}
\end{equation}
where $\Psi$ is an arbitrary complex one-form on $\Sigma$. The condition $E^{+\bar{z}}=0$ gives $\Psi_{z} = \Phi_{,z}$ and $\Psi_{u} = \Phi_{,u}$ with some complex function $\Phi$.
To solve the constraint equations completely, it remains only to impose the condition
\begin{displaymath}
i(E^{+}) A^{0} = 0 .
\end{displaymath}
This is a simple algebraic equation for $A^{0}$, provided $E^{+}$ is fixed. To end this discussion, it should be noted that there are no constraints on $A^{+}$.
\item $A^{-}={\rm d}\bar{z}, \; \bar{z} \in$ {\boldmath $R$}. \\
From (\ref{g0}) we get
\begin{displaymath}
E^{+} = E^{+x} \frac{\partial}{\partial x} + E^{+y} \frac{\partial}{\partial y}
\end{displaymath}
with $(x,y,\bar{z})$ - coordinates on $\Sigma$.
It is easy to see that this case can be solved in the same way as case 1; we only need to exchange $z$ with $x$ and $u$ with $y$.
\item $A^{-}=0$.\\
In this case $F^{-}=0$, hence ${\cal C}_{a} \equiv 0$. Moreover ${\cal G}^{0} \equiv 0$. We only have to solve
\begin{equation}
\partial_{a} E^{+a} = {\rm i} E^{+a} A^{0}_{a}.
\label{Ania}
\end{equation}
For any given $E^{+}$ it is a simple equation for $A^{0}$. We can see that in this case we have no constraints on $E^{+}$ and $A^{+}$.
\end{enumerate}
\subsection*{Evolution equations}
\hspace{\parindent}
If we take the conditions $ E^{-} = 0, \; E^{0} = 0 $ and $ A^{-} - {\rm d}\bar{z} = 0 $ as the additional constraints, it is easy to see that they weakly commute with the Hamiltonian so their vanishing is preserved by the time evolution provided the constraints are satisfied. In particular the simple form of $E$ is preserved by the time evolution. In fact
\begin{equation}
\dot{E}^{-a} = -{\rm i} ( c^{-}_{\; \; -k} E^{-b} + c^{-}_{\; \; 0k} E^{0b} ) ( D_{b} E^{ka} ) = 0,
\end{equation}
\begin{equation}
\dot{E}^{0a} = E^{+b} ( \partial_{b} E^{-a} + {\rm i} A^{0}_{b} E^{-a} - {\rm i} A^{-}_{b} E^{0a} ) - E^{-b} D_{b} E^{+a} = 0.
\end{equation}
The gauge fixing $ A^{-} = {\rm d}\bar{z} $ is also unchanged by the evolution. Namely
\begin{equation}
\dot{A}^{-}_{a} = E^{-a} F^{0}_{ba} - E^{0a} F^{-}_{ba} = 0.
\end{equation}
The variable $E^{+}$ is also independent of time:
\begin{equation}
\dot{E}^{+a} = E^{+b} ( \partial_{b} E^{0a} + 2{\rm i} A^{-}_{b} E^{+a} - 2{\rm i} A^{+}_{b} E^{-a} ) - E^{0b} D_{b} E^{+a} = 0.
\end{equation}
Moreover
\begin{equation}
\dot{A}^{0}_{a} = 2 E^{+b} F^{-}_{ba} - 2 E^{-b} F^{+}_{ba} = 0, \; {\rm and}
\end{equation}
\begin{equation}
\dot{A}^{+}_{a} = - E^{+b} F^{0}_{ba} + E^{0b} F^{+}_{ba} = E^{+b} ( \partial_{a} A^{0}_{b} - \partial_{b} A^{0}_{a} + 2{\rm i} A^{-}_{a} A^{+}_{b} ).
\label{A+}
\end{equation}
In order to calculate all the above time derivatives we used the constraint equations. We can show that the part of $ A^{+}_{a} $ tangent to $E^{+a}$ is independent of time and the transversal components are linear functions of time. In fact
\begin{displaymath}
\frac{\partial}{\partial t} ( E^{+a} A^{+}_{a} ) = E^{+a} \dot{A}^{+}_{a} = 2 E^{+a} E^{+b} \partial_{[a} A^{0}_{b]} = 0.
\end{displaymath}
Hence, since $E^{+}$, $A^{0}$, $A^{-}$ and the contraction $E^{+b}A^{+}_{b}$ are all independent of time, the right-hand side of (\ref{A+}) is independent of time and $ \frac{\partial}{\partial t} \dot{A}^{+}_{a} = 0 $.
Now, it can be easily checked that the reality conditions are identically satisfied for the solutions of the constraint and the evolution equations.
\subsection*{Summary}
\hspace{\parindent}
We have completely solved the (1,0) sector of Ashtekar gravity. The general solution for this case (for a certain gauge fixing and choice of coordinates) is as follows. The fields $E^{-}$, $E^{0}$ vanish. The field $E^{+}$ is given by (\ref{Marysia}) together with the vanishing of the component transversal to $A^{-}$ if $A^{-} \neq 0$, while $E^{+}$ is arbitrary if $A^{-} = 0$. $A^{-}$ is any closed one-form on $\Sigma$, $A^{0}$ is given by the equation (\ref{Ania}) and $A^{+}$ is an arbitrary one-form. All the fields are constant in time except for $A^{+}$, which is constant in the direction of $E^{+}$ and is linear in time in the other directions.
An interesting feature of these solutions is that, after certain initial constraints on the field variables are imposed at $ t = t_{0} $, the fields at each point evolve independently of the other points. The points of $ \Sigma $ ``cannot see each other during the evolution''.
\section{Sector (2,1)}
\hspace{\parindent}
Sector (2,1) is defined by ${\rm rank} \left( E^{ia} \right) = 2$ and ${\rm sign} \left( E^{ia} E_{i}^{b} \right) = (+,0,0)$ at $t=t_{0}$ (on the surface $\Sigma$). The complete local solution of the Ashtekar-Einstein equations in the sector (2,1) will be given in the present section. We will start by fixing the gauge freedom and making a convenient choice of coordinates.
\hspace{\parindent}
\begin{lemat}
\begin{displaymath}
\begin{array}{c}
\left[ \left( {\rm sign} \left( E^{ia} E_{i}^{b} \right) = (+,0,0) \right) \wedge \left( {\rm rank} \left( E^{ia} \right) = 2 \right) \right] \Rightarrow \\
\Rightarrow \left[ \exists g \in SL(2,{\mbox {\boldmath $C$} } ) \; : \; g^{-1} E g = \tau_{+} \otimes E^{+} + \tau_{0} \otimes E^{0} \; \; {\rm and} \; \; A^{'0}_{3} = 0 \right], \\
{\rm where} \; \; A^{'} \stackrel{\rm def}{=} g^{-1} A g + g^{-1} {\rm d}g \, , \; \; {\rm and} \; \; \; E^{0} \; {\rm is \; real}.
\end{array}
\end{displaymath}
\label{lemat2}
\end{lemat}
{\bf Proof:} We assume that
\begin{equation}
{\rm rank} ( E^{ia} ) = 2,
\label{zal1}
\end{equation}
\begin{equation}
{\rm sign} ( E^{ia} E_{i}^{b} ) = (+,0,0) .
\label{zal2}
\end{equation}
Let us choose such a real basis $ (e_{1},e_{2},e_{3}) $ in the tangent space to $\Sigma$ that $ ( E^{ia} E_{i}^{b} ) = {\rm diag} (0,0,1) $. From the fact in the Appendix we conclude that there exists a gauge transformation such that
\begin{equation}
E = E^{kl} \tau_{k} \otimes e_{l} + \tau_{3} \otimes e_{3},
\end{equation}
where $ k,l=1,2 $.
The rank assumption (\ref{zal1}) implies that $ E^{2} = f E^{1} $, where $f$ is a complex function on $\Sigma$. Assumption (\ref{zal2}) gives $ f = \pm {\rm i} $. The minus sign can be removed in the same way as in (\ref{minus}), which ends the proof.
From now on let us use the gauge given by the above lemma.
We can make use of the reality of $ E^{0} $ by choosing a convenient coordinate system $ (x^{1}, x^{2}, x^{3} ) $ such that
\begin{equation}
E^{0} = \frac{\partial}{\partial x^{3} }.
\label{e0}
\end{equation}
\subsection*{Constraints}
\hspace{\parindent}
The constraint equations now read as follows
\begin{equation}
{\cal C} = 4{\rm i} E^{+a} E^{0b} F^{-}_{ab} = 0,
\label{sc}
\end{equation}
\begin{equation}
{\cal C}_{a} = E^{0b} F^{0}_{ab} + 2 E^{+b} F^{-}_{ab} = 0,
\label{dif}
\end{equation}
\begin{equation}
{\cal G}^{+} = \partial_{a} E^{+a} + {\rm i} ( A^{+}_{a} E^{0a} - A^{0}_{a} E^{+a} ) = 0,
\label{g1}
\end{equation}
\begin{equation}
{\cal G}^{0} = \partial_{a} E^{0a} + 2{\rm i} A^{-}_{a} E^{+a} = 0,
\label{g2}
\end{equation}
\begin{equation}
{\cal G}^{-} = -{\rm i} A^{-}_{a} E^{0a} = 0.
\label{g3}
\end{equation}
Due to (\ref{e0}), (\ref{g3}) is solved by $A^{-}_{3}=0$. Since $ \partial_{a} E^{0a} = 0 $, (\ref{g2}) is equivalent to $ A^{-}_{a} E^{+a} = 0 $, or
\begin{equation}
A^{-}_{1} E^{+1} = - A^{-}_{2} E^{+2}.
\label{row}
\end{equation}
(\ref{sc}) gives
\begin{displaymath}
E^{+a} E^{0b} F^{-}_{ab} = - E^{+1} \partial_{3} A^{-}_{1} - E^{+2} \partial_{3} A^{-}_{2} = 0 .
\end{displaymath}
Let us assume that $ E^{+2} \neq 0 $. Because of (\ref{row}) we have
\begin{equation}
A^{-}_{1} \partial_{3} A^{-}_{2} = A^{-}_{2} \partial_{3} A^{-}_{1}.
\end{equation}
If we assume $ A^{-}_{1} \neq 0 $, this is equivalent to the condition that $ A^{-}_{2} = \Omega A^{-}_{1} $, where $ \Omega $ is a complex function on $ \Sigma $ such that $ \partial_{3} \Omega = 0 $. Thus
\begin{equation}
A^{-} = A^{-}_{1} \left( {\rm d}x^{1} + \Omega \left( x^{1}, x^{2} \right) {\rm d}x^{2} \right) \; \; {\rm and}
\label{form}
\end{equation}
\begin{equation}
E^{+} = - \Omega E^{+2} \frac{\partial}{\partial x^{1}} + E^{+2} \frac{\partial}{\partial x^{2}} + E^{+3} \frac{\partial}{\partial x^{3}}.
\label{Kasia}
\end{equation}
We know, however, that the coordinates $ x^{1}, x^{2} $ can be chosen in such a way that instead of $ \Omega $ we can put $ {\rm i} $ (if $ {\rm Im}\, \Omega \neq 0 $) or $ 0 $ (if $ {\rm Im}\, \Omega = 0 $). Let us assume from now on that $ \Omega = 0 $ or $ \Omega = {\rm i} $.
In order to solve the constraints completely we have to solve two more equations, namely (\ref{dif}) and (\ref{g1}). A straightforward calculation shows that
\begin{displaymath}
E^{+b} F^{-}_{ab} = - E^{+b} \partial_{b} A^{-}_{a} - {\rm i} E^{+b} A^{0}_{b} A^{-}_{a}, \; \; {\rm and}
\end{displaymath}
\begin{displaymath}
E^{0b} F^{0}_{ab} = - \partial_{3} A^{0}_{a} + 2{\rm i} A^{-}_{a} A^{+}_{3} .
\end{displaymath}
Hence (\ref{dif}) gives
\begin{displaymath}
2 E^{+b} \partial_{b} A^{-}_{a} + \partial_{3} A^{0}_{a} = 2{\rm i} A^{-}_{a} ( A^{+}_{3} - E^{+b} A^{0}_{b} ) .
\end{displaymath}
Substituting (\ref{g1}) into the above equation gives
\begin{equation}
\partial_{3} A^{0}_{a} = -2 \partial_{b} ( E^{+b} A^{-}_{a} ).
\label{Patrycja}
\end{equation}
With a given $ E^{+} $ and $ A^{-} $, the above equation describes the dependence of $ A^{0} $ on the coordinate $ x^{3} $.
To end the analysis of the constraints we should add (\ref{g1}), which can be treated as the constraint on $ A^{+}_{3} $, provided $ E^{+}, \; A^{0} $ are known
\begin{equation}
A^{+}_{3} = {\rm i} \partial_{a} E^{+a} + A^{0}_{a} E^{+a}.
\label{talk}
\end{equation}
Finally, the case $ E^{+1} = E^{+2} = 0 $ should be considered separately. However, the only difference in the family of solutions for this case is in the form of $ A^{-} $: now there are no restrictions on $ A^{-}_{1} $ and $ A^{-}_{2} $.
For $ A^{-}_{1} = 0 $ we get from (\ref{row}) that $ A^{-}_{2} = 0 $ or $ E^{+2} = 0 $, but these cases are included in the ones above.
Hence we have completely solved the constraint equations for the sector (2,1).
\subsection*{Evolution}
\hspace{\parindent}
Let us now consider conditions $ E^{-} = 0, \; E^{0} - \frac{\partial}{\partial x^{3}} = 0 , \; A^{-}_{3} = 0 $ as the new additional constraints on the initial data. One can show that they weakly commute with the Hamiltonian, hence they are preserved by the evolution. In fact
\begin{equation}
\dot{E}^{-a} = E^{0b} \partial_{b} E^{-a} + {\rm i} {\cal G}^{-} E^{0a} + {\rm i} E^{0b} A^{0}_{b} E^{-a} - E^{-b} D_{b} E^{0a} = 0,
\end{equation}
\begin{equation}
\dot{E}^{0a} = -2 E^{+b} \partial_{b} E^{-a} - E^{0a} \partial_{b} E^{0b} + {\cal G}^{0} E^{0a} - 2{\rm i} E^{+b} A^{0}_{b} E^{-a} + 2 E^{-b} D_{b} E^{+a} = 0,
\end{equation}
\begin{equation}
\dot{A}^{0}_{3} = 2 E^{+b} F^{-}_{b3} - 2 E^{-b} F^{+}_{b3} = E^{0b} F^{0}_{3b} = 0.
\label{a03}
\end{equation}
Moreover, due to constraint equations, we get
\begin{equation}
\dot{A}^{-}_{a} = E^{-b} F^{0}_{ba} - E^{0b} F^{-}_{ba} = F^{-}_{a3}, \; \; {\rm thus}
\end{equation}
\begin{equation}
\dot{A}^{-}_{3} = 0,
\end{equation}
\begin{equation}
\dot{A}^{-}_{1} = - \partial_{3} A^{-}_{1},
\label{A1}
\end{equation}
\begin{equation}
\dot{A}^{-}_{2} = - \partial_{3} A^{-}_{2}.
\label{A2}
\end{equation}
In order to find the evolution of $ E^{+} $, let us first calculate
\begin{equation}
\dot{A}^{+}_{3} = - E^{+b} F^{0}_{b3} + E^{0b} F^{+}_{b3} = - E^{0a} E^{+b} F^{0}_{ba} = - E^{+b} {\cal C}_{b} = 0.
\end{equation}
Now we have
\begin{equation}
\dot{E}^{+a} = -{\rm i} c^{+}_{\; \; ij} E^{ib} ( \partial_{b} E^{ja} + c^{j}_{\; \; kl} A^{k}_{b} E^{la} ), \; {\rm thus}
\end{equation}
\begin{eqnarray*}
\dot{E}^{+a} = E^{+b} \partial_{b} E^{0a} - E^{0b} \partial_{b} E^{+a} - 2{\rm i} A^{+}_{b} E^{-b} E^{+a} + E^{+a} {\cal G}^{0} - \\
2{\rm i} (\partial_{b} E^{0b} ) E^{+a} - {\rm i} E^{0a} E^{0b} A^{+}_{b} + {\rm i} E^{+a} E^{0b} A^{0}_{b},
\end{eqnarray*}
\noindent and since the constraints show that $ E^{0b} A^{0}_{b} = A^{0}_{3} = 0 $, we get
\begin{equation}
\dot{E}^{+a} = - \partial_{3} E^{+a} - {\rm i} E^{0a} A^{+}_{3}.
\label{e+a}
\end{equation}
Since $ E^{0a} A^{+}_{3} $ does not depend on time, (\ref{e+a}) can be easily integrated for $ E^{+a}(t) $.
We obtain similar equations for the components of $A^{0}$. In the same way as in (\ref{a03}) we get that $ \dot{A}^{0}_{a} = F^{0}_{a3} $, hence
\begin{equation}
\dot{A}^{0}_{a} = - \partial_{3} A^{0}_{a} + 2{\rm i} A^{-}_{a} A^{+}_{3}.
\label{Beata}
\end{equation}
Again we have a simple linear equation for $A^{0}$.
The last thing we need to do to solve the evolution equations completely is to find the function $A^{+}(t)$. Let us calculate
\begin{equation}
\dot{A}^{+}_{a} = - E^{+b} F^{0}_{ba} + E^{0b} F^{+}_{ba}.
\end{equation}
This gives
\begin{eqnarray*}
\dot{A}^{+}_{a} = -2 E^{+b} \partial_{[b} A^{0}_{a]} - {\cal G}^{0} A^{+}_{a} + ( \partial_{b} E^{0b} ) A^{+}_{a} + \\
+2{\rm i} E^{+b} A^{+}_{b} A^{-}_{a} + 2 E^{0b} \partial_{[b} A^{+}_{a]} + 2{\rm i} E^{0b} A^{+}_{[b} A^{0}_{a]}.
\end{eqnarray*}
\noindent Using the constraints we get
\begin{equation}
\dot{A}^{+}_{a} = -2 E^{+b} \partial_{[b} A^{0}_{a]} + 2{\rm i} E^{+b} A^{+}_{b} A^{-}_{a} + 2 \partial_{[3} A^{+}_{a]} + 2{\rm i} A^{+}_{[3} A^{0}_{a]}, \; {\rm thus}
\end{equation}
\begin{equation}
\dot{A}^{+}_{a} = \partial_{3} A^{+}_{a} + 2{\rm i} A^{-}_{a} E^{+b} A^{+}_{b} - \partial_{a} A^{+}_{3} + {\rm i} A^{0}_{a} A^{+}_{3} - 2 E^{+b} \partial_{[b} A^{0}_{a]}.
\label{a+a}
\end{equation}
We can see that, due to the second term on the right-hand side of the above equation, $A^{+}_{1}$ depends on $A^{+}_{2}$ and conversely. However, we can simplify this equation using the results of the constraint analysis. Let us consider two different possibilities.
\begin{enumerate}
\item $ E^{+1} = E^{+2} = 0 $. \\
In this case $ E^{+b} A^{+}_{b} = 0 $ and we get simple linear equations for $A^{+}_{1}$ and $A^{+}_{2}$, namely
\begin{equation}
\dot{A}^{+}_{a} = - \partial_{3} A^{+}_{a} - \partial_{a} A^{+}_{3} + {\rm i} A^{0}_{a} A^{+}_{3} - 2 E^{+b} \partial_{[b} A^{0}_{a]}.
\end{equation}
\item $ E^{+2} \neq 0, \; E^{+1} = - \Omega E^{+2} $ ( $\Omega=0$ or $\Omega=$i ). \\
It is easy to calculate that
\begin{eqnarray*}
\frac{\partial}{\partial t} ( A^{+}_{2} - \Omega A^{+}_{1} ) = \partial_{3} ( A^{+}_{2} - \Omega A^{+}_{1} ) - ( \partial_{2} - {\Omega}\partial_{1} ) A^{+}_{3} + {\rm i}A^{+}_{3} ( A^{0}_{2} - {\Omega}A^{0}_{1} ) - \\
- \; E^{+b} \left[ \partial_{b} \left( A^{0}_{2} - {\Omega} A^{0}_{1} \right) - \left( \partial_{2} - {\Omega}\partial_{1} \right) A^{0}_{b} \right].
\end{eqnarray*}
Hence we have a simple linear equation for $ ( A^{+}_{2} - \Omega A^{+}_{1} ) ( t ) $. Substituting $ \Omega A^{+}_{1}(t) + ( A^{+}_{2} - \Omega A^{+}_{1} ) ( t ) $ for $ A^{+}_{2}(t) $ in (\ref{a+a}) we get the linear equation for $ A^{+}_{1}(t) $. It can be integrated if $ A^{-}, \; E^{+}, \; A^{+}_{3}, \; A^{0} $ are known.
\end{enumerate}
This solves the evolution equations. One can see that the reality conditions are satisfied for all the solutions we have found.
\subsection*{Summary}
\hspace{\parindent}
Let us summarize the general solution of the Ashtekar-Einstein equations in the sector (2,1). First, we have $E^{-}(t)=0$ and $E^{0}(t)=\frac{\partial}{\partial x^{3}}$. The fields $E^{+a}$ propagate along the integral curves of $E^{0}$ according to the equation (\ref{e+a}). The components $E^{+2}$ and $E^{+3}$ are arbitrary functions of the ``spatial'' coordinates (but $E^{+3} \neq 0$) and the remaining component is given by $ E^{+1} = -\Omega E^{+2} $, equation (\ref{Kasia}) ($\Omega=0$ or $\Omega={\rm i}$). If $E^{+2} \neq 0$, $A^{-}$ is given by (\ref{form}) with the same $\Omega$ as above, and if $E^{+2} = 0$, an arbitrary one-form $A^{-}$ with $A^{-}_{3}=0$ is a solution. The fields $A^{-}_{1}$ and $A^{-}_{2}$ propagate along the integral curves of $E^{0}$ at the speed of light. $A^{0}$ is any field which propagates along the same curves as $A^{-}$ and $E^{+}$ according to equation (\ref{Beata}) and depends on the coordinate $x^{3}$ according to the equation (\ref{Patrycja}).
The field $A^{+}_{3}$ does not depend on time and is given by the equation (\ref{talk}). $A^{+}_{1}$ and $A^{+}_{2}$ are any functions on $\Sigma$ with the dependence on time given by (\ref{a+a}).
It should be noted that, as in sector (2,2), the characteristic feature of our solutions is that the evolution takes place on curves, namely the curves defined by $x^{1},x^{2}={\rm const}$. During the evolution these curves do not interact.
\section{Concluding Remarks}
\hspace{\parindent}
As indicated in the introduction, all the possible degenerate sectors of Ashtekar's gravity have been solved. They all have certain important features in common.
First of all, the conditions defining the degeneracy sectors weakly commute with the Hamiltonian. Therefore, if $t=t_{0}$ corresponds to the surface of initial data $\Sigma$, then there is an $\varepsilon>0$ such that for all $t$ between $t_{0}$ and $t_{0}+\varepsilon$ degeneracy type is the same (evolution preserves the degeneracy locally, where the word ``local'' refers to both space and time). Hence if the initial data on $\Sigma$ is specified in such a way that all of it belongs to the same degeneracy sector, the generic behavior will be such that the evolution preserves the character of the degeneracy. On the other hand, if there are regions on $\Sigma$ with different types of data then the above need not be true (see \cite{recent}).
The other important feature is that for all the sectors, the surface of initial data $\Sigma$ is foliated by sub-manifolds of dimension equal to the rank of the densitized inverse three-metric $qq^{ab}$ on $\Sigma$. The evolution always takes place in such a way that these sub-manifolds evolve independently. The time derivatives of the field variables on a fixed leaf of the foliation depend only on the values of these fields on the leaf and on the derivatives along the leaf. The rank of $qq^{ab}$ determines whether the fields evolve along surfaces \cite{Lewandowski}, along curves (sector (2,1) and \cite{Jacobson}), independently at each point (sector (1,0)), or do not evolve at all ($E^{ia}=0$ for all $i,a$).
\subsection*{Acknowledgments}
\hspace{\parindent}
J.L. was supported by Alexander von Humboldt-Stiftung and the Polish
Committee on Scientific Research (KBN, grant no. 2 P03B 017 12).
J.W. was supported by the Polish Ministry of Education and the Stefan Batory Trust.
\hspace{\parindent}
\newpage
\section{Appendix}
\section{Introduction}
\label{sec0}
Informally, the zero-range particle system follows a collection of
dependent random walks on the lattice $\mathbb Z^d$ where, from a vertex with
$k$ particles, one of the particles displaces by $j$ with rate
$(g(k)/k)p(j)$. The function on the non-negative integers $g:\mathbb N_0
\rightarrow \mathbb R_+$ is called the process ``rate'', and $p(\cdot)$
denotes the translation-invariant single particle transition
probability. The name ``zero-range'' derives from the observation
that, infinitesimally, the interaction is only with respect to those
particles at the particular vertex. The case when $g(k)$ is
proportional to $k$ describes the situation of completely independent
particles.
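As a concrete illustration of this informal description, the following short Python sketch runs the dynamics on a ring of $L$ sites; the rate $g(k)=\sqrt{k}$ and the symmetric nearest-neighbor $p$ are assumptions made only for the example. A site holding $k$ particles fires at total rate $g(k)$, and the displaced particle moves one step to the left or to the right with equal probability.
\begin{verbatim}
# Minimal illustrative simulation of a zero-range process on a ring.
# Assumptions for the example only: g(k) = sqrt(k), p(+1) = p(-1) = 1/2.
import numpy as np

rng = np.random.default_rng(0)
L, T = 50, 100.0                    # number of sites, time horizon
eta = rng.poisson(2.0, size=L)      # initial configuration, ~2 particles/site

def g(k):
    return np.sqrt(k)               # rate function, g(0) = 0

t = 0.0
while t < T:
    rates = g(eta)                  # site x fires at total rate g(eta[x])
    total = rates.sum()
    if total == 0.0:
        break
    t += rng.exponential(1.0 / total)
    x = rng.choice(L, p=rates / total)     # site losing a particle
    y = (x + rng.choice([-1, 1])) % L      # symmetric nearest-neighbor jump
    eta[x] -= 1
    eta[y] += 1

print("final configuration:", eta, "total particles:", eta.sum())
\end{verbatim}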
\medskip
The problem of the asymptotics of a distinguished, or tagged particle
interacting with others has a long history and was even mentioned in
Spitzer's paper \cite{Spitzer} (see also chapters 8.I, 6.II
\cite{Spohn}). The main analytical difficulty is that the tagged
particle motion is not in general Markovian due to the interaction
with other particles. However, the intuition is that, on a suitable scale, the
tagged particle behaves as a random walk with certain ``homogenized''
parameters reflecting the system dynamics.
We prove in this article a nonequilibrium invariance principle, with
respect to a diffusion process whose coefficients depend on the
hydrodynamic density, for the diffusively rescaled position of the tagged
particle in one-dimensional zero-range processes when the transition
probability $p$ is finite-range and mean-zero. This invariance
principle is the first result which captures the nonequlibrium
fluctuations of a single, given particle in a general finite-range
interacting particle system. We remark, however, in \cite{JL}, a
nonequilibrium central limit theorem was proved for a tagged
particle in the nearest-neighbor symmetric one-dimensional simple
exclusion model by completely different methods which rely on the
special structure of the nearest-neighbor one-dimensional dynamics.
Also, we note, in \cite{Reza-pr}, a ``propagation of chaos'' type
nonequilibrium result was shown for finite-range symmetric $d\geq 1$
dimensional simple exclusion processes which gives the fluctuations
for a tagged particle selected at random, or in other words the
average tagged particle position; however, this result, which makes
key use of the ``averaging,'' does not convey the fluctuations of
any fixed, given particle and so is weaker than the one we state in
this paper.
We mention also, with respect to zero-range tagged particles,
previous results on laws of large numbers, in equilibrium
\cite{Saada}, \cite{Sext} and non-equilibrium \cite{Reza-lln}, and
equilibrium central limit theorems when the jump probability $p$ is
mean-zero, $\sum j p(j)=0$ \cite{Saada}, \cite{Sext}, and also when
$p$ is totally asymmetric and nearest-neighbor in $d=1$
\cite{Szrtg}, and also some diffusive variance results when $p$ has
a drift $\sum jp(j)\neq 0$ in $d=1$ and $d\geq 3$ \cite{Szrtg}.
\medskip
Denote by $\xi \in \bb N_0^{\bb Z}$, $\bb N_0 = \{0, 1, \dots\}$, the
states of the zero-range process, so that $\xi(x)$, $x\in\bb Z$,
stands for the total number of particles at site $x$ for the
configuration $\xi$.
Fix an integer $N\ge 1$, scale space by $N^{-1}$ and assume that the
zero-range process rescaled diffusively, $\{\xi_t^N : t\ge 0\}$,
starts from a local equilibrium state with density profile $\rho_0 :
\bb R\to \bb R_+$. Denote by $\{\pi^{N,0}_t : t\ge 0\}$ its empirical
measure. It is well known that $\pi^{N,0}_t$ converges in probability
to the absolutely continuous measure $\rho(t,u) du$, where $\rho(t,u)$
is the solution of a non-linear parabolic equation with initial
condition $\rho_0$.
Tag a particle initially at the origin and denote by $X^N_t$ its
position at time $t$. It is relatively simple to show that the
rescaled trajectory $\{X^N_t/N : 0\le t\le T\}$ is tight for the
uniform topology. In particular, to prove convergence, one needs
only to characterize the limit points.
In contrast with other models, in zero-range processes $X^N_t$ is a
square integrable martingale with a bounded quadratic variation
$\<X^N\>_t$ given by the time integral of a local function of the
process as seen from the tagged particle:
\begin{equation*}
\<X^N\>_t \;=\; \sigma^2 N^2 \int_0^t \frac {g(\eta^N_s(0))}
{\eta^N_s(0)} \, ds\;,
\end{equation*}
where $\sigma^2$ is the variance of the transition probability
$p(\cdot)$, $g(\cdot)$ is the jump rate mentioned before, and $\eta^N_s = \tau_{X^N_s}
\xi^N_s$ is the state of the process as seen from the tagged
particle. Here $\{\tau_x : x\in\bb Z\}$ stands for the group of
translations. In particular, if the rescaled position of the tagged
particle $x^N_t = X^N_t/N$ converges to some path $x_t$, this process
$x_t$ inherits the martingale property from $X^N_t$. If in addition
$x_t$ is continuous, to complete the
characterization, one needs to examine
the asymptotic behavior of its quadratic variation.
Denote by $\{\nu_\rho : \rho\ge 0\}$ the one-parameter family, indexed
by the density, of invariant states for the process as seen from the
tagged particle. Let $\pi^N_t$ be the empirical measure associated to
this process: $\pi^N_t = \tau_{X^N_t} \pi^{N,0}_t$ and suppose that
one can replace the local function $g(\eta^N_s(0))/\eta^N_s(0)$ by a
function of the empirical measure. If we assume conservation of local
equilibrium for the process as seen from the tagged particle, this
function should be $h(\lambda (s,0))$, where $h(\rho)$ is the
expected value of $g(\eta(0))/\eta(0)$ under the invariant state
$\nu_\rho$ and $\lambda(s,0)$ is the density of particles around the
tagged particle, i.e., the density of particles around the origin for
the system as seen from the tagged particle.
As we are assuming that $X^N_t/N$ converges to $x_t$, since
$\pi^N_t = \tau_{X^N_t} \pi^{N,0}_t$ and $\pi^{N,0}_t$ converges
to $\rho(t,u) du$, we must have $\lambda(s,0) =
\rho(s,x_s)$. Therefore, if the quadratic variation of $X^N_t/N$
converges to the quadratic variation of $x_t$, $\<x\>_t = \sigma^2
\int_0^t h(\rho(s,x_s)) ds$. In particular, by the characterization of
continuous martingales, $x_t$ satisfies the stochastic
differential equation
\begin{equation*}
dx_t \; =\; \sigma \sqrt{h(\rho(s,x_s))} \, dB_s\;,
\end{equation*}
where $\rho$ is the solution of the hydrodynamic equation, $h$ is
defined above and $B$ is a Brownian motion.
We see from this sketch that the main difficulty consists in proving
the conservation of local equilibrium around the tagged particle,
without assuming any type of attractiveness, which is relied upon in \cite{Landim-conservation}. The absence of a space
average creates a major obstacle in this step. In contrast with the
proof of the hydrodynamic limit, we need to replace a local function
instead of a space average of translations of a local function. We
may, therefore, only use the bonds close to the origin of the
Dirichlet form to perform the replacement and we may not exclude
large densities of particles close to the origin. In particular, all
estimates (equivalence of ensembles and local central limit theorems)
need to be uniform over the density. This lack of translation
invariance confines us to one dimension.
The method presented here may apply to other one-dimensional mean-zero
interacting particle systems. However, instead of replacing a local
function by a function of the empirical measure, one will need to
replace a current multiplied by $N$ by a function of the empirical
measure, as is done for non-gradient systems, but without
any space average.
\section{Notation and Results}
\label{sec1}
We consider one-dimensional zero-range processes with periodic
boundary conditions to avoid unnecessary technicalities. This process
is a system of random walks on the discrete torus $\bb T_N = \bb Z / N
\bb Z$ where particles interact infinitesimally only when they are at
the same site. Fix a rate function $g: \bb N_0 = \{0, 1, \dots\} \to
\bb R_+$ with $g(0)=0$, $g(k) >0$, $k\ge 1$, and a finite range
probability measure $p(\cdot)$ on $\bb Z$. The particle dynamics is
described as follows. If there are $k$ particles at a site $x$, one
of these particles jumps to site $y$ with an exponential rate
$(g(k)/k) p(y-x)$.
For simplicity, we assume that $p(\cdot)$ is symmetric, but our
results remain true, with straightforward modifications, for any
irreducible, finite-range, mean-zero transition probability $p(\cdot)$.
For the rate function $g$, we assume the next conditions:
\begin{eqnarray*}
& \text{(LG)} & \text{ $ \exists\, a_1 >0$ such that $|g(n+1)-g(n)|
\leq a_1$ for $n\geq 0$}\;, \\
& \text{(M)} & \text{$\exists\, a_0>0$, $b\geq 1$, such that
$g(n+b)-g(n)>a_0$ for $n\geq 0$}\; .
\end{eqnarray*}
A consequence of (LG), (M) is that $g$ is bounded between two linear
slopes: There is a constant $0<a<\infty$ such that $a^{-1}n \leq g(n)
\leq a n$ for all $n\geq 0$.
Denote by $\Omega_N = \bb N_0^{\bb T_N}$ the state space and by $\xi$
the configurations of $\Omega_N$ so that $\xi(x)$, $x\in \bb T_N$,
stands for the number of particles in site $x$ for the configuration
$\xi$. The zero-range process is a continuous-time Markov chain
$\xi_t$ generated by
\begin{equation}
\label{c0}
(\mc L_N f) (\xi) \;=\; \sum_{x \in \bb T_N} \sum_{z\in\bb Z} p(z) \,
g(\xi(x))\, \big[f(\xi^{x,x+z}) -f(\xi)\big]\; ,
\end{equation}
where $\xi^{x,y}$ represents the configuration obtained from $\xi$ by
displacing a particle from $x$ to $y$:
\begin{equation*}
\xi^{x,y}(z) =
\begin{cases}
\xi(x)-1 & {\rm for \ } z=x \\
\xi(y)+1 &{\rm for \ } z=y \\
\xi(z) &{\rm for \ } z \neq x,y.
\end{cases}
\end{equation*}
Now consider an initial configuration $\xi$ such that $\xi(0) \geq 1$.
Tag one of the particles initially at the origin, and follow its
trajectory $X_t$ jointly with the evolution of the process $\xi_t$.
Especially convenient for our purposes is to consider the process as
seen by the tagged particle defined by $\eta_t(x) = \xi_t(x + X_t)$.
This process is again Markovian, now on the set $\Omega^*_N = \{\eta
\in \Omega_N ; \eta(0) \geq 1 \}$ and generated by the operator $L_N =
L_N^{env} + L_N^{tp}$, where $L_N^{env}$, $L_N^{tp}$ are defined by
\begin{eqnarray*}
(L_N^{env} f) (\eta) &=& \sum_{x \neq 0} \sum_{z\in\bb Z} p(z) \,
g(\eta(x)) \, [f(\eta^{x,x+z})-f(\eta)]\\
&+& \sum_{z\in\bb Z} p(z) \, g(\eta(0)) \,
\frac{\eta(0) -1}{\eta(0)} \, [f(\eta^{0,z})-f(\eta)]\;,
\end{eqnarray*}
\begin{equation*}
(L_N^{tp} f) (\eta) \;=\; \sum_{z\in\bb Z} p(z) \,
\frac{g(\eta(0))}{\eta(0)} \, [f(\theta_z \eta)-f(\eta)]\;.
\end{equation*}
In this formula, the translation $\theta_z$ is defined by
\begin{equation*}
(\theta_z \eta)(x) =
\begin{cases}
\eta(x+z) & {\rm for \ } x \neq 0,-z \\
\eta(z)+1 &{\rm for \ } x=0 \\
\eta(0)-1 &{\rm for \ } x =-z.\\
\end{cases}
\end{equation*}
The operator $L_N^{tp}$ corresponds to jumps of the tagged particle,
while $L_N^{env}$ corresponds to jumps of the other particles,
called environment.
In order to recover the position of the tagged particle from the
evolution of the process $\eta_t$, let $N_t^z$ be the number of
translations of length $z$ up to time $t$: $N_{t}^z = N_{t-}^z+1 \iff
\eta_{t} = \theta_z \eta_{t-}$. In this case, $X_t=\sum_z z N_t^z$. As
jumps are not simultaneous, the processes
\begin{equation*}
N_t^z - \int_0^t p(z) \frac{g(\eta_s(0))}{\eta_s(0)} ds
\end{equation*}
are orthogonal martingales and, as $\sum zp(z) = 0$, we see that $X_t$
is a martingale with quadratic variation
\begin{equation*}
\<X\>_t \;=\; \sigma^2 \int_0^t \frac{g(\eta_s(0))}{\eta_s(0)} \, ds\;,
\end{equation*}
where $\sigma^2=\sum_z |z|^2 p(z)$.
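The same rates can be used to follow the tagged particle in the ring simulation sketched in the introduction: whenever the site currently occupied by the tagged particle fires, the tagged particle is the jumper with probability $1/\eta_t(0)$ (the reciprocal of the occupation of its site), which reproduces the rate $p(z)\,g(\eta(0))/\eta(0)$ of $L_N^{tp}$. The sketch below is again illustrative only (same assumed $g$ and $p$ as before, and no diffusive rescaling).
\begin{verbatim}
# Tagged-particle tracking inside the zero-range ring simulation.
# Assumptions for the example only: g(k) = sqrt(k), p(+1) = p(-1) = 1/2.
import numpy as np

rng = np.random.default_rng(2)
L, T = 50, 200.0
eta = rng.poisson(2.0, size=L)
eta[0] = max(eta[0], 1)            # ensure a particle sits at the origin
X, t = 0, 0.0                      # tagged position (not wrapped), time

def g(k):
    return np.sqrt(k)

while t < T:
    rates = g(eta)
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    x = rng.choice(L, p=rates / total)   # site that fires
    z = rng.choice([-1, 1])              # nearest-neighbor displacement
    if x == X % L and rng.random() < 1.0 / eta[x]:
        X += z                           # the tagged particle is the jumper
    eta[x] -= 1
    eta[(x + z) % L] += 1

print("tagged-particle displacement X_T =", X)
\end{verbatim}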
We now discuss the invariant measures. For each $\varphi \geq 0$,
consider the product probability measures $\bar \mu_\varphi = \bar
\mu_\varphi^{N,g}$ in $\Omega_N$ defined by
\begin{equation*}
\bar \mu_\varphi(\xi(x) =k) = \frac{1}{Z(\varphi)}
\frac{\varphi^k}{g(k)!}\; ,
\end{equation*}
where $g(k)! = g(1) \cdots g(k)$ for $k \geq 1$, $g(0)!=1$ and
$Z(\varphi)$ is the normalization constant. $Z(\varphi)$ and $\bar
\mu_\varphi$ are well defined for all $\varphi \geq 0$ due to
conditions $(LG), (M)$. Let $\rho = \rho(\varphi) = \int \eta(0) d
\bar \mu_\varphi$. By conditions (LG), (M), $\varphi \mapsto \rho$ is
a diffeomorphism from $[0,\infty)$ into itself. Define then
$\mu_{\rho}= \bar \mu_{\varphi(\rho)}$, since $\rho$ corresponds to
the density of particles at each site. The measure $\{\mu_\rho : \rho
\ge 0\}$ are invariant for the process $\xi_t$ (cf. \cite{Andjel}).
Due to the inhomogeneity introduced at the origin by the tagged
particle, $\mu_\rho$ is no longer invariant for the process
$\eta_t$. However, a computation shows that the size biased measures
$\nu_\rho$ defined by $d \nu_\rho / d \mu_\rho = \eta(0) / \rho$ are
invariant for the process as seen by the tagged particle, reversible
when $p(\cdot)$ is symmetric. Here, we take $\nu_0 = \delta_{\mf d_0}$,
the Dirac measure concentrated on the configuration
$\mathfrak d_0$ with exactly one particle at the origin, and note
$\nu_\rho\Rightarrow \delta_{\mf d_0}$ as $\rho\downarrow 0$.
From now on, to avoid uninteresting compactness issues, we define
every process in a finite time interval $[0,T]$, where $T<\infty$ is
fixed. Let $\bb T$ be the unit torus and let $\mc M_+(\bb T)$ be the
set of positive Radon measures in $\bb T$.
Consider the process $\xi_t^N := \xi_{tN^2}$, generated by $N^2 \mc
L_N$. Define the process $\pi_t^{N,0}$ in $\mc D([0,T],\mc M_+(\bb
T))$ as
\begin{equation*}
\pi_t^{N,0}(du) = \frac{1}{N} \sum_{x \in \bb T_N} \xi_t^N (x)
\delta_{x/N} (du)\;,
\end{equation*}
where $\delta_u$ is the Dirac distribution at point $u$.
For a continuous function $ \rho_0: \bb T \to \bb R_+$, define
$\mu^N_{\rho_0(\cdot)}$ as the product measure in $\Omega_N$ given
by $\mu^N_{\rho_0(\cdot)}(\eta(x)=k)=\mu_{\rho_0(x/N)}(\eta(x)=k)$.
The next result is well known (cf. Chapter V \cite{kl}; see also
\cite{dmp}, \cite{gpv}).
\begin{theorem}
\label{th0}
For each $0\le t\le T$, $\pi_t^{N,0}$ converges in probability to the
deterministic measure $\rho(t,u)du$, where $\rho(t,u)$ is the solution
of the hydrodynamic equation
\begin{equation}
\label{ec0}
\left\{
\begin{array}{l}
\partial_t \rho = \sigma^2 \partial_x^2 \varphi(\rho) \\
\rho(0,u) = \rho_0(u),\\
\end{array}
\right.
\end{equation}
and $\varphi(\rho)= \int g(\xi(0)) d \mu_\rho$.
\end{theorem}
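For illustration, the Cauchy problem \eqref{ec0} can be integrated numerically with an explicit finite-difference scheme on the discretized torus. In the sketch below the flux $\varphi(\rho)=\rho/(1+\rho)$ is an assumed example, used only to make the equation nonlinear (it is the $\varphi$ associated with the constant rate $g(k)=\mb 1\{k\ge 1\}$, which does not satisfy assumption (M)); any $\varphi$ computed from the measures $\mu_\rho$ can be substituted.
\begin{verbatim}
# Explicit finite-difference sketch for  d_t rho = sigma^2 d_x^2 phi(rho)
# on the unit torus.  phi is an illustrative choice, not imposed by the paper.
import numpy as np

M, sigma2, T = 200, 1.0, 0.05
dx = 1.0 / M
dt = 0.2 * dx**2 / sigma2                 # well inside the stability limit
x = np.arange(M) * dx
rho = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # smooth initial profile rho_0

def phi(r):
    return r / (1.0 + r)

t = 0.0
while t < T:
    p = phi(rho)
    lap = (np.roll(p, -1) - 2.0 * p + np.roll(p, 1)) / dx**2   # periodic
    rho = rho + dt * sigma2 * lap
    t += dt

print("mass is conserved:", bool(np.isclose(rho.mean(), 1.0)))
\end{verbatim}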
Define now the product measure $\nu^N=\nu_{\rho_0(\cdot)}^N$ in
$\Omega^*_N$
given by $\nu_{\rho_0(\cdot)}^N(\eta(x)=k) = \nu_{\rho_0(x/N)}(\eta(x)
=k)$,
and let $\eta_t^N :=
\eta_{tN^2}$ be the process generated by $N^2 L_N$ and starting from
the initial measure $\nu^N$. Define the empirical measure $\pi_t^N$ in
$\mc D([0,T],\mc M_+(\bb T))$ by
\begin{equation*}
\pi_t^N(du) = \frac{1}{N} \sum_{x\in \bb T_N} \eta_t^N(x) \delta_{x/N}(du).
\end{equation*}
Define also the continuous function $\psi:\bb R_+ \to \bb R_+$ by
$$\psi(\rho) = \int\big(g(\eta(0))/\eta(0)\big)d\nu_\rho = \left\{\begin{array}{rl} \varphi(\rho)/\rho & \ {\rm
for \ } \rho>0\\
g(1)& \ {\rm for \ } \rho = 0.\end{array}\right.$$
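For a given rate $g$, the functions $\varphi(\rho)=\int g(\xi(0))\,d\mu_\rho$ and $\psi(\rho)=\varphi(\rho)/\rho$ can be evaluated numerically: since $\int g(\xi(0))\,d\bar\mu_\varphi = \varphi$, it suffices to compute the mean $\rho(\varphi)$ of $\bar\mu_\varphi$ and invert this map. The sketch below does this with a truncated state space and bisection, for the assumed example $g(k)=\sqrt{k}$; it is illustrative only.
\begin{verbatim}
# Numerical sketch of phi(rho) and psi(rho) = phi(rho)/rho for g(k) = sqrt(k).
# The identity E_{mu_rho}[g] = varphi reduces the task to inverting rho(varphi).
import numpy as np

KMAX = 400                              # truncation of the occupation number

def g(k):
    return np.sqrt(k)

# log of g(k)! = g(1) g(2) ... g(k), with g(0)! = 1
loggfact = np.concatenate(([0.0], np.cumsum(np.log(g(np.arange(1, KMAX))))))

def density(fug):
    """rho(varphi): mean of the truncated measure bar-mu_varphi."""
    k = np.arange(KMAX)
    logw = k * np.log(fug) - loggfact
    w = np.exp(logw - logw.max())       # numerically stable weights
    return np.dot(k, w) / w.sum()

def fugacity(rho, lo=1e-12, hi=20.0, iters=80):
    """varphi(rho), by bisection of the increasing map density(.)"""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if density(mid) < rho else (lo, mid)
    return 0.5 * (lo + hi)

for r in (0.5, 1.0, 2.0):
    f = fugacity(r)
    print(f"rho={r:.1f}  phi(rho)={f:.4f}  psi(rho)={f / r:.4f}")
\end{verbatim}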
The next theorems are the main results of this article. We first
identify the scaling limit of the tagged particle as a diffusion
process:
\begin{theorem}
\label{th2}
Let $x_t^N = X^N_t/N$ be the rescaled position of the tagged particle
for the process $\xi_t^N$. Then, $\{x_t^N : t\in [0,T] \}$ converges
in distribution in the uniform topology to the diffusion $\{x_t : t\in
[0,T]\}$ defined by the stochastic differential equation
\begin{equation}
\label{c9}
d x_t = \sigma \, \sqrt{\psi(\rho(t,x_t))} \, dB_t\; ,
\end{equation}
where $B_t$ is a standard
Brownian motion on $\bb T$.
\end{theorem}
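The limiting diffusion \eqref{c9} is straightforward to simulate with an Euler--Maruyama scheme once $\rho(t,u)$ is available, for instance from a numerical solution of \eqref{ec0} such as the sketch following Theorem \ref{th0}. In the listing below both $\psi$ and $\rho$ are illustrative stand-ins: the $\psi$ matches the constant-rate example used there, and the profile $\rho$ is a smooth placeholder rather than an actual solution of \eqref{ec0}.
\begin{verbatim}
# Euler-Maruyama sketch for  dx_t = sigma * sqrt(psi(rho(t, x_t))) dB_t.
# psi and rho are stand-ins; a real run would use the psi of the chosen
# rate g and an interpolated numerical solution of the hydrodynamic equation.
import numpy as np

rng = np.random.default_rng(1)
sigma, T, dt, M = 1.0, 1.0, 1e-3, 10_000   # horizon, time step, sample paths

def psi(r):
    return 1.0 / (1.0 + r)

def rho(t, u):
    # placeholder smooth density profile on the unit torus
    return 1.0 + 0.5 * np.cos(2 * np.pi * (u % 1.0)) * np.exp(-4 * np.pi**2 * t)

x = np.zeros(M)                     # M independent copies started at x_0 = 0
for step in range(int(T / dt)):
    t = step * dt
    dB = np.sqrt(dt) * rng.standard_normal(M)
    x = x + sigma * np.sqrt(psi(rho(t, x))) * dB

print("empirical standard deviation of x_T:", x.std())
\end{verbatim}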
Through this characterization we can describe the evolution of the
empirical measure as seen from the tagged particle:
\begin{theorem}
\label{th1}
$\{\pi_t^N : t\in [0,T]\}$ converges in distribution on $\mc
D([0,T],\mc M_+(\bb T))$ to the measure-valued process $\{\rho(t,u
+x_t)du : t\in [0,T]\}$, where $\rho(t,u)$ is the solution of the
hydrodynamic equation (\ref{ec0}) and $x_t$ is given by \eqref{c9}.
\end{theorem}
Recall that $\eta_0^N$ is distributed according to
$\nu_{\rho_0(\cdot)}^N$. Denote by $\bb P^N$ the probability measure
in $\mc D([0,T],\Omega_N^*)$ induced by the process $\eta_t^N$, and by
$\bb E^N$ the expectation with respect to this process. Denote also by
$E_\mu[h]$ and $\<h\>_\mu$ the expectation of a function $h: \Omega_N
\to \bb R$ with respect to the measure $\mu$; when $\mu = \nu_\rho$,
let $E_\rho[h]$, $\<h\>_\rho$ stand for $E_{\nu_\rho}[h]$,
$\<h\>_{\nu_\rho}$. Finally, since in the next sections we consider
only the speeded-up process $\eta_t^N$ we omit hereafter the
superscript $N$.
The plan of the paper is now the following. After some tightness
estimates in Section \ref{sec3}, certain limits are established in
Theorem \ref{s5} in Section \ref{sec4}--with the aid of ``global''
and ``local'' hydrodynamics results in Sections \ref{sec5} and
\ref{sec6}--which give the main Theorems \ref{th2} and \ref{th1}.
\section{Tightness}
\label{sec3}
To keep notation simple, in this section we assume the transition
probability $p(\cdot)$ to be nearest neighbor:
\begin{equation*}
p(1) \;=\; p(-1)\;=\; 1/2
\end{equation*}
so that $\sigma^2=1$. Denote by $\mc C(\bb T)$ the space of real
continuous functions on $\bb T$ and by $\mc C^2(\bb T)$ the space of
twice continuously differentiable functions on $\bb T$. For a function
$G$ in $\mc C(\bb T)$, denote by $\pi_t^N(G)$ the integral of $G$ with
respect to $\pi^N_t$:
\begin{equation*}
\pi_t^N(G) \;=\; \int G(u) \pi_t^N(du)\;=\;
\frac 1N \sum_{x\in \bb T_N} G(x/N) \eta^N_t(x)\;.
\end{equation*}
For $T>0$, denote by $\mc D_T = \mc D([0,T], \mc M_+(\bb T) \times \mc
M_+(\bb T) \times \bb T \times \bb R_+)$ the path space of c\`adl\`ag
trajectories endowed with the Skorohod topology. For $N\ge 1$, let
$Q_N$ be the probability measure on $\mc D_T$ induced by the process
$(\pi_t^{N,0}, \pi_t^N, x^N_t, \<x^N\>_t)$, where $\<x^N\>_t$ stands
for the quadratic variation of the martingale $x^N_t$. We prove in
this section that the sequence $\{Q_N : N\ge 1\}$ is tight, which
follows from the tightness of each component of $(\pi_t^{N,0},
\pi_t^N, x^N_t, \<x^N\>_t)$.
Let $Q_N^0$ be the probability measure in $\mc D([0,T],\mc M_+(\bb
T))$ corresponding to the process $\pi_t^{N,0}$. As mentioned in
Theorem \ref{th0}, $Q_N^0$ converges to the Dirac-$\delta$ measure
concentrated on the path $\rho(t,u)du$, where $\rho$ is the solution
of \eqref{ec0}. Hence, the sequence $\{Q_N^0 : N\ge 1\}$ is tight.
On the other hand, as $\mc M_+ (\bb T)$ is a metrizable space under
the dual topology of $\mc C(\bb T)$, to show that $\{\pi^{N} _\cdot
: N\ge 1\}$ is tight, it is enough to prove tightness of the
projections $\{\pi^{N}_\cdot (G) : N\ge 1\}$ for a suitable set of
functions $G$, dense in $\mc C(\bb T)$. For $G$ in $\mc C(\bb T)$,
let $Q_N^G$ be the measure in $\mc D([0,T], \bb R)$ corresponding to
the process $\{\pi_t^N(G) : 0\le t\le T\}$. Tightness of the
sequence $\{Q_N^G : N\ge 1\}$ follows from Aldous's criteria in the
next lemma.
\begin{lemma}
\label{s8}
The sequence $\{Q_N^G: N\ge 1\}$ is tight if
\begin{itemize}
\item[(i)] For every $t \in [0,T]$ and every $\epsilon>0$, there
exists $M>0$ such that
\begin{equation*}
\sup_N \bb P^N \Big[ \, |\pi_t^N(G)|>M \, \Big] < \epsilon\;.
\end{equation*}
\item[(ii)] Let $\mc T_T$ be the set of stopping times bounded by
$T$. Then, for every $\epsilon >0$,
\begin{equation*}
\lim_{\gamma \to 0} \limsup_{N \to \infty} \sup_{\tau \in \mc T_T}
\sup_{\theta \leq \gamma}
\bb P^N \Big[ \, |\pi_{\tau+\theta}^N(G) - \pi_\tau^N(G)| > \epsilon
\, \Big ] =0\; .
\end{equation*}
\end{itemize}
\end{lemma}
\begin{lemma}
\label{s7}
The sequence $\{Q^G_N : N \ge 1\}$, $G$ in $\mc C^2(\bb T)$, is tight.
\end{lemma}
\begin{proof}
An elementary computation shows that for each $G$ in $\mc C(\bb T)$,
\begin{equation}
\label{c1}
\begin{split}
M_t^{N,G} &= \pi_t^N(G)- \pi_0^N(G) - \int_0^t \frac{1}{N} \sum_{x\in
\bb T_N} (\Delta_N G) (x/N) \, g(\eta_s(x)) \, ds \\
&-\int_0^t \frac{g(\eta_s(0))}{\eta_s(0)} \, \pi_s^N(\Delta_N G) \, ds
+ \int_0^t \frac{2}{N} \frac{g(\eta_s(0))}{\eta_s(0)} \,
(\Delta_N G) (0) \, ds
\end{split}
\end{equation}
is a martingale of quadratic variation $\<M^{N,G} \>_t$ given by
\begin{equation*}
\begin{split}
& \<M^{N,G} \>_t = \frac{1}{N^2}\int_0^t
\sum_{\substack{x \in \bb T_N\setminus\{0\}\\ z\in\bb Z}}
p(z)\, g(\eta_s(x)) \, [\nabla_{N,z}G(x/N)]^2 \, ds \\
& \quad +\; \frac{1}{N^2} \int_0^t \sum_{z\in\bb Z} p(z) \, g(\eta_s(0))
\, \frac{\eta_s(0)-1}{\eta_s(0)} \, [(\nabla_{N,z}G)(0)]^2 \, ds\\
&\quad +\; \int_0^t \sum_{z\in\bb Z} p(z) \, \frac{g(\eta_s(0))}{\eta_s(0)}
\, \Big(\frac{1}{N} \sum_{x\in \bb T_N} (\nabla_{N,z} G) (x/N)
\eta_s(x) - \frac{1}{N} (\nabla_{N,z}G) (0)\Big)^2 \, ds\; .
\end{split}
\end{equation*}
In these formulas, $\nabla_{N,z} G$, $\Delta_N G$ correspond to the
discrete first and second derivatives of $G$:
\begin{eqnarray*}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! &&
(\nabla_{N,z} G) (u) \;=\; N[G(u+z/N) -G(u)]\; , \\
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && \quad
(\Delta_N G) (u) \;=\; N^2\sum_{z\in\bb Z} p(z)\, [G(u+z/N)-G(u)]\;.
\end{eqnarray*}
Since the rate function $g$ grows at most linearly and since the total
number of particles is preserved by the dynamics,
\begin{equation*}
\<M^{N,G}\>_t = \int_0^t \sum_{z \in \bb Z} p(z)\,
\frac{g(\eta_s(0))}{\eta_s(0)}\, \Big(\frac{1}{N} \sum_{x\in \bb T_N}
\nabla_{N,z}G(x/N) \eta_s(x)\Big)^2 ds + R_t^{N,G}\; ,
\end{equation*}
where $|R_t^{N,G}| \leq C_0 N^{-2} \sum_{x\in\bb T_N} \eta_0(x)$ and
$C_0$ is a finite constant which depends only on $G$, $g$, $p$ and
$T$. In particular, $\bb E^N[|R_t^{N,G}|] \le C_1 N^{-1}$.
Note that in contrast with the martingale associated to the empirical measure $\pi^{N,0}_t$, due to
the jumps of the tagged particle, the martingale $M^{N,G}$ \emph{does
not} vanish in $L^2(\bb P^N)$. In particular, we may not expect the
convergence of the empirical measure $\pi^N_t$ to a deterministic
trajectory.
We are now in a position to prove the lemma. Condition $(i)$ of Lemma
\ref{s8} is a direct consequence of the conservation of the total
number of particles. In order to prove condition $(ii)$, recall the
decomposition \eqref{c1} of $\pi_t^N(G)$ as an integral term plus a
martingale. The martingale term can be estimated by Chebyshev's
inequality and the explicit form of its quadratic variation:
\begin{eqnarray*}
\!\!\!\!\!\!\!\!\!\!\!\!\!\! &&
\bb P^N \Big[ \, |M_{\tau+\theta}^{N,G} -M_{\tau}^{N,G}|>\epsilon\, \Big]
\ \leq \ \frac{1}{\epsilon^2} \bb E^N
\Big[ (M_{\tau+\theta}^{N,G})^2 -(M_{\tau}^{N,G})^2 \Big] \\
\!\!\!\!\!\!\!\!\!\!\!\!\!\! && \qquad\quad
\leq\ \frac{C_1}{\epsilon^2} \, \Vert G'\Vert_\infty^2 \, \bb E^N
\Big[ \int_\tau^{\tau +\theta} \Big(\frac{1}{N}\sum_{x\in \bb T_N}
\eta_s(x)\Big)^2 ds \Big] + \frac{C_1}{\epsilon^2 N}\\
\!\!\!\!\!\!\!\!\!\!\!\!\!\! && \qquad\quad
\leq \ \frac{C_1 \Vert G'\Vert_\infty^2 \, \theta}{\epsilon^2}
E_{\nu^N_{\rho_0(\cdot)}}\Big[ \Big(\frac{1}{N}\sum_{x\in \bb T_N}
\eta(x)\Big)^2\Big] + \frac{C_1}{\epsilon^2 N}
\end{eqnarray*}
which converges to 0 as $N\uparrow \infty$ and $\gamma\downarrow 0$.
The integral term can be estimated in the same way, using again the
conservation of the total number of particles. This proves the
lemma.
\end{proof}
It remains to consider the scaled position of the tagged particle
$x_t^N$ and its quadratic variation. We recall that $x_t^N$ is a
martingale with quadratic variation
\begin{equation}
\label{c10}
\< x^N \>_t = \int_0^t \frac{g(\eta_s^N(0))}{\eta_s^N(0)}
\, ds \;.
\end{equation}
\begin{lemma}
\label{s2}
The sequence of processes $\{(x^N, \<x^N\>) : N \ge 1\}$ is tight for the uniform
topology.
\end{lemma}
\begin{proof}
We need to show that
\begin{equation}
\label{c4}
\lim_{\epsilon \to 0} \limsup_{N\to\infty} Q_N \big[ \sup_{|t-s|\le
\epsilon} |x^N_t - x^N_s| >\delta \big] \;=\; 0
\end{equation}
for all $\delta>0$ and a similar statement for the quadratic variation
$\<x^N\>_t$. Recall that $\sup_{k\ge 1} g(k)/k \le a < \infty$ and
consider a symmetric random walk $Z^N_t$ on the discrete torus $\bb
T_N$ with jump rate $a$ and transition probability $p(\cdot)$. We may
couple $Z^N_t$ and $X^N_t$ in such a way that the skeleton chains are
equal, i.e., that the sequence of sites visited by both processes are
the same, and the holding times of $Z^N$ are always less than or equal
to the holding times of $X^N$. In particular,
\begin{equation*}
\sup_{|t-s|\le \epsilon} |x^N_t - x^N_s| \;\le\;
\sup_{|t-s|\le \epsilon} |z^N_t - z^N_s|
\end{equation*}
if $z^N_t = Z^N_t/N$. Therefore, \eqref{c4} follows from the tightness
in the uniform topology of a rescaled symmetric random walk.
Tightness of the quadratic variation $\<x^N\>_t$ in the uniform
topology is an elementary consequence of its explicit expression
\eqref{c10} and the boundedness of $g(k)/k$.
\end{proof}
\section{Limit points and proof of Theorems \ref{th2}, \ref{th1}}
\label{sec4}
The following, which characterizes certain limit points, is the main
result of this section, which yields Theorems \ref{th2} and \ref{th1}.
\begin{theorem}
\label{s5}
The sequence $Q_N$ converges in the Skorohod topology to the law $Q$
concentrated on trajectories $\{(\pi_t^0, \pi_t, x_t, A_t) : 0\le t\le
T\}$ such that $\pi_t^0(du) = \rho(t,u) du$, where $\rho$ is the
unique weak solution of \eqref{ec0}; $x_t$ is the solution of the
stochastic differential equation \eqref{c9}; $\pi_t(du) = \rho(t,x_t
+u) du$ and $A_t = \sigma^2 \int_0^t \psi(\rho(s,x_s))\, ds$.
\end{theorem}
\noindent{\bf Proof of Theorems \ref{th2} and \ref{th1}.} As the
limit $x_t$ is concentrated on continuous paths, Theorem \ref{s5}
straightforwardly implies Theorems \ref{th2} and \ref{th1}. \qed
\medskip
The proof of Theorem \ref{s5} is now divided into a sequence of lemmata. Denote
by $\{\tau_u : u\in\bb T\}$ the group of translations in $\bb T$
acting on points, functions and measures.
\begin{lemma}
\label{s3}
All limit points $Q$ of the sequence $\{Q_N : N\ge 1\}$ are
concentrated on trajectories $\{(\pi_t^0, \pi_t, x_t, A_t) : 0\le t\le
T\}$ in which $x_t$ is a continuous square integrable martingale.
\end{lemma}
\begin{proof}
Assume, without loss of generality, that $Q_N$ converges to $Q$.
Since, by Lemma \ref{s2}, $\{x^N : N \ge 1\}$ is tight for the uniform
topology, $Q$ is concentrated on continuous paths $x_t$. In
particular, $x^N_t$ converges in law to $x_t$ for all $0\le t\le T$.
The martingale property is inherited by $x_t$ because $x^N_t$
converges in law to $x_t$ and
\begin{equation*}
\bb E^N \Big[ (x^N_t)^2 \Big] \;=\; \bb E^N \Big[ \sigma^2
\int_0^t \frac{g(\eta_s(0))}{\eta_s(0)} \, ds \Big]\; \le\;
a \sigma^2 t
\end{equation*}
uniformly in $N$. Therefore, $x_t$ is a square integrable
martingale relative to its natural filtration.
\end{proof}
\begin{lemma}
\label{s6}
All limit points $Q$ of the sequence $\{Q_N : N\ge 1\}$ are
concentrated on trajectories $\{(\pi_t^0, \pi_t, x_t, A_t) : 0\le t\le
T\}$ in which $\pi_t^0(du) = \rho(t,u) du$, where $\rho$ is the unique
weak solution of \eqref{ec0}, and $\pi_t(du) = \tau_{x_t} \pi^0_t(du)
= \rho(t,x_t +u) du$.
\end{lemma}
\begin{proof}
Assume, without loss of generality, that $Q_N$ converges to $Q$. The
first statement follows from Theorem \ref{th0}. On the other hand, by
Lemma \ref{s3} and since $\rho$ is continuous, $Q$ is concentrated on
continuous trajectories $\{(\pi^0_t, x_t) : 0\le t\le T\}$. Hence, all
finite dimensional distributions (f.d.d.) of $(\pi^{0,N}_t, x^N_t)$
(and therefore of $\tau_{x^N_t} \pi_t^{N,0}$) converge to the
f.d.d. of $(\pi^0_t, x_t)$ ($\tau_{x_t} \pi_t^{0} = \rho(t,u+x_t)
du$). Since $\pi_t^N = \tau_{x^N_t} \pi_t^{N,0}$ and since the
f.d.d. characterize a measure on $\mc D([0,T])$, the lemma is proved.
\end{proof}
For $\varepsilon >0$, denote $\iota_\varepsilon
=\varepsilon^{-1}\mb 1\{(0,\varepsilon]\}(u)$ and
$\alpha_\varepsilon =(2\varepsilon)^{-1}
\mb 1\{(-\varepsilon, \varepsilon)\}(u)$. For $l\geq 1$
and $x\in \mathbb Z$, denote by $\eta_s^l (x)$ the mean number of particles
in a cube of length $2l+1$ centered at $x\in {\bb T}_N$ at time $s\geq
0$:
\begin{equation*}
\eta^l_s(x) \ = \ \frac{1}{2l+1} \sum_{|y-x|\leq l} \eta_s(y)\;.
\end{equation*}
When $s=0$, we drop the suffix ``$s$'' for simplicity.
A function $h:\Omega_N\to\bb R$ is said to be local if it depends only
on a finite number of sites. For a local, bounded function
$h:\Omega_N\to\bb R$, denote by $H(\rho)$ and $\bar{h} (\rho)$ its expectations with
respect to $\nu_\rho$ and $\mu_\rho$ respectively. Thus, $H,\bar{h} :\bb R_+ \to \bb R$ are the
functions defined by
\begin{equation}
\label{barH}
H(\rho) \;=\; E_{\nu_\rho}\big[ h(\eta) \big], \ \ {\rm and \ \ }
\bar{h} (\rho) \;=\; E_{\mu_\rho}\big[ h(\xi) \big]\;.
\end{equation}
Also, define for $l\geq 1$ the local function $H_l:\Omega_N \to
\bb R$ given by
\begin{equation*}
H_l(\eta) \;=\; H(\eta^l(0)).
\end{equation*}
Then, $\bar{H_l}:\bb R_+ \to \bb R$ is the function
$\bar{H_l}(\rho) = E_{\mu_\rho}[H_l]$.
A local
function $h:\Omega_N\to\bb R$ is said to be Lipschitz if there exists
a finite subset $A$ of $\bb Z$ and a finite constant $C_0$ such that
\begin{equation}
\label{Lip-def}
\big\vert h(\xi) - h(\xi') \big\vert \;\le\;
C_0 \sum_{x\in A} \big\vert \xi(x) - \xi'(x)\big\vert
\end{equation}
for all configurations $\xi$, $\xi'$ of $\Omega_N$.
Consider in particular the local function $h_0(\eta(0)) =
g(\eta(0))/\eta(0)$. It follows from assumptions (LG), (M) that
$h_0(\cdot)$ is a Lipschitz function, bounded above by a finite
constant and below by a strictly positive constant.
We now characterize the quadratic variation of $x_t$.
\begin{lemma}
\label{s9}
All limit points $Q$ of the sequence $\{Q_N : N\ge 1\}$ are
concentrated on trajectories $\{(\pi_t^0, \pi_t, x_t, A_t): 0\le t\le
T\}$ such that
\begin{equation*}
A_t \;=\; \sigma^2 \int_0^t \psi(\rho(s,x_s))\, ds
\end{equation*}
for all $0\le t\le T$. Moreover, $A_t$ is the quadratic variation of
the martingale $x_t$.
\end{lemma}
\begin{proof}
Assume, without loss of generality, that $Q_N$ converges to $Q$. Since
$\<x^N\>_t$ is tight for the uniform topology by Lemma \ref{s2},
$\<x^N\>_t$ converges to a limit $A_t$ for all $0\le t\le T$. By
Proposition \ref{l2}, with respect to $h(\eta(0)) =
g(\eta(0))/\eta(0)$ and
$H(\rho) = \psi(\rho)$, and since for each $0\le
t\le T$ the map $\pi_\cdot \to \int_0^t ds \int
\iota_\epsilon(x)\bar{\psi_l}(\pi_s(\tau_x \alpha_\varepsilon)) \, dx$
is continuous for the Skorohod topology,
\begin{equation*}
\lim_{l\rightarrow \infty}\lim_{\epsilon\to 0} \lim_{\varepsilon \to
0}Q\Big[ \, \Big| A_t - \sigma^2 \int_0^t ds \int \iota_\epsilon(x)
\bar{\psi_l}(\pi_s(\tau_x\alpha_\varepsilon)) \, dx\Big| \;>\;
\delta\Big] \;=\;0
\end{equation*}
for all $0\le t\le T$ and $\delta>0$. By Lemma \ref{s6}, $\pi_t(du) =
\rho(t,x_t+u) du$. Also, $\rho(s,\cdot)$ is continuous for $0\le s\le
T$, and $\bar{\psi_l}(a) \to \psi(a)$ as $l\uparrow \infty$ by bounded
convergence. Then, as $\varepsilon\downarrow 0$, $\epsilon \downarrow
0$, and $l\uparrow \infty$, we have a.s.
\begin{equation*}
\int \iota_\epsilon(x) \int_0^t
\bar{\psi_l}(\pi_s(\tau_x\alpha_\varepsilon)) \, dsdx \ \to \
\int_0^t \psi (\rho (s,x_s)) \, ds\;.
\end{equation*}
It remains to show that $A_t$ corresponds to the quadratic variation
of the square integrable martingale $x_t$. By \cite[Corollary
VI.6.6]{JS}, $\{(x_t^N, \<x^N\>_t) : 0\le t\le T\}$ converges in law
to $\{(x_t, \<x\>_t) : 0\le t\le T\}$. Since by the first part of the
lemma, $\{(x_t^N, \<x^N\>_t) : 0\le t\le T\}$ converges to $\{(x_t,
A_t) : 0\le t\le T\}$, $\<x\>_t = A_t$. This concludes the proof
of the lemma.
\end{proof}
Recall that the quadratic variation $\<x\>_t$ of a martingale $x_t$ is
equal to $x_t^2 - x_0^2 - 2\int_0^t x_s \, dx_s$ and that $\<x\>_t$
can be approximated in $L^2$ by the sequence of Riemann sums
$\sum_j (x_{t_{j+1}} - x_{t_j})^2$, as the mesh of a partition $\{t_j
: 1\le j\le M\}$ of the interval $[0,t]$ vanishes. In particular, one
can prove directly in our context the identity between $A_t$ and the
quadratic variation $\<x\>_t$. \medskip
It follows from the characterization of continuous martingales that
$x_t$ is a time-changed Brownian motion:
\begin{corollary}
\label{s4}
The rescaled position of the tagged particle $\{x^N_t : 0\le t\le T\}$
converges in law to the solution of the stochastic differential
equation
\begin{equation*}
dx_t \;=\; \sigma \sqrt{\psi(\rho(t,x_t))}\, dB_t\;,
\end{equation*}
where $B_t$ is a Brownian motion and $\rho$ is the solution of the
differential equation \eqref{ec0}.
\end{corollary}
\noindent{\bf Proof of Theorem \ref{s5}.} By Section \ref{sec3},
the sequence $Q^N$ is tight. On the other hand, by Lemma \ref{s6}
and Corollary \ref{s4}, the laws of the first and the third
components of the vector $(\pi_t^{0}, \pi_t, x_t, A_t)$ are uniquely
determined. Since, by Lemmata \ref{s6}, \ref{s9}, the distributions
of the second and fourth components are characterized by the
distribution of $x_t$ and by $\rho(t,x)$, the theorem is proved. \qed
\section{Global replacement lemma}
\label{sec5}
In this section, we replace the full empirical average of a local,
bounded and Lipschitz function by a function of the density field. The
proof involves only a few modifications of the standard
hydrodynamics proof of \cite[Lemma V.1.10, Lemma V.5.5]{kl}.
\begin{proposition}[Global replacement]
\label{p1}
Let $r:\Omega_N \to \bb R$ be a local, bounded and Lipschitz function.
Then, for every $\delta>0$,
\begin{eqnarray*}
\limsup_{\varepsilon \to 0} \limsup_{N \to \infty}
\bb P^N \Big[
\int_0^T \frac{1}{N} \sum_{x\in {\bb T}_N} \tau_x
\mathcal V_{\varepsilon N}(\eta_s) ds \geq \delta \Big] =0,
\end{eqnarray*}
where
\begin{equation*}
\mathcal V_l(\eta) = \Big| \frac{1}{2l+1} \sum_{|y| \leq l} \tau_y r(\eta)
-\bar{r}(\eta^l(0)) \Big|, {\rm \ \ and \ \ } \bar{r}(a) =
E_{\mu_a}[r]\;.
\end{equation*}
\end{proposition}
For two measures $\mu$, $\nu$ defined on $\Omega_N$ (or $\Omega_N^*$),
denote by $\mc H(\mu|\nu)$ the entropy of $\mu$ with respect to $\nu$:
\begin{equation*}
\mc H(\mu|\nu) \;=\; \sup_{f} \Big\{ \int f d\mu \;-\; \log
\int e^f d\nu \Big\}\;,
\end{equation*}
where the supremum is carried over all bounded continuous functions
$f$.
A simple computation shows that the initial entropy
$\mc H(\nu^N_{\rho_0(\cdot)}|\nu_\rho)$ is bounded by $C_0 N$ for some
finite constant $C_0$ depending only on $\rho_0(\cdot)$ and $g$.
Let $f_t^N(\eta)$ be the density of $\eta_t$ under $\bb P^N$ with
respect to a reference measure $\nu_\rho$ for $\rho>0$, and let $\hat{f}_t^N(\eta) = t^{-1} \int_0^t
f_s^N(\eta) ds$. By standard arguments (cf. Section V.2 \cite{kl}),
\begin{equation*}
\mc H_N(\hat{f}_t^N):=\mc H(\hat{f}_t^Nd\nu_\rho|\nu_\rho) \leq C_0 N \quad
{\rm and} \quad \mc D_N(\hat{f}_t^N) := \Big \<\sqrt{\hat{f}_t^N}
(-L_N \sqrt{\hat{f}_t^N}) \Big\>_\rho \leq \frac{C_0}{N}\;.
\end{equation*}
Consequently, by the Chebyshev inequality, to prove Proposition
\ref{p1} it is enough to show, for all finite constants $C$, that
\begin{equation*}
\limsup_{\varepsilon \to 0} \limsup_{N \to \infty}
\sup_{\substack{\mc H_N(f) \leq C N \\ {\mathcal D}_N(f) \leq C/N}} \int
\frac{1}{N} \sum_{x\in {\bb T}_N} \tau_x \mc V_{\varepsilon N} (\eta)
f(\eta) d \nu_\rho =0
\end{equation*}
where the supremum is with respect to $\nu_\rho$-densities $f$. Notice that we may remove from the sum
the integers $x$ close to the origin, say $|x|\le 2 \varepsilon N$,
because $\mc V_{\varepsilon N}$ is bounded. After removing these
sites, we are essentially in the space homogeneous case. Proposition
\ref{p1} follows from the two standard lemmatta below as in the proof
of \cite[Lemma V.1.10]{kl}.
\begin{lemma}[Global 1-block estimate]
\label{g3}
\begin{equation*}
\limsup_{k \to \infty} \limsup_{N \to \infty} \sup_{\substack{\mc H_N(f)
\leq C N \\ \mc D_N(f) \leq C/N}} \int \frac{1}{N}
\sum_{|x| > 2\varepsilon N} \tau_x \mathcal V_k (\eta) f(\eta) d \nu_\rho =0\;.
\end{equation*}
\end{lemma}
\begin{lemma}[Global 2-block estimate]
\label{g4}
\begin{eqnarray*}
&&\limsup_{k\rightarrow \infty}\limsup_{\varepsilon \rightarrow 0}
\limsup_{N\rightarrow \infty} \sup_{\substack{\mc H_N(f) \leq C N \\ \mc
D_N(f) \leq C/N}} \\
&&\ \ \ \ \ \qquad
\frac{1}{2N\varepsilon +1} \sum_{|y|\leq
N\varepsilon}
\int \frac{1}{N} \sum_{|x| > 2\varepsilon N} |\eta^k(x+y) -
\eta^k(x)|f(\eta) d\nu_\rho = 0\;.
\end{eqnarray*}
\end{lemma}
We now indicate the proofs of Lemmata \ref{g3} and \ref{g4} in relation
to \cite[Sections V.4, V.5 ]{kl}.
\vskip .1cm
\noindent
{\it Proofs of Lemmata \ref{g3} and \ref{g4}.}
To be brief, we discuss only the proof of Lemma \ref{g3} through some
modifications of the argument in \cite[Section V.4]{kl}, as the proof
of Lemma \ref{g4}, using the modifications for Lemma \ref{g3} given
below, follows along similar lines to that in \cite[Section V.5]{kl}.
In the first step of the 1-block estimate we cut off high
densities. We claim that
\begin{equation*}
\limsup_{A\rightarrow \infty}\limsup_{k\rightarrow \infty}
\limsup_{N\rightarrow \infty} \sup_{\mc H_N(f)\leq CN}\int
\frac{1}{N} \sum_{|x| > 2\varepsilon N} \tau_x \mc V_k(\eta)
\mb 1 \{\eta^k(x) >A\} f(\eta) d\nu_\rho = 0\;.
\end{equation*}
Since $\mc V_k$ is bounded, we may replace it by a constant and
estimate the indicator by $A^{-1} \eta^k(x)$. After a summation by
parts, the expression is easily shown to be less than or equal to $C_0
(A N)^{-1} \sum_{x\not = 0} \eta(x)$ for some finite constant $C_0$.
To conclude it remains to follow the proof of \cite[Lemma V.4.1]{kl},
applying the entropy inequality with respect to $\nu_\rho$ and keeping
in mind that the marginals of $\mu_\rho$ and $\nu_\rho$ coincide on
sites $x\not = 0$.
Define now $\mathcal V_{k,A}(\eta)=\mathcal V_k(\eta) 1\{\eta^k(0)\leq A\}$. By the
previous argument, it is enough to show that for every $A>0$,
\begin{equation}
\label{e4}
\limsup_{k \to \infty} \limsup_{N \to \infty}
\sup_{\mc D_N(f) \leq C/N}
\int \frac{1}{N} \sum_{|x| > 2\varepsilon N} \tau_x \mc V_{k,A} (\eta)
f(\eta) d\nu_\rho =0\;.
\end{equation}
The proof is analogous to the homogeneous case. Since the origin does
not appear, both the Dirichlet form $\mc D_N$ and the measure
$\nu_\rho$ coincide with the Dirichlet form of the space homogeneous
zero-range process and with its stationary state $\mu_\rho$. In particular,
all estimates needed
involve only the functionals of the space-homogeneous process already
considered in \cite{kl}. \qed
\section{Local replacement lemma}
\label{sec6}
In this section, we replace a bounded, Lipschitz function supported
at the origin by a function of the empirical density.
\begin{proposition}[Local replacement]
\label{l2}
For any bounded, Lipschitz function $h: \bb N_0 \to \bb R$,
and any $t>0$,
\begin{equation*}
\limsup_{l\rightarrow \infty}\limsup_{\epsilon \to 0} \limsup_{\varepsilon
\to 0} \limsup_{N \to \infty} \bb E^N \Big[\,
\Big|\int_0^t \Big\{ h(\eta_s(0)) - \frac{1}{\epsilon N}\sum_{x=1}^{\epsilon
N} \bar{H_l}(\eta^{\varepsilon N}_s(x)) \Big\}\, ds \Big|\, \Big] \;=\; 0 \;,
\end{equation*}
where $H(\rho) = E_{\nu_\rho}[h]$, $H_l(\eta) = H(\eta^l(0))$,
and $\bar{H_l}(\rho) = E_{\mu_\rho}[H_l]$.
\end{proposition}
In the proof of this proposition, there are two
difficulties. The first and the most important one is the
absence of a spatial average, a crucial point in the standard one and two
blocks estimates since it allows a cut-off of large densities and a
reduction to translation-invariant densities in the estimation of the
largest eigenvalue of a local perturbation of the generator of the
process. Without the density cut-off, the equivalence of ensembles,
and therefore the local central limit theorem, has to be proved
uniformly over all densities. Moreover, this absence of space
average confines us to one dimension.
A second obstacle is the lack of translation invariance of the
stationary state, turning the origin into a special site. Functions
$h(\eta(0))$ and $h(\eta(x))$, for instance, have different
distributions. In particular, in contrast with the original zero-range
process, the integral $\int \{ g(\eta(0)) - g(\eta(x))\} f d\nu_\rho $
cannot be estimated by the Dirichlet form of $f$.
The proof of Proposition \ref{l2} is divided into several steps. We
start with a spectral gap for the evolution of the environment
restricted to a finite cube. For $l\ge 1$, denote by $\Lambda_l$ a
cube of length $2l+1$ around the origin: $\Lambda_l = \{-l, \dots,
l\}$ and by $L^{env}_{\Lambda_l}$ the restriction of the environment
part of the generator to the cube $\Lambda_l$:
\begin{eqnarray*}
(L_{\Lambda_l}^{env} f) (\eta) &=&
\sum_{\substack{x\in\Lambda_l \\ x \neq 0}}
\sum_{y\in\Lambda_l} p(y-x) \, g(\eta(x)) \, [f(\eta^{x,y})-f(\eta)]\\
&+& \sum_{z\in\Lambda_l} p(z) \, g(\eta(0)) \,
\frac{\eta(0) -1}{\eta(0)} \, [f(\eta^{0,z})-f(\eta)]\;.
\end{eqnarray*}
We assume above, without loss of generality, that $l$ is larger than
the range of $p(\cdot)$.
Let $\nu^{\Lambda_l}_\rho$ be the measure $\nu_\rho$ restricted to the
set $\Lambda_l$. For $j \ge 1$, denote by $\Sigma_{\Lambda_l, j}$ the
set of all configurations in $\Lambda_l$ with at least one particle at
the origin and $j$ particles in $\Lambda_l$, and by $\nu_{\Lambda_l,
j}$ the measure $\nu^{\Lambda_l}_\rho$ conditioned to
$\Sigma_{\Lambda_l, j}$:
\begin{equation}
\label{c11}
\Sigma_{\Lambda_l, j} \;=\; \Big \{\eta\in \bb N_0^{\Lambda_l} :
\eta(0)\ge 1 , \sum_{x\in\Lambda_l} \eta(x) = j \Big \}\;, \quad
\nu_{\Lambda_l, j} (\cdot) \;=\; \nu^{\Lambda_l}_\rho \big( \cdot
\big| \Sigma_{\Lambda_l, j} \big)\;.
\end{equation}
Note that $\nu_{\Lambda_l, j}$ does not depend on the parameter
$\rho$.
\begin{lemma}
\label{s10}
There exists a finite constant $C_0$ such that
\begin{equation*}
\< f ; f\>_{\nu_{\Lambda_l, j}} \;\le\; C_0\, l^2\, \< f \, (-
L_{\Lambda_l}^{env} f) \>_{\nu_{\Lambda_l, j}}
\end{equation*}
for all $j\ge 1$, all $l \ge 1$ and all functions $f$ in
$L^2(\nu_{\Lambda_l, j})$. In this formula, $\< f ;
f\>_{\nu_{\Lambda_l, j}}$ stands for the variance of $f$ with respect
to $\nu_{\Lambda_l, j}$.
\end{lemma}
\begin{proof}
This result follows from the spectral gap of the zero-range process
proved in \cite{LSV}. Since $g(k)/k$ is bounded above and below by
finite strictly positive constants, an elementary computation shows
that
\begin{equation*}
\< f ; f\>_{\nu_{\Lambda_l, j}} \;=\; \inf_c \<(f-c)^2\>_{\nu_{\Lambda_l,j}}
\le\; a^2 \inf_c\< (f'-c)^2\>_{\mu_{\Lambda_l,j-1}}\; = \;
a^2\< f' ; f' \>_{\mu_{\Lambda_l, j-1}}
\end{equation*}
\begin{equation*}
{\rm and} \quad
\< f' (- \mc L_{\Lambda_l} f') \>_{\mu_{\Lambda_l, j-1}}
\;\le\; a^2 \< f (- L_{\Lambda_l}^{env} f) \>_{\nu_{\Lambda_l, j}}
\end{equation*}
provided $0<a^{-1} \le g(k)/k \le a$ for all $k\ge 1$. In this
formula, $\mc L_{\Lambda_l}$ is the generator of the zero-range
process \eqref{c0} restricted to the set $\Lambda_l$, $\mu_{\Lambda_l,
j-1}$ is the canonical measure associated to the zero-range process
restricted to the set $\Lambda_l$ with $j-1$ particles, and $f'(\eta)
= f(\eta + \mf d_0)$, where $\mf d_0$ is the configuration with
exactly one particle at the origin and summation of configurations is
performed componentwise.
\end{proof}
\subsection{Local one-block estimate}
For $l\ge 1$, define the function $V_l (\eta)$ by
\begin{equation*}
V_l(\eta) = h(\eta(0)) - H (\eta^l(0))\;
\end{equation*}
where we recall $h$ is a bounded, Lipschitz function, and $H(a) =
E_{\nu_a}[h(\eta(0))]$. In this subsection we give the second step
for the proof of Proposition \ref{l2}:
\begin{lemma}[One-block estimate]
\label{l3}
For every $0\le t\le T$,
\begin{equation*}
\limsup_{l \to \infty} \limsup_{N \to \infty} \bb E^N \Big[\,
\Big|\int_0^t V_l(\eta_s) \, ds \Big|\, \Big ] =0\;.
\end{equation*}
\end{lemma}
\begin{proof}
Since the initial entropy $\mc H(\nu_{\rho_0(\cdot)}^N|\nu_\rho)$ is
bounded by $C_0 N$, by the entropy inequality,
\begin{equation*}
\bb E^N \Big[\, \Big| \int_0^t V_l(\eta_s)
\, ds\Big| \, \Big]
\;\leq\; \frac{C_0}{\gamma} + \frac{1}{\gamma N}
\log \bb E_\rho \Big[\exp\Big\{\gamma N \Big|\int_0^t
V_l(\eta_s) \, ds \Big| \Big\} \Big]\; ,
\end{equation*}
where $\bb E_\rho$ denotes expectation with respect to the process
starting from the invariant measure $\nu_\rho$. Using the elementary
inequality $e^{|x|} \leq e^x+e^{-x}$, we can get rid of the absolute
value in the previous integral, considering $V_l$ and $-V_l$. In this
case, by Feynman-Kac formula, the second term on the right hand side
is bounded by $(\gamma N)^{-1} T \lambda_{N,l}$, where $\lambda_{N,l}$
is the largest eigenvalue of $N^2 L_N + \gamma N V_l$. Therefore, to
prove the lemma, it is enough to show that $(\gamma N)^{-1}
\lambda_{N,l}$ vanishes, as $N\uparrow\infty$, $l\uparrow\infty$, for
every $\gamma>0$.
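For completeness, we recall the form of the entropy inequality used in the
first step of this argument. Since $\bb P^N$ and $\bb P_\rho$ are generated by
the same dynamics, the relative entropy of $\bb P^N$ with respect to
$\bb P_\rho$ reduces to the relative entropy of the initial distributions,
and therefore, for every $\gamma>0$ and every bounded functional $X$ of the
trajectory,
\begin{equation*}
\bb E^N \big[ X \big] \;\le\; \frac{1}{\gamma N}\,
\mc H(\nu^N_{\rho_0(\cdot)}|\nu_\rho) \;+\; \frac{1}{\gamma N}\,
\log \bb E_\rho\big[ e^{\gamma N X} \big]\;,
\end{equation*}
which we applied to $X = \big|\int_0^t V_l(\eta_s)\, ds\big|$.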
By the variational formula for $\lambda_{N,l}$,
\begin{equation}
\label{ec1}
(\gamma N)^{-1} \lambda_{N,l} \;=\; \sup_f \Big\{ \< V_l \, f \>_\rho
- \gamma^{-1} N \< \sqrt{f}(-L_N \sqrt{f}) \>_\rho \Big\}\;,
\end{equation}
where the supremum is carried over all densities $f$ with respect to
$\nu_\rho$.
Recall that we denote by $L^{env}_{\Lambda_l}$ the restriction of
the environment part of the generator to the cube $\Lambda_l$. As
the Dirichlet forms satisfy $\<\sqrt{f}(-L^{env}_{\Lambda_l}
\sqrt{f})\>_\rho \leq \<\sqrt{f}(-L_N \sqrt{f})\>_\rho$, we may
bound the previous expression by a similar one where $L_N$ is
replaced by $L^{env}_{\Lambda_l}$.
Denote by $\hat{f}_{l}$ the conditional expectation of $f$ given $\{\eta(z)
: z\in\Lambda_l\}$. Since $V_l$ depends on the configuration $\eta$
only through $\{\eta(z) : z\in\Lambda_l\}$ and since the Dirichlet
form is convex, the expression inside braces in \eqref{ec1} is less
than or equal to
\begin{equation}
\label{c2}
\int V_l \, \hat{f}_{l} \, d \nu^{\Lambda_l}_\rho \;
-\; \gamma^{-1} N \int \sqrt{\hat{f}_{l}} \,
(-L^{env}_{\Lambda_l} \sqrt{\hat{f}_{l}} ) \, d \nu^{\Lambda_l}_\rho \;,
\end{equation}
where, as in \eqref{c11}, $\nu^{\Lambda_l}_\rho$ stands for the
restriction of the product measure $\nu_\rho$ to $\bb
N_0^{\Lambda_l}$.
The linear term in this formula is equal to
\begin{equation*}
\sum_{j\ge 1} c_{l,j} (f) \int V_l \, \hat{f}_{l,j}
\, d \nu_{\Lambda_l, j} \;,
\end{equation*}
where $\nu_{\Lambda_l, j}$ is the canonical measure defined in
\eqref{c11} and
\begin{equation*}
c_{l,j} (f) \;=\; \int_{\Sigma_{\Lambda_l, j}} \hat{f}_{l}
\, d \nu^{\Lambda_l}_\rho \;, \quad
\hat{f}_{l,j} (\eta) \;=\; c_{l,j}(f)^{-1} \,
\nu^{\Lambda_l}_\rho (\Sigma_{\Lambda_l, j})\, \hat{f}_{l} (\eta)\;.
\end{equation*}
The sum starts at $j=1$ because there is always a particle at the
origin. Note also that $\sum_{j\ge 1} c_{l,j} (f) =1$ and that $
\hat{f}_{l,j} (\cdot)$ is a density with respect to $\nu_{\Lambda_l, j}$.
For the same reasons, the quadratic term of \eqref{c2} can be written
as
\begin{equation*}
\gamma^{-1} N \sum_{j\ge 1} c_{l,j} (f)
\int \sqrt{\hat{f}_{l,j}} \, (-L^{env}_{\Lambda_l}
\sqrt{\hat{f}_{l,j}}) \, d \nu_{\Lambda_l, j} \;.
\end{equation*}
In view of this decomposition, \eqref{ec1} is bounded above by
\begin{equation*}
\sup_{j\ge 1} \sup_{f} \Big\{ \int V_l \, f \, d \nu_{\Lambda_l, j}
\;-\; \gamma^{-1} N \int \sqrt{f} \, (-L^{env}_{\Lambda_l} \sqrt{f})
\, d \nu_{\Lambda_l, j} \Big\}\;,
\end{equation*}
where the second supremum is carried over all densities with respect
to $\nu_{\Lambda_l, j}$.
Recall that $V_l(\eta) = h(\eta(0)) - H (\eta^l(0))$. Let
$\tilde{H}_l(j/(2l+1)) = \int h(\eta(0)) \, d \nu_{\Lambda_l, j}$. By
Lemma \ref{lclt1} below, we can replace $H (\eta^l(0))$ by
$\tilde{H}_l (\eta^l(0))$ in the previous expression. Let
$V_{l,j}(\eta) = h(\eta(0)) - \tilde{H}_l (j/(2l+1))$ and notice that
$V_{l,j}$ has mean zero with respect to $\nu_{\Lambda_l, j}$ for all
$j\ge 1$. By Lemma \ref{s10}, the spectral gap of $L^{env}_{\Lambda_l}$
on $\Sigma_{\Lambda_l, j}$ is bounded below by $(C_0 l^{2})^{-1}$, uniformly
in $j$. Since $h$ is bounded, by the Rayleigh
expansion \cite[Theorem A3.1.1]{kl}, for sufficiently large $N$,
\begin{eqnarray*}
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! &&
\int V_{l,j} \, f \, d \nu_{\Lambda_l, j}
\;-\; \gamma^{-1} N \int \sqrt{f} \, (-L^{env}_{\Lambda_l} \sqrt{f})
\, d \nu_{\Lambda_l, j} \\
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! && \qquad
\le\; \frac{\gamma N^{-1}}{1-2\|V_l\|_{L^\infty}
C_0l^2\gamma N^{-1}} \int V_{l,j} (-L^{env}_{\Lambda_l})^{-1} V_{l,j}
\, d \nu_{\Lambda_l, j}\\
&& \qquad
\le\; 2 \gamma N^{-1} \int V_{l,j} (-L^{env}_{\Lambda_l})^{-1} V_{l,j}
\, d \nu_{\Lambda_l, j}
\end{eqnarray*}
uniformly in $j \ge 1$. By the spectral gap of $L^{env}_{\Lambda_l}$ again,
this expression is less than or equal to
\begin{equation*}
C_0 l^2 \gamma N^{-1} \int V_{l,j}^2 \, d \nu_{\Lambda_l, j}\;\le\;
C_0(h) l^2 \gamma N^{-1}
\end{equation*}
because $h$ is bounded. This proves that \eqref{ec1} vanishes as
$N\uparrow\infty$, $l\uparrow\infty$, and therefore the lemma.
\end{proof}
\begin{lemma}
\label{lclt1}
For bounded, Lipschitz function $h:\bb N_0 \rightarrow \mathbb R$,
\begin{equation*}
\limsup_{l\rightarrow \infty} \sup_{k\geq 0} \Big |
E_{\nu_{\Lambda_l,k}}[h(\eta(0))] -
E_{\nu_{k/|\Lambda_l|}}[h(\eta(0))] \Big | \ = \ 0\; .
\end{equation*}
\end{lemma}
\begin{proof}
Fix $\epsilon>0$ and consider $(l,k)$ such that $k/|\Lambda_l| \leq
\epsilon$. We may subtract $h(1)$ from both expectations. Since $h$ is
Lipschitz, the absolute value appearing in the statement of the lemma
is bounded by
\begin{equation}
\label{c12}
C(h) \Big\{ \int \{\eta(0) - 1\} \, d\nu_{\Lambda_l,k}
\;+\; \int \{\eta (0) -1\} \, d\nu_{k/|\Lambda_l|} \Big\} \;.
\end{equation}
Note that both terms are positive because both measures are
concentrated on configurations with at least one particle at the
origin. We claim that each term is bounded by $a^2 \epsilon$.
On the one hand, since $\nu_\rho (d\eta) = \{\eta(0)/\rho\}
\mu_\rho(d\eta)$, the second term inside braces is equal to $\rho^{-1}
\int \eta (0) \{ \eta(0) - 1\} \, d\mu_{k/|\Lambda_l|}$, where $\rho
= k/|\Lambda_l|$. Since $k\le a g(k)$, we may replace $\eta(0)$ by $a
g(\eta(0))$ and perform a change of variables $\eta' = \eta - \mf d_0$
to bound the second term in \eqref{c12} by $a \varphi(\rho) \le a^2
\rho$.
On the other hand, by the explicit formula for $\nu_{\Lambda_l,k}$,
the first term in \eqref{c12} is equal to
\begin{equation*}
\sum \eta(0) \{\eta(0)-1\} \prod_{x\in\Lambda_l} \frac 1{g(\eta(x))!}
\;\Big/\; \sum \eta(0) \prod_{x\in\Lambda_l} \frac
1{g(\eta(x))!} \;,
\end{equation*}
where both sums are performed over $\Sigma_{\Lambda_l,k}$. Replacing
$\eta(0)$ by $a^{\pm 1} g(\eta(0))$ in the numerator and in the
denominator, we obtain that the previous expression is less than or
equal to $a^2 E_{\mu_{\Lambda_l,k-1}} [\eta(0)] \le a^2
k/|\Lambda_l|$. In the last formula, $\mu_{\Lambda_l,k}$ is the product
measure $\mu_\rho$ conditioned on the hyperplane
$\Sigma^0_{\Lambda_l,k} = \{\xi \in \bb N_0^{\Lambda_l} :
\sum_{x\in\Lambda_l} \xi(x) = k\}$.
For $(l,k)$ such that $k/|\Lambda_l| \geq \epsilon$, write
\begin{equation*}
E_{\nu_{\Lambda_l,k}}[h(\eta(0))] -
E_{\nu_{k/|\Lambda_l|}}[h(\eta(0))] \ =\ \frac 1{\rho}
\Big\{
E_{\mu_{\Lambda_l,k}}[h'(\eta(0))] -
E_{\mu_{k/|\Lambda_l|}}[h'(\eta(0))] \Big\}\;,
\end{equation*}
where $\rho = k/|\Lambda_l|$ and $h'(j) = h(j) j$. By Corollary 6.1
(parts a,b) \cite{LSV} the last difference in absolute value is
bounded by $C(h) \epsilon^{-1} l^{-1}$.
\end{proof}
\subsection{Local two-blocks estimate} In this subsection we show how
to go from a box of size $l$ to a box of size $\epsilon N$:
\begin{lemma}[Two-blocks estimate]
\label{l4}
Let $H : \bb R_+ \to\bb R$ be a bounded, Lipschitz function. For every
$t > 0$,
\begin{equation}
\label{ec2}
\limsup_{l \to \infty} \limsup_{\epsilon \to 0} \limsup_{N \to \infty}
\bb E^N \Big[\, \Big| \int_0^t \big\{H(\eta_s^l(0)) -\frac{1}{\epsilon
N} \sum_{x =1} ^{\epsilon N} H(\eta_s^l(x)) \big\} ds \Big| \, \Big]
= 0.
\end{equation}
\end{lemma}
The proof of this lemma is very similar to the proof of Lemma
\ref{l3}. The expectation in (\ref{ec2}) is bounded by
\begin{equation*}
\frac{1}{\epsilon N} \sum_{x=2l+1} ^{\epsilon N} \bb E^N
\Big[ \, \Big| \int_0^t \big\{H(\eta_s^l(0)) - H(\eta_s^l(x)) \big\} ds
\Big| \, \Big] \;+\; \frac{C(H)(2l+1)}{\epsilon N}\;\cdot
\end{equation*}
Following the proof of the one-block estimate, we see that it is
enough to estimate, uniformly in $2l+1\leq x\leq \epsilon N$, the quantity
\begin{equation*}
\sup_{f} \Big\{ \<V_{l,x} f\>_\rho -N \gamma^{-1} \< \sqrt{f}(-L_N
\sqrt{f})\>_\rho\Big\},
\end{equation*}
where the supremum, as before, is over all density functions $f$ with
$\int f d\nu_\rho=1$ and $V_{l,x}$ is defined by
\begin{equation*}
V_{l,x}(\eta) = H(\eta^l(0)) -H(\eta^l(x)).
\end{equation*}
Notice that the blocks $\Lambda_l$ and $\Lambda_l(x) := \{
x-l,\ldots,x+l\}$ are disjoint. Let $L^{env}_{\Lambda_{l,x}}$ be
the restriction of $L^{env}_N$ to the set $\Lambda_{l,x}=\Lambda_l
\cup \Lambda_l(x)$ and define the operator $L_{l,x}$ by
\begin{equation*}
L_{l,x} f(\eta) = L^{env}_{\Lambda_{l,x}} f(\eta) +
g(\eta(l))[f(\eta^{l,x-l}) -f(\eta)] + g(\eta(x-l))[f(\eta^{x-l,l})
-f(\eta)].
\end{equation*}
The operator $L_{l,x}$ corresponds to the environment generator of a
zero-range dynamics in which particles can jump within each box,
and between the endpoints $l$ and $x-l$. Since $x \leq
\epsilon N$, we see, by adding and subtracting at most $\epsilon N$
terms, that
\begin{equation*}
\<f(- L_{l,x} f)\>_\rho \leq (1+ \epsilon N)\<f(-L_N f)\>_\rho .
\end{equation*}
Then, it is enough to prove that
\begin{equation*}
\sup_f \Big\{ \Big \<\big\{H(\eta^l(0))
- H(\zeta^l(0)) \big\} f\Big\>_{\nu_\rho^{\Lambda_l^*}}\ - \ \frac{1}{2 \epsilon\gamma }
\<\sqrt{f}(- L_{l,l} \sqrt{f})\>_{\nu_\rho^{\Lambda_l^*}} \Big\}
\end{equation*}
vanishes as $\epsilon\downarrow 0$ and $l\uparrow \infty$. In this
formula, the state space is $\bb N_0^{\Lambda_l^*}$, where
$\Lambda_l^* = \{-l, \dots, 3l+1\}$, the configurations of this space
are denoted by the pair $\beta = (\eta, \zeta)$, where $\eta$ belongs
to $\bb N_0^{\Lambda_l}$ and $\zeta$ belongs to $\bb N_0^{\{l+1,
\dots, 3l+1\}}$, expectation is taken with respect to the measure
$\nu_\rho^{\Lambda_l^*}$, the projection of $\nu_\rho$ on
$\Lambda_l^*$, $L_{l,l}$ is the generator of the environment restricted
to the set $\Lambda_l^*$:
\begin{eqnarray*}
(L_{l,l} f) (\beta) &=& \sum_{\substack{x \neq 0, x\in
\Lambda^*_l\\ y\in\Lambda_l^*}} p(y-x) \,
g(\beta(x)) \, [f(\beta^{x,y})-f(\beta)]\\
&+& \sum_{z\in\Lambda_l^*} p(z) \, g(\beta(0)) \,
\frac{\beta(0) -1}{\beta(0)} \, [f(\beta^{0,z})-f(\beta)]\;;
\end{eqnarray*}
and the supremum is carried over all densities $f$ with respect to
$\nu_\rho^{\Lambda_l^*}$.
Following the proof of the one-block Lemma \ref{l3}, we need only
to prove that
\begin{equation*}
\sup_{k\geq 1}\sup_f\Big \{\int \big\{H(\eta^l(0))
- H(\zeta^l(0)) \big\} \, f\, d\nu_{\Lambda_{l}^*,k} -
\frac{1}{ 2 \gamma \epsilon }\int \sqrt{f}(-L_{l,l} \sqrt{f})\,
d\nu_{\Lambda_{l}^*,k}\Big\}
\end{equation*}
vanishes as $\epsilon\downarrow 0$ and $l\uparrow \infty$, where the supremum is over
densities $f$ with respect to the canonical measure
$\nu_{\Lambda_{l}^*,k}(\cdot) = \nu_\rho^{\Lambda_{l}^*}
(\cdot|\Sigma_{\Lambda_{l}^*,k})$, defined similarly as in
(\ref{c11}), where $\Sigma_{\Lambda_{l}^*,k} = \{\beta\in \bb
N_0^{\Lambda_{l}^*}: \beta(0)\ge 1 , \sum_{y\in \Lambda_{l}^*}
\beta(y) = k\}$.
Let $W_l(\beta)=H(\eta^l(0)) - H(\zeta^l(0))$. By the Rayleigh
expansion \cite[Theorem A3.1.1]{kl}, the spectral gap estimate of Lemma
\ref{s10} applied to $L_{l,l}$ (which can be thought of as the
environment generator on a block of length $2|\Lambda_l|$) and the
boundedness of $H$, for large $N$ and small $\epsilon$,
\begin{eqnarray*}
&& \int W_{l} \, f\, d\nu_{\Lambda_{l}^*,k} -
\frac{1}{2 \gamma \epsilon}\int \sqrt{f}(- L_{l,l}
\sqrt{f})\, d\nu_{\Lambda_{l}^*,k}\\
&& \qquad \leq \ \int W_{l}\, d\nu_{\Lambda_{l}^*,k}
\;+\; \frac{2 \gamma \epsilon} {1- C(H) l^2 \gamma \epsilon}
\int W_{l}\big \{(- L_{l,l})^{-1} W_{l}\big\} \, d\nu_{\Lambda_{l}^*,k}\\
&& \qquad \leq \ \int W_{l} \, d\nu_{\Lambda_{l}^*,k} +
C(H) l^2 \gamma \epsilon
\end{eqnarray*}
for some finite constant $C(H)$ depending on $H$. The last term
vanishes as $\epsilon\downarrow 0$, while the first term vanishes
uniformly in $k$ as $l\uparrow \infty$ by Lemma \ref{lclt2} below.
\qed
\begin{lemma}
\label{lclt2}
For a bounded, Lipschitz function $H:\bb R_+ \rightarrow \mathbb R$, we have
that
\begin{equation*}
\limsup_{l \to \infty} \sup_{k \geq 0} \Big| \,
E_{\nu_{\Lambda_{l}^*,k}}\Big [ H(\eta^l(0)) - H(\zeta^l(0))
\Big ] \, \Big | \;=\;0\;.
\end{equation*}
\end{lemma}
\begin{proof}
Fix $\epsilon>0$. Using that $H$ is Lipschitz, we have that
$|H(\eta^l(0)) - H(\zeta^l(0))| \leq C (H) \{\eta^l(0) +
\zeta^l(0)\}$, and so the expectation appearing in the statement of
the lemma is less than or equal to $C (H) E_{\nu_{\Lambda_{l}^*,k}} [
\eta^l(0) + \zeta^l(0) ]$. A computation, similar to the one presented
in the proof of Lemma \ref{lclt1}, shows that
\begin{equation*}
E_{\nu_{\Lambda_{l}^*,k}} [ \beta(0)] \;\le\; a^2 \Big\{ 1+
E_{\mu_{\Lambda_{l}^*,k-1}}[\xi(0)]\Big\}\;,\quad
E_{\nu_{\Lambda_{l}^*,k}} [ \beta(y)] \;\le\; a^2
E_{\mu_{\Lambda_{l}^*,k-1}}[\xi(0)]
\end{equation*}
for all $y\not = 0$. In this formula, $\mu_{\Lambda_{l}^*,k}$ stands
for the canonical measure defined by $\mu_{\Lambda_{l}^*,k} (\cdot) =
\mu_\rho^{\Lambda_{l}^*} (\cdot | \sum_{x\in \Lambda_{l}^*} \beta(x) =
k)$, where $\mu_\rho^{\Lambda_{l}^*}$ is the product measure
$\mu_\rho$ restricted to the set $\Lambda_{l}^*$. In particular, the
expectation appearing in the statement of the lemma is less than or
equal to $C(H) a^2 \{1 + k\}/(2|\Lambda_l|)$. This concludes the proof
of the lemma in the case $k/(2|\Lambda_l|) \leq \epsilon$.
Assume now that $k/(2|\Lambda_l|)\geq \epsilon$. By definition of the
canonical measure $\nu_{\Lambda_{l}^*,k}$ and the grand-canonical
measure $\nu_\rho^{\Lambda_{l}^*}$, the expectation appearing in the
statement of the lemma is equal to
\begin{equation}
\label{e5}
\frac 1{E_{\mu_{\Lambda_{l}^*,k}} [ \beta(0) ]} \,
E_{\mu_{\Lambda_{l}^*,k}}\Big [ \beta(0) \Big\{ H(\eta^l(0)) -
H(\zeta^l(0)) \Big\} \Big] \;.
\end{equation}
Since the measure is space homogeneous, the denominator is equal to
$\rho_{l,k} = k/(2|\Lambda_l|)$, while in the numerator we may replace
$\beta(0)$ by $\eta^l(0)$. The numerator can therefore be rewritten as
\begin{equation*}
E_{\mu_{\Lambda_{l}^*,k}}\Big [ \Big\{ \eta^l(0) - \rho_{l,k}\Big\}
\Big\{ H(\eta^l(0)) - H(\zeta^l(0)) \Big\} \Big] +
\rho_{l,k} E_{\mu_{\Lambda_{l}^*,k}}\Big [ H(\eta^l(0)) -
H(\zeta^l(0)) \Big]\;.
\end{equation*}
The second term vanishes because the measure $\mu_{\Lambda_{l}^*,k}$
is space homogeneous, while the first one is absolutely bounded by
$C(H) E_{\mu_{\Lambda_{l}^*,k}} [ \, | \eta^l(0) - \rho_{l,k} |\,
]$. By \cite[Corollary 6.1 (C)]{LSV}, this expression is less than or
equal to
\begin{equation*}
C'(H) E_{\mu_{\rho_{l,k}}^{\Lambda_{l}^*}} \Big[ \, \big | \xi^l(0)
- \rho_{l,k} \big |\, \Big ] \;\le\; C'(H) \, \sigma (\rho_{l,k})
\, l^{-1/2}\;,
\end{equation*}
where $\sigma (\rho)$ stands for the standard deviation of $\xi(0)$ under
$\mu_\rho$. By \cite[(5.2)]{LSV} and since $\varphi(\rho)/\rho$ is
bounded below and above, $\sigma (\rho_{l,k})^2 \le C \rho_{l,k}$.
Therefore, if we recall the denominator in \eqref{e5}, we obtain that
\begin{equation*}
E_{\nu_{\Lambda_{l}^*,k}}\Big [ H(\eta^l(0)) - H(\zeta^l(0))
\Big ]\;\le\; \frac {C(H)}{\sqrt{l\, \rho_{l,k}}}\;,
\end{equation*}
which concludes the proof of the lemma since we assumed the density to
be bounded below by $\epsilon$.
\end{proof}
\subsection{Proof of Proposition \ref{l2}}
Recall $H(\rho) = E_{\nu_\rho}[h]$, $H_l(\eta) = H(\eta^l(0))$, and
$\bar{H_l}(\rho) = E_{\mu_\rho}[H_l]$. Then, we have that
\begin{eqnarray*}
&&\bb E^N \Big[ \, \Big|\int_0^t \big\{h(\eta_s(0)) -
\frac{1}{\epsilon N}\sum_{x=1}^{\epsilon
N}\bar{H_l}(\eta_s^{\varepsilon N}(x)) \big\} ds \Big|\, \Big] \\
&& \ \ \ \ \leq \; \bb E^N \Big[ \, \Big|\int_0^t \big\{h(\eta_s(0))
-H(\eta_s^l(0))\big\}ds \Big|\, \Big] \\
&&\ \ \ \ + \; \bb E^N \Big[ \, \Big|\int_0^t \Big\{H(\eta_s^l(0))
-\frac{1}{\epsilon N} \sum_{x=1}^{\epsilon N}
H(\eta_s^l(x))\Big\}ds \Big|\, \Big] \\
&&\ \ \ \ + \; \bb E^N \Big[ \, \Big|\int_0^t \Big\{\frac{1}{\epsilon N}
\sum_{x=1}^{\epsilon N} \Big(H(\eta_s^l(x))-\bar{H_l}
(\eta_s^{\varepsilon N}(x))\Big) \Big\}ds \Big|\, \Big]\;.
\end{eqnarray*}
As $h$ is bounded and Lipschitz, $H$ is bounded and Lipschitz by
Lemma \ref{Lip-lemma} below, and so the first and second terms
vanish by Lemmata \ref{l3} and \ref{l4}. The third term can be
rewritten as
\begin{equation*}
\bb E^N \Big[ \, \Big|\int_0^t \Big\{\frac{1}{N} \sum_{x\in {\bb T}_N}
\iota_\epsilon(x/N)\Big(H(\eta^l_s(x))-\bar{H_l}(\eta_s^{\varepsilon
N}(x))\Big) \Big\}ds \Big|\, \Big]
\end{equation*}
where $\iota_\epsilon(u) = \epsilon^{-1}\mb 1\{u\in (0,\epsilon]\}$. In
fact, as $h$ is bounded, we can replace $\iota_\epsilon$ in the last
expression by a smooth approximation. Then, for fixed $\epsilon>0$
and $l\geq 1$, treating $H_l(\eta) = H(\eta^l(0))$ as a local
function, which is also bounded and Lipschitz since $H$ is, the third
term vanishes by Proposition \ref{p1}, taking $N\uparrow \infty$ and
$\varepsilon\downarrow 0$. \qed
\medskip
\begin{lemma}
\label{Lip-lemma} Let $h:\Omega_N\to \bb R$ be a local, Lipschitz
function. Then, $H:\bb R_+ \rightarrow \bb R$ given by $H(\rho) =
E_{\nu_\rho}[h]$ is also Lipschitz.
\end{lemma}
\noindent{\it Proof.} The proof is similar to
that of Corollary II.3.7 \cite{kl} which shows $\bar{h}(\rho) =
E_{\mu_\rho}[h]$ is Lipschitz. Following the
proof of Corollary II.3.7 \cite{kl},
it is not difficult to show $\{\nu_\rho: \rho\geq 0\}$ is a
stochastically increasing family, and for $\rho_1< \rho_2$ that
$$|H(\rho_1) - H(\rho_2)| \ \leq \ C_h \sum_{x\in
A}|E_{\nu_{\rho_1}}[\eta(x)] - E_{\nu_{\rho_2}}[\eta(x)]|$$
where $C_h$ is the Lipschitz constant of $h$, and $A\subset \bb Z$ corresponds to the support of $h$.
If $A$ does not contain the origin, the proof is the same as for
Corollary II.3.7 \cite{kl}.
Otherwise, it is enough to estimate the difference $|E_{\nu_{\rho_1}}[\eta(0)] -
E_{\nu_{\rho_2}}[\eta(0)]|$. When $0=\rho_2<\rho_1$, the difference equals
$E_{\mu_{\rho_1}}[\eta(0)(\eta(0)-1)]/\rho_1\leq
a\varphi(\rho_1)\leq a^2\rho_1$ as $a^{-1}\leq g(k)/k,
\varphi(\rho)/\rho \leq a$ through (LG), (M), and
$E_{\mu_\rho}[g(\eta(0))f(\eta)] =
\varphi(\rho)E_{\mu_\rho}[f(\eta +\mf d_0)]$ where $\mf d_0$ is the
configuration with exactly one particle at the origin. When
$0<\rho_2<\rho_1$, the difference equals
$$|\rho_1^{-1}E_{\mu_{\rho_1}}[\eta(0)^2] -
\rho^{-1}_2E_{\mu_{\rho_2}}[\eta(0)^2]|
\ \leq \ |\sigma^2(\rho_1)/\rho_1 -
\sigma^2(\rho_2)/\rho_2|
+ |\rho_1-\rho_2|
$$
where $\sigma^2(\rho) = E_{\mu_\rho}[(\eta(0)-\rho)^2]$. The
Lipschitz estimate now follows by calculating a uniform bound on the
derivative
$$\partial_\rho \frac{\sigma^2(\rho)}{\rho} \ =\
\frac{m_3(\rho)}{\rho \sigma^2(\rho)} -
\frac{\sigma^2(\rho)}{\rho^2}$$ where $m_3(\rho) =
E_{\mu_\rho}[(\eta(0)-\rho)^3]$. For $\rho$ large, under assumptions
(LG), (M), this is of order $O({\rho}^{-1/2})$ by Lemma 5.2
of \cite{LSV} and the bound $a^{-1}\leq \varphi(\rho)/\rho\leq a$; on the
other hand, as $\rho\downarrow 0$, the derivative remains bounded.
\qed
\section{Introduction}
Kauffman networks are disordered dynamical systems proposed by Kauffman in
1969 as a model for genetic regulatory systems \cite{K}. They
attracted the interest of physicists in the 80's \cite{DP,DW,DF1,F,FK},
due to their analogy with the disordered systems studied in
statistical mechanics, such as the mean field Spin Glass \cite{MPV}. A
dynamical phase transition was found and studied in the framework of
mean field theory.
In this and in the next paper \cite{BP3} we deal with some structural
properties of the networks that determine their attractors. In the
present paper we introduce the relevant elements, a notion that was
suggested by Flyvbjerg \cite{F} and Flyvbjerg and Kjaer \cite{FK}, and
we study their probability distribution. In the next one we describe
how the relevant elements are subdivided into asymptotically
non-communicating, independent modules. The modular organization of random
boolean networks was already suggested by Kauffman \cite{K}, and it was
used by Flyvbjerg and Kjaer to study analytically the attractors
in $K=1$ networks. We shall show that it is possible to describe the
phase transition in random boolean networks in terms of the scaling of
the number of relevant elements with system size, or in terms of a
percolation transition in the set of the relevant elements. The
interest of this approach is that some consequences about the
statistical properties of the attractors can be directly drawn.
In \cite{BP0} we computed the properties of the attractors in the
framework of the annealed approximation, introduced by Derrida and
Pomeau \cite{DP}, but we observed that the results of this
approximation are reliable only when the system is chaotic enough,
becoming exact for a random map. The study of the relevant elements is
complementary to this approach, and we sketch the lines of a new
approximation scheme that works better in the frozen phase and on the
critical line. This region in parameter space is the most interesting
one, since, according to Kauffman, it reproduces some features of real
cells, and is also the less understood, since neither approximate
computations nor simulations \cite{BP1} give a precise picture of the
properties of the attractors for systems of large size.
In the next section we define the model, discussing some old results
together with open problems. In section 3 we define the relevant
elements and in section 4 we give an approximate argument predicting
the scaling of their number with system size in the different phases
of the model. In the following section we present our numerical
results, starting from the magnetization and the stable elements
(section 5.1) and then discussing the distribution of the relevant
elements and its connection with the properties of the attractors,
respectively in the chaotic phase (section 5.2) and on the critical line
(section 5.3). The discussion of the results is postponed to our
following paper \cite{BP3}, concerning the modular organization of the
relevant elements on the critical line.
\section{Definition of the model and previous works}
The Kauffman model is defined as follows. We consider a set of $N$ elements
$\Omega=\{1,\cdots N\}$ and we associate to each of them a binary
variable, $\sigma_i\in\{0,1\}, i\in\Omega$. In the biological interpretation
proposed by Kauffman each element of the network represents one gene
and the binary variable $\sigma_i$ represents its state of activation.
Each element is under the control of $K$ elements, in the sense that
its state at time $t+1$ is determined by the states at time $t$ of the
$K$ control genes, $j_1(i),\cdots j_K(i)$ and by a response function
of $K$ binary variables, $f_i(\sigma_1,\cdots \sigma_K)\in \{0,1\}$, that
specifies how the element $i$ responds to the signals coming from
its control variables. The control elements are chosen in $\Omega$
with uniform probability. The response functions are also extracted at
random, and it's believed that the properties of the model do not
depend on the details of their distribution
\cite{F,BP0,BP1}. The rule most generally used in the literature is
the following: for each of the $2^K$ possible inputs in $\{0,1\}^K$ we
extract independently the value of $f_i$, and we call $p$ the
probability that $f_i$ is equal to 0.
The dynamics of the system obey the equation
\begin{equation} \sigma_i(t+1)=f_i\left(\sigma_{j_1(i)}(t),\cdots, \sigma_{j_K(i)}(t)\right). \end{equation}
This evolution law is deterministic, but the system is disordered
because the control rules (elements and functions) are chosen at
random from the beginning and kept fixed: thus we deal with a
statistical ensemble of deterministic dynamical systems, and we are
interested in the statistical properties of systems of large size.
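To make the definitions concrete, the following minimal Python sketch
(purely illustrative: the names are ours and this is not the code used for
the simulations described below) draws a random network and performs
synchronous updates; control elements are drawn with replacement, which is
immaterial for $K\ll N$.
\begin{verbatim}
import numpy as np

def make_network(N, K, p, rng):
    """Draw a random Kauffman network: K control elements per element
    and a random Boolean table with each entry 0 with probability p."""
    inputs = rng.integers(0, N, size=(N, K))      # j_1(i), ..., j_K(i)
    tables = (rng.random(size=(N, 2 ** K)) >= p).astype(int)
    return inputs, tables

def step(sigma, inputs, tables):
    """One synchronous update sigma(t) -> sigma(t+1)."""
    idx = np.zeros(len(sigma), dtype=int)
    for k in range(inputs.shape[1]):              # encode the K input bits
        idx = (idx << 1) | sigma[inputs[:, k]]
    return tables[np.arange(len(sigma)), idx]

rng = np.random.default_rng(0)
inputs, tables = make_network(N=50, K=3, p=0.5, rng=rng)
sigma = rng.integers(0, 2, size=50)
for _ in range(10):
    sigma = step(sigma, inputs, tables)
\end{verbatim}
Each row of \verb|tables| stores the $2^K$ outputs of $f_i$, so drawing a
network amounts to drawing $NK$ control indices and $N 2^K$ biased bits.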
For finite $N$, every trajectory becomes periodic after a
long enough transient time, and the configuration space is partitioned into the
attraction basins of the different periodic orbits. We are interested
in the probability distributions of the number, the length and the size
of the attraction basin of the periodic orbits, as well as in that of
transient times. In the biological metaphor, given a set of rules (a
genome) an attractor represents a possible cellular type, its length
represents the duration of the cellular cycle, and the number of
attractors represents the number of cells that can be formed with a
given genome.
It was observed already in the first simulations that two dynamical regimes
are present, and that the line separating them has properties
reminiscent of those of real cells \cite{K}. In the so-called chaotic phase
(large connectivity, $p$ close to $1/2$) the average length of the
cycles increases exponentially with system size. The limit case of the
chaotic phase, $K\rightarrow\infty$, was already known as Random Map in the
mathematical literature, and was studied in detail by Derrida and
Flyvbjerg \cite{DF2}, who pointed out interesting analogies between this
system and the mean field Spin Glass \cite{MPV} concerning the
distribution of the weights of the attraction basins. In the frozen phase,
on the other hand, the typical length of the cycles does not increase with
$N$. The limit case of this phase, $K=1$, was analytically studied to
some extent by Flyvbjerg and Kjaer \cite{FK}, who introduced in that
context the concept of relevant elements (though without using this name).
The first description of this dynamical phase transition in terms of
an order parameter was given by Derrida and Pomeau \cite{DP}. They
studied the evolution of the Hamming distance between configurations in
the Kauffman networks, approximating it by a Markovian stochastic
process. This approximation (the so-called annealed approximation) was
then shown to be exact in the infinite size limit, concerning the
average value of the distance \cite{DW}. Below a critical line in
parameter space the average distance goes to zero in the infinite size
limit ({\it frozen phase}) and above it the distance goes
to a finite value ({\it chaotic phase}). The position of the
phase transition depends only on the parameter $\rho$,
representing the probability that the responses to two different signals are
different\footnote
{In terms of $p$ one has $\rho=2p(1-p)$, so its value lies
between zero and 1/2, but for $K=1$ $\rho$ can be taken as an
independent parameter in [0,1]}, and is given by the equation $\rho_c(K)=1/K$.
The properties of the attractors can be easily computed from the
knowledge of the whole stationary distribution of the distance, and
this can also be obtained within the annealed approximation \cite{BP0}, but the
validity of this approximation in this more general case is not
guaranteed. Comparison with simulations shows that the agreement is
satisfactory in the chaotic phase, while the approximation fails on
the critical line. In the chaotic phase it is possible to compute
the value of the exponent of the typical length of a cycle,
$\tau\propto\exp\left(\alpha(K,\rho) N\right)$, in good agreement with numerical
results, but the distribution of cycle lengths is much
broader than expected. The annealed approximation also predicts
that the distribution of the weights of the attraction basins
is universal in the whole chaotic phase, and equal to the
one obtained by Derrida and Flyvbjerg in the case of the Random Map
\cite{DF2}. The corrections to this prediction appear small, if
any, even for $K=3$. Finally, the number of different
cycles in a network is expected to be linear in $N$, but it is very
hard to test this prediction numerically.
The annealed approximation also makes predictions about the critical
line of the model \cite{BP0}. It predicts that the properties of the
attractors are universal on the critical line $\rho=1/K$ (with the
exception of the points $K=1, \rho=1$ and $K=\infty, \rho=0$, which are
not transition points). In particular, the typical length of the cycles
should increase as $\sqrt N$ all along the critical line. Numerical
results are not clear in this respect \cite{BP1}: it seems that the
rescaled cycle length $l=L/\sqrt N$ has a limit distribution if $l$ is
small (roughly, smaller than 2) but for larger values the distribution
becomes broader and broader as $N$ increases \cite{Bhatta,BP1}, so
that it is possible to define an effective length scale increasing
much faster with system size (as a stretched exponential). The
distribution of the number of cycles has exactly the same
characteristics. These results cast doubts on the validity of the
biological analogy proposed by Kauffman, which relies very much
on the fact that in critical networks the typical number of cycles
scales as $\sqrt N$, reminiscent of the fact that the number of cell
types of multicellular organisms very far apart in the phylogenetic
tree scales as the square root of the number of genes, and that in
critical networks the typical length of the cycles increases as a
power law of system size, consistently with the behavior of cell
cycle times. Thus it is interesting to understand what these
distributions look like in the limit of very large systems.
Another reason of interest of the present approach is that it allows
us to understand the limits of the annealed approximation. In our
interpretation the annealed approximation is valid as long as the system loses
memory of the details of its evolution. This, of course, does not
happen if in a realization of a random network some structural
properties that are able to influence its asymptotic dynamics
emerge. Thus the approach presented here is complementary to the one
used in \cite{BP0}.
\section{Definition of the relevant elements}
Let us start by recalling the definition of the stable elements \cite{F}.
These are elements that evolve to a constant state,
independent of the initial configuration. Flyvbjerg defined them and
computed their fraction $s=S/N$ using the annealed approximation, which
becomes exact in the infinite size limit. We now recall briefly, for
future convenience, the main steps of this calculation.
Let us suppose that an element is controlled by $K-i$ stable elements
and $i$ unstable ones. Then it will be stable if the control function
does not depend on the unstable arguments when the stable arguments
assume their fixed values. Otherwise it will be unstable. When all the
$i$ unstable elements are different (this can always be taken to be the
case if $K$ is finite and $N$ grows), the probability $P_i$ to
choose a constant control function of $i$ binary variables is given by
$P_i=p^{n_i}+(1-p)^{n_i}$, with $n_i=2^i$. In the framework of the
annealed approximation, extracting at random connections and response functions
at each time step, we get the following equation for the fraction of variables
that are stable at time $t$:
\begin{equation}\label{stable}
s(t+1)=\gamma\left(s(t)\right)=
\sum_{i=0}^K {K\choose i} s(t)^{K-i}\left(1-s(t)\right)^i P_i. \end{equation}
This equation can be shown to be exact in the infinite size limit.
The fixed point equation of this map (which can be interpreted as a
self-consistency equation for the fraction of stable variables) has only the
trivial solution $s=1$ in the frozen phase; in other words, all the
elements are stable except possibly a number increasing less than
linearly with $N$. In the chaotic phase this solution becomes unstable
and another solution smaller than 1 appears; the transition takes place when
$K(1-P_1)=1$. Since $1-P_1=\rho$ (it is just the probability that the
responses to two different signals are different), this condition is
equivalent to the one obtained from the study of the Hamming distance.
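The fixed points of this map are easy to explore numerically; the following
sketch (again purely illustrative, with parameter choices of our own) iterates
the map $\gamma$ of equation (\ref{stable}) starting from $s=1/2$.
\begin{verbatim}
from math import comb

def gamma(s, K, p):
    """Mean-field map for the fraction of stable elements,
    with P_i = p**(2**i) + (1-p)**(2**i)."""
    return sum(comb(K, i) * s ** (K - i) * (1 - s) ** i
               * (p ** 2 ** i + (1 - p) ** 2 ** i)
               for i in range(K + 1))

def stable_fraction(K, p, iters=5000):
    """Iterate the map from s = 1/2; the iteration converges to 1 in
    the frozen phase and to a value s* < 1 in the chaotic phase."""
    s = 0.5
    for _ in range(iters):
        s = gamma(s, K, p)
    return s

for K, p in [(3, 0.1), (3, 0.5)]:   # K*rho = 0.54 and 1.5, respectively
    print(K, p, stable_fraction(K, p))
\end{verbatim}
For $K=3$ and $p=1/2$ the iteration should settle close to the mean field
value of the stable core quoted in section 5.1, while in the frozen phase it
returns $s=1$.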
The existence of the stable variables is due to the finite
connectivity of the network ($s^*$ goes to zero very fast when $K$
increases). These variables do not take part in the asymptotic
dynamics. Among the remaining unstable variables, some are
irrelevant for the dynamics, either because they do not send signals to
any other variable, or because they send signals, but the response
functions are independent of this signal when the stable variables
have attained their fixed values. The remaining variables, that are
unstable and control some unstable variable, are what we
call the relevant variables. They are the only ones that can influence
the long time behavior of the system.
To be more precise, we now describe the algorithm that we used to
identify the relevant variables.
As a first step, we have to identify the stable
variables. These are the variables that assume the same constant state
in every limit cycle, and identifying them is computationally very
hard, but very simple in principle. We then eliminate from the system
the stable variables, reducing the response functions to functions of
the unstable variables alone. Some of the connections left are still
irrelevant, and we have to eliminate them (a connection between the
elements $i$ and $j$ is irrelevant if the reduced response function
$f_i(\sigma_{j_1(i)},\cdots \sigma_{j_{K_i}(i)})$ does not depend on the argument
$\sigma_j$ for all the configurations of the remaining $K_i-1$ control variables).
At this point we iterate a procedure to eliminate the irrelevant
variables. At each iteration we eliminate the variables that do not
send any signal to any of the variables that are left, until we
remain with a set that cannot be further reduced. This is the set of
the relevant variables.
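The final pruning step can be phrased as a simple graph reduction. The sketch
below is illustrative (the names are ours) and assumes that the stable
elements and the irrelevant connections have already been removed, so that
\verb|out_edges[i]| lists the elements still receiving a relevant input from
$i$ (including $i$ itself in case of a relevant self-input).
\begin{verbatim}
def prune_irrelevant(nodes, out_edges):
    """Iteratively remove the elements that do not send any signal to
    the elements that are left; what remains is the relevant set."""
    alive = set(nodes)
    changed = True
    while changed:
        changed = False
        for i in list(alive):
            if not (out_edges[i] & alive):   # i no longer influences anybody
                alive.discard(i)
                changed = True
    return alive

# toy example: a chain 1 -> 2 -> 3 feeding nothing, plus a 2-cycle {4, 5}
out_edges = {1: {2}, 2: {3}, 3: set(), 4: {5}, 5: {4}}
print(prune_irrelevant(out_edges.keys(), out_edges))   # {4, 5}
\end{verbatim}
Of course the difficult part, as explained below, is the preliminary
identification of the stable elements.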
Measuring the number of relevant variables is computationally a
very hard task. In order to identify the stable variables, in fact, we
should find all the cycles in the network, and, to be rigorous, we
should simulate a number of trajectories of the same order as the
number of configurations in the system. Of course this is not
feasible and we run only 200 (in some cases 300) randomly chosen
trajectories in every network. Thus we overestimate the
number of stable elements. Nevertheless, the number of stable elements
changes very little when we simulate more initial conditions and we think
that the error that we make is not very large. However, for every
network we simulate some hundreds of trajectories and every
trajectory has to be followed until the closing time. This grows
exponentially with system size in the chaotic phase. On the critical
line the typical closing time increases roughly as a power law of
system size, but the distribution becomes broader and broader and the
average closing time is more and more dominated by rare events. The
average depends thus on the number of samples generated and on the
cutoff of the closing time, {\it i.e.} the maximum time that we are
willing to wait to look for a cycle. To reduce the bias determined by
the cutoff, we had to run simulations lasting a time which increases
roughly as a stretched exponential of system size on the critical
line. Thus it is not possible to simulate systems of more than about
one hundred elements in the chaotic phase and one thousand
elements on the critical line.
\section{Scaling argument in the frozen phase}
The mean field analysis \cite{F} shows that the fraction of
relevant variables vanishes in the frozen phase and on the critical line,
but does not tell how the number of relevant variables scales with $N$ as $N$
grows. In order to clarify this point, we have to go beyond the mean
field picture.
In the special case of $K=1$, belonging to the frozen phase for every
$\rho <1$, there are detailed analytical results about the distribution
of the relevant variables \cite{FK}. We propose here a rough argument
that generalizes those results to the whole frozen phase and predicts
that the typical number of relevant elements scales as $\sqrt N$ on
the critical line. Though this argument is based on some
approximations which we can not control, its results
coincide for $K=1$ with the exact results by Flyvbjerg and Kjaer.
Let us suppose that we add a new element to a system with $N$ elements, $R$ of
which are relevant, while $S$ are stable and $I=N-R-S$ are indifferent,
{\it i.e.} neither stable nor relevant. The probability that the new element
is relevant can be computed as a function of $R$ and $S$, within some
approximations that we are going to discuss in a while. This probability is
equal to the fraction of relevant elements in the system with $N+1$
elements, given that the relevant elements are $R$ and the stable ones are
$S$ in the system with $N$ elements. We can then average over $R$ and
$S$ in order to get an equation connecting $r_{N+1}=\langle R\rangle_{N+1}/(N+1)$
to the moments of the distribution of $R$ in the system with $N$
elements. Since in the frozen phase and on the critical line $r_N$
vanishes, it will be enough to consider the first two moments of the
distribution, and the resulting equation can be solved asymptotically
in $N$.
The weakness of this approach lies in the assumptions that allow us to
express the probability that the new element is relevant as a function
of $R$ and $S$, as it will become soon clear.
We compute now this probability. To this aim, we need two steps:
\begin{enumerate}
\item As a first step, we have to extract the $K$ control elements and the
response function of the new element. As a consequence, the
new element can be stable, unstable or, if it receives an input from itself and
this input is relevant in the sense discussed above, relevant.
The evaluation of the stability is perfectly equivalent to the mean field
argument, but this stability is only temporary because it can be altered by the
second step described below. Thus we call a new element that is stable
(unstable) after the first step a {\it temporarily} stable (unstable) element.
\item Then we have to send to the old system the signal of the new
element. For each of the $KN$ old control connections we have a probability
$1/(N+1)$ that the connection is broken and the old control element is
substituted by the new element. This step perturbs the elements that
control the new element and modifies its temporary stability. We
have no chance to take this into account, unless we use some drastic
approximations.
\end{enumerate}
In the second step, three situations can occur.
\begin{enumerate}
\item If the new element was relevant in the first step, the second step cannot
modify this condition.
\item If the new element was unstable, it cannot become stable through the
feedback of its signal. So it will be relevant or indifferent, depending on
whether it sends an input to at least one relevant element or not.
\item If the new element was stable, its signal can destabilize some of the
elements that control it and thus it can become relevant through a feedback
mechanism, very hard to investigate analytically.
\end{enumerate}
To compute the probability of case 3, we should know the organization
of the network in great detail, and not only the number of relevant
and stable elements. We propose to bypass this difficulty considering
a different event: we will consider the new element relevant if it
receives a signal from a previously relevant element or from
itself. This is the simplest way to get a closed equation for the
average number of relevant elements. In this way we make two errors
of opposite sign: on the one hand we overestimate the probability that a
temporarily unstable element becomes relevant, on the other we
underestimate the probability that the new element is temporarily
unstable and we neglect the probability that a temporarily stable
element becomes relevant through a feedback loop.
We think that this method captures at least the qualitative behavior of
the number of relevant elements. We have then to compare the estimate
given by this approximation to the simulations, because the
approximation is not under control. We present this argument because
its results agree with both the numerical results and with the
analytical calculations for $K=1$ and because we believe that it is
possible to improve this method and to keep the approximation under control.
Since we are interested in the frozen phase, where the fraction of unstable
elements vanishes in the infinite size limit, we can neglect the
possibility that the new element is controlled by more than two
elements that were relevant in the old system. The results are
consistent with this assumption.
With these approximations we obtain the following equation for the
probability that the new element is relevant:
\begin{eqnarray} \langle r\rangle_{N+1}=\sum_{n=0}^N \Pr\left\{R_N=n\right\}
\left[K\rho\left({n+1\over N+1}\right) \left(1-{n+1\over N+1}\right)^{K-1}\right.\label{rilev} \\
+\left.\rho_2{K\choose 2}\left({n+1\over N+1}\right)^2
\left(1-{n+1\over N+1}\right)^{K-2}\right], \nonumber\end{eqnarray}
where $\rho_2$ represents the probability that a Boolean function of two
arguments is not constant and in terms of $\rho$ is given by
\begin{equation} \rho_2=1-p^4-(1-p)^4=\rho\left(2-{\rho\over 2}\right). \end{equation}
In the frozen phase it is sufficient to consider that the new element receives
only one signal from the previously relevant elements. So, setting $c=K\rho$,
the equation for the new fraction of relevant elements, $r$, is
\begin{equation} \left\langle r\right\rangle_{N+1}\approx c\left\langle r\right\rangle_N+{c\over N}. \end{equation}
The first term represents a new element that receives a relevant signal from
one of the previously relevant elements, the second term represents a new
element that receives its own relevant signal.
Thus the average number of relevant elements is asymptotically independent of $N$ and its
asymptotic value is
\begin{equation} \left\langle R\right\rangle_N={c\over 1-c}.\label{froz} \end{equation}
This number diverges on the critical line $c=1$. In this case, we have to
consider also the possibility that the new element receives a signal from two
of the previously relevant elements. Expanding to the second order in
$r=R/N$, and using the fact that $\rho_c=1/K$, we get the equation
\begin{equation} \left\langle r\right\rangle_{N+1}\approx \left\langle r\right\rangle_N-\left({K-1\over 4K}\right)
\left\langle r^2\right\rangle_N+{1\over N}, \end{equation}
whence, in the asymptotic regime where the variations of $\langle r\rangle_N$ are of
order $r/N$, we finally get
\begin{equation} \left\langle r^2\right\rangle_N\approx\left({K-1\over 4K}\right){1\over N}. \end{equation}
This means that the scale of the number of relevant elements grows, on the
critical line, as $\sqrt N$.
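Before discussing the validity of these computations, we note that the frozen
phase recursion is easily checked numerically (the script is ours and purely
illustrative):
\begin{verbatim}
def mean_relevant(c, n_max=100000):
    """Iterate <r>_{N+1} = c <r>_N + c/N and return N <r>_N at n_max."""
    r = 0.0
    for n in range(1, n_max):
        r = c * r + c / n
    return n_max * r

for c in (0.3, 0.5, 0.8):
    print(c, mean_relevant(c), c / (1 - c))   # the two values roughly agree
\end{verbatim}
On the critical line, by contrast, the recursion for $\langle r\rangle_N$ alone
does not close, since it involves $\langle r^2\rangle_N$.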
We stress here that these computations are valid because of the finite
connectivity of the system. If we perform the limit $K\rightarrow\infty$ on the above
result, we get that the scale of the number of relevant elements grows as
${1\over 2}\sqrt N$. If, instead, we apply the limit $K\rightarrow\infty$ prior to the limit
$N\rightarrow\infty$ we get the trivial critical point $\rho=0$, where all the elements
are stable after one time step, while for every other $\rho$ value all the
elements are relevant.
Thus, the two limits do not commute. In fact, the equation (\ref{stable}) for
the fraction of stable variables and all the computations performed in this
section are valid only if we can neglect that the same element is chosen more
than once to control a given element, {\it i.e.} for $K\ll N$.
\vspace{0.5cm}
The result (\ref{froz}) coincides for $K=1$ with the analytical
computation by Flyvbjerg and Kjaer \cite{FK}, thus suggesting that the
distribution of relevant elements is independent of $N$ in the whole
frozen phase, and depends on the two parameters $K$ and $\rho$ only
through their product. This picture agrees with the results of the
annealed approximation, which predicts that the distribution of the
number of different elements in two asymptotic configurations is
independent on $N$ and depends only on the product of the parameters
$K$ and $\rho$ in the frozen phase \cite{BP0}.
Our simulations confirm that on the critical line the number of
relevant elements scales as $\sqrt N$ (see figure \ref{fig_ril4}). Also
the annealed approximation is consistent with this result, since it
predicts that the number of elements whose state is different in two
asymptotic configurations has to be rescaled with $\sqrt N$ on the
critical line \cite{BP0}. On the other hand the number of unstable
elements grows much faster with $N$ (numerically it is found that it
goes as $N^{3/4}$, see below) but this discrepancy is only apparent,
since the asymptotic Hamming distance is related more to the number of
relevant elements than to this quantity.
For later convenience (see our next paper) it is also interesting to
compute the effective connectivity, defined as the average number of
relevant connections between relevant elements. Let us compute it by
conditioning on the event that the network has $R$ relevant elements.
The effective connectivity is equal to the average number of connections
between the new element and the other relevant elements of the old system,
with the condition that the new element is relevant. From equation
(\ref{rilev}) we have, at the leading order in $R/N$:
\begin{eqnarray} \label{Ceff} K_{eff}(R)=&&
{c{R\over N} \left(1-{R\over N}\right)^{K-1}+2\rho_2{K\choose 2}\left({R\over N}\right)^2\left(1-{R\over N}\right)^{K-2}\over
c{R\over N} \left(1-{R\over N}\right)^{K-1}+\rho_2{K\choose 2}\left({R\over N}\right)^2\left(1-{R\over N}\right)^{K-2}}\\\
\nonumber &&\approx 1+A(K,\rho){R\over N}. \end{eqnarray}
This equation shows that the effective connectivity minus 1 goes to
zero as $R/N$ in the frozen phase (where $R/N\propto 1/N$) and on the
critical line (where $R/N\propto 1/\sqrt N$). For a fixed system size,
the effective connectivity increases linearly with the number of
relevant elements.
\section{Numerical results}
\subsection{Magnetization and stable elements}
As a first step, our algorithm has to identify the stable elements. It
does this by measuring their magnetization. We thus discuss our numerical
results starting from this quantity.
The magnetization $m_i^\alpha$ of element $i$ on the cycle $\Gamma_\alpha$ can
be defined as the average activity of the element along the cycle:
\begin{equation} m_i^\alpha={1\over L_\alpha}\sum_{C\in \Gamma_\alpha}\sigma_i(C). \label{mag}\end{equation}
The distribution of this variable, shown in figure \ref{fig_magneti}
for $K=3$ and $N=75$, has many peaks, corresponding to simple
rational values. This perhaps reflects the fact that the
relevant elements are divided into asymptotically independent modules,
so that a cycle can be decomposed into several independent shorter
cycles. This subject will be further discussed in our second paper.
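Using the update function \verb|step| of the sketch in section 2, this
definition can be evaluated directly; the fragment below (illustrative, and
reusing the imports of that sketch) detects the limit cycle by storing the
visited configurations and then averages over it.
\begin{verbatim}
def cycle_magnetization(sigma0, inputs, tables, t_max=10 ** 5):
    """Iterate until a configuration repeats, then average each sigma_i
    over the limit cycle (the definition of m_i^alpha above)."""
    seen, traj = {}, []
    sigma = sigma0.copy()
    for t in range(t_max):
        key = sigma.tobytes()
        if key in seen:                    # the trajectory has closed
            cycle = traj[seen[key]:]       # configurations on the cycle
            return np.mean(cycle, axis=0)  # m_i^alpha for every element i
        seen[key] = t
        traj.append(sigma.copy())
        sigma = step(sigma, inputs, tables)
    raise RuntimeError("no cycle found within t_max steps")
\end{verbatim}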
Our results have to be compared to the analytical work by Derrida and
Flyvbjerg \cite{DF3}.
They defined the magnetization of element $i$ at time $t$ on a given network
as the activity of the element at time $t$ averaged over many initial
configurations and could compute analytically its stationary
distribution, in the limit $N\rightarrow\infty$, using
the annealed approximation, that can be shown to be exact for this purpose.
The picture they got is different from ours; in particular we see
peaks much higher than theirs. For instance the peak at $|2m-1|
=1$, which gives information on the size of the stable core of
the network, is about 10 times larger than expected, and the first
moments of the magnetization, that can be computed analytically, are
larger than the predicted values. Thus we performed other simulations
that strongly suggest that these discrepancies are finite size
effects, and we present an argument that explains their origin.
In order to investigate larger systems we had to change the definition
of the magnetization. The definition (\ref{mag}) is numerically
cumbersome, since the measurement can take place only after a cycle has
been found, and this means, for chaotic systems, that we have to wait
a time exponentially increasing with $N$. Thus we neglected this
condition and we measured the magnetization of the variable $i$ at
time $t$ as the average activity of the variable with respect to
different initial conditions (this definition coincides with the one
used by Derrida and Flyvbjerg). For very large $t$, when all
trajectories have reached a limit cycle, this quantity tends to the
asymptotic value
\begin{equation} m_i=\sum_\alpha W_\alpha m_i^\alpha,\label{magna}\end{equation}
where $W_\alpha$ is the weight of the basin of cycle $\Gamma_\alpha$ and
$m_i^\alpha$ is defined in (\ref{mag}). We observed that $m(t)$
reaches a stationary value (within some precision) much earlier than the
typical time at which the trajectories reach their limit cycles. At
first sight surprisingly, the time after which $m(t)$ reaches its stationary
distribution does decrease with system size instead of increasing
(see figure \ref{fig_score}).
We measured the second and fourth moment of the magnetization in a
system with $K=3$ and $\rho=1/2$, and we
found a large positive correction to the infinite size values computed
by Derrida and Flyvbjerg \cite{DF3}. The values found coincide
within the statistical error with those obtained from equation
(\ref{mag}) for a system of small size for which we did an explicit
comparison. These values can be fitted by the sum of the infinite size
value, taken from \cite{DF3}, and an exponentially decreasing
term. The exponent of the best fit turned out to be the same for both
the moments that we measured: we found
$m_2(N)\approx 0.236+0.24\cdot \exp(-N/70)$, and
$m_4(N)\approx 0.128+0.26\cdot \exp(-N/70)$.
\vspace{0.5cm}
The measurement of the magnetization allows us to identify the stable
elements as the elements with $\sum_\alpha W_\alpha m_i^\alpha$ equal either to 0
or to 1. The two definitions of the magnetization gave roughly the same
number of stable elements in the cases where we could compare the
results, but with the second method we could consider much larger
systems (we recall that the difference between the
two methods is that in the first case a cycle has been reached while
in the second one the system is still in some transient
configuration). The second method was used only to study finite size
effects, since it does not allow us to identify the
relevant elements (see below).
Both methods overestimate the number of stable elements, since
it could happen that an element appearing stable in our sample of
trajectories (some hundreds) oscillates in a cycle that is not reached
by any of them. We checked that the results do not change qualitatively
if we consider a larger number of trajectories.
The fraction of stable nodes measured in simulations with $K=3$
and $N$ ranging from 50 to 200 has been compared to the prediction of the
mean field theory by Flyvbjerg. The networks with $N=50$ have a stable core
about 10 times larger than the mean field value (in this case we
measured the magnetization using both the above definitions, while for
larger systems only equation (\ref{magna}) was used). The corrections
to the mean field value, which is exact in the infinite size limit,
appear to decay exponentially with a rate identical, within
statistical errors, to the decay rate of the corrections to the moments of the
magnetization: we found
$s(N)\approx 0.0122+0.21\cdot \exp(-N/70)$.
For every system size that we simulated, the stable core is
thus much larger than it would be in an infinite system.
On these grounds, we may expect very important finite size effects on
the dynamical properties of the system.
\vspace{0.5cm}
Summarizing, the distribution of the magnetization for finite systems
has the following characteristics: 1) The asymptotic value is reached
after a time that {\it decreases} with system size; 2) the
corrections to the infinite size values are very large; and 3) these
corrections decrease exponentially with system size.
These apparently strange finite size effects have a simple
interpretation: they arise as a consequence of the periodic dynamics of
the random networks.
The mean field values of the magnetization and of the
stable core are computed within the annealed approximation without
taking into account the fact that the asymptotic dynamics is
periodic. As we proposed in \cite{BP0}, the existence of limit cycles must
be taken into account in the framework of the annealed approximation
in this way: if at time $t$ all the configurations generated are
different ({\it i.e.} the trajectory is still open) we treat the
quantities of interest (distance, magnetization or stable core) as a
Markovian stochastic process; if one configuration has been found twice
(the trajectory is closed) we impose the condition that all
quantities are periodic. Thus the master equation for the distribution
for the number of stable variables is, in the framework of the annealed
approximation:
\begin{eqnarray}
& & \Pr\left\{ S(t+1)=S',O(t+1)\mid S(t)=S, O(t)\right\}=
{N\choose
S'}\left(\gamma(s)\right)^{S'}\left(1-\gamma(s)\right)^{N-S'}\left(1-\pi_N(S,t)\right)
\nonumber\\
& & \Pr\left\{ S(t+1)=S',\overline{O(t+1)}\mid S(t)=S, O(t)\right\}=
{N\choose S'}\left(\gamma(s)\right)^{S'}\left(1-\gamma(s)\right)^{N-S'}\pi_N(S,t)
\nonumber\\
& & \Pr\left\{ S(t+1)=S',\overline{O(t+1)}\mid S(t)=S,
\overline{O(t)}\right\}= \delta_{SS'}, \label{stable2}
\end{eqnarray}
where $S(t)$ is the number of stable elements, $s=S/N$, $O(t)$ stands for
the condition that the trajectory is open at time $t$ (no
configuration has been visited twice), $\overline{O(t)}$ stands for
the condition complementary to $O(t)$ (the trajectory has closed on a
previously visited configuration) and
$\pi_N(S,t)$ is the probability that a trajectory open
at time $t$ and with $S$ stable elements at time $t+1$ closes at that
time. Finally, $\gamma(s)$ is given by equation (\ref{stable}).
We don't know how to compute $\pi_N(S,t)$, but it is clear that this
is an increasing function of $S$ for fixed $t$: the more elements are
stable, the more likely it is that the trajectory closes. The
infinite size value of the stable core is given by equation
(\ref{stable}), which represents the evolution of the most probable
number of stable variables. It is clear that the corrections to this value
are positive, and that they go to zero as soon
as the closing time becomes much larger than the time necessary for
the stable core to reach its stationary value in an infinite system
(where all trajectories are still open). Thus we expect that these
corrections vanish as a power law of the typical length of the cycles:
in the chaotic phase this means that the finite size corrections due to this
effect vanish exponentially with system size, as we observed
simulating systems with $K=3$ and $\rho=1/2$. Lastly, this argument
implies that the time after which the distribution of the stable
elements becomes stationary is shorter in an infinite system than in a
small system, where the evolution of $S(t)$ is coupled to the closure
of the periodic orbits. Thus correcting the annealed
approximation to take into account the existence of periodic
attractors can account for all the features of the finite size effects
that we observed.
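The modified annealed process defined by equation (\ref{stable2}) can also be
iterated stochastically. The sketch below is only illustrative: both
$\gamma(s)$ and $\pi_N(S,t)$ enter as user-supplied placeholder functions,
the latter because, as noted above, it is not known in closed form.
\begin{verbatim}
import numpy as np

def annealed_trajectory(N, gamma, pi_close, t_max, rng, S0=0):
    """Iterate the process of Eq. (stable2): while the trajectory is open,
    the number of stable elements S is redrawn binomially with success
    probability gamma(S/N); the trajectory then closes with probability
    pi_close(S, t), after which S is frozen (periodicity condition)."""
    S, is_open = S0, True
    history = [S]
    for t in range(t_max):
        if is_open:
            S = rng.binomial(N, gamma(S / N))
            if rng.random() < pi_close(S, t):
                is_open = False
        history.append(S)          # closed trajectories keep S fixed
    return np.array(history), is_open
\end{verbatim}
Averaging \texttt{history} over many runs gives the finite-$N$ distribution of
the stable core, to be compared with the infinite-size fixed point of
$\gamma$; with any $\pi_N$ that increases with $S$, the corrections are
positive, in agreement with the argument above.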
\subsection{The relevant elements in the chaotic phase}
After having identified the stable elements we detect the relevant
elements using the algorithm described in the second section and we
study how this quantity influences the dynamical properties
of the network. The main results are that the average
cycle length grows almost exponentially with the number of relevant
variables over some range of this variable, and that the average weight of the
attraction basins apparently has a non-monotonic behavior as a function of the
number of relevant variables. These qualitative features are the same both in the
chaotic phase and on the critical line, but the ranges of $R$ in which
these features appear are quite different in the two cases. We start by
discussing the situation in the chaotic phase.
The simulations were done by generating at random $20,000$ sample
networks and running 200 trajectories on each of them. The parameters
considered in this section are $K=3$, $\rho=1/2$ and
system size $N$ ranging from 30 to 60 elements.
Figure \ref{fig_ril} shows the density of the distribution of the fraction
$r_N$ of relevant variables, $r_N=R/N$. The density at the
most probable value increases with system size, and it appears that
$r_N$ tends to be delta-distributed in the infinite size limit, as
expected on the grounds of the annealed master equation (\ref{stable2}).
We observe an excess of networks with very few relevant
elements ({\it i.e.} very many stable elements), consistent with the
finite size effects discussed in the last section. This excess seems to
disappear in the infinite size limit.
Then we show the average length of the cycles in networks with $R$
relevant elements (figure \ref{perril}).
This quantity increases almost exponentially with $R$ when
$r=R/N$ is large, while its behavior is different for small $r$. The
crossover takes place at about $r=0.5$. Thus the number of relevant
elements turns out to have a very important influence on the typical
length of the cycles.
We have also measured the conditional distribution of the length of
the cycles in networks with $R$ relevant elements. When $R$ is close
to $N$ the distribution decays as a stretched exponential with an
exponent smaller than one, very close to the one found in the
unconditioned distribution. Thus the deviation of the unconditioned
distribution from the prediction of the annealed approximation, which
predicts a much narrower distribution, is not a consequence of the
existence of the relevant elements.
The other quantity that we measured is the average weight of
the attraction basins, $Y_2$, defined by the equation
\begin{equation} Y_2=\sum_\alpha W_\alpha^2, \end{equation}
where $W_\alpha$ is the weight of the attraction basin of cycle $\alpha$. We used the method
proposed by Derrida and Flyvbjerg \cite{DF1}, that is based on the
fact that $Y_2$ is
equal to the probability that two trajectories chosen at random
end up on the same attractor.
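A minimal sketch of this estimator, reusing the network update of the sketch
given earlier, is shown below; the number of pairs and the cutoff time are
illustrative, and chaotic samples with very long transients would in practice
require a much larger cutoff, as discussed above.
\begin{verbatim}
import numpy as np

def find_attractor(state, inputs, tables, t_max=100000):
    """Iterate until a configuration recurs; return a canonical label for
    the attractor (the lexicographically smallest state on the cycle)."""
    seen, history = {}, []
    for t in range(t_max):
        key = state.tobytes()
        if key in seen:
            return min(history[seen[key]:])
        seen[key] = t
        history.append(key)
        state = step(state, inputs, tables)
    raise RuntimeError("no cycle found within t_max steps")

def estimate_Y2(inputs, tables, N, n_pairs=1000, rng=None):
    """Y_2 estimated as the fraction of random pairs of initial conditions
    that reach the same attractor (Derrida-Flyvbjerg estimator)."""
    if rng is None:
        rng = np.random.default_rng()
    same = 0
    for _ in range(n_pairs):
        a = find_attractor(rng.integers(0, 2, N), inputs, tables)
        b = find_attractor(rng.integers(0, 2, N), inputs, tables)
        same += (a == b)
    return same / n_pairs
\end{verbatim}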
From our data (not shown) it appears that $Y_2$ has a non-monotonic behavior
as a function of $r$: for very small $r$ it decreases from the value 1,
corresponding to $r=0$, reaches a minimum and rapidly increases. At
large $r$, $Y_2$ does not seem to be correlated with $r$ (at least
within the statistical error, which is rather large). We will
see in the next paper that the decreasing behavior at small $r$ can be
interpreted as an effect of the modular organization of Kauffman networks.
\subsection{Relevant elements on the critical line}
We simulated systems with $K=4$ and the critical value
$\rho=1/4$. System sizes range from 120 to 1600. Concerning the
statistical properties of the attractors, these networks have a
behavior very similar to that of the more studied $K=2$, $\rho=1/2$
networks \cite{BP1}.
In these networks, the number of relevant elements appears to scale as
$\sqrt N$, in agreement with the argument presented in section 4. The
number of unstable elements, on the other hand, appears
to scale as $N^{3/4}$. This implies that the probability to extract
at random an element which is relevant, scaling as $N^{-1/2}$, is
approximately proportional to the square of the probability to extract
at random an element which is unstable ($N^{-1/4}$).
These scaling laws can be observed both by looking at the average quantities
and by looking at the whole distribution. The average number of unstable
variables is found to follow the power law $U\propto N^a$, with
$a=0.74\pm 0.01$. We then define the rescaled
variable $x_u=U/N^{3/4}$, and we compare its probability density
for various system sizes. As it can be seen in figure
\ref{fig_unst} the different curves superimpose within the
statistical errors. This suggests that $x_u$ has a well defined
probability density in the infinite size limit, although our data are
too noisy to settle this point beyond doubt. We can distinguish in
the distribution three different ranges with different
characteristics: at vanishingly small values of $x_u$ (ranging from
$U=0$ up to $U=4$) the density decreases very fast.
At intermediate values, roughly up to $x_u=1$, it appears to decrease
approximately as a power law with a small exponent (the best fit exponents
that we found range from 0.25 to 0.40, showing some tendency to
increase with system size). Asymptotically, for large $x_u$, the best
fit is a stretched exponential, $f(x)\approx \exp\left(-Cx^\beta\right)$, with an
exponent compatible with $\beta=1.7\pm 1$ for all the systems that we
studied with $N$ larger than 240.
The number of relevant variables was studied in a similar way. Its
average value increases as a power law of $N$, $\langle R\rangle\propto N^b$, with
$b=0.52\pm 0.02$. The rescaled variable $x_r=R/\sqrt N$ appears to
have a well defined distribution in the infinite size limit, as is
shown in figure \ref{fig_ril4}, where the probability density of $x_r$
is plotted for system sizes ranging from 120 to 1600.
For large $x$ the density of the distribution is well fitted by a
stretched exponential, $\exp\left(-Cx^\beta\right)$, with the exponent $\beta$
compatible with the value $\beta=0.56\pm 0.02$ for system size larger
than 240.
\vspace{0.5cm}
The average length of the cycles increases exponentially as a function of the
number of relevant elements, for $r$ large, and more slowly for $r$
small, just as it happens in the chaotic phase. Figure
\ref{fig_perril4} shows on a logarithmic scale the behavior of the
average length of the cycles as a function of the rescaled number of relevant
elements, $x_r=R/\sqrt N$, for different system sizes at the critical
point $K=4$, $\rho=1/4$.
The average weight of the attraction basins, $Y_2$, has a non-monotonic
behavior as a function of the number of relevant elements,
as happens in the chaotic phase. The value of $Y_2(R)$ is one for
$R=0$, then decreases to a minimum value and increases very slowly, as
is shown in figure \ref{fig_yril4}, where $\overline{Y_2}$ is
plotted against $x_r=R/\sqrt N$, for $K=4$, $\rho=1/4$ and different
system sizes.
Nevertheless, there are two important
differences with respect to the chaotic phase: first, the range where
$Y_2(R)$ is a decreasing function is much wider on the critical line than in
the chaotic phase; second, on the critical line the curves corresponding to a
smaller $N$ value are lower, while in the chaotic phase the contrary
holds. As a consequence, if we average $Y_2(R)$ over $R$ on the
critical line, we get a quantity vanishing in the infinite size
limit \cite{BP1}, while the average weight of the attraction basins is
finite and very close to the Random Map value in the chaotic phase \cite{BP0}.
This difference and the non-monotonic $R$ dependence of $Y_2(R)$ have
a clear interpretation in the framework of the modular
organization of Kauffman networks \cite{BP3}. We thus postpone the
discussion of these results to that paper.
\section{Introduction}
Classical radio galaxies are the quintessential type II active galactic
nuclei (AGN): accreting super-massive black holes that have their continuum
emission in the UV/optical/NIR absorbed by dust, thus primarily giving them
the appearance of {\it normal} star-forming galaxies at these wavelengths.
The main evidence that they host super-massive black holes comes
from the high luminosities of their radio 'lobes' that are fed by radio jets
originating from the host galactic nuclei (Rees 1978). The lobe spatial
extents (tens of kilo-parsecs) and the luminosities
($L_{1.4{\rm GHz}}\ge10^{25}$\thinspace WHz$^{-1}$) rule out emission from
star-formation. To be more precise, radio galaxies are Fanaroff-Riley type II
objects with edge-brightened radio lobes. In terms of the orientation
unification scheme for AGN (Antonucci 1984) radio galaxies are analogous
to radio loud quasars obscured in the UV/optical/NIR.
Due to their large radio luminosities, radio galaxies were the predominant
way to probe the distant universe until the advent of ultra-deep optical
surveys in the last decade. In fact, radio galaxies were the first galaxies
to be found above redshifts 1, 2, 3 and 4 ({e.g.,}\ Stern \& Spinrad). Since
their first discovery it has been known that the optical hosts of luminous
radio sources are primarily giant elliptical (gE and cD) galaxies
(Matthews {et al.}\ 1964). In the more distant universe, indirect evidence
that this association remains intact comes from the detection of normal
elliptical host galaxies with $r^{1/4}$ law light profiles in $HST$/NICMOS
observations
of high-redshift radio galaxies (HzRGs) at 1$\mathrel{\spose{\lower 3pt\hbox{$\mathchar"218$}$$z$$\mathrel{\spose{\lower 3pt\hbox{$\mathchar"218$}$2
(Pentericci et al. 2001; Zirm et al. 2003); the tendency for HzRGs to
reside in moderately rich (proto-)cluster environments (Venemans et
al. 2002; Stern et al. 2003); the spectacular ($>$100\,kpc) luminous
\ifmmode {\rm Ly\alpha}\else{\rm Ly$\alpha$}\fi\ haloes seen around several sources, implying large gas
reservoirs (Reuland et al. 2003; Villar-Mart\'\i n et al. 2003);
sub-mm detections of HzRGs, implying violent star formation activity
up to $\sim$100\,M$_{\odot}$yr$^{-1}$ (Archibald et al. 2001; Reuland
et al. 2004); and a few direct kinematic measurements of HzRGs (Dey \&
Spinrad 1996). The most compelling evidence of this association of
HzRGs with the most massive systems, however, is the tight correlation
of the observed near-infrared Hubble, or $K-z$ diagram for powerful
radio sources (De Breuck et al. 2002, Rocca-Volmerange et
al. 2004): HzRGs form a narrow redshift sequence which traces the
envelope of radio-quiet galaxies and is well-modeled by the evolution
of a stellar population formed at high redshift from a reservoir of
$10^{12}$\,M$_{\odot}$.
With the more recent
discovery that the stellar bulge and central black hole masses of
galaxies are closely correlated, it is no longer a surprise that the
parent galaxies of the most powerful radio sources occupy the upper
end of the galaxy mass function (Magorrian 1998; Tremaine 2002).
The peak of the stellar emission at $1.6\thinspace\hbox{$\mu {\rm m}$}$ of elliptical
galaxies has been
found to be a reasonably robust measure of the stellar mass for old
passively evolving systems. The mass-to-light ratio at this
wavelength does not vary greatly for ages $\mathrel{\spose{\lower 3pt\hbox{$\mathchar"218$}$ 1Gyr. The {\it Spitzer
Space Telescope} now allows us to observe this feature
in distant sources in the rest frame. In particular the four bands of the
IRAC instrument and the IRS $16\thinspace \hbox{$\mu {\rm m}$}$ peak-up imager straddle
the rest frame $1.6\thinspace \hbox{$\mu {\rm m}$}$ flux density at $1\le$$z$$\le 5$.
\subsection{Sample selection}
In order to investigate the formation and evolution of the most massive
galaxies we have performed a {\it Spitzer} survey of 70 HzRGs in GO cycle 1.
These HzRGs have also been carefully chosen to cover the full range of
redshifts from $z=1$ to the redshift of the highest known radio galaxy
($z=5.2$) and two orders of magnitude in radio luminosity, preferentially
selecting targets with supporting data from $HST$, {\it Chandra} and
SCUBA/MAMBO. By covering this parameter space, any trends with radio
luminosity or redshift should be apparent.
The observations consist of photometry in eight bands from the three
instruments aboard {\it Spitzer}, exercising the full complement of
imaging capabilities (IRAC: 3.6, 4.5, 5.8 and 8$\,\hbox{$\mu {\rm m}$}$; IRS: $16\,\hbox{$\mu {\rm m}$}$; MIPS:
24, 70 and $160\hbox{$\mu {\rm m}$}$). Due to uncertainty in the ability of
MIPS to image against the Galactic infrared
background at the time of submission of GO Cycle 1, we chose only to image
26/70 HzRGs with MIPS. The other 54 sources have been proposed for
observation in {\it Spitzer} Cycle 3. The IRS images are only for the 46 objects
above $z=2$ as below this redshift the $8\,\hbox{$\mu {\rm m}$}$ IRAC channel adequately covers
the longward side of the $1.6\,\hbox{$\mu {\rm m}$}$ bump.
The 'SHizRaG' team keep track of this project through a private webpage which
we intend to make public eventually. Currently a restricted version of the
webpage is available here:
\noindent
{\tt http://spider.ipac.caltech.edu/staff/seymour/SHizRaGs.html}
\begin{figure}
\begin{minipage}{0.6\linewidth}
\psfig{figure=postage.ps,width=7cm,angle=0}
\end{minipage}
\begin{minipage}{0.4\linewidth}
\psfig{figure=4c23.56_pretty.ps,width=6.0cm,angle=-90}
\end{minipage}
\caption{{\bf (Left)} Postage stamp images of 4C23.56 at z=2.48: the 4
IRAC bands (top
row - wavelength increasing left to right) followed by the IRS $16\,\hbox{$\mu {\rm m}$}$
peak-up image and the MIPS bands (bottom row) showing a clear detection
in each waveband out to $70\,\hbox{$\mu {\rm m}$}$. {\bf (Right)} SED fitting of 4C23.56 {\it
Spitzer} data using an elliptical galaxy template and black-bodies
of various temperatures.}
\end{figure}
\section{Stellar Luminosities and Masses}
\subsection{SED fitting}
In order to determine the $1.6\,\hbox{$\mu {\rm m}$}$ stellar luminosity, the contribution at
this wavelength from hot, AGN heated dust needs to be ascertained. We have
performed this analysis for the sample of 17 MIPS HzRGs which have
$24\,\hbox{$\mu {\rm m}$}$ detections as well as IRAC observations. For the
modeling presented here we have chosen to use just the {\it Spitzer} data.
The principal reason is that shorter wavelengths, {\it i.e. } the rest-frame optical,
are more likely to have significant contributions from young
stellar populations, emission lines, and AGN continuum.
We model the observed 3.6 to $24\,\hbox{$\mu {\rm m}$}$ data using an elliptical stellar
template and three black-bodies. The elliptical stellar SED is taken from
the P\'EGASE (Fioc \& Rocca-Volmerange 1997) software and corresponds
to a passively evolving $10^{12}\,M_\odot$ stellar population
formed at $z=10$; {\it i.e. } for each redshift a template of the correct age was
chosen although the SED does not evolve significantly after 1\,Gyr ({\it i.e. }
$z\sim4.4$). The three black-bodies were chosen to be at 60\,K (analogous to
the temperature of cold dust found from sub-mm observations), 250\,K and
600-1200\,K. This last value was allowed to vary and the best-fitting
temperature was adopted. Full details of this modeling are presented in Seymour
{et al.}\ ({\it in prep.}).
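A minimal sketch of this decomposition is given below: the observed fluxes are
fitted by a non-negative combination of the stellar template and three
black-bodies, scanning the hot-dust temperature. The stellar template array is
assumed to be the appropriately redshifted P\'EGASE SED evaluated at the
observed bands, and all normalisations and the error treatment are
illustrative rather than those actually used for the results quoted here.
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23

def blackbody_nu(wave_um, T):
    """Planck function B_nu (arbitrary normalisation) at the observed wavelengths."""
    nu = c / (wave_um * 1e-6)
    return nu**3 / np.expm1(h * nu / (k_B * T))

def fit_sed(wave_um, flux, flux_err, stellar_template, T_hot_grid):
    """Fit flux = a0*stellar + a1*BB(60 K) + a2*BB(250 K) + a3*BB(T_hot)
    with non-negative amplitudes, scanning the hot-dust temperature."""
    best = None
    for T_hot in T_hot_grid:
        comps = np.vstack([stellar_template,
                           blackbody_nu(wave_um, 60.0),
                           blackbody_nu(wave_um, 250.0),
                           blackbody_nu(wave_um, T_hot)]).T
        amps, _ = nnls(comps / flux_err[:, None], flux / flux_err)
        chi2 = np.sum(((comps @ amps - flux) / flux_err) ** 2)
        if best is None or chi2 < best[0]:
            best = (chi2, T_hot, amps)
    return best   # (chi2, best hot-dust temperature, component amplitudes)
\end{verbatim}
The amplitude of the stellar component then gives the AGN-subtracted
rest-frame $1.6\,\hbox{$\mu {\rm m}$}$ stellar luminosity used in the following section.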
Figure 1 illustrates the {\it Spitzer} observations and the SED modeling for
one representative source, 4C23.56 at $z=2.48$. The best fit hot dust
temperature is 750\,K and one can see that the 1.6$\,\hbox{$\mu {\rm m}$}$ stellar peak has
considerable AGN contamination. A more detailed analysis of the broad-band,
X-ray to radio SED of 4C23.56 is presented in De Breuck {et al.}\ ({\it in
prep.}).
The other 53 HzRGs with no MIPS detections can at least provide upper limits
to the stellar luminosities; in some cases the constraints come from
detections in IRAC channels 1 and 2 only. We therefore fit a maximum
elliptical template SED to the IRAC data. In some cases the elliptical
template fits IRAC channels 1-3 quite well, while in others the SED rises
steeply at the longer wavelengths and the fit is constrained only by
channel 1. In the former case we calculate a 'nominal' stellar mass from
the fit, but in the latter we may only derive upper limits.
\begin{figure}
\centering
\begin{minipage}{1.0\linewidth}
\psfig{figure=loghz.ps,width=8.2cm,angle=-90}
\end{minipage}
\begin{minipage}{1.0\linewidth}
\psfig{figure=logmz.ps,width=8.2cm,angle=-90}
\end{minipage}
\caption{{\bf (Top)} HzRG stellar luminosities plotted against the redshift of
  each HzRG. The solid line indicates a $10^{12}\,M_\odot$ elliptical
  galaxy. {\bf (Bottom)} Stellar masses from the SED fitting plotted
  against redshift. Most stellar luminosities correspond to stellar masses
  of $10^{11}-10^{12}\,M_\odot$. Stellar luminosities and masses derived
  from sources with MIPS $24\,\hbox{$\mu {\rm m}$}$ detections are circled; the dots
  without circles are 'nominal' masses from IRAC data alone. Downward arrows
  indicate upper limits.}
\end{figure}
\subsection{Results}
The derived rest-frame, AGN-subtracted, 1.6$\,\hbox{$\mu {\rm m}$}$ stellar luminosities are
shown against redshift in Fig. 2 (top) for all our HzRGs. Also shown on the
plot, as a solid line, is the stellar luminosity of a $10^{12}\,M_\odot$
elliptical galaxy. The stellar luminosities imply stellar masses in the range
$10^{11}-10^{12}\,M_\odot$ with a mean mass of $\sim10^{11.5}\,M_\odot$ (Fig.
2 bottom). This mean mass remains consistent out to $z=3$ (beyond which
the parameter space becomes less well sampled) suggesting that the upper
end of the mass function is already in place by at least $z=3$.
\section{Mid-IR luminosities}
Figure 3 shows the rest 6.75$\,\hbox{$\mu {\rm m}$}$ luminosity against the rest 3\,GHz
luminosity for the 18 radio galaxies with MIPS observations. The wavelength
of 6.75$\,\hbox{$\mu {\rm m}$}$ was chosen as a fiducial mid-IR wavelength as it is the mean
rest wavelength of the observed MIPS 24$\,\hbox{$\mu {\rm m}$}$ band for our sample
and also the wavelength of the LW2 filter from ISOCAM, allowing a direct
comparison of derived luminosities. It is clear to first approximation that
the two luminosities correlate and by implication have a common origin. This
correlation makes sense if the radio luminosity comes from lobes induced by a
jet from the AGN and the mid-IR comes from hot AGN-heated dust.
The mid-IR luminosities are also all greater than $10^{11}L_\odot$
implying bolometric luminosities on ULIRG scales assuming local
relations hold for these objects. These radio galaxies tend to have a higher
mean mid-IR to radio luminosity ratio than those selected at lower redshift;
{e.g.,}\ Ogle {et al.}\ (2006) find that $z<1$ radio galaxies tend to have
$L_{\rm mid-IR}/L_{\rm radio}\sim10-100$.
\begin{figure}
\centering
\begin{minipage}{1.0\linewidth}
\psfig{figure=llw2_log3g.ps,width=10cm,angle=-90}
\end{minipage}
\caption{Rest mid-IR luminosity against rest radio luminosity which
approximately correlate implying a common origin. This correlation may be
explained by the AGN producing the radio jets and also heating the hot
dust radiating in the mid-IR. The mid-IR/radio ratio increases at higher
redshifts compared to low redshift samples ({e.g.,}\ Ogle {et al.}\ 2006) most
likely due to the hotter temperatures of the AGN-heated dust.}
\end{figure}
\section{Conclusions and future work}
We have presented a stellar-luminosity/redshift relation of HzRGs, a
more physical representation of the $K-z$ diagram. This distribution seems to
confirm the long held paradigm that radio galaxies are hosted by massive
ellipticals out to high redshifts, and that the most massive galaxies are
already in place
by redshift 4 and possibly earlier. We also observe a correlation between the
infrared and radio luminosities which is unsurprising if they are both
fueled by the AGN.
Current on-going work includes mm/sub-mm observations with SCUBA, MAMBO and
the CSO to constrain the cold dust component at longer wavelengths and hence
estimate the mass of this cold dust. Over-densities of sources around radio
galaxies are being investigated to look for evidence of cluster formation
(Zirm {et al.}\, {\it in prep.}).
Over half the sources with $24\,\hbox{$\mu {\rm m}$}$ images have over-densities of factors of
2-5 greater than that expected from 24$\,\hbox{$\mu {\rm m}$}$ source counts. These radio
galaxies mainly lie at $1.5<z<2.5$ where the strong $6-8\,\hbox{$\mu {\rm m}$}$ PAH feature
passes through the 24$\,\hbox{$\mu {\rm m}$}$ MIPS band, enhancing the 24$\,\hbox{$\mu {\rm m}$}$ flux density.
\acknowledgements
We thank the LOC for organising a great conference and were particularly
impressed by the nifty design of the webpage. We also liked Harry's
conference haircut.
\section{Introduction}\label{I}
Recently, it has been recognized that thermal recombination of quarks plays an
important role for hadron production at intermediate $p_t$ in relativistic
heavy-ion collisions. This idea, first studied in Refs.~\cite{recomb,Fries},
explains the formation of low to intermediate $p_t$ hadrons from the binding
of quarks in a densely populated phase space, assigning appropriate degeneracy
factors for mesons and baryons. An implicit assumption is that hadronization
happens at a single temperature. However, it is known that hadronization is
not an instantaneous process but rather that it spans a window of temperatures
and densities. For instance lattice calculations~\cite{Karsch} show that the
phase transition from a deconfined state of quarks and gluons to a hadron gas
is, as a function of temperature, not sharp. Motivated by these shortcomings
of the original recombination scenario, here we set out to explore to what
extent the probability to recombine quarks into mesons and baryons
depends on density and temperature and whether this probability differs for
hadrons with two and three constituents, that is to say, whether the relative
population of baryons and mesons can be attributed not only to the degeneracy
factors but also to the dynamical properties of quark clustering in a
varying density environment.
A detailed answer to the above question stemming from first principles can only
be found by means of non-perturbative QCD. Nevertheless, in order to get a
simpler but still quantitative answer, here we address such question by
resorting to the so called string-flip model~\cite{stringflip} which has proven
to be successful in the study of quark/hadron matter as a function of
density~\cite{string1,Genaro1,Genaro2}. In this proceedings contribution, we
only outline the main features of the calculation and refer the interested
reader to Ref.~\cite{ampt} for further details. Other approaches toward a
dynamical description of recombination, in the
context of fluctuations in heavy-ion collisions, have been recently formulated
in terms of the qMD model~\cite{0702188}.
\section{Thermal particle spectra}\label{II}
In the recombination model, the phase space particle density is taken as the
convolution of the product of Wigner functions for each hadron's constituent
quark at a given temperature and the constituent quark wave function inside
the hadron. For instance, the meson phase space distribution is given by
\begin{equation}
F^M(x,P)=\sum_{a,b}\int_0^1dz|\Psi_{ab}^M(z)|^2w_a({\mathbf{x}},zP^+)
\bar{w}_b({\mathbf{x}},(1-z)P^+)\, ,
\label{wigmes}
\end{equation}
where $P^+$ is the light-cone
momentum, $\Psi_{ab}^M(z)$ is the meson wave function and $a,\ b$ represent
the quantum numbers (color, spin, flavor) of the constituent quark and
antiquark in the meson, respectively. An analogous equation can also be
written for baryons. When each constituent quark's Wigner function is
approximated as a Boltzmann distribution and momentum conservation is used,
the product of Wigner functions is given by a Boltzmann-like factor that
depends only on the light-cone momentum of the hadron~\cite{Fries}. For
instance, in the case of mesons
\begin{equation}
w_a({\mathbf{x}},zP^+)\bar{w}_b({\mathbf{x}},(1-z)P^+)\sim
e^{-zP^+/T}e^{-(1-z)P^+/T}
=e^{-P^+/T}\, .
\end{equation}
In this approximation, the product of parton distributions is independent of
the parton momentum fraction and the integration of the wave function over $z$
is trivially found by normalization. There can be corrections
from a dependence of each constituent quark Wigner function on momentum
components that are not additive because energy is not conserved in this
scenario~\cite{Fries2}. An important feature to keep in mind is that in this
formalism, the QCD dynamics between quarks inside the hadron is
encoded in the wave function.
In order to allow for a more realistic dynamical recombination scenario let us
take the above description as a guide, modifying the ingredients that account
for the QCD dynamics of parton recombination. Let us assume that the phase
space occupation can be factorized into the product of a term containing the
thermal occupation number, including the effects of a possible flow
velocity, and another term containing the system energy density $\epsilon$
driven probability ${\mathcal{P}}(\epsilon)$ of the coalescence of partons into
a given hadron. We thus write the analog of Eq.~(\ref{wigmes}) as
\begin{equation}
F(x,P)=e^{-P\cdot v(x)/T}{\mathcal{P}}(\epsilon)\, ,
\label{ourF}
\end{equation}
where $v(x)$ is the flow velocity. In order to compute the probability
${\mathcal{P}}(\epsilon)$ we explicitly consider a model
that is able to provide information about the likelihood of clustering of
constituent quarks to form hadrons from an effective quark-quark
interaction, the string-flip model, which
we proceed to describe.
\section{String Flip Model and Hadron Recombination Probability}\label{III}
The String Flip Model is formulated by incorporating a many-body quark potential
able to confine quarks within color-singlet clusters
\cite{stringflip}. At low densities, the model describes a given system of
quarks as isolated hadrons while at high densities, this system becomes a free
Fermi gas of quarks. For our purposes, we consider up and down
flavors and three color (anticolor) quantum numbers. Our approach is very
close to that described in Refs.~\cite{string1} and~\cite{Genaro1}, to which
we refer the reader for an extensive discussion of the model details.
The many-body potential $V$ is defined as the optimal clustering of quarks into
color-singlet objects, that is, the configuration that
minimizes the potential energy. In our approach, the interaction between
quarks is pair-wise. Therefore, the optimal clustering is achieved by finding
the optimal pairing between two given sets of quarks of different color for all
possible color charges. The minimization procedure is performed over all
possible permutations of the quarks and the interaction between quarks is
assumed to be harmonic with a spring constant $k$. Through this procedure, we
can distinguish two types of hadrons:
i) {\it Meson-like}. In this case the pairing is imposed to be between color
and anticolors and the many-body potential of the system made up of mesons is
given by:
\begin{equation}
V_\pi = V_{B\bar{B}}+V_{G\bar{G}}+V_{R\bar{R}}\,
\label{mespot}
\end{equation}
where $R(\bar{R})$, $B(\bar{B})$ and $G(\bar{G})$ are the
labels for red, blue and green color (anticolor) respectively. Note that this
potential can only build pairs.
ii) {\it Baryon-like}. In this case the pairing is imposed to be between the
different colors in all the possible combinations. In this manner, the
many-body potential is:
\begin{equation}
V_p = V_{RB}+V_{BG}+V_{RG}\,
\label{barpot}
\end{equation}
which can build colorless clusters by linking 3(RBG), 6(RBGRBG),... etc.,
quarks. Since the interaction is pair-wise, the 3-quark clusters are of the
delta (triangular) shape.
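In practice, the minimization over permutations for each pair of color species
is an assignment problem and can be solved exactly with the Hungarian
algorithm. The sketch below is a minimal illustration of this step (quark
positions are stored in a dictionary keyed by color label and the spring
constant is set to one); it is not the production code used for our
simulations.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def pair_cost(x, y, k=1.0):
    """Harmonic pair energies (k/2)|x_i - y_j|^2 for all possible pairings
    between two sets of quark positions x and y."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return 0.5 * k * d2

def optimal_pairing(x, y, k=1.0):
    """Minimum-energy pairing between two color species: the minimization
    over permutations is solved as an assignment problem."""
    cost = pair_cost(x, y, k)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

def V_meson(pos):
    """Meson-like potential, Eq. (mespot): optimal color-anticolor pairings."""
    return sum(optimal_pairing(pos[c], pos[c + 'bar']) for c in 'RGB')

def V_baryon(pos):
    """Baryon-like potential, Eq. (barpot): optimal RB + BG + RG pairings."""
    return sum(optimal_pairing(pos[a], pos[b])
               for a, b in [('R', 'B'), ('B', 'G'), ('R', 'G')])
\end{verbatim}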
The formed hadrons should interact
weakly due to the short-range nature of the hadron-hadron interaction. This is
partially accomplished by the possibility of a quark {\it flipping} from one
cluster to another. At high energy density, asymptotic freedom demands that
quarks must interact weakly. This behavior is obtained once the average
inter-quark separation is smaller than the typical confining scale.
We study the meson and baryon like hadrons independently. Therefore, $V=V_\pi$
or $V_p$, depending on the type of hadrons we wish to describe.
We use a variational Monte Carlo approach to describe the evolution of a
system of $N$ quarks as a function of the particle density. We consider the
quarks moving in a three-dimensional box whose sides have length \textit{a}
and the system described by a variational wave function of the form:
\begin{equation}
\Psi_{\lambda}(\textbf{x}_1,...,\textbf{x}_N)=e^{-\lambda
V(\textbf{x}_1,...,\textbf{x}_N)}\Phi_{FG}(\textbf{x}_1,...,\textbf{x}_N),
\label{wavefun}
\end{equation}
where $\lambda$ is the single variational parameter,
$V$(\textbf{x}$_1$,...,\textbf{x}$_N$) is the many-body potential either for
mesons or baryons
and $\Phi_{FG}$(\textbf{x}$_1$,...,\textbf{x}$_N$) is the Fermi-gas wave
function given by a product of Slater determinants, one for
each color-flavor combination of quarks. These are built up from
single-particle wave functions describing a free particle in a box
\cite{Genaro1}.
The variational parameter has definite values for the extreme density cases.
At very low density it must correspond to the wave function solution of an
isolated hadron. For example, the non-relativistic quark model
for a hadron consisting of 2 or 3 quarks, bound by a harmonic potential,
predicts, in units where $k=m=1$ that $\lambda_\pi \to \lambda_{0\pi} =
\sqrt{1/2}$ and $\lambda_p \to \lambda_{0p} = \sqrt{1/3}$ respectively; at very
high densities the value of $\lambda$ must vanish for both cases.
Since the simulation was performed taking $m=k=1$, to convert
to physical units we consider each case separately.
Baryons:
To fix the energy unit we first notice that in a 3-body system the energy
per particle, including its mass, is given by (with $m=k=1$):
\begin{equation}
\frac{E}{3}= \sqrt{3}+1.
\label{A1}
\end{equation}
If we identify the state as the proton of mass $M_p=938$ MeV, then the
correspondence is
\begin{equation}
\sqrt{3}+1 \rightarrow 312.7\ {\mbox {MeV}}.
\label{A2}
\end{equation}
To fix the length unit we use the mean square radius, which for a 3-body
system is: $\sqrt{<r^2>}=(3)^{1/4}$. The experimental value for the proton
is
\begin{equation}
\sqrt{<r^2>}=0.880 \pm 0.015\ {\mbox {fm}}.
\label{A3}
\end{equation}
Then the correspondence is: $(3)^{1/4} \rightarrow 0.88$ fm.
Mesons:
In a similar fashion we obtain for mesons (taking the pion as the
representative 2-body particle):
Energy: $\frac{3}{2\sqrt{2}}+1 \rightarrow 70$ MeV,
length: $2^{1/4} \rightarrow 0.764$ fm.
Our results come from simulations done with 384 particles,
192 quarks and 192 antiquarks, corresponding to having 32 $u \ (\bar{u})$ plus
32 $d\ (\bar{d})$ quarks (antiquarks) in the three color charges
(anti-charges).
To determine the variational parameter as a function of density we first
select the value of the particle density $\rho$ in the box,
which, for a fixed number of particles, means changing the box size. Then we
compute the energy of the system as a function of the variational parameter
using a Monte Carlo Method. The minimum of
the energy determines the optimal variational parameter. We repeat the
procedure for a set of values of the particle densities in the region of
interest.
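Schematically, this procedure can be organized as in the sketch below, where
\texttt{energy\_estimate} stands for the Monte Carlo estimate of the
variational energy at fixed density (its implementation, which involves the
trial wave function of equation (\ref{wavefun}), is not reproduced here) and
the grids are purely illustrative.
\begin{verbatim}
import numpy as np

def optimal_lambda(energy_estimate, rho, lam_grid, n_samples=20000):
    """Scan the variational parameter and keep the value that minimizes the
    Monte Carlo energy estimate at fixed particle density rho."""
    energies = np.array([energy_estimate(lam, rho, n_samples) for lam in lam_grid])
    return lam_grid[np.argmin(energies)], energies

# repeated over the densities of interest to obtain lambda(rho), e.g.
# rhos = np.linspace(0.05, 2.0, 20)
# lam_grid = np.linspace(0.0, 0.8, 40)
# lam_of_rho = [optimal_lambda(energy_estimate, r, lam_grid)[0] for r in rhos]
\end{verbatim}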
The information contained in the variational parameter is global, in
the sense that it only gives an approximate idea about the average size of the
inter-particle distance at a given density, which is not necessarily the same
for quarks in a single cluster. This is reflected in the behavior of the
variational parameter $\lambda_p$ for
the case of baryons which goes above 1 for energies
close to where the sudden drop in the parameter happens. We interpret this
behavior as a consequence of the procedure we employ to produce colorless
clusters for baryons, which, as opposed to the case to form mesons, allows the
formation of clusters with a number of quarks greater than 3. When including
these latter clusters, the information on their size is also contained in
$\lambda$. To correct for this, we compute the likelihood to find
clusters of 3 quarks $P_3$. Recall that for $3N$ quarks in the system, the
total number of clusters of 3 quarks that can be made is equal to $N$. However
this is not always the case as the density changes, given that the potential
allows the formation of clusters with a higher number of quarks. $P_3$ is
defined as the ratio between the number of clusters of 3 quarks found at a
given density, with respect to $N$.
Therefore, within our approach, we can define
the probability of forming a baryon as the product of the
$\lambda/\lambda_{0p}$ parameter times $P_3$, namely
\begin{equation}
{\mathcal P}_p=\lambda/\lambda_{0p} \times P_3.
\label{probprot}
\end{equation}
For the case of mesons, since the procedure only takes into account the
formation of colorless quark-antiquark pairs, we simply define the probability
of forming a meson as the value of the corresponding normalized variational
parameter, namely
\begin{equation}
{\mathcal P}_\pi=\lambda/\lambda_{0\pi}.
\label{probmes}
\end{equation}
The probabilities ${\mathcal P}_p$ and ${\mathcal P}_\pi$ as a function of the
energy density are displayed in fig.~\ref{P_pP_pi}. Notice the qualitative
differences between these probabilities. In the case of baryons, the sudden
drop found in the behavior of the variational parameter is preserved at an
energy density around $\epsilon =0.7$ GeV/fm$^3$ whereas in the case of
mesons, this probability is smooth, indicating a difference in the production
of baryons and mesons with energy density.
\begin{figure}
\includegraphics[height=.3\textheight]{probability.eps}
\caption{Probabilities to form baryons and mesons as a
function of energy density.}
\label{P_pP_pi}
\end{figure}
\section{Proton to pion ratio}\label{V}
In order to quantify how the different probabilities to produce sets of three
quarks (protons) as compared to sets of two quarks (pions) affect these
particle's yields as the energy density changes during hadronization, we need
to resort to a model for the space-time evolution of the collision. For the
present purposes, we will omit describing the effect of radial flow and take
Bjorken's scenario which incorporates the fact that initially, expansion is
longitudinal, that is, along the beam direction which we take as the $\hat{z}$
axis. In this 1+1 expansion scenario, the relation between the temperature $T$
and the 1+1 proper-time $\tau$ is given by
\begin{equation}
T=T_0\left(\frac{\tau_0}{\tau}\right)^{v_s^2},
\label{temperaturevstau}
\end{equation}
where $\tau=\sqrt{t^2-z^2}$. Equation~(\ref{temperaturevstau}) assumes that
the speed of sound $v_s$ changes slowly with temperature. A lattice estimate
of the speed of sound in quenched QCD~\cite{Gupta} shows that $v_s^2$
increases monotonically from about half the ideal gas limit
for $T\sim 1.5 T_c$ and approaches this limit only for
$T>4T_c$, where $T_c$ is the critical temperature for the phase transition. No
reliable lattice results exist for the value of the speed of sound in the
hadronic phase though general arguments indicate that the equation of state
might become stiffer below $T_c$ and eventually softens as the temperature
approaches zero. For the ease of the argument, here we
take $v_s$ as a constant equal to the ideal gas limit $v_s^2=1/3$.
We also consider that hadronization takes place on hypersurfaces $\Sigma$
characterized by a constant value of $\tau$ and therefore
\begin{equation}
d\Sigma=\tau\rho \ d\rho \ d\phi\ d\eta ,
\label{hypersurface}
\end{equation}
where $\eta$ is the spatial rapidity and $\rho$, $\phi$ are the polar
transverse coordinates. Thus, the transverse spectrum for a hadron species $H$
is given as the average over the hadronization interval, namely
\begin{equation}
E\frac{dN^H}{d^3P}=\frac{g}{\Delta \tau}
\int_{\tau_0}^{\tau_f}d \tau\int_{\Sigma}d\Sigma\ \frac{P\cdot
u(x)}{(2\pi)^3}F^H(x,P),
\label{distributionaveraged}
\end{equation}
where $\Delta \tau=\tau_f-\tau_0$.
To find the relation between the energy density $\epsilon$ --that the
probability ${\mathcal{P}}$ depends upon-- and $T$, we resort to lattice
simulations. For the case of two flavors, a fair representation
of the data~\cite{Karsch} is given by the analytic expression
\begin{equation}
\epsilon /T^4 = a\left[ 1 + \tanh\left(\frac{T-T_c}{bT_c}\right)\right],
\label{latticeenergy}
\end{equation}
with $a=4.82$ and $b=0.132$. We take $T_c=175$ MeV.
For a purely longitudinal expansion, the flow four-velocity vector $v^\mu$ and
the normal to the freeze-out hypersurfaces of constant $\tau$, $u^\mu$,
coincide and are given by $v^\mu=u^\mu=(\cosh\eta,0,0,\sinh\eta)$,
therefore, the products $P\cdot u$ and $P\cdot v$ appearing in
Eq.~(\ref{distributionaveraged}) can be written as
\begin{equation}
P\cdot v=P\cdot u=m_t\cosh(\eta-y),
\label{Pdotv}
\end{equation}
where $m_t=\sqrt{m_H^2+p_t^2}$ is the transverse mass of the hadron and
$y$ is the rapidity.
Considering the situation of central collisions and looking only at the case
of central rapidity, $y=0$, the final expression for the hadron's transverse
distribution is given by
\begin{equation}
E\frac{dN^H}{d^3P}=\frac{g}{(2\pi)^3}\frac{2m_tA}{\Delta \tau }
\int_{\tau_0}^{\tau_f}d \tau\tau K_1\left[\frac{m_t}{T(\tau )}\right]
{\mathcal{P}}[\epsilon (\tau )].
\label{distfin}
\end{equation}
To obtain the pion and proton distributions, we use the values
$\tau_0=0.75$ fm and $\tau_f=3.5$ fm and an initial temperature $T_0=200$
MeV. From Eq.~(\ref{temperaturevstau}), this corresponds to a final
freeze-out temperature of $\sim 120$ MeV. For protons we take a degeneracy
factor $g=2$ whereas for pions $g=1$, to account for the spin degrees of
freedom. Figure~\ref{fig12} shows the proton
to pion ratio for three different values of the initial evolution proper time
$\tau_0=0.5,\ 0.75$ and $1$ fm and the same final freeze-out proper-time
$\tau_f=3.5$ fm, compared to data for this ratio for Au + Au collisions at
$\sqrt{s_{NN}}=200$ GeV from PHENIX~\cite{PHENIXBM}. We notice that the
maximum height reached by this ratio is sensitive to the choice of the initial
evolution time. We also notice that the $p_t$ value for which the maximum is
reached is displaced to larger values than what the experimental values
indicate. This result is to be expected since the model assumptions leading to
Eq.~(\ref{distfin}) do not include the effects of radial flow that, for a
common flow velocity, are known to be larger for protons than for pions, and
which will produce the displacement of the ratio toward lower $p_t$ values.
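For reference, the sketch below evaluates equation (\ref{distfin})
numerically, using the Bjorken cooling law (\ref{temperaturevstau}) and the
lattice parametrization (\ref{latticeenergy}). Constant factors that cancel in
the proton to pion ratio are dropped, and the recombination probabilities are
left as user-supplied functions; the constants used in the example are only
placeholders for the string-flip results of the previous section.
\begin{verbatim}
import numpy as np
from scipy.special import kn
from scipy.integrate import quad

T_c, a, b, hbarc = 0.175, 4.82, 0.132, 0.1973     # GeV and GeV*fm

def temperature(tau, T0=0.200, tau0=0.75, vs2=1.0/3.0):
    return T0 * (tau0 / tau) ** vs2               # Eq. (temperaturevstau)

def energy_density(T):
    # Eq. (latticeenergy), converted to GeV/fm^3 for T in GeV
    return a * (1.0 + np.tanh((T - T_c) / (b * T_c))) * T ** 4 / hbarc ** 3

def spectrum(pt, mass, prob, g, tau0=0.75, tau_f=3.5, T0=0.200):
    """E dN/d^3P of Eq. (distfin) up to constant factors; prob(eps) is the
    recombination probability P_pi or P_p from the string-flip model."""
    mt = np.sqrt(mass ** 2 + pt ** 2)
    def integrand(tau):
        T = temperature(tau, T0, tau0)
        return tau * kn(1, mt / T) * prob(energy_density(T))
    val, _ = quad(integrand, tau0, tau_f)
    return g * 2.0 * mt * val / (tau_f - tau0)

# proton-to-pion ratio with placeholder (constant) probabilities
pts = np.linspace(0.5, 5.0, 10)
ratio = [spectrum(pt, 0.938, lambda e: 1.0, g=2) /
         spectrum(pt, 0.140, lambda e: 1.0, g=1) for pt in pts]
\end{verbatim}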
\begin{figure}
\includegraphics[height=.3\textheight]{ratios.eps}
\caption{Proton to pion ratio as a function of transverse
momentum for three different values of the initial evolution proper-time
$\tau_0=0.5,\ 0.75$ and $1$ fm and the same final freeze-out proper-time
$\tau_f=3.5$ fm, compared to data for Au + Au collisions at
$\sqrt{s_{NN}}=200$ GeV from PHENIX. The height of this ratio is very
sensitive to the choice of the initial evolution time.}
\label{fig12}
\end{figure}
\section{Summary and Conclusions}\label{concl}
In conclusion, we have used the string-flip model to introduce a dynamical
quark recombination scenario that accounts for the evolution of the
probability to form a meson or a baryon as a function of the energy
density during the collision of a heavy-ion system. We have used the model
variational parameter as a measure of the probability to form colorless
clusters of three quarks (baryons) or of quark-antiquark (mesons). We have
shown that these probabilities differ; whereas the probability to form a pion
transits smoothly from the high to the low energy density domains, the
probability to form a baryon changes abruptly at a given critical energy
density. We attribute this difference to the way the
energy is distributed during the formation of clusters: whereas for mesons the
clustering happens only for quark-antiquark pairs, for baryons the energy can
be minimized by also forming sets of three, six, etc., quarks in (colorless)
clusters. This produces competing minima in the energy that do not reach each
other smoothly. We interpret this behavior as a signal for a qualitative
difference in the probability to form mesons and baryons during
the collision evolution.
We have incorporated these different probabilities to
compute the proton and pion spectra in a thermal model for a Bjorken-like
scenario. We use these spectra to compute the proton to pion ratio as a
function of transverse momentum and compare to experimental data at the
highest RHIC energies. We argue that the ratio computed from the model is able
to reach a height similar to the one shown by data, although the maximum is
displaced to larger $p_t$ values. This could be understood by recalling that
the model does not include the effects of radial flow which is known to be
stronger for protons (higher mass particles) than pions. The inclusion of
these effects is the subject of current research that will be reported
elsewhere.
\begin{theacknowledgments}
Support for this work has been received by PAPIIT-UNAM
under grant number IN116008 and CONACyT under grant number
40025-F. M. Martinez was supported by DGEP-UNAM.
\end{theacknowledgments}
\IfFileExists{\jobname.bbl}{}
{\typeout{}
\typeout{******************************************}
\typeout{** Please run "bibtex \jobname" to obtain}
\typeout{** the bibliography and then re-run LaTeX}
\typeout{** twice to fix the references!}
\typeout{******************************************}
\typeout{}
}
\section{Introduction}
A small business is one that has fewer than 1,500 employees and a maximum of \$38.5 million in average annual receipts, according to the Small Business Administration. Small businesses, though, often lack the size, assets (for collateral), financial history, or data that are typically used by traditional financial institutions to assess credit worthiness. While lenders extended nearly \$650B worth of loans to small businesses in 2019 \cite{SBA2019}, the Federal Reserve reports this represents less than half of these businesses' credit needs \cite{KCFed2021}.
Large public businesses, on the other hand, provide a rich credit dataset. SEC rules require these businesses to publicly report detailed financial data in the form of profit and loss, balance sheet, and cash flow statements. Three main ratings agencies publish credit ratings for public companies: Standard \& Poor's (S\&P), Moody's, and Fitch.
We study the applicability of deep learning-based credit models derived from large public corporate data to small businesses. We create a DL-based model to predict the credit ratings of large public companies from their financial statement data. We then explore the applicability of the model in forecasting adverse events or the probability of default of a small business on loan payments.
\section{Related work}
Traditionally, small business lenders rely on credit scores from one of three credit reporting agencies: Dun \& Bradstreet, Experian, and FICO \cite{fundbox}. These utilize business longevity, revenues, debt, owners' personal credit history, public records, and industry category to come up with a risk score.
Recently a number of new approaches have emerged. Divvy and Brex pioneered the use of real-time business bank balance monitoring to extend short term credit \cite{HBS2019}. Flowcast uses logistic regression and trees to estimate a risk score from business ERP transactional data such as invoices, shipments, and payment history \cite{Flowcast2018}. Visa released a new small business credit scoring service that uses logistic regression to estimate risk based on Visa payment transaction history \cite{Visa2020}. It is not known how well these proprietary models perform. They are also very invasive: businesses must consent to provide access to intimate business details for the score to be generated.
In the realm of DL modeling of large public corporate credit risk, the work of Golbayani et al. \cite{GolbayaniMar2020} stands out as the most current and comprehensive study. The authors analyze DNN models on a dataset similar to ours. Their research suggests that DNNs perform well. They also explore variations on feature engineering and model training approaches that are useful. The work does not address applicability outside of the training dataset. We take this work as our starting point. Since the authors have not released their dataset or code, we will build our dataset and models from scratch.
\section{Dataset and Features}
\subsection{The Labels}
\vspace{-0.1in}
\begin{figure*}[h]
\centering
\includegraphics[scale=0.4]{CreditRatingHist.png}
\caption{Histogram of annual long term credit ratings for 306 public companies from 2010 - 2016.}
\label{CreditRating}
\end{figure*}
Our labels are S\&P corporate credit ratings, accessible from Compustat's database via Stanford Libraries \cite{Compustat}. We identified 306 public companies with annual long term credit ratings over the 7 year period 2010-2016.
Figure (\ref{CreditRating}) shows the credit ratings histogram. Ratings lie on a letter scale with '+' and '-' enhancements that includes up to 21 'grades', from 'AAA' (highest rating - extremely strong likelihood of repayment) down to 'D' (repayment default). Only 6 ratings (classes) actually appear in our data: 'A+', 'A-', 'BB+', 'B-', 'CCC+', and 'D'. The distribution is bi-modal and exhibits strong class imbalance. The ratings change very slowly, with only 7\% of companies on average experiencing a rating change in any given year.
\subsection{The Inputs (Features)}
\vspace{-0.1in}
\begin{figure*}[h]
\centering
\includegraphics[scale=0.3]{FeatureSet.png}
\caption{Input financial statement feature set for credit rating modeling.}
\label{FeatureSet}
\end{figure*}
Access to Compustat's company financial data is restricted to Stanford GSB members only. Therefore we obtain financials that form our features from Yahoo! Finance. There are $\sim$250 data fields available across all 3 financial statements, though each company uses only a subset. We analyzed our companies to determine a set of 43 of the most common fields that also comprise a coherent and complete set of financials, shown in figure (\ref{FeatureSet}). We excluded companies missing any of these fields over the time period. This resulted in 236 usable companies and a total of $m = 7 \times 236 = 1,652$ samples. Since financial data varies by orders of magnitude across companies (e.g., revenues from \$M to \$B) as well as across features, each feature is normalized to zero mean, unit variance across the sample set.
\section{Methods}
\vspace{-0.1in}
\begin{figure*}[h]
\centering
\includegraphics[scale=0.35]{MLP3HiddenLayer.png}
\caption{MLP model architecture with 3 hidden layers.}
\label{MLP1HiddenLayer}
\end{figure*}
Our main model comes in two variations of a NN architecture. The first is classification-based and mimics the work of Golbayani et al. \cite{GolbayaniMar2020}. Those authors thoroughly explore networks of different architectures (MLP, CNN, LSTM) and sizes, achieving accuracies of roughly 75\%-80\% on test and 80\%-85\% on training. They show small improvements with CNN and LSTM architectures over MLP, but not by much.
Our main goal is to test the applicability of these models on a different dataset, so we seek a 'good enough' model. We show in the Experiments section that a 3 hidden-layer MLP using sparse categorical cross entropy loss does the job. This is similar to the main model of \cite{GolbayaniMar2020, Huang2004}. Adding batch normalization / dropout did not yield a significant performance improvement and in some cases reduced accuracy, so it was omitted. Improvements with LSTM models were hard to obtain due to the mostly static ratings, though we postulate that with more data better performance might be achievable.
Since the credit classes lie on a continuum, we can modify this network to produce a risk score by replacing the softmax output activation by a single FC node. This regression-based approach is our second architecture. The classes are represented as sequential integers, and a mean squared error loss function is employed. This allows predictions to extend beyond the limits of the model classes, and the MSE loss also does a better job of penalizing deviations from nearby classes.
For our relatively small sample, we randomly split the samples into 80\% for training (1,322 samples) and 20\% for testing (330 samples). To address the large class imbalances we employ the oversampling technique, SMOTE, to create a training set with equal number of samples in each class \cite{Chawla}.
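A minimal sketch of the two model variants and of the oversampling step is
shown below. It uses Keras and \texttt{imbalanced-learn} for illustration and
is not the exact code used for the experiments; the layer sizes, loss
functions and training settings follow the values quoted in the text.
\begin{verbatim}
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow import keras

def build_mlp(n_features, n_classes=None, nodes=50):
    """3-hidden-layer MLP: softmax head with sparse categorical cross entropy
    for the classification variant, a single linear node with MSE loss for
    the regression (risk score) variant."""
    inputs = keras.Input(shape=(n_features,))
    x = inputs
    for _ in range(3):
        x = keras.layers.Dense(nodes, activation="relu")(x)
    if n_classes is not None:
        outputs = keras.layers.Dense(n_classes, activation="softmax")(x)
        loss, metrics = "sparse_categorical_crossentropy", ["accuracy"]
    else:
        outputs = keras.layers.Dense(1)(x)
        loss, metrics = "mse", []
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss=loss, metrics=metrics)
    return model

# X_train: (m, 43) normalized financial features; y_train: integer class labels
# X_bal, y_bal = SMOTE().fit_resample(X_train, y_train)   # balance the classes
# clf = build_mlp(43, n_classes=6)
# clf.fit(X_bal, y_bal, epochs=3000, batch_size=len(X_bal))   # no mini-batching
\end{verbatim}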
\begin{figure*}[h]
\centering
\includegraphics[scale=0.42]{AccuracyRMS.png}
\caption{Accuracy \& RMS error for MLP with 3 hidden layers for varying number of nodes/layer.}
\label{AccuracyRMS}
\end{figure*}
\section{Experiments/Results/Discussion}
\subsection{Multi Layer Perceptron (MLP) Model}
We assess the performance of our model with various nodes per layer as shown in figure (\ref{AccuracyRMS}). As expected the classification model outperforms on accuracy whereas the regression model outperforms on MSE. The differences are significant, though not overwhelming. Beyond 50 nodes per layer, the performance of the classification model degrades, indicating over-training. We select 50 nodes per layer as our working model size. These results are obtained after 3,000 epochs of training with no batching, which experimentation showed worked well.
The benefits of the regression model are further seen from the confusion matrix in figure (\ref{ConfusionMatrix}). Note how the regression model keeps mis-classifications closer to the main diagonal.
\vspace{-0.1in}
\begin{figure*}[h]
\centering
\includegraphics[scale=0.37]{ConfusionMatrix50Nodes.png}
\caption{Comparison of the confusion matrices for classification and regression models.}
\label{ConfusionMatrix}
\end{figure*}
\subsection{Testing Outside the Dataset: Bankruptcies}
\begin{figure*}[h]
\centering
\includegraphics[scale=0.39]{BankruptcyPrediction.png}
\caption{Predicted classes for companies that experienced a bankruptcy in 2019/2020 time frame.}
\label{BankruptcyPrediction}
\end{figure*}
Our first model application outside the dataset is to test detection of adverse credit events. We apply the model to a set of public companies that filed Chapter 11 bankruptcy in 2019 or 2020. Using the list from S\&P and Wikipedia \cite{SPBankruptcy2020, WikipediaBankruptcy2019}, we extract 7 such companies that had a full feature set over the period 2016-2020. If our model is relevant, we expect to see a souring of credit as these companies approach the bankruptcy event (i.e., an increase in predicted class output). The results are shown in figure (\ref{BankruptcyPrediction}). Some of the companies show considerable rating volatility over the period and the average does exhibit an overall upward trend as would be expected.
\subsection{Testing Outside the Dataset: Small Cap Businesses}
\begin{figure*}[h]
\centering
\includegraphics[scale=0.42]{ExperianComparison.png}
\caption{Comparison of model output with Experian risk scores.}
\label{ExperianComparison}
\end{figure*}
We use small cap public companies as a proxy for small businesses. Starting with the 100 smallest companies (by market cap) of the Wilshire 5,000 index, all of which have market caps under \$35M \cite{WilshireTail}, we find 27 companies with a full feature set for the year 2020. We compare the regression model output for these against the current value of 2 risk scores produced by Experian \cite{ExperianWebsite}. One score is their 'Business Credit Score' which ranges on a scale of 1-100, with higher score indicating lower risk, and it predicts the likelihood of a serious credit delinquency within the next 12 months. The other score is their 'Financial Stability Risk Rating' which ranges on a scale of 1-5, with a lower score indicating lower risk, and it predicts the likelihood of payment default within the next 12 months. We plot the output of our model against these scores in figure (\ref{ExperianComparison}).
The result is surprising. We would expect our model to be negatively correlated with the Business Credit Score and positively correlated with the Financial Stability Risk Rating, but we see the opposite. Assuming Experian's scores are meaningful, either our model is wrong, or the financial factors that make large corporate businesses more creditworthy have the opposite effect on small businesses. One other observation across all our tests is that the model generates scores in a very narrow output range. This is likely a result of the input normalization using the means and standard deviations derived from the training set of large companies.
\section{Conclusion/Future Work }
Credit risk modeling is murky. While a number of agencies are recognized as rating authorities, the underlying models they use are proprietary and opaque. Further, these models have known deficiencies \cite{Partnoy2017}. Yet these models are widely used and represent the best available measures of corporate credit risk. DNNs can approximate the predictions of these models relatively well, though not entirely, using just financial data. And while our models show some predictive capability outside the dataset, it is not clear how significant this is.
Many extensions of this work are possible. We would like to undertake a model interpretability exercise. Classical finance relies heavily on certain financial statement ratios in assessing credit risk \cite{Ganguin2004}. Our model may confirm some of these are key predictors, or unearth new ones that perform better. Our regression model also needs refinement with more sophisticated architectures and an expansion of the datasets (which were tedious to collect and could be expanded with better access to the right databases). Finally, with enough data from Experian (we had a limit of 30 companies) we could train models to fit those scores directly, or combine that data with our dataset to train a single model.
\section{Contributions}
\begin{itemize}[leftmargin=*]
\item Presented a classification NN that approximates the credit risk assessment of large public companies reasonably well, confirming that financials play a large part in driving credit risk, but do not explain it entirely.
\item Extended the model to a regression one that outputs a continuous score instead of just a rating class.
\item Investigated the use of our regression score model to detect adverse events outside the original dataset. We showed that there is correlation between our model output and bankruptcy events.
\item Investigated the use of our regression score model against a proprietary small business risk score from Experian. We show negative correlation between the two models. This warrants further investigation into whether our model needs improvement or whether the financial factors that benefit large corporate credit ratings are somehow detrimental to small businesses.
\end{itemize}
\bibliographystyle{plain}
\section*{Acknowledgements}
The authors would like to thank David Chiang and Katrin Kirchhoff for their support of this research.
\section{Analysis}
\label{sec:analysis}
\subsection{Performance curves}
\label{sec:perf-curve}
\Cref{fig:dev_bleus} shows that \textsc{PreNorm}\ not only learns faster than \textsc{PostNorm}, but also outperforms it throughout training. Adding \textsc{FixNorm}\ also gives faster learning at first, but on its own only reaches performance close to that of \textsc{PreNorm}\ without \textsc{FixNorm}. However, once paired with \textsc{ScaleNorm}, we attain a better BLEU score at the end. Because of the slow warmup period, \textsc{ScaleNorm}\ with warmup learns slower than \textsc{ScaleNorm}\ without warmup initially; however, they all converge at about the same rate.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{figures/dev_bleus.pdf}
\caption{Development BLEU on \textit{en\textrightarrow vi}\ with \textsc{PostNorm}\ or \textsc{PreNorm}, and with \textsc{LayerNorm}\ or \textsc{ScaleNorm}.}
\label{fig:dev_bleus}
\end{figure}
To visualize how \textsc{PreNorm}\ helps backpropagation, we plot the global gradient norms from our runs in \Cref{fig:gnorm}. \textsc{PostNorm}\ produces noisy gradients with many sharp spikes, even towards the end of training. On the other hand, \textsc{PreNorm}\ has fewer noisy gradients with smaller sizes, even without warmup. \textsc{LayerNorm}\ has lower global norms than \textsc{ScaleNorm}\ + \textsc{FixNorm}, but it has more gradient components corresponding to normalization.
\input{tables/g-ablation.tex}
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{figures/gnorm.pdf}
\caption{The global norm of gradients when using \textsc{PostNorm}\ or \textsc{PreNorm}, and with \textsc{LayerNorm}, \textsc{ScaleNorm}\ and \textsc{FixNorm}. Best viewed in color.}
\label{fig:gnorm}
\end{figure}
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.50\textwidth}
\includegraphics[width=0.9\textwidth]{figures/att_scales.pdf}
\end{subfigure}~
\begin{subfigure}[b]{0.50\textwidth}
\includegraphics[width=0.9\textwidth]{figures/non_att_scales.pdf}
\end{subfigure}
\caption{Learned $g$ values for \textsc{PreNorm}\ + \textsc{ScaleNorm}\ + \textsc{FixNorm}\ models, versus depth. \textbf{Left:} Attention sublayers (\keyword{decoder-encoder} denotes decoder sublayers attending on the encoder). \textbf{Right:} Feedforward sublayers and the final linear layer.}
\label{fig:scales}
\end{figure*}
\begin{figure*}[ht!]
\centering
\begin{subfigure}[b]{0.50\textwidth}
\includegraphics[width=0.9\textwidth]{figures/lb_att_scales.pdf}
\end{subfigure}~
\begin{subfigure}[b]{0.50\textwidth}
\includegraphics[width=0.9\textwidth]{figures/lb_non_att_scales.pdf}
\end{subfigure}
\caption{Learned $g$ values for our \textsc{PreNorm}\ + \textsc{ScaleNorm}\ + \textsc{FixNorm}\ \textit{en\textrightarrow vi}\ model (with and without label smoothing), versus depth. \textbf{Left} and \textbf{Right} are the same as in \Cref{fig:scales}.}
\label{fig:scales-ls}
\end{figure*}
\subsection{Activation scaling and the role of $g$}
\label{ssec:g-values}
One motivation for \textsc{ScaleNorm}\ was that it expressed a good inductive bias for the global scaling of activations, independent of distributional stability (\Cref{ssec:scaled-cosine}). In contrast, a contemporaneous work \cite{Zhang2019} proposes \keyword{root mean square layer normalization} (\textsc{RMSNorm}), which still follows layer normalization's motivation but reduces overhead by forgoing additive adjustments, using only a scaling $g_i$ per activation $a_i$. Despite their differing motives, tying the $g_i$ of \textsc{RMSNorm}\ and dividing by $\sqrt{d}$ retrieves \textsc{ScaleNorm}.
Hence we can frame our comparisons in terms of number of learnable parameters. We rerun our \textsc{PreNorm}\ experiments with \textsc{RMSNorm}. We also consider fixing $g=\sqrt{d}$ for \textsc{ScaleNorm}, where only \textsc{FixNorm}\ has learnable $g$. \Cref{tab:g-ablation} shows that \textsc{ScaleNorm}\ always performs comparably or better than \textsc{RMSNorm}. Surprisingly, the fixed-$g$ model performs comparably to the one with learnable $g$. However, at higher learning rates (\textsc{ValDecay}\ with and without \textsc{2$\times$LR}), fixed-$g$ models perform much worse on ar\textrightarrow en, en\textrightarrow he\, and \textit{en\textrightarrow vi}. We conjecture that learning $g$ is required to accommodate layer gradients.
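As a concrete check of the equivalence noted above between \textsc{ScaleNorm}\ and tied-gain \textsc{RMSNorm}, the following standalone sketch (ours, not from either paper's released code) verifies numerically that \textsc{RMSNorm}\ with all $g_i$ tied to $g/\sqrt{d}$ coincides with $\textsc{ScaleNorm}(\cdot\,; g)$.
\small
\begin{lstlisting}[language=Python]
# Numerical check: tied-gain RMSNorm divided by sqrt(d) retrieves ScaleNorm.
import torch

def scale_norm(x, g):
    return g * x / x.norm(dim=-1, keepdim=True)

def tied_rms_norm(x, g):
    rms = x.pow(2).mean(dim=-1, keepdim=True).sqrt()
    return g * x / rms

d = 512
x = torch.randn(4, d)
g = float(d) ** 0.5
assert torch.allclose(scale_norm(x, g), tied_rms_norm(x, g / d ** 0.5), atol=1e-5)
\end{lstlisting}
\normalsize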
In \Cref{fig:scales}, we plot the learned $g$ values for pairs with 100k+ examples. For all but the decoder-encoder sublayers, we observe a positive correlation between depth and $g$, giving credence to \textsc{ScaleNorm}'s inductive bias of global scaling. This trend is clearest in the decoder, where $g$ linearly scales up to the output layer, perhaps in tandem with the discriminativeness of the hidden representations \cite{LiangHL18}. We also note a negative correlation between the number of training examples and the magnitude of $g$ for attention sublayers, which may reflect overfitting.
Finally, to affirm our intuition for interpreting $g$, we plot $g$ values with and without label smoothing (\Cref{fig:scales-ls}). We see a difference in later layers of the decoder; there, removing label smoothing results in lower $g$ values except at the output layer, where $g$ increases sharply. This corresponds to the known overconfidence of translation models' logits, on which label smoothing has a downscaling effect \cite{Muller2019}.
\section{Training details}
\label{appendix:setup}
\paragraph{Data and preprocessing.} The pairs are English (en) to {Hebrew (he), Vietnamese (vi)}, and {Galician (gl), Slovak (sk), Arabic (ar)} to English (en). Because the data is already preprocessed, we only apply BPE \cite{Sennrich2016} with \texttt{fastBPE}\footnote{\url{https://github.com/glample/fastBPE}}. Depending on the data size, we use different numbers of BPE operations.
We wanted to compare with the latest low-resource works of \cite{Neubig2019, Aharoni2019} on the TED Talks corpus \cite{Qi2018-word-embeddings-nmt}. In particular, \citet{Aharoni2019} identified 4 very low-resource pairs ($<$70k); we took the two (gl\textrightarrow en, sk\textrightarrow en) that were not extremely low ($\le$6k). They then identified 4 low-resource pairs with 100k-300k examples; we took the top two (ar\textrightarrow en, en\textrightarrow he). To introduce a second English-source pair and to showcase on a well-understood task, we used the \textit{en\textrightarrow vi}\ pair from IWSLT\,\textquotesingle 15\ with an in-between number of examples (133k). In this way, we have examples of different resource levels, language families, writing directions, and English-source versus -target.
\paragraph{Model configuration.} We set the hidden dimension of the feedforward sublayer to 2048 and the rest to 512, matching \citet{NIPS2017_7181}. We use the same dropout rate for output of sublayers, ReLU, and attention weights. Additionally, we also do word dropout \cite{Sennrich2016dropout} with probability 0.1. However, instead of zeroing the word embeddings, we randomly replace tokens with \texttt{UNK}. For all experiments, we use label smoothing of 0.1 \cite{label_smoothing_1, label_smoothing_2}. The source and target's input and output embeddings are shared \cite{tiedembed}, but we mask out words that are not in the target's vocabulary at the final output layer before softmax, by setting their logits to $-\infty$.
\paragraph{Training.} We use a batch size of 4096 and optimize using Adam \cite{Kingma2014} with the default parameters $\beta_1=0.9$, $\beta_2=0.999$, $\epsilon=10^{-8}$. Gradients are clipped when global norm exceeds 1.0 \cite{Pascanu2013}. An epoch is a predefined number of iterations for each pair. We stop training when a maximum number of epochs has been met or the learning rate becomes too small ($10^{-6}$). We also do early stopping when the development BLEU has not improved for 20 evaluations. For gl\textrightarrow en, this number is 50. When doing validation-based decay, we use $\alpha_{decay} = 0.8$ and $patience = 3$. For complete data and model statistics, please refer to \Cref{tab:stats}. The best checkpoint is selected based on the development BLEU score during training.
\paragraph{Evaluation.} We report tokenized BLEU \cite{papineni-etal-2002-bleu} with \texttt{multi-bleu.perl} to be comparable with previous works. We also measure statistical significance using bootstrap resampling \cite{Koehn2004}. For WMT\,\textquotesingle 14\ English-German, note that one needs to put compounds in ATAT format\footnote{\url{https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/utils/get_ende_bleu.sh}} before calculating BLEU score to be comparable with previous works.\\
\section{Further analysis}
\label{ssec:generalization}
\input{tables/train-test-ppl.tex}
We ask if improvements from \textsc{ScaleNorm}\ on our low-resource tasks are due to improved regularization (a smaller generalization gap) or improved overall performance. We record smoothed train and test perplexities of our \textsc{PreNorm}\ models in \Cref{tab:train-test-ppl}. We see suggestive results but no conclusive trends. For ar\textrightarrow en, gl\textrightarrow en, and sk\textrightarrow en, train and test drop slightly, with test more so than train. For \textit{en\textrightarrow vi}, train perplexity increases and test perplexity decreases an equivalent amount. For en\textrightarrow he, our smallest change between \textsc{ScaleNorm}\ and \textsc{LayerNorm}, train perplexity negligibly increased and test perplexity remains the same.
\section{Listings}
See the following page.
\onecolumn
\paragraph{\textsc{ScaleNorm}.}
\small
\begin{lstlisting}[language=Python]
import torch
import torch.nn as nn
from torch.nn import Parameter

class ScaleNorm(nn.Module):
    """ScaleNorm"""
    def __init__(self, scale, eps=1e-5):
        super(ScaleNorm, self).__init__()
        self.scale = Parameter(torch.tensor(scale))
        self.eps = eps

    def forward(self, x):
        norm = self.scale / torch.norm(x, dim=-1, keepdim=True).clamp(min=self.eps)
        return x * norm
\end{lstlisting}
\normalsize
\paragraph{\textsc{fairseq}.}
\label{listing-fairseq}
We follow \textsc{fairseq}'s tutorial\footnote{\url{https://github.com/pytorch/fairseq/blob/master/examples/scaling_nmt/README.md}} and train a \textsc{PostNorm}\ Transformer base model using the following configuration:
\small
\begin{lstlisting}[language=bash]
fairseq-train \
data-bin/wmt16_en_de_bpe32k/ \
--arch transformer_wmt_en_de \
--share-all-embeddings \
--optimizer adam \
--adam-betas '(0.9, 0.98)' \
--clip-norm 1.0 \
--lr 0.001 \
--lr-scheduler inverse_sqrt \
--warmup-updates 4000 \
--warmup-init-lr 1e-07 \
--dropout 0.1 \
--weight-decay 0.0 \
--criterion label_smoothed_cross_entropy \
--label-smoothing 0.1 \
--max-tokens 8192 \
--update-freq 10 \
--attention-dropout 0.1 \
--activation-dropout 0.1 \
--max-epoch 40
\end{lstlisting}
\normalsize
For \textsc{PreNorm}, simply include the flags:
\small
\begin{lstlisting}[language=bash]
--encoder-normalize-before --decoder-normalize-before
\end{lstlisting}
\normalsize
For \textsc{ScaleNorm}, we replace all {\textsc{LayerNorm}}s in {\texttt{fairseq/models/transformer.py}} and {\texttt{fairseq/modules/transformer\_layer.py}} with \textsc{ScaleNorm}\ (implemented above). For \textsc{FixNorm}, we change the word embedding initialization to uniform with range $[-0.01, 0.01]$ and normalize with {\lstinline{torch.nn.functional.normalize}}.
We note that \textsc{fairseq}\ uses Xavier uniform initialization, which is big compared to our \textsc{SmallInit}\ (\Cref{experiment_weight_init}). We conjecture that \textsc{fairseq}\ training remains stable thanks to its large batch size, which gives more stable gradients.
\section{Background}
\subsection{Identity mappings for transformers}
\label{ssec:identity-mappings}
\keyword{Residual connections} \cite{He2016} were first introduced to facilitate the training of deep convolutional networks, where the output of the $\ell$-th layer $F_{\ell}$ is summed with its input:
\begin{equation}
{\bm{x}}_{\ell+1} = {\bm{x}}_\ell + F_{\ell}({\bm{x}}_\ell).
\end{equation}
The identity term ${\bm{x}}_\ell$ is crucial to greatly extending the depth of such networks \cite{He2016-identity-mappings}. If one were to scale ${\bm{x}}_\ell$ by a scalar $\lambda_\ell$, then the contribution of ${\bm{x}}_\ell$ to the final layer $F_L$ is $(\prod_{i=\ell}^{L-1}\lambda_i) {\bm{x}}_\ell$. For deep networks with dozens or even hundreds of layers $L$, the term $\prod_{i=\ell}^{L-1}\lambda_i$ becomes very large if $\lambda_i > 1$ or very small if $\lambda_i < 1$, for enough $i$. When backpropagating from the last layer $L$ back to $\ell$, these multiplicative terms can cause exploding or vanishing gradients, respectively. Therefore they fix $\lambda_i = 1$, keeping the total residual path an identity map.
The original Transformer applies \textsc{LayerNorm}\ after the sublayer and residual addition (\textsc{PostNorm}):
\begin{equation}\label{post-norm}
{\bm{x}}_{\ell+1} = \textsc{LayerNorm}({\bm{x}}_\ell + F_{\ell}({\bm{x}}_\ell)).
\end{equation}
We conjecture this has caused past convergence failures \cite{Popel2018, Shazeer2018}, with {\textsc{LayerNorm}}s in the residual path acting similarly to $\lambda_i \ne 1$; furthermore, warmup was needed to let \textsc{LayerNorm}\ safely adjust scale during early parts of training. Inspired by \citet{He2016-identity-mappings}, we apply \textsc{LayerNorm}\ immediately before each sublayer (\textsc{PreNorm}):
\begin{equation}\label{pre-norm}
{\bm{x}}_{\ell+1} = {\bm{x}}_\ell + F_{\ell}(\textsc{LayerNorm}({\bm{x}}_\ell)).
\end{equation}
This is cited as a stabilizer for Transformer training \cite{Chen2018, Wang2019-learning-deep-transformers} and is already implemented in popular toolkits \cite{tensor2tensor, fairseq, sockeye}, though not necessarily used by their default recipes. \citet{Wang2019-learning-deep-transformers} make a similar argument to motivate the success of \textsc{PreNorm}\ in training very deep Transformers. Note that one must append an additional normalization after both encoder and decoder so their outputs are appropriately scaled. We compare \textsc{PostNorm}\ and \textsc{PreNorm}\ throughout \Cref{sec:experiments}.
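For concreteness, the two residual orderings in Eqs.~(\ref{post-norm}) and (\ref{pre-norm}) can be written schematically as follows; here \texttt{sublayer} stands for a self-attention or feedforward block and \texttt{norm} for \textsc{LayerNorm}\ or \textsc{ScaleNorm}, with dropout omitted.
\small
\begin{lstlisting}[language=Python]
def postnorm_residual(x, sublayer, norm):
    return norm(x + sublayer(x))   # normalization sits on the residual path

def prenorm_residual(x, sublayer, norm):
    return x + sublayer(norm(x))   # residual path remains an identity map
\end{lstlisting}
\normalsize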
\subsection{Weight initialization}
\label{ssec:weight-init}
Xavier normal initialization \cite{Glorot2010} initializes a layer's weights ${\bm{W}}_{\ell} \in {\mathbb{R}}^{d_{\ell+1} \times d_{\ell}}$ ($d_{\ell}$ is the hidden dimension) with samples from a centered normal distribution with layer-dependent standard deviation:
\begin{equation}\label{xavier}
({\bm{W}}_{\ell})_{i,j} \sim \mathcal{N}\left(0, \sqrt{\frac{2}{d_{\ell} + d_{\ell+1}}}\right).
\end{equation}
Our experiments with this default initializer find that \textsc{PostNorm}\ sometimes fails to converge, especially in our low-resource setting, even with a large number of warmup steps. One explanation is that Xavier normal yields initial weights that are too large. In implementations of the Transformer, one scales the word embeddings by a large value (e.g., $\sqrt{d} \approx 22.6$ for $d=512$), giving vectors with an expected square norm of $d$. \textsc{LayerNorm}'s unit scale at initialization preserves this same effect. Since feedforward layers already have their weights initialized to a smaller standard deviation, i.e., $\sqrt{\frac{2}{d + 4d}}$, we propose reducing the attention layers' initializations from $\sqrt{\frac{2}{d + d}}$ to $\sqrt{\frac{2}{d + 4d}}$ as well (\textsc{SmallInit}), as a corresponding mitigation. We evaluate the effect of this on \textsc{PostNorm}\ vs.\ \textsc{PreNorm}\ in \Cref{experiment_weight_init}.
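In code, \textsc{SmallInit}\ amounts to the following one-line change to the attention projection initializer (a sketch; the helper name is ours):
\small
\begin{lstlisting}[language=Python]
import math
import torch.nn as nn

def small_init_(weight, d):
    # Attention projections: std sqrt(2/(d+4d)) instead of Xavier's sqrt(2/(d+d)).
    nn.init.normal_(weight, mean=0.0, std=math.sqrt(2.0 / (d + 4 * d)))
\end{lstlisting}
\normalsize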
\subsection{Scaled $\ell_2$ normalization and \textsc{FixNorm}}
\label{ssec:scaled-cosine}
\textsc{LayerNorm}\ is inspired by batch normalization \cite{Ioffe2015}, both of which aim to reduce internal covariate shift by fixing the mean and variance of activation distributions. Both have been applied to self-attention \cite{NIPS2017_7181,Kool2019}. However, \citet{Santurkar2018} show that batch normalization's success has little to do with covariate shift, but comes instead from smoothing the loss landscape. For example, they divide by the pre-centered $\ell_p$ norm instead of the variance and achieve similar or better results in image classification.
Hence, we propose replacing \textsc{LayerNorm}\ with \keyword{scaled $\ell_2$ normalization}:
\begin{equation} \label{scnorm}
\textsc{ScaleNorm}({\bm{x}}; g) = g\frac{{\bm{x}}}{\norm{{\bm{x}}}}.
\end{equation}
This can be viewed as projecting $d$-dimensional vectors onto a $(d-1)$-dimensional hypersphere with learned radius $g$. This expresses the inductive bias that each sublayer's activations have an ideal ``global scale,'' a notion we empirically validate in \Cref{ssec:g-values}. \textsc{ScaleNorm}\ replaces the $2d$ scale and shift parameters of \textsc{LayerNorm}\ with a single learned scalar, improving computational and parameter efficiency while potentially regularizing the loss landscape.
This bias has an explicit interpretation at the final layer: large inner products sharpen the output distribution, causing frequent words to disproportionately dominate rare words. This led \citet{Nguyen2018-improving-lexical-choice} to introduce $\textsc{FixNorm}({\bm{w}}) = g \frac{{\bm{w}}}{\norm{{\bm{w}}}}$ with fixed $g$ at the last linear layer, to maximize the angular difference of output representations and aid rare word translation. By making $g$ learnable, we can apply \textsc{ScaleNorm}\ and \textsc{FixNorm}\ jointly, which means applying the following at the final linear layer:
\begin{equation}
\begin{split}
(\textsc{ScaleNorm} + &\textsc{FixNorm})({\bm{x}}, {\bm{w}}; g)\\
&= g\frac{{\bm{w}} \cdot {\bm{x}}}{\norm{{\bm{w}}}\norm{{\bm{x}}}}.
\end{split}
\end{equation}
Note that this combination at the last layer is equivalent to cosine normalization \cite{Luo2017} with a learned scale.
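At the tied output layer, this combination can be sketched as computing cosine similarities between the decoder state and the normalized embedding rows, scaled by a single learned $g$; the function below is our illustration rather than the exact implementation.
\small
\begin{lstlisting}[language=Python]
import torch.nn.functional as F

def cosine_logits(x, tied_embedding, g):
    # x: (batch, d) decoder states; tied_embedding: (vocab, d) word embeddings.
    return g * F.linear(F.normalize(x, dim=-1), F.normalize(tied_embedding, dim=-1))
\end{lstlisting}
\normalsize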
\input{tables/data-model-stats.tex}
\subsection{Learning rates}
\label{ssec:learning-rate}
Despite using an adaptive optimizer, Adam \cite{Kingma2014}, Transformer training uses a learning rate (LR) schedule with a linear \keyword{warmup} and an inverse square root \keyword{decay} (\textsc{InvSqrtDecay}):
\begin{equation} \label{xmer-lr}
\text{LR}(n) = \frac{\lambda}{\sqrt{d}} \min\left(\frac{1}{\sqrt{n}}, \frac{n}{n_{\text{warmup}}^{1.5}}\right),
\end{equation}
where $d$ is the hidden dimension of the self-attention layers, and $\lambda$, $n_{\text{warmup}}$ are hyperparameters that determine the highest learning rate achieved and the number of steps to reach it, respectively. These two hyperparameters have been the subject of much empirical study \cite{Popel2018, Ott2018}. In light of our modifications however, we revisit various aspects of this schedule:
\paragraph{Warmup-free training.} We conjectured that warmup is primarily needed when using \textsc{PostNorm}\ to gradually learn \textsc{LayerNorm}\ parameters without gradient explosion/vanishing (\Cref{ssec:identity-mappings}). Hence, we evaluate both \textsc{PreNorm}\ and \textsc{PostNorm}\ without warmup in \Cref{experiment_lr}.
\paragraph{Large learning rates.} To speed up training, one often explores using larger learning rates. In the context of the Transformer, \citet{Ott2018} and \citet{Aharoni2019} take $\lambda \in \{2, 3\}$ instead of the conventional $\lambda = 1$. \citet{Ott2018} showed that one can scale up Adam's learning rate to $10^{-3}$ with an extremely large batch (400k tokens). However, the improved convergence provided by our modifications could enable higher learning rates with much smaller batch sizes (4k tokens), as examined in \Cref{experiment_lr}.
\paragraph{Validation-based decay.} For similar reasons, one might wish to adopt a classic validation-based decay, i.e., training at a high learning rate for as long as tenable, decaying rapidly when development scores flatline. This has inspired usage of fixed decay schemes upon convergence with \textsc{InvSqrtDecay}\ \cite{Dong2018, Salazar2019}. We revisit \textsc{ValDecay}\ under our modifications, where we still perform a linear warmup but then multiply by a scale $\alpha_{\text{decay}} < 1$ when performance on a development set does not improve over $patience$ evaluations.
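For reference, minimal sketches of \textsc{InvSqrtDecay}\ (Eq.~\ref{xmer-lr}) and of the \textsc{ValDecay}\ decision rule are given below; warmup-free training corresponds to starting directly from a flat rate. These are our own simplified renderings, not the exact training code.
\small
\begin{lstlisting}[language=Python]
def inv_sqrt_decay(n, d=512, lam=1.0, n_warmup=8000):
    # Step number n >= 1; linear warmup followed by inverse square root decay.
    return lam / d ** 0.5 * min(n ** -0.5, n * n_warmup ** -1.5)

class ValDecay:
    """Multiply the LR by alpha_decay when dev BLEU stalls for `patience` evals."""
    def __init__(self, lr, alpha_decay=0.8, patience=3):
        self.lr, self.alpha, self.patience = lr, alpha_decay, patience
        self.best, self.bad_evals = float('-inf'), 0

    def step(self, dev_bleu):
        if dev_bleu > self.best:
            self.best, self.bad_evals = dev_bleu, 0
        else:
            self.bad_evals += 1
            if self.bad_evals >= self.patience:
                self.lr *= self.alpha
                self.bad_evals = 0
        return self.lr
\end{lstlisting}
\normalsize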
\section{Conclusion}
In this work, we presented three simple, normalization-centric changes to the Transformer model, with a focus on NMT. First, we show that while \textsc{PostNorm}\ performs better for high-resource NMT in the original base Transformer regime, \textsc{PreNorm}\ is both more stable and more competent in low-resource settings. Second, we propose replacing \textsc{LayerNorm}\ with \textsc{ScaleNorm}, a fast and effective \keyword{scaled $\ell_2$ normalization} technique which requires only a single learned parameter. Finally, we reaffirm the effectiveness of fixing the word embedding norm (\textsc{FixNorm}). Altogether, \textsc{PreNorm}\ + \textsc{FixNorm}\ + \textsc{ScaleNorm}\ significantly improves NMT on low-resource pairs, with the latter two performing comparably in the high-resource setting, but faster.
In the future, we would like to investigate the relationship between \textsc{PostNorm}\ and \textsc{PreNorm}\ when using other optimizers such as \textsc{RAdam}\ \cite{radam}, which has been shown to improve Transformer training without warmup. We are also interested in seeing if \textsc{FixNorm}\ or \textsc{ScaleNorm}\ at the final linear layer remains effective when paired with an initialization method such as \textsc{Fixup}\ \cite{fixupinit}, which enables the training of deep neural networks without normalization. One could also explore using other $\ell_p$ norms \cite{Santurkar2018}.
\section{Experiments and results}
\label{sec:experiments}
We train Transformer models for a diverse set of five low-resource translation pairs from the TED Talks \cite{Qi2018-word-embeddings-nmt} and the IWSLT\,\textquotesingle 15\ \cite{Cettolo2015} corpora. Details are summarized in \Cref{tab:stats}. For more information motivating our choice of pairs and for exact training details, refer to \Cref{appendix:setup}.
\subsection{Large vs. small initialization} \label{experiment_weight_init}
To see the impact of weight initialization, we run training on the \textit{en\textrightarrow vi}\ dataset using warmup steps of {4k, 8k, 16k} (\Cref{tab:big_small_init}). With default initialization, \textsc{PostNorm}\ fails to converge on this dataset even with a long warmup of 16k steps, only reaching 5.76 BLEU.
\input{tables/big_small_init.tex}
\input{tables/lnorm-vs-scnorm.tex}
\input{tables/learning_rate.tex}
The second row shows that taking a smaller standard deviation on the attention weights (\textsc{SmallInit}) restores convergence to \textsc{PostNorm}. Though the $\sqrt{2/5} \approx 0.63$ adjustment used here seems marginal, operations like residual connections and the products between queries and keys can compound differences in scale. Though both models now achieve similar performance, we note that \textsc{PreNorm}\ works in all setups, suggesting greater stability during training. For all remaining experiments, we use \textsc{PostNorm}\ and \textsc{PreNorm}\ with \textsc{SmallInit}. We find this choice does not affect the performance of \textsc{PreNorm}.
\subsection{Scaled $\ell_2$ normalization and \textsc{FixNorm}}
\label{sec:experiments_scnorm}
To compare \textsc{ScaleNorm}\ and \textsc{LayerNorm}, we take 8k warmup steps for all further experiments. Since we tie the target input word embedding and the last linear layer's weight (\Cref{appendix:setup}), \textsc{FixNorm}\ is implemented by applying $\ell_2$ normalization to the word embedding, with each component initialized uniformly in $[-0.01, 0.01]$. For non-\textsc{FixNorm}\ models, word embeddings are initialized with mean 0 and standard deviation $\sqrt{1/d}$ so that each embedding vector has unit expected squared norm. All $g$'s in \textsc{ScaleNorm}\ are initialized to $\sqrt{d}$.
\Cref{tab:lnorm-scnorm} shows our results along with some published baselines. First, note that our Transformer baselines with \textsc{PostNorm}\ + \textsc{LayerNorm}\ (1) are very strong non-multilingual NMT models on these pairs. They outperform the best published numbers, which are all Transformer models in the past year, by an average margin of +4.0 BLEU. Then, we see that \textsc{PreNorm}\ (2) achieves comparable or slightly better results than \textsc{PostNorm}\ on all tasks. \textsc{FixNorm}\ (3) gives an additional gain, especially on ar\textrightarrow en\ ($p < 0.01$).
Finally, we replace \textsc{LayerNorm}\ with \textsc{ScaleNorm}\ (4). \textsc{ScaleNorm}\ significantly improves on \textsc{LayerNorm}\ for two very low-resource pairs, gl\textrightarrow en\ and sk\textrightarrow en. On the other tasks, it performs comparably to \textsc{LayerNorm}. Upon aggregating all changes, our final model with \textsc{ScaleNorm}\ and \textsc{FixNorm}\ improves over our strong baseline with \textsc{PostNorm}\ on all tasks by an average of +1.1 BLEU ($p < 0.01$), with each change contributing an average of at least +0.3 BLEU. In \Cref{ssec:g-values} and \Cref{ssec:generalization}, we further examine where the performance gains of \textsc{ScaleNorm}\ come from.
Moreover, \textsc{ScaleNorm}\ is also faster than \textsc{LayerNorm}. Recall that for each vector of size $d$, \textsc{LayerNorm}\ needs to compute mean, standard deviation, scaling, and shifting, which costs $O(7d)$ operations. For \textsc{ScaleNorm}, we only need $O(3d)$ operations to perform normalization and global scaling. This does not account for further gains due to reduction in parameters. In our implementation, training with \textsc{ScaleNorm}\ is around 5\% faster than with \textsc{LayerNorm}, similar to the speedups on NMT observed by \citet{Zhang2019}'s \textsc{RMSNorm}\ (which can be viewed as \textsc{ScaleNorm}\ with per-unit scales; see \Cref{ssec:g-values}).
\subsection{Learning rates} \label{experiment_lr}
We compare the original learning rate schedule in equation \ref{xmer-lr} (\textsc{InvSqrtDecay}) with validation-based decay (\textsc{ValDecay}), possibly with no warmup (\textsc{NoWarmup}). We use $\lambda=1$, $n_{warmup}=8\text{k}$ for \textsc{InvSqrtDecay}\ and \textsc{ValDecay}. For \textsc{NoWarmup}, we instead use a learning rate of $3 \cdot 10^{-4}$ for all datasets. For both \textsc{ValDecay}\ and \textsc{NoWarmup}, we take $\alpha_{decay}=0.8$ and $patience=3$. For experiments with high learning rate, we use either \textsc{ValDecay}\ or \textsc{InvSqrtDecay}\ with $\lambda = 2$ (giving a peak learning rate of $\approx10^{-3}$). All experiments use \textsc{PreNorm}\ + \textsc{FixNorm}\ + \textsc{ScaleNorm}.
In \Cref{tab:learning-rate}, we see that \textsc{NoWarmup}\ performs comparably to \textsc{InvSqrtDecay}\ and \textsc{ValDecay}\ except on gl\textrightarrow en. We believe that in general, one can do without warmup, though it remains useful in the lowest resource settings. In our \textsc{2$\times$LR}\ experiments, we can still attain a maximum learning rate of $10^{-3}$ without disproportionately overfitting to small datasets like gl\textrightarrow en.
One might hypothesize that \textsc{ValDecay}\ converges more quickly to better minima than \textsc{InvSqrtDecay}\ by staying at high learning rates for longer. However, both schedulers achieve similar results with or without doubling the learning rate. This may be due to the tail-end behavior of \textsc{ValDecay}\ methods, which can involve multiplicative decays in rapid succession. Finally, our \textsc{2$\times$LR}\ experiments, while not yielding better performance, show that \textsc{PreNorm}\ allows us to train the Transformer with a very high learning rate despite small batches (4k tokens).
Since \textsc{PreNorm}\ can train without warmup, we wonder if \textsc{PostNorm}\ can do the same. We run experiments on \textit{en\textrightarrow vi}\ with \textsc{NoWarmup}, varying the number of encoder/decoder layers. As seen in \Cref{tab:no-warm-post-prev}, \textsc{PostNorm}\ often fails without warmup even with 5 or 6 layers. Even at 4 layers, one achieves a subpar result compared to \textsc{PreNorm}. This reaffirms \Cref{experiment_weight_init} in showing that \textsc{PreNorm}\ is more stable than \textsc{PostNorm}\ under different settings.
\input{tables/nowarm-prev-post.tex}
\subsection{High-resource setting} \label{experiment_highres}
Since all preceding experiments were in low-resource settings, we examine if our claims hold in a high-resource setting. We train the Transformer base model on WMT\,\textquotesingle 14\ English-German using \textsc{fairseq}\ and report tokenized BLEU scores on \keyword{newstest2014}. Implementation of our methods in \textsc{fairseq}\ can be found in \Cref{listing-fairseq}.
In \Cref{tab:high-resource}, \textsc{ScaleNorm}\ and \textsc{FixNorm}\ achieve equal or better results than \textsc{LayerNorm}. Since \textsc{ScaleNorm}\ is also faster, we recommend using both as drop-in replacements for \textsc{LayerNorm}\ in all settings. Surprisingly, in this task \textsc{PostNorm}\ works notably better than \textsc{PreNorm}; one observes similar behavior in \citet{Wang2019-learning-deep-transformers}. We speculate this is related to identity residual networks acting like shallow ensembles \cite{resnetensemble} and thus undermining the learning of the longest path; further study is required.
\input{tables/highresource.tex}
\section{Introduction}
The Transformer \cite{NIPS2017_7181} has become the dominant architecture for neural machine translation (NMT) due to its train-time parallelism and strong downstream performance. Various modifications have been proposed to improve the efficiency of its multi-head attention and feedforward sublayers \cite{Guo2019,Sukhbaatar2019}. Our work focuses on \keyword{layer normalization} (\textsc{LayerNorm}) \cite{Ba2015}, which we show has an outsized role in the convergence and performance of the Transformer in two ways:
\paragraph{Placement of normalization.} The original Transformer uses \keyword{post-norm residual units} (\textsc{PostNorm}), where layer normalization occurs after the sublayer and residual addition. However, \citet{Chen2018} found that \keyword{pre-norm residual units} (\textsc{PreNorm}), where layer normalization occurs immediately before the sublayer, were instrumental to their model's performance. \citet{Wang2019-learning-deep-transformers} compare the two, showing that \textsc{PreNorm}\ makes backpropagation more efficient over depth and training Transformers with deep, 30-layer encoders.
Our work demonstrates additional consequences in the base ($\le$6-layer encoder) Transformer regime. We show that \textsc{PreNorm}\ enables warmup-free, validation-based training with large learning rates even for small batches, in contrast to past work on scaling NMT \cite{Ott2018}. We also partly reclaim \textsc{PostNorm}'s stability via smaller initializations, although \textsc{PreNorm}\ is less sensitive to this magnitude and can improve performance. However, despite \textsc{PreNorm}'s recent adoption in many NMT frameworks, we find it degrades base Transformer performance on WMT\,\textquotesingle 14\ English-German.
\paragraph{Choice of normalization.} \citet{Santurkar2018} show that batch normalization's effectiveness is not from reducing internal covariate shift, but from smoothing the loss landscape. They achieve similar or better performance with non-variance-based normalizations in image classification. Hence, we propose replacing \textsc{LayerNorm}\ with the simpler \keyword{scaled $\ell_2$ normalization} (\textsc{ScaleNorm}), which normalizes activation vectors to a \keyword{single} learned length $g$. This is both inspired by and synergistic with jointly fixing the word embedding lengths (\textsc{FixNorm}) \cite{Nguyen2018-improving-lexical-choice}. These changes improve the training speed and low-resource performance of the Transformer without affecting high-resource performance.
\par\medskip
On five low-resource pairs from the TED Talks \cite{Qi2018-word-embeddings-nmt} and IWSLT\,\textquotesingle 15\ \cite{Cettolo2015} corpora, we first train state-of-the-art Transformer models (+4.0 BLEU on average over the best published NMT bitext-only numbers). We then apply \textsc{PreNorm}, \textsc{FixNorm}, and \textsc{ScaleNorm}\ for an average total improvement of +1.1 BLEU, where each addition contributes at least +0.3 BLEU (\Cref{sec:experiments}), and attain a new 32.8 BLEU on IWSLT\,\textquotesingle 15\ English-Vietnamese. We validate our intuitions in \Cref{sec:analysis} by showing sharper performance curves (i.e., improvements occur at earlier epochs) and more consistent gradient norms. We also examine the per-sublayer $g$'s learned by \textsc{ScaleNorm}, which suggest future study.
\section{Introduction}\label{sec:intro}
With the explosive bandwidth demand of wireless devices, millimeter wave (mmWave) is considered a promising solution to the spectrum scarcity problem. mmWave provides a large spectrum in the above-$6$ GHz band, i.e., Frequency Range 2 (FR-2). In contrast to the sub-$6$ GHz band, FR-2 suffers from higher propagation losses that limit the coverage range of communication. Therefore, beamforming is used to combat mmWave losses by reshaping the beam pattern of the antenna in the direction of the user, hence achieving better power density in the direction of propagation. On the other hand, Non-Orthogonal Multiple Access (NOMA) is a promising multiple access technique for 5G and beyond-5G networks. The key idea in NOMA is to serve multiple users on the same time/frequency resources while superposing their messages in the power domain, i.e., allocating different power levels to users' signals. This superposition process relies on the relative channel gains of the users such that users with better channel gains get lower power levels, whereas users with worse channel gains get higher power levels. Successive Interference Cancellation (SIC) is applied at the users' side to remove inter-user interference. In particular, the user with the best channel decodes its message by successively decoding other users' messages and subtracting their effect from the received signal, whereas users with worse channels decode their respective signals directly \cite{6868214}.
Despite the significant spectral efficiency and capacity improvements that the aforementioned techniques bring about, several challenges hinder that performance gain. In a downlink multi-beam scenario, the coverage of beams associated with different cells might intersect, causing Inter-Beam Inter-Cell Interference (IB-ICI). Careful allocation of power to each beam, i.e., inter-beam power allocation, is essential for IB-ICI mitigation. Furthermore, the number of users covered by a single beam impacts the complexity and performance of SIC. As mentioned earlier, SIC performs successive decoding and cancellation iterations to remove inter-user interference. Therefore, increasing the number of users per beam leads to a large increase in complexity. Furthermore, the performance of SIC diminishes rapidly as the number of users increases \cite{6861434}. To prevent SIC performance degradation, and hence improve the sum rate, it is imperative to balance the load across cells through user-cell association. In parallel to advances arising from mmWave, beamforming and NOMA, there are significant efforts to make use of machine learning techniques to improve the performance of next-generation wireless networks \cite{8758918}.\\
In this paper, we address IB-ICI by using machine learning for joint user-cell association and inter-beam power allocation. In particular, we use a Q-learning algorithm that aims to enhance the sum rate of the network. Our results show that the proposed algorithm increases the achieved sum rate by at least $13\%$ at the lowest offered traffic load, with a convergence time of about $286$ ms. In addition, an increase of about $30\%$ in sum rate is achieved at the highest simulated traffic load.
This paper is organized as follows. Section \ref{sec:relWork} presents the related work. Section \ref{sec:sysModel} presents the system model and highlights the main trade-offs in maximizing the sum rate. In section \ref{sec:alg}, we provide a background on reinforcement learning and present the proposed algorithm which is based on Q-learning. Section \ref{sec:perfEval} presents the simulation setup, baseline algorithm and performance results. Finally, section \ref{sec:conclusion} concludes the paper.
\section{Related Work}\label{sec:relWork}
In \cite{8454272}, the authors aim to maximize the sum rate of a mmWave NOMA system by solving user clustering and NOMA power allocation. The authors utilize the correlation features of the user channels to develop a k-means clustering algorithm. They also derive a closed-form solution for the optimal NOMA power allocation within a cluster. The authors in \cite{8762180} aim to maximize the throughput of an ultra-dense mmWave network with multi-connectivity by solving the user-cell association problem. Using multi-label classification techniques, the authors investigate three approaches for user-cell association: binary relevance, ranking by pairwise comparison, and random k-labelsets. Besides these learning techniques, reinforcement learning has been used in resource allocation in \cite{5GforumPaper} for capacity maximization and in \cite{8781859} for latency minimization.
Beam selection and power allocation have been further studied in \cite{8782638}, where the authors formulate a mixed integer non-linear programming problem for joint beam selection and power allocation in a 5G mmWave small cell network. Due to the non-convexity of the problem, they decompose it into two sub-problems, i.e. beam selection and power allocation, and solve the former with an optimal algorithm based on cooperative games and the latter with Lagrange duality and non-cooperative games. Interference alignment techniques based on coordinated beamforming are used in \cite{7582424}, where two base stations jointly optimize their beamforming vectors to improve the downlink throughput of a multi-cell MIMO-NOMA network. In addition, the authors in \cite{Ishihara2017} address inter-cell interference mitigation in downlink multi-user MIMO for wireless LAN networks using transmit beamforming. The scheme relies on the estimation of both the power of the inter-cell interference and the channel state information to calculate the transmit beamforming weight matrix.
The authors in \cite{7961156} formulate a multi-objective optimization problem for improving throughput and minimizing energy consumption in mmWave-based ultra-dense networks. The problem aims at solving the user association and power allocation with the constraints of load balancing, quality of service requirements, energy efficiency, energy harvesting, and cross-tier interference limits between macro base station and mmWave-based stations.
The authors in \cite{8536429} consider a massive 5G femtocell network and design a clustering scheme to group femtocells and femto-users having the highest probabilities of line-of-sight connectivity. Inside each cluster, joint user-association and resource allocation is performed. The authors use the difference of two convex functions to solve the clustering problem and subproblems of mixed integer non-linear programming to solve the user-cell association and power allocation. The authors in \cite{7802615} address the problem of user clustering, beamforming and NOMA power allocation in a downlink multi-user MIMO system. In particular, the authors propose a multi-clustering zero-forcing beamforming algorithm to mitigate inter-cluster interference and maximize the sum spectral efficiency. Furthermore, they provide a dynamic power allocation solution to perform inter-cluster and intra-cluster power allocation. In \cite{7084118}, the authors consider coordinated multi-cell downlink beamforming in massive MIMO systems with the aim of minimizing the aggregate transmit power of all base stations subject to Signal to Interference plus Noise Ratio (SINR) constraints. They propose two algorithms: a decentralized coordinated beamforming algorithm using tools from random matrix theory, and a heuristic extension with a base station transmit power constraint. Finally, \cite{Attiah2019} presents a comprehensive survey on user-cell association and resource allocation in mmWave networks.
Unlike previous work, in this paper, we address inter-beam inter-cell and intra-beam interference using joint user-cell association and inter-beam power allocation. The main objective of our work is to improve network sum rate. To achieve this, we propose an online Q-learning algorithm with an action space of user-cell association and inter-beam power allocation. The proposed algorithm updates its decisions each scheduling interval, taking into account the interference among cells inferred from estimated SINR values at the UEs.
\section{System Model}\label{sec:sysModel}
\textit{\textbf{Notations:}} In the remainder of this paper, bold face lower case characters denote column vectors, while non-bold characters denote scalar values. The operators $(.)^T$, $(.)^H$ and $|.|$ correspond to the transpose, the Hermitian transpose, and the absolute value, respectively. The operator $(A)^-$ under set $B$ represents the absolute complement of $A$, i.e. $(A)^- = B \setminus A$.
Consider a downlink mmWave-NOMA system with $J \in$ {\Fontauri\bfseries J} 5G-NodeBs (gNBs) equipped with $M \in$ {\Fontauri\bfseries M} transmit antennas and $U \in$ {\Fontauri\bfseries U} single-antenna users. Furthermore, users are partitioned into different clusters, $k \in$ {\Fontauri\bfseries K}, that are served using different beams such that {\Fontauri\bfseries U}$_k$ is the set of users covered by $k^{th}$ beam. Let {\Fontauri\bfseries K}$_j$ be the set of beams of $j^{th}$ gNB. Henceforth, we use cluster and beam interchangeably. Indeed, different beams of different cells can have coverage intersection as shown in Fig. \ref{fig:sysModel}. Such intersection gives rise to Inter-Beam Inter-Cell Interference (IB-ICI). IB-ICI mitigation is essential in order to maximize network rate.
In this paper, a Poisson Cluster Process (PCP) is used to model users' deployment in the network, where the parent process follows a uniform distribution and the users of a cluster are uniformly deployed within a circular disk of radius $R_k$ around the cluster center. Every gNB performs a clustering algorithm to group users that can be covered by a single beam. Under every beam, downlink NOMA power allocation is used to multiplex users in the power domain, whereas users apply SIC to demodulate their respective signals. We employ the k-means clustering algorithm and the closed-form NOMA power allocation proposed in \cite{8454272}. In particular, k-means is used to cluster users according to the correlation of their wireless channel properties, i.e. users with correlated channels are more likely to be located close to each other.
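As a simplified illustration of this grouping step (not the exact correlation-based procedure of \cite{8454272}), users can be clustered on normalized channel features as follows.
\begin{lstlisting}[language=Python]
# Sketch: group users whose channel directions are highly correlated.
import numpy as np
from sklearn.cluster import KMeans

def cluster_users(H, n_beams):
    # H: (n_users, M) complex channel matrix for one gNB.
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    features = np.concatenate([Hn.real, Hn.imag], axis=1)
    return KMeans(n_clusters=n_beams, n_init=10).fit_predict(features)
\end{lstlisting}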
\begin{figure}
\centering
\includegraphics[scale=0.35]{networkModel.eps}
\caption{System model of mmWave network using beamforming.}
\label{fig:sysModel}
\end{figure}
In mmWave channels, the gain of the Line-of-Sight (LoS) path is significantly larger than the gain of the Non-LoS (NLoS) path, i.e. by around $20$ dB \cite{8454272}, hence the mmWave channel model can be simplified to a single-path LoS model as follows:
\begin{equation}
\boldsymbol{h}_{k,u,j} = \boldsymbol{a}(\theta_{k,u,j}) \frac{\alpha_{k,u,j}}{\sqrt{L}(1 + d_{u,j}^{\eta})},
\end{equation}
where $L$ is the number of paths, $\boldsymbol{h}_{k,u,j} \in \mathbb{C}^{M \times 1}$ is the complex channel coefficient vector of $u^{th}$ user and $j^{th}$ gNB on $k^{th}$ beam, i.e. link $(u,k,j)$, $\alpha_{k,u,j} \sim \mathcal{CN}(0, \sigma^2)$ is the complex gain, $d_{u,j}$ is the distance of link $(u,j)$, and $\eta$ is the pathloss exponent. In addition, $\boldsymbol{a}(\theta_{k,u,j})$ is the steering vector, which can be represented as follows:
\begin{equation}
\boldsymbol{a}(\theta_{k,u,j}) = [1, e^{-j 2 \pi \frac{D}{\lambda} \sin(\theta_{k,u,j})}, ..., e^{-j 2 \pi (M-1) \frac{D}{\lambda} \sin(\theta_{k,u,j})}]^T,
\end{equation}
where $D$ is the gNB's antenna spacing, $\lambda$ is the wavelength, $\theta_{k,u,j}$ is the Angle of Departure (AoD).
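A standalone numerical sketch of this single-path LoS model and steering vector is given below (not taken from the paper's implementation); all parameter values are illustrative.
\begin{lstlisting}[language=Python]
import numpy as np

def steering_vector(theta, M, spacing_over_lambda=0.5):
    # Uniform linear array response a(theta) with antenna spacing D/lambda.
    m = np.arange(M)
    return np.exp(-1j * 2 * np.pi * m * spacing_over_lambda * np.sin(theta))

def los_channel(theta, dist, M, eta=2.0, L=1):
    alpha = (np.random.randn() + 1j * np.random.randn()) / np.sqrt(2)  # CN(0, 1)
    return steering_vector(theta, M) * alpha / (np.sqrt(L) * (1 + dist ** eta))
\end{lstlisting}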
\subsection{Problem Analysis}
In this work, we aim to improve the sum rate in mmWave network by performing user-cell association and inter-beam power allocation. In particular, sum rate can be calculated as follows:
\begin{equation}
R_{sum} = \omega \sum\limits_{j \in \text{\Fontauri\bfseries J}} \sum\limits_{k \in \text{\Fontauri\bfseries K}_j} \sum\limits_{u \in \text{\Fontauri\bfseries U}_k} \log_2(1 + \Gamma_{k,u,j}),
\label{eq:sumRate}
\end{equation}
where $\omega$ is the bandwidth, and $\Gamma_{k,u,j}$ is the SINR of $(u,k,j)^{th}$ link, which can be expressed as:
\begin{equation}
\Gamma_{k,u,j} = \frac{P_{k,j} \beta_{k,u,j} |\boldsymbol{h}_{k,u,j}^H \boldsymbol{w}_{k,j}|^2}{I_1 + I_2 + \sigma^2},
\label{eq:sinr}
\end{equation}
\begin{equation}
I_1 = P_{k,j} |\boldsymbol{h}_{k,u,j}^H \boldsymbol{w}_{k,j}|^2 \sum\limits_{\substack{i \neq u \\ O(i) > O(u)}} \beta_{k,i,j},
\label{eq:inter1}
\end{equation}
\begin{equation}
I_2 = \sum\limits_{l \in \text{\Fontauri\bfseries K}_{(j)^-}} P_l |\boldsymbol{h}_{l,u,(j)^-}^H \boldsymbol{w}_{l,(j)^-}|^2,
\label{eq:inter2}
\end{equation}
where $P_{k,j}$ denotes the power allocated to $k^{th}$ beam of $j^{th}$ gNB, and $P_l$ is the power allocated to $l^{th}$ interfering beam. $\beta_{k,u,j}$ and $\beta_{k,i,j}$ are the power allocation factors of the $(k,u,j)^{th}$ and $(k,i,j)^{th}$ links, respectively. $\boldsymbol{w}_{k,j}$ is the beamforming vector, and $\sigma^2$ represents the receiver's noise variance. The setup shown in Fig. \ref{fig:sysModel} presents three types of interference: intra-beam interference, IB-ICI, and inter-beam interference. In this paper, different beams are allocated different spectrum bands, hence inter-beam interference becomes void. With NOMA power allocation, users sharing the same time/frequency resources are multiplexed in the power domain. This incurs intra-beam interference as expressed in Eq. \ref{eq:inter1}. IB-ICI is expressed in Eq. \ref{eq:inter2}. $O(u)$ denotes the decoding order of $u^{th}$ user, whereas $\text{\Fontauri\bfseries K}_{(j)^-}$ denotes the set of beams that belong to the absolute complement of $j$ under set {\Fontauri\bfseries J}, i.e. $((j)^- = \text{\Fontauri\bfseries J} \setminus j)$. Finally, $\boldsymbol{h}_{l,u,(j)^-}$ represents the channel vector between the $l^{th}$ interfering beam from other cells in the set $(j)^-$ and $u^{th}$ user, and $\boldsymbol{w}_{l,(j)^-}$ is the beamforming vector of $l^{th}$ interfering beam.
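For one user, the SINR in Eqs. (\ref{eq:sinr})--(\ref{eq:inter2}) can be evaluated as in the following sketch (ours), where the intra-beam term aggregates the NOMA power fractions of co-beam users that are not removed by SIC.
\begin{lstlisting}[language=Python]
import numpy as np

def sinr(h_serv, w_serv, P_beam, beta_u, beta_residual,
         h_int, w_int, P_int, noise_var):
    # beta_residual: sum of power fractions of co-beam users i with O(i) > O(u).
    # h_int, w_int, P_int: per-interfering-beam channels, beamformers and powers.
    gain = np.abs(np.vdot(h_serv, w_serv)) ** 2          # |h^H w|^2
    signal = P_beam * beta_u * gain
    I1 = P_beam * gain * beta_residual                   # intra-beam interference
    I2 = sum(P_l * np.abs(np.vdot(h_l, w_l)) ** 2        # IB-ICI
             for h_l, w_l, P_l in zip(h_int, w_int, P_int))
    return signal / (I1 + I2 + noise_var)
\end{lstlisting}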
\section{Proposed Machine Learning Approach}\label{sec:alg}
\subsection{Background on Q-learning}
Q-learning is a sub-class of reinforcement learning algorithms. Reinforcement learning refers to agent-oriented learning, in which an agent interacts with an environment towards achieving a certain goal. In particular, the agent aims to learn the dynamics of the environment through trial and error. In response to its actions, the agent receives quantitative feedback, representing either a reward or a cost, and the environment's state changes. Indeed, such a setup can be represented as a Markov Decision Process (MDP) with a tuple of (agents, states, actions, and reward function).
The ultimate goal of the agent is to maximize the total expected future discounted rewards. To achieve that, the agent aims to find a policy that quantifies the optimal action decision in each state. This can be done using an action-value function as follows:
\begin{equation}
q_{\pi}(s, a) = \mathbb{E}_{\pi}[R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3}... | S_t = s, A_t = a],
\end{equation}
where $q_{\pi}(s, a)$ is the quality value, i.e. Q-value, of policy $\pi$ when starting at $s^{th}$ state and taking $a^{th}$ action. In particular, the optimal Q-values can be computed using a brute-force method. However, in order to facilitate online learning, an iterative algorithm, Q-learning, is used to approximate Q-values each iteration. Q-learning is a temporal difference method that uses the following update to approximate an agent's policy:
\begin{equation}
q(s, a) \gets q(s, a) + \alpha [R + \gamma \max\limits_a q(s', a) - q(s, a)],
\label{eq:qvalue}
\end{equation}
where $R$ is the reward value, and $\max\limits_a q(s', a)$ computes an approximate of the Q-value at the next state $s'$.
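A tabular sketch of the update in Eq. (\ref{eq:qvalue}) with $\epsilon$-greedy exploration, as each gNB agent would run it, is given below; the joint user-association/beam-power action is abstracted into a single integer index here, and the helper names are ours.
\begin{lstlisting}[language=Python]
import numpy as np

def q_update(Q, s, a, reward, s_next, alpha=0.5, gamma=0.9):
    # Temporal-difference update of the Q-table entry (s, a).
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])

def choose_action(Q, s, eps=0.1, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() <= eps:
        return int(rng.integers(Q.shape[1]))   # explore
    return int(Q[s].argmax())                  # exploit
\end{lstlisting}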
In the next section, we present the proposed Q-learning algorithm to perform joint user-cell association and inter-beam power allocation for maximizing network's sum rate.
\subsection{Proposed Q-learning Algorithm}
We define an online multi-agent Q-learning algorithm as follows:
\begin{itemize}
\item \textbf{Agents:} gNBs.
\item \textbf{Actions:} Each gNB decides on its user associations and inter-beam power allocation. The user-cell association is performed only for users that lie in the intersection region of two or more cells. Let {\Fontauri\bfseries U}$_j^{int}$ be the set of users of $j^{th}$ gNB that lie in its intersection region with other cells. The vector of actions is defined as $\boldsymbol{a}_j = [\boldsymbol{\delta_{j}}, \boldsymbol{P_j}; j \in \text{\Fontauri\bfseries J}]$, where $\boldsymbol{a}_j \in A_j$. The vector $\boldsymbol{\delta_{j}} = [\delta_{j,i}, i \in \text{\Fontauri\bfseries U}_j^{int}]$ represents a binary vector of user-cell association where each element indicates whether the gNB decides to associate the $i^{th}$ user, $\delta_{j,i} = 1$, or not, $\delta_{j,i} = 0$. Furthermore, $\boldsymbol{P_j} = [P_{j,k}, k \in \text{\Fontauri\bfseries K}_j]$ represents a vector that defines the power allocated to each beam of $j^{th}$ gNB. As such, the size of the action-space becomes $2^{|\text{\Fontauri\bfseries U}_j^{int}|} \times N_p^{|\text{\Fontauri\bfseries K}_j|}$, where $N_p$ is the number of power levels available for each beam.
\item \textbf{States:} We define the states in terms of the average SINR which reflects the level of interference in the wireless environment:
\begin{equation}
S_{j} =
\begin{cases}
S_0 \hspace{10pt} \overline{\Gamma}_j \geq \Gamma_{th}, \\
S_1 \hspace{10pt} otherwise,
\end{cases}
\label{eq:states}
\end{equation}
where $j^{th}$ gNB, i.e. agent, transitions to state $S_0$ as long as its average SINR, $\overline{\Gamma}_j$, is greater than a threshold value, $\Gamma_{th}$, and transitions to $S_1$ otherwise. The average SINR of $j^{th}$ gNB is defined as follows:
\begin{equation}
\overline{\Gamma}_j = \frac{1}{(K \times U)} \sum\limits_{k \in \text{\Fontauri\bfseries K}_j} \sum\limits_{u \in \text{\Fontauri\bfseries U}_k} \Gamma_{k,u,j}
\end{equation}
\item \textbf{Reward:} We formulate the reward function based on SINR as follows:
\begin{equation}
R_j =
\begin{cases}
1 \hspace{10pt} \overline{\Gamma}_j \geq \Gamma_{th}, \\
-1 \hspace{10pt} otherwise,
\end{cases}
\label{eq:reward}
\end{equation}
\end{itemize}
\begin{algorithm}
\begin{algorithmic}[1]
\STATE \underline{\textbf{Initialization:}} Q-table $\gets$ 0, $\alpha$, $\gamma$, and $\epsilon$.
\FOR{scheduling assignment period $t$ = 1 to $T$}
\STATE \underline{\textbf{Step 1:}} Receive SINR estimations from attached users.
\STATE \underline{\textbf{Step 2:}} Perform Q-learning algorithm for joint user-cell association and inter-beam power allocation:
\bindent
\STATE Compute average SINR of users in the intersection region.
\STATE Update reward as in Eq. (\ref{eq:reward}).
\STATE Update Q-value (and Q-table) as in Eq. (\ref{eq:qvalue}).
\STATE Switch to state $s'$ as in Eq. (\ref{eq:states}).
\IF{rand $\leq \epsilon$}
\bindent
\STATE $\boldsymbol{a}_j \gets \textit{draw uniformly from } A_j$
\eindent
\ELSE
\bindent
\STATE $\boldsymbol{a}_j = \max\limits_{a \in A_j} q(s', a)$
\eindent
\ENDIF
\eindent
\STATE \underline{\textbf{Step 3:}} Downlink transmission of user-cell association decisions to each UE.
\STATE \underline{\textbf{Step 4:}} Wait UEs to perform final user-cell association decisions as in Algorithm \ref{alg:propAlg2}.
\STATE \underline{\textbf{Step 5:}} Receive final user-cell association decisions from UEs.
\STATE \underline{\textbf{Step 6:}} Perform k-means clustering and NOMA intra-beam power allocation.
\STATE \underline{\textbf{Step 7:}} Perform downlink transmission, while each user performs downlink reception using SIC.
\ENDFOR
\end{algorithmic}
\caption{Proposed Q-learning algorithm for joint user-cell association and inter-beam power allocation (gNB)}
\label{alg:propAlg}
\end{algorithm}
\begin{algorithm}[t]
\begin{algorithmic}[1]
\FOR{scheduling assignment period $t$ = 1 to $T$}
\STATE \underline{\textbf{Step 1:}} Receive association decisions from gNBs.
\STATE \underline{\textbf{Step 2:}} Update priority list, i.e. maintain gNBs that decided to associate and remove gNBs that decided not to associate with the UE.
\STATE \underline{\textbf{Step 3:}} Select the gNB with the highest priority on the list to associate with and send the final decision to the selected gNB.
\ENDFOR
\end{algorithmic}
\caption{User-cell association (UE)}
\label{alg:propAlg2}
\end{algorithm}
\textit{Algorithm \ref{alg:propAlg}} presents the steps performed by each gNB, whereas \textit{Algorithm \ref{alg:propAlg2}} presents the steps performed by each user. The user-cell association process involves a Q-learning part at the gNB's side and a priority list at the user's side. In particular, the user maintains a priority list of the gNBs to associate with, which is computed according to the SINR estimation in the last transmission interval. Afterwards, each gNB performs the Q-learning algorithm which results in an association decision for each user in the intersection region, and each user is informed about that decision. Finally, each user follows \textit{Algorithm \ref{alg:propAlg2}} to combine the decisions from gNBs with its priority list and informs the selected gNB.
\section{Performance Evaluation}\label{sec:perfEval}
\subsection{Simulation Setup}
We use the 5G MATLAB Toolbox to construct a discrete-event simulator. The simulator works at the TTI level with 5G downlink transmission and reception. Table \ref{tab:simSettings} presents the simulation settings. The network is composed of two gNBs with an inter-gNB distance of $150$ m. Users are stationary and their positions follow a PCP with $\lambda = 7$. We consider $2$ clusters with a cluster radius of $30$ m. The performance of the proposed algorithm is tested under several traffic loads. The number of users in the intersection region is $2$, the number of power levels is $5$, the number of clusters is $2$, and the number of states is $2$. Hence, the size of the action space becomes $2^2 \times 5^2 = 100$ and the size of the Q-table is $2^2 \times 5^2 \times 2 = 200$. In addition, we employ k-means clustering and the closed-form NOMA power allocation proposed in \cite{8454272} as a base for our implementation.
The proposed algorithm is compared to a baseline algorithm that heuristically performs user-cell association and inter-beam power allocation. In particular, the baseline algorithm performs user-cell association by constructing a priority list of gNBs ordered according to SINR. Afterwards, users associate with the gNB with the highest priority on the list. In addition, power allocation is performed by equally dividing the total power of a cell among its beams.
\subsection{Performance Results}
In this section, we present performance results of the proposed Q-learning algorithm compared with the baseline algorithm, Uniform Power Allocation (UPA), in terms of sum rate, latency, and Packet Drop Rate (PDR).
Fig. \ref{fig:sumRate} presents the network sum rate versus the total offered load. The figure shows that the proposed scheme outperforms UPA in all cases, with a rate increase of $13\%$ and $33\%$ at the lowest and highest offered loads, respectively. In addition, Fig. \ref{fig:sumrateUsers} presents the network sum rate versus the total number of users in the network. The figure shows that Q-learning is able to maintain a sum rate close to the total offered load (which is set to $0.5$ Mbps for the presented case) when increasing the number of users in the network, whereas UPA achieves a lower sum rate. The packet drop rate is presented in Fig. \ref{fig:avgPDR}, where both algorithms achieve very comparable PDRs (around $10$--$11\%$).
\begin{table}[htp]
\centering
\caption{5G mmWave Network Simulation Settings}
\begin{tabular}{|l|l|}
\hline
\textbf{\underline{5G PHY configuration}} & \\
Bandwidth & $20$ MHz \\
Carrier frequency & $30$ GHz \cite{8782638}\\
Subcarrier spacing & $15$ kHz \\
Subcarriers per resource block & $12$ \\
TTI size & $2$ OFDM symbols ($0.1429$ msec) \\
Max transmission power & $28$ dBm \\
\hline
\textbf{\underline{HARQ}} & \\
Type & Asynchronous HARQ \\
Round trip delay & $4$ TTIs \\
Number of processes & $6$ \\
Max. number of re-transmissions & $1$ \\
\hline
\textbf{\underline{Distribution of users}} & \\
Mobility & Stationary \\
Distribution & Poisson Cluster Process (PCP) \\
PCP Average number of users & $7$ \\
Number of clusters & $2$ \\
Radius of cluster & $30$ m \\
Number of users & $4-16$ \\
Number of gNBs & $2$ \\
Inter-gNBs distance & $150$ m \cite{8782638} \\
\hline
\textbf{\underline{Traffic}} & \\
Distribution & Poisson \\
Packet size & $32$ Bytes \\
\hline
\textbf{\underline{Q-learning}} & \\
Learning rate $(\alpha)$ & $0.5$ \\
Discount factor $(\gamma)$ & $0.9$ \\
Exploration probability $(\epsilon)$ & $0.1$ \\
Inter-beam power levels & $[0:2:8]$ dBm \\
Threshold SINR ($\Gamma_{th}$) & 20 dB \\
\hline
\textbf{\underline{Simulation parameters}} & \\
Simulation time & $4000$ TTI \\
Number of runs & $40$ \\
Confidence interval & $95\%$ \\
\hline
\end{tabular}
\label{tab:simSettings}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.3]{sumRate.eps}
\caption{Sum rate [Mbps] versus total offered load [Mbps]. Number of users is 9.}
\label{fig:sumRate}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.3]{sumrateUsers.eps}
\caption{Sum rate [Mbps] versus total number of users with $0.5$ Mbps total offered load.}
\label{fig:sumrateUsers}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[scale=0.3]{avgPDR.eps}
\caption{Average packet drop rate [\%] versus total offered load [Mbps]. Number of users is 9.}
\label{fig:avgPDR}
\end{figure}
Furthermore, Fig. \ref{fig:avgLatency} shows the empirical Complementary Cumulative Distribution Function (eCCDF) of the average achieved latency. Latency is defined as the delay of a packet from its creation at the gNB until its delivery at the user side. This includes queuing, transmission, and propagation delays. The processing at both ends, i.e., gNB and user, includes the RLC, MAC, and PHY layers. The figure shows that both algorithms achieve similar latency values at different offered loads. The figure also shows three main latency points: $0.1429$ ms, $0.2857$ ms, and $0.4286$ ms, which correspond to 1, 2, and 3 TTIs, respectively, where 1 TTI represents 2 OFDM symbols. In particular, queuing and re-transmission delays contribute to the total achieved delay \cite{medhatelsayed1}. By improving interference, i.e., SINR, the re-transmission delay, and hence the total latency, improves.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.3]{avgLatency.eps}
\caption{Average Latency [ms] versus Total Offered Load [Mbps].}
\label{fig:avgLatency}
\end{figure}
Finally, Fig. \ref{fig:convergenceAll} shows the average cumulative reward versus the iteration number. The $\epsilon$-greedy action selection methodology, presented on lines 9-13 of Algorithm \ref{alg:propAlg}, is applied for $2000$ TTIs, whereas a greedy policy is followed afterwards. The proposed algorithm converges at around the $2500^{th}$ TTI, with a slight decrease of the reward around the $500$--$600^{th}$ TTI due to the exploration policy.
\begin{figure}[t!]
\centering
\includegraphics[scale=0.3]{convergenceAll.eps}
\caption{Cumulative Average of Q-learning's Reward versus Iteration Number with different Total Offered Load.}
\label{fig:convergenceAll}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
In this paper, we presented a machine learning algorithm to address the joint problem of user-cell association and inter-beam power allocation in 5G mmWave networks. The proposed algorithm aims at improving the sum rate by mitigating the intra-beam interference and the inter-beam inter-cell interference. On one hand, the algorithm performs inter-beam power allocation such that it balances the interference posed by beams of adjacent cells. On the other hand, the algorithm performs user-cell association such that it balances users' attachments across cells, hence improving the performance of successive interference cancellation. The proposed algorithm is designed as an online Q-learning algorithm and compared with a baseline algorithm that uses uniform power allocation. Simulation results reveal the ability of the proposed algorithm to improve the sum rate.
\section*{Acknowledgment}
This research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) under Canada Research Chairs Program and CREATE program.
\bibliographystyle{ieeetr}
\section{INTRODUCTION}
\label{sec:intro}
\input{section/intro_and_related_works}
\section{SENSOR DESIGN}
\label{sec:sensor_design}
\input{section/sensor_design}
\section{MODEL-BASED APPROACH}
\label{sec:model_based}
\input{section/model_based_approach}
\section{DEEP-LEARNING BASED APPROACH}
\label{sec:learning_based}
\input{section/learning_based_approach}
\section{EXPERIMENTAL EVALUATION}
\label{sec:results}
\input{section/results}
\section{CONCLUSION}
\label{sec:conclusion}
\input{section/conclusion}
\section*{ACKNOWLEDGMENT}
This work was funded by the Air Force Office of Scientific Research MURI FA9550-19-1-0386 and by Ford Motor Company. The authors would like to thank Parker Lusk for his help in the system setup.
\balance
\bibliographystyle{IEEEtran}
\subsection{\ac{MAV} dynamic model}
We consider a \ac{MAV} of mass $m$ and inertia tensor $\mathbf{J}$, and the dynamic equations of the robot can be written as
\begin{equation}
\begin{split}
\prescript{}{W}{\dot{\mathbf{p}}} = & \prescript{}{W}{\mathbf{v}} \\
\dot{\mathbf{R}}_{W}^{B} = & \mathbf{R}_{W}^{B}[\prescript{}{B}{\boldsymbol{\omega}}\times] \\
m\prescript{}{W}{\dot{\mathbf{v}}} = & \mathbf{R}_{W}^{B} \prescript{}{B}{\mathbf{f}}_{\text{cmd}} + \prescript{}{W}{\mathbf{f}_\text{drag}} + m \prescript{}{W}{\mathbf{g}} + \prescript{}{W}{\mathbf{f}}_\text{touch} \\
\mathbf{J} \prescript{}{B}{\dot{\boldsymbol{\omega}}} = & -\prescript{}{B}{\boldsymbol{\omega}} \times \mathbf{J} \prescript{}{B}{\boldsymbol{\omega}} + \prescript{}{B}{\boldsymbol{\tau}}_{\text{cmd}} \\
\end{split}
\label{eq:mav_dynamic_model}
\end{equation}
where $\mathbf{p}$ and $\mathbf{v}$ represent the position and velocity of the MAV, respectively, $\mathbf{R}_{W}^{B}$ is the rotation matrix representing the attitude of the robot (i.e., such that a vector $\prescript{}{W}{\mathbf{p}} = \mathbf{R}_{W}^{B} \prescript{}{B}{\mathbf{p}}$), and $[\times]$ denotes the skew-symmetric cross-product matrix operator.
The vector $\prescript{}{B}{\mathbf{f}}_{\text{cmd}} = \prescript{}{B}{\mathbf{e}}_3 f_{\text{cmd}}$ is the thrust force produced by the propellers along the $z$-axis of the body frame, $\prescript{}{W}{\mathbf{g}} = -\prescript{}{W}{\mathbf{e}}_3 g$ is the gravitational acceleration, and $\prescript{}{W}{\mathbf{f}}_\text{touch}$ is the interaction force expressed in the inertial frame.
For simplicity we have assumed that interaction and aerodynamic disturbances do not cause any torque on the \ac{MAV}, due to its symmetric shape and the fact that interactions (in our hardware setup) can only safely happen in proximity of the center of mass of the robot.
Vector $\prescript{}{B}{\boldsymbol{\tau}}_{\text{cmd}}$ represents the torque generated by the propellers and $\prescript{}{B}{\boldsymbol{\omega}}$ the angular velocity of the MAV, both expressed in the body reference frame.
Here $\mathbf{f}_\text{drag}$ is the aerodynamic drag force on the robot, expressed as an isotropic drag \cite{tagliabue2019model}
\begin{equation}
\begin{split}
\prescript{}{W}{\mathbf{f}}_\text{drag} = & (\mu_1 \text{v}_\infty + \mu_2 \text{v}_\infty^2)\prescript{}{W}{\mathbf{e}}_{\text{v}_\infty} = f_\text{drag} \prescript{}{W}{\mathbf{e}}_{\text{v}_{\infty}} \\
& \prescript{}{W}{\mathbf{e}}_{\text{v}_{\infty}} = \frac{\prescript{}{W}{\mathbf{v}}_\infty}{\text{v}_\infty}, \quad \text{where} \quad \text{v}_\infty = \mnorm{\prescript{}{W}{\mathbf{v}}_\infty},
\label{eq:drag_force}
\end{split}
\end{equation}
$\prescript{}{W}{\mathbf{v}}_\infty$ is the velocity vector of the relative airflow acting on the \ac{CoM} of the \ac{MAV} (expressed in the inertial frame)
\begin{equation}
\prescript{}{W}{\mathbf{v}}_\infty = \prescript{}{W}{\mathbf{v}}_\text{wind} - \prescript{}{W}{\mathbf{v}},
\label{eq:relative_airflow}
\end{equation}
and $\prescript{}{W}{\mathbf{v}}_\text{wind}$ is the velocity vector of the wind expressed in the inertial frame.
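As an illustration, the isotropic drag model of \cref{eq:drag_force} and \cref{eq:relative_airflow} can be evaluated with the short NumPy sketch below; the default coefficients are the ones identified in \cref{subsec:sysid}, and the function name is merely illustrative.
\begin{verbatim}
import numpy as np

def drag_force_world(v_wind_W, v_W, mu1=0.20, mu2=0.07):
    # Isotropic drag force in the world frame (cf. eq:drag_force and
    # eq:relative_airflow). mu1, mu2 default to the identified values;
    # replace them with your own identified coefficients.
    v_inf = np.asarray(v_wind_W) - np.asarray(v_W)  # airflow at the CoM
    speed = np.linalg.norm(v_inf)
    if speed < 1e-9:
        return np.zeros(3)
    e_v = v_inf / speed                             # airflow unit vector
    return (mu1 * speed + mu2 * speed**2) * e_v
\end{verbatim}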
\begin{figure}
\vspace*{.1in}
\centering
\includegraphics[width=1\columnwidth]{figs/model-based-paper.pdf}
\caption{Diagram of the most important signals used by each step of the proposed model-based approach for simultaneous estimation of wind, drag force, and interaction force.}
\label{fig:model_based}
\end{figure}
\subsection{Airflow sensor model}\label{subsec:sensor_model}
We consider the $i$-th airflow sensor to be rigidly attached to the body reference frame $B$, with $i=1,\ldots,N$. The reference frame of each sensor is translated with respect to $B$ by a vector $\prescript{}{B}{\mathbf{r}}_{S_i}$ and rotated according to the rotation matrix $\mathbf{R}_{S_i}^B$.
To derive a model of the whiskers subject to aerodynamic drag, we make the following assumptions. Each whisker is massless; its tilt angle is not significantly influenced by the accelerations from the base $B$ (due to the high stiffness of its spring and the low mass of the fins), but is subject to the aerodynamic drag force $\mathbf{f}_{\text{drag}, i}$.
We further assume that each sensor can be modeled as a stick hinged at the base via a linear torsional spring. Each sensor outputs the displacement angle $\theta_{x,i}$ and $\theta_{y,i}$, which correspond to the rotation of the stick around the $x$ and $y$ axis of the $S_i$ reference frame.
We can then express the aerodynamic drag force acting on the aerodynamic surface of each sensor, $\prescript{}{S_i}{\mathbf{f}}_{\text{drag}, i}$, as a function of the (small) angular displacement
\begin{equation}
\mathbf{S}_{x,y}\prescript{}{S_i}{\mathbf{f}}_{\text{drag}, i} \approx
\begin{bmatrix}
0 & l_i k_i \\
- l_i k_i & 0 \\
\end{bmatrix}
\begin{bmatrix}
\theta_{x,i} \\
\theta_{y,i} \\
\end{bmatrix}
= \mathbf{K}_i \boldsymbol{\theta}_i
\label{eq:model:drag_and_spring}
\end{equation}
where $k_i$ represents the stiffness of the torsional spring, $l_i$ the length of the sensor, and
\begin{equation}
\mathbf{S}_{x,y} =
\begin{bmatrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
\end{bmatrix}
\end{equation}
captures
the assumption that the aerodynamic drag acting on the $z$-axis of the sensor is small (given the fin shapes) and has a negligible effect on the sensor deflection.
We now consider the aerodynamic force acting on a whisker. Assuming a non-isotropic drag force, proportional to the square of the relative airflow velocity, we obtain
\begin{equation}
\begin{split}
\prescript{}{S_i}{\mathbf{f}}_{\text{drag}, i} = &\frac{\rho}{2}c_{D,i}\mathbf{A}_i
\norm{\prescript{}{S_i}{\mathbf{v}}_{\infty,i}}\prescript{}{S_i}{\mathbf{v}}_{\infty,i}
\end{split}
\label{eq:model:whisker_drag}
\end{equation}
where $\rho$ is the density of the air, $c_{D,i}$ is the aerodynamic drag coefficient of the $i$-th sensor, and $\mathbf{A}_i = \text{diag}([a_{xy,i}, a_{xy,i}, a_{z}]^\top)$ contains the aerodynamic cross-sections along each axis. Due to the small vertical surface of the fin of the sensor, we assume $a_z = 0$. The vector $\prescript{}{S_i}{\mathbf{v}}_{\infty,i}$ is the velocity of the relative airflow experienced by the $i$-th whisker, expressed in the $i$-th whisker reference frame, and can be obtained as %
\begin{equation}
\prescript{}{S_i}{\mathbf{v}}_{\infty,i} = {\mathbf{R}_{B}^{S_i}}^\top(\prescript{}{B}{\mathbf{v}}_\infty
- \prescript{}{B}{\boldsymbol{\omega}} \times \prescript{}{B}{\mathbf{r}}_{S_i}) \\
\label{eq:rel_airflow_whisker_in_whisker_frame}
\end{equation}
where $\prescript{}{B}{\mathbf{v}}_{\infty}$
is the relative airflow in the \ac{CoM} of the robot expressed in the body frame, given by:
\begin{equation}
\prescript{}{B}{\mathbf{v}}_\infty = {\textbf{R}_{W}^{B}}^\top\prescript{}{W}{\mathbf{v}}_\infty = {\textbf{R}_{W}^{B}}^\top(\prescript{}{W}{\mathbf{v}}_\text{wind} - \prescript{}{W}{\mathbf{v}})
\label{eq:rel_airflow_in_body}
\end{equation}
\subsection{Model-based estimation scheme}
\subsubsection{Process model, state and output}
We discretize the \ac{MAV} dynamic model described in \cref{eq:mav_dynamic_model}, augmenting the state vector with the unknown wind $\prescript{}{W}{\mathbf{v}}_{\text{wind},k}$ and the unknown interaction force $\prescript{}{W}{\mathbf{f}}_{\text{touch},k}$ that are to be estimated. We assume that these two state variables evolve as:
\begin{equation}
\begin{split}
\prescript{}{W}{\mathbf{f}}_{\text{touch},k+1} = \prescript{}{W}{\mathbf{f}}_{\text{touch},k} + \boldsymbol{\epsilon}_{{f},k} \\
\prescript{}{W}{\mathbf{v}}_{\text{wind},k+1} = \prescript{}{W}{\mathbf{v}}_{\text{wind},k} + \boldsymbol{\epsilon}_{{v},k} \\
\end{split}
\end{equation}
where $\boldsymbol{\epsilon}_{f,k}$ and $\boldsymbol{\epsilon}_{v,k}$ represent the white Gaussian process noise, with covariances used as tuning parameters.
The full, discrete time state of the system used for estimation is
\begin{equation}
\begin{aligned}
{{\boldsymbol{x}}_k}^\top \! = \{ &
{\prescript{}{W}{\mathbf{p}}_k}^\top,
{\mathbf{q}_{W,k}^{B}}^\top,
{\prescript{}{W}{\mathbf{v}_k}}^\top, \\ &{\prescript{}{B}{\boldsymbol{\omega}}_k}^\top, {\prescript{}{W}{\mathbf{f}}_{\text{touch},k}}^\top, {\prescript{}{W}{\mathbf{v}}_{\text{wind},k}}^\top
\}
\end{aligned}
\label{eq:ukf_state}
\end{equation}
where $\mathbf{q}_{W,k}^{B}$ is the more computationally efficient quaternion-based attitude representation of the robot, obtained from the rotation matrix $\mathbf{R}_{W,k}^{B}$.
The filter output is then
\begin{equation}
{\mathbf{y}_k}^\top \! = \!
\{
{\prescript{}{W}{\mathbf{f}}_{\text{touch},k}}^\top, {\prescript{}{W}{\mathbf{v}}_{\text{wind},k}}^\top,
{\prescript{}{B}{\mathbf{v}}_{\infty,k}}^\top,
{\prescript{}{W}{\mathbf{f}}_{\text{drag},k}}^\top
\}
\end{equation}
where $\prescript{}{W}{\mathbf{f}}_{\text{drag},k}$ is obtained from \cref{eq:drag_force} and \cref{eq:relative_airflow}, and $\prescript{}{B}{\mathbf{v}}_{\infty,k}$ is obtained from \cref{eq:rel_airflow_in_body}.
\subsubsection{Measurements and measurement model}
We assume that two sets of measurements are available asynchronously:
\paragraph{Odometry}
The filter fuses odometry measurements (position $\prescript{}{}{\hat{\mathbf{p}}}_k$, attitude $\prescript{}{}{\hat{\mathbf{q}}}_{W,k}^{B}$, linear velocity $\prescript{}{W}{\hat{\mathbf{v}}}$ and angular velocity $\prescript{}{B}{\hat{\boldsymbol{\omega}}}$) provided by a cascaded state estimator
\begin{equation}
\begin{split}
{\mathbf{z}_{\text{odometry},k}}^\top & =
\{
{\prescript{}{W}{\hat{\mathbf{p}}}_k}^\top, {\prescript{}{}{\hat{\mathbf{q}}}_{W,k}^{B}}^\top, {\prescript{}{W}{\hat{\mathbf{v}}}}^\top, {\prescript{}{B}{\hat{\boldsymbol{\omega}}}}^\top
\} \\
\end{split}
\end{equation}
the odometry measurement model is linear, as shown in \cite{tagliabue2019robust}.
\paragraph{Airflow sensors}
We assume that the $N$ sensors are sampled synchronously, providing the measurement vector
\begin{equation}
{\mathbf{z}_{\text{airflowsensor},k}}^\top =
\{
{\hat{\boldsymbol{\theta}}_{1,k}}^\top,\ldots,{\hat{\boldsymbol{\theta}}_{N,k}}^\top \} = {\hat{\boldsymbol{\theta}}_k}^\top \\
\end{equation}
The associated measurement model for the $i$-th sensor can be obtained by combining \cref{eq:model:drag_and_spring} and \cref{eq:model:whisker_drag}
\begin{equation}
\boldsymbol{\theta}_{i,k} =
\frac{\rho}{2}c_{D,i}
{\mathbf{K}_i}^{-1}
\mathbf{S}_{x,y}
\mathbf{A}_i
\norm{\prescript{}{S_i}{\mathbf{v}}_{\infty,i,k}}\prescript{}{S_i}{\mathbf{v}}_{\infty,i,k}
\label{eq:whisker_measurement}
\end{equation} where $\prescript{}{S_i}{\mathbf{v}}_{\infty,i,k}$ is obtained using information about the attitude of the robot $\mathbf{q}_{W,k}^{B}$, its velocity $\prescript{}{W}{\mathbf{v}_k}$, and angular velocity $\prescript{}{B}{\boldsymbol{\omega}_k}$, and the estimated windspeed $\prescript{}{W}{\mathbf{v}}_{\text{wind}}$ as described in \cref{eq:rel_airflow_whisker_in_whisker_frame} and \cref{eq:rel_airflow_in_body}. The synchronous measurement update is obtained by repeating \cref{eq:whisker_measurement} for every sensor $i=1,\ldots,N$.
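The airflow-sensor measurement model can be summarised in the illustrative Python sketch below, which combines \cref{eq:rel_airflow_whisker_in_whisker_frame} and \cref{eq:whisker_measurement} using the lumped coefficient $c_i$ introduced in \cref{subsec:sysid}; function names and the explicit sign pattern are only a sketch of the model above, not the on-board implementation.
\begin{verbatim}
import numpy as np

def airflow_in_sensor_frame(R_B_Si, v_inf_B, omega_B, r_Si_B):
    # Relative airflow at the i-th sensor, expressed in its own frame
    # (cf. eq:rel_airflow_whisker_in_whisker_frame).
    return R_B_Si.T @ (np.asarray(v_inf_B) - np.cross(omega_B, r_Si_B))

def predicted_deflection(v_inf_Si, c_i):
    # Predicted deflection angles (theta_x, theta_y) of one sensor,
    # with lumped coefficient c_i = rho*c_D*a_xy / (2*k*l); the sign
    # pattern follows from inverting the K_i matrix defined above.
    speed = np.linalg.norm(v_inf_Si)
    return np.array([-c_i * speed * v_inf_Si[1],
                      c_i * speed * v_inf_Si[0]])
\end{verbatim}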
\subsubsection{Prediction and update step}
\paragraph{Prediction}
The prediction step (producing the \textit{a priori} state estimate) \cite{simon2006optimal} is performed using the \ac{USQUE} \cite{crassidis2003unscented} prediction technique for the attitude quaternion. The process model is propagated using the commanded thrust force $f_{\text{cmd}}$ and torque $\prescript{}{B}{\boldsymbol{\tau}}_{\text{cmd}}$ output of the position and attitude controller on the \ac{MAV}.
\paragraph{Update}
The odometry measurement update step is performed using the linear Kalman filter update step \cite{simon2006optimal}, while the airflow-sensor measurement update is performed via the Unscented Transformation \cite{simon2006optimal} due to the non-linearities in the associated measurement model.
\subsection{Sensor design and considerations}
The sensors, shown in \cref{fig:sensor_model}, consist of a base and an easily-exchangeable tip. The base is composed of a magnetic field sensor connected to a conditioning circuit that interfaces with the robot via I2C and a 3D-printed case that encloses the sensor.
The tip consists of a planar spring mounted in a 3D-printed enclosure that fits with the base, with a permanent magnet attached to its bottom and a carbon-fiber rod glued on the spring's top.
Eight foam fins are attached on the other end of this rod.
When the sensor is subjected to airflow, the drag force from the air on the fins causes a rotation about the center of the planar spring which results in a displacement of the magnet.
This displacement is then measured by the magnetic sensor.
The fins are placed with even angular distribution in order to achieve homogeneous drag for different airflow directions.
Foam and carbon fiber were chosen as the material of the fin structure due to their low density, which is crucial to minimize the inertia of the sensor. See \cite{kim2019magnetically} for more information about the sensor characteristics and manufacturing procedure.
Due to the complex aerodynamic interactions between the relative airflow and the blade rotor wakes, the sensor placement needs to be chosen carefully \cite{prudden2018measuring, ventura2018high}. To determine the best locations, we attached short pieces of string both directly on the vehicle and on metal rods extending away from it horizontally and vertically. We then flew the hexarotor indoors and observed that the pieces of string on top of the vehicle and on the propeller guards were mostly unaffected by the blade rotor wakes. Therefore, these are the two locations chosen to mount the sensors, as seen in \cref{fig:multirotor_with_whiskers}. They are distributed so that the relative airflow coming from any direction excites at least one sensor (that is, for at least one sensor, the relative airflow is not aligned with its length). %
\subsection{Sensor measurements}
The sensors detect the magnetic field $\mathbf{b} = (b_x, b_y, b_z)$, but the model outlined in \cref{subsec:sensor_model} requires the deflection angles of the $i$-th sensor, $\theta_{x,i}$ and $\theta_{y,i}$, which correspond to the rotation of the carbon fiber rod about the $x$ and $y$ axes of the reference frame $S_i$. At the spring's equilibrium, the rod is straight and $\mathbf{b} = (0, 0, b_z)$, where $b_z > 0$ if the magnet's north pole is facing the carbon-fiber rod. The angles are then
\begin{equation}
\boldsymbol{\theta}_i =
\begin{bmatrix}
\theta_{x,i} \\
\theta_{y,i} \\
\end{bmatrix}
=
\begin{bmatrix}
-\arctan{(b_y/b_z)} \\
\hphantom{-}\arctan{(b_x/b_z)} \\
\end{bmatrix}
\label{eq:from_b_to_theta}
\end{equation}
Note that if the magnet was assembled with the south pole facing upward instead, $-\mathbf{b}$ must be used in \cref{eq:from_b_to_theta}.
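A minimal sketch of this conversion is given below; it uses the two-argument arctangent for numerical robustness, which coincides with \cref{eq:from_b_to_theta} whenever $b_z > 0$.
\begin{verbatim}
import numpy as np

def deflection_from_field(b, north_pole_up=True):
    # Deflection angles (theta_x, theta_y) from a magnetic field
    # sample b = (b_x, b_y, b_z), cf. eq:from_b_to_theta.
    bx, by, bz = b if north_pole_up else (-b[0], -b[1], -b[2])
    return -np.arctan2(by, bz), np.arctan2(bx, bz)
\end{verbatim}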
\subsection{Output and inputs}
The output of the network is the relative airflow $\prescript{}{B}{\mathbf{v}}_{\infty}$ of the \ac{MAV}.
The inputs to the network are the airflow sensor measurements $\boldsymbol{\theta}$, the angular velocity of the robot $\prescript{}{B}{\boldsymbol{\omega}}$, the raw acceleration measurement from the IMU and the normalized throttle commanded to the six propellers (which ranges between 0 and 1).
The sign of the throttle is changed for the propellers spinning counterclockwise, in order to provide information to the network about the spinning direction of each propeller.
The choice of inputs is motivated by our model-based approach: from \cref{eq:rel_airflow_whisker_in_whisker_frame} and \cref{eq:model:whisker_drag} we observe that the relative airflow depends on the deflection angles of the sensors and on the angular velocity of the robot.
The acceleration from the IMU is included to provide information about hard-to-model effects, such as the orientation of the body frame w.r.t. gravity (which causes small changes in the angles measured by the sensors), as well as the effects of accelerations of the robot.
Information about the throttle and spinning direction of the propellers is instead added to try to capture the complex aerodynamic interactions caused by their induced velocity.
We chose to express every output and input of the network in the body reference frame, in order to make the network invariant to the orientation of the robot, thus potentially reducing the amount of training data needed.
\subsection{Network architecture}
We employ an \ac{LSTM} architecture, which is able to capture time-dependent effects \cite{lipton2015critical, goodfellow2016deep}, such as, in our case, the dynamics of the airflow surrounding the robot and the dynamics of the sensor. We chose
a 2-layer LSTM with the size of the hidden layer set to 16 (the input size is 20 and the output size is 3). We add a single fully connected layer to the output of the network, mapping the hidden state to the desired output size.
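A minimal PyTorch sketch of this architecture is given below; whether the fully connected layer is applied to every time step or only to the last one is an implementation detail not specified here, and the sketch uses the last time step.
\begin{verbatim}
import torch
import torch.nn as nn

class AirflowLSTM(nn.Module):
    # 2-layer LSTM with hidden size 16, followed by a fully connected
    # layer mapping the hidden state to the 3D relative airflow.
    def __init__(self, input_size=20, hidden_size=16, output_size=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2,
                            batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        # x: (batch, sequence_length, input_size)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])  # prediction from the last step
\end{verbatim}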
\subsection{Interface with the model-based approach}
The \ac{UKF} treats the \ac{LSTM} output as a new sensor which provides relative airflow measurements $\prescript{}{B}{\hat{\mathbf{v}}}_\infty$, replacing the airflow sensor's measurement model provided in \cref{sec:model_based}. The output of the LSTM is fused via the measurement model in \cref{eq:rel_airflow_in_body}, using the Unscented Transformation.
A block-diagram representing the interface between learning-based approach and model-based approach is represented in \cref{fig:data_driven}.
\subsection{System identification}\label{subsec:sysid}
\subsubsection{Drag force}
Estimating the drag force acting on the vehicle is required to differentiate between the force due to the relative airflow and the force due to other interactions with the environment. For this purpose, the vehicle was commanded to follow a circular trajectory at speeds of 1 to 5 m/s, keeping its altitude constant (see \cref{subsec:implementation} for more information about the trajectory generator). In this scenario, the thrust produced by the MAV's propellers $\hat{f}_{\text {thrust }}$ is
\begin{equation}
\hat{f}_{\text {thrust }}=\frac{m}{\cos \phi \cos \theta} g
\end{equation}
where $m$ is the vehicle's mass, $g$ is the gravitational acceleration, and $\phi$ and $\theta$ are the roll and pitch angles of the MAV, respectively. The drag force is then
\begin{equation}
\hat{f}_{\text {drag }}=
\left( \prescript{}{B}{\hat{\mathbf{f}}_{\text{thrust }}} - m\prescript{}{B}{\dot{\mathbf{v}}} \right) \cdot \prescript{}{B}{\mathbf{e}_{v}}
\end{equation}
where $\prescript{}{B}{\mathbf{\hat{f}_{\text {thrust }}}} = [0, 0, \hat{f}_{\text {thrust }}]$, and $\prescript{}{B}{\mathbf{e}_{v}}$ is the unit vector in the direction of the vehicle's velocity in body frame. By fitting a second-degree polynomial to the collected data, we obtain $\mu_1 = 0.20$ and $\mu_2 = 0.07$ (see \cref{eq:drag_force}).
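A possible implementation of this fit is sketched below; the exact regression procedure is not detailed in the text, and the sketch assumes a least-squares fit of the force model of \cref{eq:drag_force} without a constant term.
\begin{verbatim}
import numpy as np

def fit_drag_coefficients(speeds, drag_forces):
    # Least-squares fit of f_drag = mu1*v + mu2*v^2 (no constant term)
    # to measured airspeed/drag pairs, as used to obtain mu1 and mu2.
    speeds = np.asarray(speeds)
    A = np.stack([speeds, speeds**2], axis=1)
    (mu1, mu2), *_ = np.linalg.lstsq(A, np.asarray(drag_forces),
                                     rcond=None)
    return mu1, mu2
\end{verbatim}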
\subsubsection{Sensor parameters identification}
The parameters required to fuse the output $\boldsymbol{\theta}_i$ of the $i$-th airflow sensor are its position $\prescript{}{B}{\mathbf{r}}_{S_i}$ and rotation $\mathbf{R}_{B}^{S_i}$ with respect to the body frame $B$ of the \ac{MAV}, and a lumped coefficient $c_i$ mapping the relative airflow $\prescript{}{S_i}{\mathbf{v}}_{\infty,i}$ to the measured deflection angles $\boldsymbol{\theta}_i$. The coefficient $c_i = \frac{\rho}{2}c_{D,i}\frac{a_{xy,i}}{k_i l_i}$ can be obtained by re-arranging \cref{eq:whisker_measurement} and by solving
\begin{align}
c_i = \frac{\norm{\boldsymbol{\theta}_i}}{
\norm{\prescript{}{S_i}{\mathbf{v}_{\infty,i}}} \hspace{3pt}
\norm{\begin{bsmallmatrix}
0 & -1 & 0 \\
1 & 0 & 0 \\
\end{bsmallmatrix}
\prescript{}{S_i}{\mathbf{v}}_{\infty,i} }}
\end{align}
and the velocity $\prescript{}{S_i}{\mathbf{v}}_{\infty,i}$ is obtained from indoor flight experiments (assuming no wind, so that $\prescript{}{W}{\mathbf{v}}_\infty = - \prescript{}{W}{\mathbf{v}}$), or from wind tunnel experiments. Wind tunnel experiments have also been used to validate our model choice (quadratic relationship between wind speed and sensor deflection), as shown in \cref{fig:octaflower_comparison}. Furthermore, these experiments also confirmed our assumption on the structure of $\mathbf{A}_i$, i.e., the variation of the sensor's deflection with respect to the direction of the wind is small and therefore it can be considered that $a_x = a_y = a_{xy}$.
\begin{figure}
\centering
\includegraphics[width=\linewidth, trim=0 0 0 0, clip,]{figs/octaflower_comparison_with_plot}
\caption{Roll deflection angle of the sensor as a function of the wind speed, for the case where the wind vector is aligned with a fin (1), and the case where it is most misaligned with a fin (2).}
\label{fig:octaflower_comparison}
\end{figure}
\subsubsection{LSTM training}
We train the \ac{LSTM} using two different datasets collected in indoor flight. In the first flight the hexarotor follows a circular trajectory at a set of constant velocities ranging from 1 to 5 m/s, spaced 1 m/s apart. In the second dataset we command the robot via a joystick, performing aggressive maneuvers and reaching velocities up to 5.5 m/s.
Since the robot flies indoors (and thus the wind can be considered to be zero), we assume that the relative airflow of the \ac{MAV} $\prescript{}{B}{\mathbf{v}_\infty}$ corresponds to its estimated velocity $-\prescript{}{B}{\mathbf{v}}$, which we use to train the network. The network is implemented and trained using PyTorch \cite{paszke2019pytorch}. The data is pre-processed by re-sampling it at 50 Hz, since the inputs of the network used for training have different rates (e.g., 200 Hz for the acceleration data from the IMU and 50 Hz for the airflow sensors).
The network is trained for 400 epochs using sequences of 5 samples, with a learning rate of 10$^{-4}$, using the Adam optimizer \cite{kingma2014adam} and the \ac{MSE} loss.
Unlike the model-based approach, the LSTM does not require any knowledge of the position and orientation of the sensors, nor the identification of the lumped parameter for each sensor. Once the network has been trained, however, it is not possible to reconfigure the position or the type of sensors used.
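The following PyTorch sketch reproduces the training settings reported above; the batch size and data handling are assumptions, since they are not specified here.
\begin{verbatim}
import torch
from torch.utils.data import DataLoader, TensorDataset

def train_airflow_lstm(model, inputs, targets, epochs=400, lr=1e-4):
    # Minimal training loop: Adam optimiser, MSE loss, 400 epochs,
    # learning rate 1e-4. inputs: (num_sequences, 5, 20) for
    # sequences of 5 samples re-sampled at 50 Hz; targets:
    # (num_sequences, 3). The batch size below is an assumption.
    loader = DataLoader(TensorDataset(inputs, targets),
                        batch_size=64, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(epochs):
        for x, y in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimiser.step()
    return model
\end{verbatim}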
\subsection{Implementation details}\label{subsec:implementation}
\subsubsection{System architecture}
We use a custom-built hexarotor with a mass of 1.31 kg. The pose of the robot is provided by a motion capture system, while odometry information is obtained by an estimator running on-board, which fuses the pose information with the inertial data from an IMU. Our algorithms run on the onboard Nvidia Jetson TX2 and are interfaced with the rest of the system via ROS. We use the Aerospace Controls Laboratory's snap-stack \cite{acl_snap_stack} for controlling the \ac{MAV}.
\subsubsection{Sensor driver}
The sensors are connected via I2C to the TX2. A ROS node (sensor driver) reads the magnetic field data at 50~Hz and publishes the deflection angles as in \cref{eq:from_b_to_theta}.
Slight manufacturing imperfections are handled via an initial calibration of offset angles.
The sensor driver rejects measured outliers by comparing each component of $\mathbf{b}$ with a low-pass filtered version. If the difference is large, the measurement is discarded, but the low-pass filter is updated nevertheless. Therefore, if the sensor deflects very rapidly and the measurement is incorrectly regarded as an outlier, the low-pass filtered $\mathbf{b}$ quickly approaches the true value and subsequent false positives do not occur.
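A minimal sketch of this outlier-rejection logic is given below; the filter gain and rejection threshold are illustrative values, not the ones used on the vehicle.
\begin{verbatim}
class MagneticOutlierRejector:
    # Rejects magnetic field samples that deviate too much from a
    # low-pass filtered reference; the filter is updated regardless,
    # so a rapidly deflecting sensor is only rejected briefly.
    def __init__(self, alpha=0.1, threshold=0.5):
        self.alpha = alpha          # filter gain (assumed value)
        self.threshold = threshold  # max deviation (assumed value)
        self.b_filt = None

    def update(self, b):
        if self.b_filt is None:
            self.b_filt = list(b)
            return b
        outlier = any(abs(bi - fi) > self.threshold
                      for bi, fi in zip(b, self.b_filt))
        self.b_filt = [fi + self.alpha * (bi - fi)
                       for bi, fi in zip(b, self.b_filt)]
        return None if outlier else b
\end{verbatim}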
\begin{figure*}[ht]
\centering
\includegraphics[trim=0 0 0 0, clip, width=1.0\textwidth]{figs/rel_velocity}
\vspace*{-.15in}
\caption{Comparison of the relative velocity estimated by the model based (UKF) and the learning-based (LSTM) approaches.
We assume that the ground truth (GT) is given by the velocity of the robot.}
\label{fig:rel_velocity}
\end{figure*}
\subsubsection{Trajectory generator}
A trajectory generator ROS node commands the vehicle to follow a circular path at different constant speeds or a line trajectory between two points with a maximum desired velocity. This node also handles the finite state machine transitions: take off, flight to the initial position of the trajectory, execution of the trajectory, and landing where the vehicle took off. We use this trajectory generator to identify the drag coefficient of the MAV (see \cref{subsec:sysid}), to collect data for training, and to execute the experiments described below.
\subsection{Relative airflow estimation}
For this experiment, we commanded the vehicle with a joystick around our flight space at different speeds, to show the ability of our approach to estimate the relative airflow. Since the space is indoors (no wind), we assume that the relative airflow is opposite to the velocity of the MAV. We thus compare the velocity of the MAV (obtained from a motion capture system) with the negative of the relative airflow estimated via the model-based strategy and the deep-learning based strategy.
\Cref{fig:rel_velocity} shows the results of the experiment. Each subplot presents the velocity of the vehicle in the body frame. The ground truth (GT) in red is the MAV's velocity obtained via the motion capture system, the green dotted line represents the relative airflow velocity in the body frame $-\prescript{}{B}{\mathbf{v}}_\infty$ as estimated via the deep-learning based strategy (LSTM), and the blue dashed line represents $-\prescript{}{B}{\mathbf{v}}_\infty$ as estimated by the fully model-based strategy (UKF).
The root mean squared errors of the UKF and LSTM's estimation for this experiment are shown in \cref{tab:rmse_rel_vel}. The results demonstrate that both approaches are effective, but show that the LSTM is more accurate.
\subsection{Wind gust estimation}
To demonstrate the ability to estimate wind gusts, we flew the vehicle in a straight line, commanded by the trajectory generator outlined in \cref{subsec:implementation}, along the diagonal of the flight space while a leaf blower was pointing approximately at the middle of this trajectory. \Cref{fig:wind_gust_detection} shows in red the estimated wind speed of the \ac{UKF}, drawn at the 2D position where this value was produced, and in green the leaf blower pose obtained with the motion capture system. As expected, the estimated wind speed increases in the area affected by the leaf blower.
\begin{figure}
\centering
\includegraphics[width=1\columnwidth,trim={0cm 0cm 0cm 0cm},clip]{figs/wind_gust_detection}
\vspace*{-.3in}
\caption{In this plot the vehicle is flown in a straight line at high speed, from left to right, while a leaf blower (shown in black) aims at the middle of its trajectory. The red arrows indicate the intensity of the estimated wind speed.}
\label{fig:wind_gust_detection}
\end{figure}
\begin{table}[t]
\vspace*{.15in}
\begin{center}
\caption{RMS error of the LSTM and UKF estimates of the relative velocity of the robot on the joystick dataset}
\label{tab:rmse_rel_vel}
\begin{tabular}{ccccc}
\hline
Method & RMS error $x$ & RMS error $y$ & RMS error $z$ & Unit\\
\hline
UKF & 0.44 & 0.34 & 0.53 & m/s \\
LSTM & \textbf{0.38} & \textbf{0.31} & \textbf{0.28} & m/s \\
\end{tabular}
\end{center}
\vspace*{-.25in}
\end{table}
\subsection{Simultaneous estimation of drag and interaction forces}
Our approach can differentiate between drag and interaction forces, which is shown in the following experiments.
There are four main parts to the experiment: hovering with no external force, hovering in a wind field generated by three leaf blowers, simultaneously pulling the vehicle with a string attached to it while the vehicle is still immersed in the wind field, and turning off the leaf blowers so that there is only interaction force.
\Cref{fig:simultaneous_drag_interaction} shows the forces acting on the \ac{MAV} in world frame estimated by the \ac{UKF}: $\prescript{}{W}{\mathbf{f}}_{\text{drag}}$ and $\prescript{}{W}{\mathbf{f}}_{\text{touch}}$.
As expected, the drag force is close to zero when no wind is present even when the \ac{MAV} is pulled, and similarly the interaction force is approximately zero when the vehicle is not pulled even when the leaf blowers are acting on it.
Therefore, drag and interaction forces are differentiated correctly.
Note that the leaf blowers turn on quickly and thus the drag force resembles a step, while the interaction force was caused by manually pulling the \ac{MAV} with a string following approximately a ramp from 0 to 4 N as measured with a dynamometer.
The \ac{UKF} estimates $\prescript{}{W}{\mathbf{f}}_{\text{touch}}$ at about 6 N, potentially due to inaccuracies of our external force ground truth measurement procedure and a mis-calibration of the commanded throttle-to-thrust mapping. As for the wind speed generated by the leaf blowers, it has an average value of 3.6 m/s at the distance where the vehicle was flying. According to our model, a drag force of approximately 1.2 N, as shown in \cref{fig:simultaneous_drag_interaction}, corresponds to a wind speed of 3 m/s. The difference is due to the fact that the leaf blowers are not perfectly aimed at the \ac{MAV}, and the wind field that they generate is narrow.
\begin{figure*}
\centering
\includegraphics[trim=0 0 0 0, clip, width=1\textwidth]{figs/drag_and_interaction_force}
\vspace*{-.3in}
\caption{Simultaneous estimation of drag and interaction force. Vertical bars separate the four phases of the experiment.}
\label{fig:simultaneous_drag_interaction}
\end{figure*}
\section{RELATED WORK}
Distinguishing between interaction and aerodynamic disturbances is a challenging task, and most of the current approaches focus on the estimation of one or the other disturbance. \textbf{Aerodynamic disturbances}: Accurate wind or airflow sensing is at the heart of the techniques employed for aerodynamic disturbance estimation. A common strategy is based on directly measuring the airflow surrounding the robot via sensors, such as pressure sensors \cite{bruschi2016wind}, ultrasonic sensors \cite{hollenbeck2018wind}, or whisker-like sensors \cite{deer2019lightweight}. Other strategies estimate the airflow via its inertial effects on the robot, for example using model-based approaches \cite{demitrit2017model, sikkel2016novel}, learning-based approaches \cite{shi2019neural, allison2019estimating}, or hybrid (model-based and learning-based) solutions \cite{marton2019hybrid}. \textbf{Generic wrench-like disturbances}: Multiple related works focus instead on estimating wrench disturbances, without explicitly differentiating the effects of the drag force due to wind: \cite{augugliaro2013admittance, mckinnon2016unscented, tagliabue2019robust, tagliabue2017collaborative} propose a model-based approach which utilizes an \ac{UKF} for wrench estimation, while \cite{nisar2019vimo} proposes a factor graph-based estimation scheme.
\section{Introduction}
For many years, Artificial Intelligence research has been focusing on inventing
new algorithms and approaches for solving similar kinds of problems. In some
scenarios, a new algorithm is clearly superior to previous approaches. In the
majority of cases however, a new approach will improve over the current state of
the art only for some problems. This may be because it employs a heuristic that
fails for problems of a certain type or because it makes other assumptions about
the problem or environment that are not satisfied in some cases. Selecting the
most suitable algorithm for a particular problem aims at mitigating these
problems and has the potential to significantly increase performance in
practice. This is known as the Algorithm Selection Problem.
The Algorithm Selection Problem has, in many forms and with different names,
cropped up in many areas of Artificial Intelligence in the last few decades.
Today there exists a large amount of literature on it. Most publications are
concerned with new ways of tackling this problem and solving it efficiently in
practice. Especially for combinatorial search problems, the application of
Algorithm Selection techniques has resulted in significant performance
improvements that leverage the diversity of systems and techniques developed in
recent years. This paper surveys the available literature and describes how
research has progressed.
Researchers have long ago recognised that a single algorithm will not give the
best performance across all problems one may want to solve and that selecting
the most appropriate method is likely to improve the overall performance.
Empirical evaluations have provided compelling evidence for this
\cite<e.g.>{aha_generalizing_1992,wolpert_no_1997}.
The original description of the Algorithm Selection Problem was published in
\citeA{rice_algorithm_1976}. The basic model described in the paper is very
simple -- given a space of problems and a space of algorithms, map each
problem-algorithm pair to its performance. This mapping can then be used to
select the best algorithm for a given problem. The original figure that
illustrates the model is reproduced in Figure~\vref{algselectionorig}. As Rice
states,
\begin{quote}
``The objective is to determine $S(x)$ [the mapping of problems to algorithms]
so as to have high algorithm performance.''
\end{quote}
\begin{figure}[tp]
\begin{center}
\includegraphics[width=\textwidth]{algselectionmodel-orig}
\end{center}
\caption{Basic model for the Algorithm Selection Problem
as published in \protect\citeA{rice_algorithm_1976}.}
\label{algselectionorig}
\end{figure}
He identifies the following four criteria for the selection process.
\begin{enumerate}
\item Best selection for all mappings $S(x)$ and problems $x$. For every
problem, an algorithm is chosen to give maximum performance.
\item Best selection for a subclass of problems. A single algorithm is chosen to
apply to each of a subclass of problems such that the performance
degradation compared to choosing from all algorithms is minimised.
\item Best selection from a subclass of mappings. Choose the selection mapping
from a subset of all mappings from problems to algorithms such that the
performance degradation is minimised.
\item Best selection from a subclass of mappings and problems. Choose a single
algorithm from a subset of all algorithms to apply to each of a subclass of
problems such that the performance degradation is minimised.
\end{enumerate}
The first case is clearly the most desirable one. In practice however, the other
cases are more common -- we might not have enough data about individual problems
or algorithms to select the best mapping for everything.
\citeA{rice_algorithm_1976} lists five main steps for solving the problem.
\begin{description}
\item[Formulation] Determination of the subclasses of problems and mappings to
be used.
\item[Existence] Does a best selection mapping exist?
\item[Uniqueness] Is there a unique best selection mapping?
\item[Characterization] What properties characterize the best selection mapping
and serve to identify it?
\item[Computation] What methods can be used to actually obtain the best
selection mapping?
\end{description}\label{solframework}
This framework is taken from the theory of approximation of functions. The
questions for existence and uniqueness of a best selection mapping are usually
irrelevant in practice. As long as a \emph{good} performance mapping is found
and improves upon the current state of the art, the question of whether there is
a different mapping with the same performance or an even better mapping is
secondary. While it is easy to determine the theoretically best selection
mapping on a set of given problems, casting this mapping into a
\emph{generalisable} form that will give good performance on new problems or
even into a form that can be used in practice is hard. Indeed,
\citeA{guo_algorithm_2003} shows that the Algorithm Selection Problem in general
is undecidable. It may be better to choose a mapping that generalises well
rather than the one with the best performance. Other considerations can be
involved as well. \citeA{guo_learning-based_2004} and
\citeA{cook_maximizing_1997} compare different Algorithm selection models and
select not the one with the best performance, but one with good performance that
is also easy to understand, for example. \citeA{vrakas_learning_2003} select
their method of choice for the same reason. Similarly, \citeA{xu_satzilla_2008}
choose a model that is cheap to compute instead of the one with the best
performance. They note that,
\begin{quote}
``All of these techniques are computationally more expensive than ridge
regression, and in our previous experiments we found that they did not improve
predictive performance enough to justify this additional cost.''
\end{quote}
Rice continues by giving practical examples of where his model applies. He
refines the original model to include features of problems that can be used to
identify the selection mapping. The original figure depicting the refined model
is given in Figure~\ref{algselectionfeaturesorig}. This model, or a variant of
it, is what is used in most practical approaches. Including problem features is
the crucial difference that often makes an approach feasible.
\begin{figure}[tp]
\begin{center}
\includegraphics[width=\textwidth]{algselectionmodel-origfeatures}
\end{center}
\caption{Refined model for the Algorithm Selection Problem with problem
features \protect\cite{rice_algorithm_1976}.}
\label{algselectionfeaturesorig}
\end{figure}
For each problem in a given set, the features are extracted. The aim is to use
these features to produce the mapping that selects the algorithm with the best
performance for each problem. The actual performance mapping for each
problem-algorithm pair is usually of less interest as long as the individual
best algorithm can be identified.
Rice poses additional questions about the determination of features.
\begin{itemize}
\item What are the best features for predicting the performance of a specific
algorithm?
\item What are the best features for predicting the performance of a specific
class of algorithms?
\item What are the best features for predicting the performance of a subclass of
selection mappings?
\end{itemize}
He also states that,
\begin{quote}
``The determination of the best (or even good) features is one of the most
important, yet nebulous, aspects of the algorithm selection problem.''
\end{quote}
He refers to the difficulty of knowing the problem space. Many problem spaces
are not well known and often a sample of problems is drawn from them to
evaluate empirically the performance of the given set of algorithms. If the
sample is not representative, or the features do not facilitate a good
separation of the problem classes in the feature space, there is little hope of
finding the best or even a good selection mapping.
\citeA{vassilevska_confronting_2006} note that,
\begin{quote}
``While it seems that restricting a heuristic to a special case would likely
improve its performance, we feel that the ability to partition the problem
space of some $\mathcal{NP}$-hard problems by efficient selectors is mildly
surprising.''
\end{quote}
This sentiment was shared by many researchers and part of the great prominence
of Algorithm Selection systems especially for combinatorial search problems can
probably be attributed to the surprise that it actually works.
Most approaches employ Machine Learning to learn the performance mapping from
problems to algorithms using features extracted from the problems. This often
involves a \emph{training phase}, where the candidate algorithms are run on a
sample of the problem space to experimentally evaluate their performance. This
training data is used to create a \emph{performance model} that can be used to
predict the performance on new, unseen problems. The term \emph{model} is used
only in the loosest sense here; it can be as simple as a representation of the
training data without any further analysis.
\subsection{Practical motivation}
\citeA{aha_generalizing_1992} notes that in Machine Learning, researchers often
perform experiments on a limited number of data sets to demonstrate the
performance improvements achieved and implicitly assume that these improvements
generalise to other data. He proposes a framework for better experimental
evaluation of such claims and deriving rules that determine the properties a
data set must have in order for an algorithm to have superior performance. His
objective is
\begin{quote}
``\ldots to derive rules of the form `this algorithm outperforms these other
algorithms on these dependent measures for databases with these
characteristics'. Such rules summarize \emph{when} [\ldots] rather than
\emph{why} the observed performance difference occurred.''
\end{quote}
\citeA{tsang_attempt_1995} make similar observations and show that there is no
algorithm that is universally the best when solving constraint problems. They
also demonstrate that the best algorithm-heuristic combination is not what one
might expect for some of the surveyed problems. This provides an important
motivation for research into performing Algorithm Selection automatically. They
close by noting that,
\begin{quote}
``\ldots research should focus on how to retrieve the most efficient
[algorithm-heuristic] combinations for a problem.''
\end{quote}
The focus of Algorithm Selection is on identifying algorithms with good
performance, not on providing explanations for why this is the case. Most
publications do not consider the question of ``Why?'' at all. Rice's framework
does not address this question either. The simple reason for this is that
explaining the Why? is difficult and for most practical applications not
particularly relevant as long as improvements can be achieved. Research into
what makes a problem hard, how this affects the behaviour of specific algorithms
and how to exploit this knowledge is a fruitful area, but outside the scope of
this paper. However, we present a brief exposition of one of the most important
concepts to illustrate its relevance.
The notion of a \emph{phase transition} \cite{cheeseman_where_1991} refers to a
sudden change in the hardness of a problem as the value of a single parameter of
the problem is changed. Detecting such transitions is an obvious way to
facilitate Algorithm Selection. \citeA{hogg_phase_1996} note that,
\begin{quote}
``In particular, the location of the phase transition point might provide a
systematic basis for selecting the type of algorithm to use on a given
problem.''
\end{quote}
While some approaches make use of this knowledge to generate challenging
training problems for their systems, it is hardly used at all to facilitate
Algorithm Selection. \citeA{nudelman_understanding_2004} use a set of features
that can be used to characterise a phase transition and note that,
\begin{quote}
``It turns out that [\ldots] this group of features alone suffices to construct
reasonably good models.''
\end{quote}
It remains unclear how relevant phase transitions are to Algorithm Selection in
practice. On one hand, their theoretical properties seem to make them highly
suitable, but on the other hand almost nobody has explored their use in actual
Algorithm Selection systems.
\subsubsection{No Free Lunch theorems}
The question arises of whether, in general, the performance of a system can be
improved by always picking the best algorithm. The ``No Free Lunch'' (NFL)
theorems \cite{wolpert_no_1997} state that no algorithm can be the best across
all possible problems and that on average, all algorithms perform the same. This
seems to provide a strong motivation for Algorithm Selection -- if, on average,
different algorithms are the best for different parts of the problem space,
selecting them based on the problem to solve has the potential to improve
performance.
The theorems would apply to Algorithm Selection systems themselves as well
though (in particular the version for supervised learning are
relevant, see \citeR{wolpert_supervised_2001}). This means that although
performance improvements can be achieved by selecting the right algorithms on
one part of the problem space, wrong decisions will be made on other parts,
leading to a loss of performance. On average over all problems, the performance
achieved by an Algorithm Selection meta-algorithm will be the same as that of
all other algorithms.
The NFL theorems are the source of some controversy however. Among the
researchers to doubt their applicability is the first proponent of the Algorithm
Selection Problem \cite{rice_how_1999}. Several other publications show that
the assumptions underlying the NFL may not be
satisfied \cite{rao_for_1995,domingos_how_1998}. In particular, the distribution
of the best algorithms from the portfolio to problems is not random -- it is
certainly true that certain algorithms are the best on a much larger number of
problems than others.
A detailed assessment of the applicability of the NFL theorems to the Algorithm
Selection Problem is outside the scope of this paper. However, a review of the
literature suggests that, if the theorems are applicable, the ramifications in
practice may not be significant. Most of the many publications surveyed here do
achieve performance improvements across a range of different problems using
Algorithm Selection techniques. As a research area, it is very active and
thriving despite the potentially negative implications of the NFL.
\subsection{Scope and related work}
Algorithm Selection is a very general concept that applies not only in almost
all areas of Computer Science, but also other disciplines. However, it is
especially relevant in many areas of Artificial Intelligence. This is a large
field itself though and surveying all Artificial Intelligence publications that
are relevant to Algorithm Selection in a single paper is infeasible.
In this paper, we focus on Algorithm Selection for \emph{combinatorial search
problems}. This is a large and important subfield of Artificial Intelligence
where Algorithm Selection techniques have become particularly prominent in
recent years because of the impressive performance improvements that have been
achieved by some approaches. Combinatorial search problems include for example
satisfiability (SAT), constraint problems, planning, quantified Boolean formulae
(QBF), scheduling and combinatorial optimisation.
A combinatorial search problem is one where an initial state is to be
transformed into a goal state by application of a series of operators, such as
assignment of values to variables. The space of possible states is usually
exponential in the size of the input and finding a solution is
$\mathcal{NP}$-hard. A common way of solving such problems is to use
\emph{heuristics}. A heuristic is a strategy that determines which operators to
apply when. Heuristics are not necessarily complete or deterministic, i.e.\ they
are not guaranteed to find a solution if it exists or to always make the same
decision under the same circumstances. The nature of heuristics makes them
particularly amenable to Algorithm Selection -- choosing a heuristic manually is
difficult even for experts, but choosing the correct one can improve performance
significantly.
Several doctoral dissertations with related work chapters that survey the
literature on Algorithm Selection have been produced. Examples of the more
recent ones include
\citeA{streeter_using_2007,hutter_automated_2009,carchrae_low_2009,gagliolo_online_2010,ewald_automatic_2010,kotthoff_algorithm_2012,malitsky_thesis_2012}.
\citeA{smith-miles_cross-disciplinary_2009} presents a survey with similar aims.
It looks at the Algorithm Selection Problem from the Machine Learning point of
view and focuses on seeing Algorithm Selection as a learning problem. As a
consequence, great detail is given for aspects that are relevant to Machine
Learning. In this paper, we take a more practical point of view and focus on
techniques that facilitate and implement Algorithm Selection systems. We are
furthermore able to take more recent work in this fast-moving area into account.
In contrast to most other work surveying Algorithm Selection literature, we take
an approach-centric view instead of a literature-centric one. This means that
instead of analysing a particular publication or system according to various
criteria, the different aspects of Algorithm Selection are illustrated with
appropriate references. A single publication may therefore appear in different
sections of this paper, giving details on different aspects of the authors'
approach.
There exists a large body of work that is relevant to Algorithm Selection in the
Machine Learning literature. \citeA{smith-miles_cross-disciplinary_2009}
presents a survey of many approaches. Repeating this here is unnecessary and
outside the scope of this paper, which focuses on the application of such
techniques. The most relevant area of research is that into \emph{ensembles},
where several models are created instead of one. Such ensembles are either
implicitly assumed or explicitly engineered so that they complement each other.
Errors made by one model are corrected by another. Ensembles can be engineered
by techniques such as \emph{bagging} \cite{breiman_bagging_1996} and
\emph{boosting} \cite{schapire_strength_1990}.
\citeA{bauer_empirical_1999,opitz_popular_1999} present studies that compare
bagging and boosting empirically. \citeA{dietterich_ensemble_2000} provides
explanations for why ensembles can perform better than individual algorithms.
There is increasing interest in the integration of Algorithm Selection
techniques with programming language paradigms
\cite<e.g.>{ansel_petabricks_2009,hoos_programming_2012}. While these issues are
sufficiently relevant to be mentioned here, exploring them in detail is outside
the scope of the paper. Similarly, technical issues arising from the
computation, storage and application of performance models, the integration of
Algorithm Selection techniques into complex systems, the execution of choices
and the collection of experimental data to facilitate Algorithm Selection are
not surveyed here.
\subsection{Terminology}
Algorithm Selection is a widely applicable concept and as such has cropped up
frequently in various lines of research. Often, different terminologies are
used.
\citeA{borrett_adaptive_1996} use the term \emph{algorithm chaining} to mean
switching from one algorithm to another while the problem is being solved.
\citeA{lobjois_branch_1998} call
Algorithm Selection \emph{selection by performance prediction}.
\citeA{vassilevska_confronting_2006} use the term \emph{hybrid algorithm} for
the combination of a set of algorithms and an Algorithm Selection model (which
they term \emph{selector}).
In Machine Learning, Algorithm Selection is usually referred to as
\emph{meta-learning}. This is because Algorithm Selection models for Machine
Learning learn when to use which method of Machine Learning. The earliest
approaches also spoke of \emph{hybrid approaches}
\cite<e.g.>{utgoff_perceptron_1988}. \citeA{aha_generalizing_1992} proposes
rules for selecting a Machine Learning algorithm that take the characteristics
of a data set into account. He uses the term \emph{meta-learning}.
\citeA{brodley_automatic_1993} introduces the notion of \emph{selective
superiority}. This concept refers to a particular algorithm being best on some,
but not all tasks.
In addition to the many terms used for the process of Algorithm Selection,
researchers have also used different terminology for the models of what Rice
calls \emph{performance measure space}. \citeA{allen_selecting_1996} call them
\emph{runtime performance predictors}.
\citeA{leyton-brown_learning_2002,hutter_performance_2006,xu_hierarchical_2007,leyton-brown_empirical_2009}
coined the term \emph{Empirical Hardness model}. This stresses the reliance on
empirical data to create these models and introduces the notion of
\emph{hardness} of a problem. The concept of hardness takes into account all
performance considerations and does not restrict itself to, for example, runtime
performance. In practice however, the described empirical hardness models only
take runtime performance into account. In all cases, the predicted measures are
used to select an algorithm.
\medskip
Throughout this paper, the term \emph{algorithm} is used to refer to what is
selected for solving a problem. This is for consistency and to make the
connection to Rice's framework. An algorithm may be a system, a programme, a
heuristic, a classifier or a configuration. This is not made explicit unless it
is relevant in the particular context.
\subsection{Organisation}
An organisation of the Algorithm Selection literature is challenging, as there
are many different criteria that can be used to classify it. Each publication
can be evaluated from different points of view. The organisation of this paper
follows the main criteria below.
\begin{description}
\item[What to select algorithms from]\hfill\\
Section~\ref{sec:portfolios} describes how sets of algorithms, or
\emph{portfolios}, can be constructed. A portfolio can be \emph{static},
where the designer decides which algorithms to include, or \emph{dynamic},
where the composition of the portfolio or the individual algorithms change for
different problems.
\item[What to select and when]\hfill\\
Section~\ref{sec:solving} describes how algorithms from portfolios are
selected to solve problems. Apart from the obvious approach of picking a
single algorithm, time slots can be allocated to all or a subset of the
algorithms, or the execution can be monitored and earlier decisions revised.
We also distinguish between selecting before the solving of the actual
problem starts and while the problem is being solved.
\item[How to select]\hfill\\
Section~\ref{sec:selectors} surveys techniques used for making the choices
described in Section~\ref{sec:solving}. It details how performance models
can be built and what kinds of predictions they inform. Example predictions
are the best algorithm in the portfolio and the runtime performance of each
portfolio algorithm.
\item[How to facilitate the selection]\hfill\\
Section~\ref{sec:features} gives an overview of the types of analysis
different approaches perform and what kind of information is gathered to
facilitate Algorithm Selection. This includes the past performance of
algorithms and structural features of the problems to be solved.
\end{description}
The order of the material follows a top-down approach. Starting with the
high-level idea of Algorithm Selection, as proposed by
\citeA{rice_algorithm_1976} and described in this introduction, more technical
details are gradually explored. Earlier concepts provide motivation and context
for later technical details. For example, the choice of whether to select a
single algorithm or monitor its execution (Section~\ref{sec:solving}) determines
the types of predictions required and techniques suitable for making them
(Section~\ref{sec:selectors}) as well as the properties that need to be measured
(Section~\ref{sec:features}).
The individual sections are largely self-contained. If the reader is more
interested in a bottom-up approach that starts with technical details on what
can be observed and measured to facilitate Algorithm Selection,
Sections~\ref{sec:portfolios} through~\ref{sec:features} may be read in reverse
order.
Section~\ref{sec:domains} again illustrates the importance of the field by
surveying the many different application domains of Algorithm Selection
techniques with a focus on combinatorial search problems. We close by briefly
discussing current and future research directions in
Section~\ref{sec:directions} and summarising in Section~\ref{sec:conclusion}.
\section{Algorithm portfolios}\label{sec:portfolios}
For diverse sets of problems, it is unlikely that a single algorithm will be the
most suitable one in all cases. A way of mitigating this restriction is to use
a \emph{portfolio} of algorithms. This idea is closely related to the notion of
Algorithm Selection itself -- instead of making an up-front decision on what
algorithm to use, it is decided on a case-by-case basis for each problem
individually. In the framework presented by \citeA{rice_algorithm_1976},
portfolios correspond to the algorithm space $\mathcal{A}$.
Portfolios are a well-established technique in Economics. Portfolios of assets,
securities or similar products are used to reduce the risk compared to holding
only a single product. The idea is simple -- if the value of a single security
decreases, the total loss is less severe. The problem of allocating funds to the
different parts of the portfolio is similar to allocating resources to
algorithms in order to solve a computational problem. There are some important
differences though. Most significantly, the past performance of an algorithm can
be a good indicator of future performance. There are fewer factors that affect
the outcome and in most cases, they can be measured directly. In Machine
Learning, \emph{ensembles} \cite{dietterich_ensemble_2000} are instances of
algorithm portfolios. In fact, the only difference between algorithm portfolios
and Machine Learning ensembles is the way in which their constituents are used.
The idea of algorithm portfolios was first presented by
\citeA{huberman_economics_1997}. They describe a formal framework for the
construction and application of algorithm portfolios and evaluate their approach
on graph colouring problems. Within the Artificial Intelligence community,
algorithm portfolios were popularised by
\citeA{gomes_algorithm_1997,gomes_practical_1997} and a subsequent extended
investigation \cite{gomes_algorithm_2001}. The technique itself however had
been described under different names by other authors at about the same time in
different contexts.
\citeA{tsang_attempt_1995} experimentally show for a selection of constraint
satisfaction algorithms and heuristics that none is the best on all evaluated
problems. They do not mention portfolios, but propose that future research
should focus on identifying when particular algorithms and heuristics deliver
the best performance. This implicitly assumes a portfolio to choose algorithms
from. \citeA{allen_selecting_1996} perform a similar investigation and come to
similar conclusions. They talk about selecting an appropriate algorithm from an
\emph{algorithm family}.
Beyond the simple idea of using a set of algorithms instead of a single one,
there is a lot of scope for different approaches. One of the first problems
faced by researchers is how to construct the portfolio. There are two main
types. \emph{Static portfolios} are constructed offline before any problems are
solved. While solving a problem, the composition of the portfolio and the
algorithms within it do not change. \emph{Dynamic portfolios} change in
composition, configuration of the constituent algorithms or both during solving.
\subsection{Static portfolios}
Static portfolios are the most common type. The number of algorithms or systems
in the portfolio is fixed, as well as their parameters. In Rice's notation, the
algorithm space $\mathcal{A}$ is constant, finite and known. This approach is
used for example in SATzilla
\cite{nudelman_understanding_2004,xu_satzilla-07_2007,xu_satzilla_2008}, AQME
\cite{pulina_multi-engine_2007,pulina_self-adaptive_2009}, CPhydra
\cite{omahony_using_2008}, \textsc{ArgoSmArT}
\cite{nikoli_instance-based_2009} and BUS \cite{howe_exploiting_1999}.
The vast majority of approaches composes static portfolios from different
algorithms or different algorithm configurations.
\citeA{huberman_economics_1997} however use a portfolio that contains the same
randomised algorithm twice. They run the portfolio in parallel and as such
essentially use the technique to parallelise an existing sequential algorithm.
Some approaches use a large number of algorithms in the portfolio, such as
ArgoSmArT, whose portfolio size is 60. SATzilla uses 19 algorithms, although the
authors use portfolios containing only subsets of those for specific
applications. BUS uses six algorithms and CPhydra five.
\citeA{gent_learning_2010} select from a portfolio of only two algorithms. AQME
has different versions with different portfolio sizes: one with 16 algorithms,
one with eight algorithms (five and three algorithms of different types) and
one with two algorithms \cite{pulina_self-adaptive_2009}. The authors compare the different
portfolios and conclude that the one with eight algorithms offers the best
performance, as it has more variety than the portfolio with two algorithms and
it is easier to make a choice for eight than for 16 algorithms. There are also
approaches that use portfolios of variable size that is determined by training
data \cite{kadioglu_isac_2010,xu_hydra_2010}.
As the algorithms in the portfolio do not change, their selection is crucial for
its success. Ideally, the algorithms will complement each other such that good
performance can be achieved on a wide range of different problems.
\citeA{hong_groups_2004} report that portfolios composed of a random selection
from a large pool of diverse algorithms outperform portfolios composed of the
algorithms with the best overall performance. They develop a framework with a
mathematical model that theoretically justifies this observation.
\citeA{samulowitz_learning_2007} use a portfolio of heuristics for solving
quantified Boolean formulae problems that have specifically been crafted to be
orthogonal to each other. \citeA{xu_hydra_2010} automatically engineer a
portfolio with algorithms of complementary strengths. In
\citeA{xu_evaluating_2012}, the authors analyse the contributions of the
portfolio constituents to the overall performance and conclude that it is not
the algorithms with the best overall performance that contribute most, but
those whose techniques set them apart from the rest. \citeA{kadioglu_isac_2010} use a static
portfolio of variable size that adapts itself to the training data. They cluster
the training problems and choose the best algorithm for each cluster. They do
not emphasise diversity, but suitability for distinct parts of the problem
space. \citeA{xu_hydra_2010} also construct a portfolio with algorithms that
perform well on different parts of the problem space, but do not use clustering.
In financial theory, constructing portfolios can be seen as a quadratic
optimisation problem. The aim is to balance expected performance and risk (the
expected variation of performance) such that performance is maximised and risk
minimised. \citeA{ewald_selecting_2010} solve this problem for algorithm
portfolios using genetic algorithms.
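As a rough, self-contained illustration of this trade-off (not the
genetic-algorithm method of \citeA{ewald_selecting_2010}), the following Python
sketch scores candidate portfolios on invented runtime data by a weighted sum of
the mean and the variance of the portfolio's per-problem best runtime; all
names, numbers and the risk weight are assumptions.
\begin{verbatim}
import itertools
import statistics

# Hypothetical runtimes (seconds) of four algorithms on five training problems.
runtimes = {
    "A": [10, 12, 11, 300, 13],
    "B": [50, 55, 52, 40, 60],
    "C": [5, 400, 6, 7, 500],
    "D": [100, 90, 95, 110, 105],
}
risk_aversion = 0.01  # illustrative weight on the variance term

def portfolio_score(names):
    # Virtual best within the portfolio: per problem, take the fastest member.
    num_problems = len(next(iter(runtimes.values())))
    per_problem = [min(runtimes[n][i] for n in names) for i in range(num_problems)]
    mean = statistics.mean(per_problem)
    var = statistics.pvariance(per_problem)
    return mean + risk_aversion * var  # lower is better

best = min((combo for k in (2, 3)
            for combo in itertools.combinations(runtimes, k)),
           key=portfolio_score)
print("selected portfolio:", best)
\end{verbatim}
In a real setting, the runtimes would come from training experiments and the
risk weight would be chosen to reflect how much variability in performance is
acceptable.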
Most approaches make the composition of the portfolio less explicit. Many
systems use portfolios of solvers that have performed well in solver
competitions with the implicit assumption that they have complementary strengths
and weaknesses and the resulting portfolio will be able to achieve good
performance.
\subsection{Dynamic portfolios}
Rather than relying on a priori properties of the algorithms in the portfolio,
dynamic portfolios adapt the composition of the portfolio or the algorithms
depending on the problem to be solved. The algorithm space $\mathcal{A}$ changes
with each problem and is a subspace of the potentially infinite super algorithm
space $\mathcal{A}'$. This space contains all possible (hypothetical) algorithms
that could be used to solve problems from the problem space. In static
portfolios, the algorithms in the portfolio are selected from $\mathcal{A}'$
once either manually by the designer of the portfolio or automatically based on
empirical results from training data.
One approach is to build a portfolio by combining algorithmic building blocks.
An example of this is the Adaptive Constraint Engine (ACE)
\cite{epstein_collaborative_2001,epstein_adaptive_2002}. The building blocks are
so-called advisors, which characterise variables of the constraint problem and
give recommendations as to which one to process next. ACE combines these
advisors into more complex ones.
\citeA{elsayed_synthesis_2010,elsayed_synthesis_2011} use a similar idea to
construct search strategies for solving constraint problems.
\citeA{fukunaga_automated_2002,fukunaga_automated_2008} proposes CLASS, which
combines heuristic building blocks to form composite heuristics for solving SAT
problems. In these approaches, there is no strong notion of a portfolio -- the
algorithm or strategy used to solve a problem is assembled from lower level
components.
Closely related is the concept of specialising generic building blocks for the
problem to solve. This approach is taken in the SAGE system (Strategy
Acquisition Governed by Experimentation)
\cite{langley_learningd_1983,langley_learning_1983}. It starts with a set of
general operators that can be applied to a search state. These operators are
refined by making the preconditions more specific based on their utility for
finding a solution. The \textsc{Multi-tac} (Multi-tactic Analytic Compiler)
system
\cite{minton_integrating_1993,minton_analytic_1993,minton_automatically_1996}
specialises a set of generic heuristics for the constraint problem to solve.
There can be complex restrictions on how the building blocks are combined.
RT-Syn \cite{smith_knowledge-based_1992} for example uses a preprocessing step
to determine the possible combinations of algorithms and data structures to
solve a software specification problem and then selects the most appropriate
combination using simulated annealing. \citeA{balasubramaniam_automated_2012}
model the construction of a constraint solver from components as a constraint
problem whose solutions denote valid combinations of components.
Another approach is to modify the parameters of parameterised algorithms in the
portfolio. This is usually referred to as automatic tuning and not only
applicable in the context of algorithm portfolios, but also for single
algorithms. The HAP system \cite{vrakas_learning_2003} automatically tunes
the parameters of a planning system depending on the problem to solve.
\citeA{horvitz_bayesian_2001} dynamically modify algorithm parameters during
search based on statistics collected during the solving process.
\subsubsection{Automatic tuning}
The area of automatic parameter tuning has attracted a lot of attention in
recent years. This is because algorithms have an increasing number of parameters
that are difficult to tune even for experts and because of research into
dynamic algorithm portfolios that benefits from automatic tuning. A survey of
the literature on automatic tuning is outside the scope of this paper, but some
of the approaches that are particularly relevant to this survey are described
below.
Automatic tuning and portfolio selection can be treated separately,
as done in the Hydra portfolio builder \cite{xu_hydra_2010}. Hydra uses
ParamILS \cite{hutter_automatic_2007,hutter_paramils_2009} to automatically
tune algorithms in a SATzilla \cite{xu_satzilla_2008} portfolio. ISAC
\cite{kadioglu_isac_2010} uses GGA \cite{ansotegui_gender-based_2009} to
automatically tune algorithms for clusters of problem instances.
\citeA{minton_automatically_1996} first enumerates all possible rule
applications up to a certain time or size bound. Then, the most promising
configuration is selected using beam search, a form of parallel hill climbing
that empirically evaluates the performance of each candidate.
\citeA{balasubramaniam_automated_2012} use hill climbing to similarly identify
the most efficient configuration for a constraint solver on a set of problems.
\citeA{terashima-marin_evolution_1999,fukunaga_automated_2002} use genetic
algorithms to evolve promising configurations.
The systems described in the previous paragraph are only of limited suitability
for dynamic algorithm portfolios. They either take a long time to find good
configurations or are restricted in the number or type of parameters.
Interactions between parameters are only taken into account in a limited way.
More recent approaches have focused on overcoming these limitations.
The ParamILS system \cite{hutter_automatic_2007,hutter_paramils_2009} uses
techniques based on local search to identify parameter configurations with good
performance. The authors address over-confidence (overestimating the performance
of a parameter configuration on a test set) and over-tuning (determining a
parameter configuration that is too specific).
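To make the general idea of searching a configuration space concrete, the
following Python sketch performs a generic first-improvement local search over a
toy parameter space; it is not ParamILS itself, and both the parameter space and
the surrogate evaluation function are invented stand-ins for running the tuned
algorithm on training problems.
\begin{verbatim}
import random

random.seed(1)
# Hypothetical discrete parameter space of an algorithm to tune.
space = {"restarts": [10, 100, 1000],
         "heuristic": ["a", "b", "c"],
         "noise": [0.0, 0.1, 0.2]}

def evaluate(config):
    # Surrogate objective standing in for average runtime on training
    # problems (lower is better); invented for illustration.
    return ((config["restarts"] / 100 - 1) ** 2
            + config["noise"]
            + (config["heuristic"] != "b"))

def local_search(steps=50):
    current = {k: random.choice(v) for k, v in space.items()}
    for _ in range(steps):
        # One-exchange neighbourhood: change a single parameter value.
        k = random.choice(list(space))
        neighbour = dict(current, **{k: random.choice(space[k])})
        if evaluate(neighbour) <= evaluate(current):
            current = neighbour
    return current

print(local_search())
\end{verbatim}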
\citeA{ansotegui_gender-based_2009} use genetic algorithms to discover
favourable parameter configurations for the algorithms being tuned. The authors
use a racing approach to avoid having to run all generated configurations to
completion. They also note that one of the advantages of the genetic algorithm
approach is that it is inherently parallel.
Both of these approaches are capable of tuning algorithms with a large number of
parameters and possible values as well as taking interactions between parameters
into account. They are used in practice in the Algorithm Selection systems Hydra
and ISAC, respectively. In both cases, they are only used to construct static
portfolios however. More recent approaches focus on exploiting parallelism
\cite<e.g.>{hutter_parallel_2012}.
\medskip
Dynamic portfolios are in general a more fruitful area for Algorithm
Selection research because of the large space of possible decisions. Static
portfolios are usually relatively small and the decision space is amenable to
human exploration. This is not a feasible approach for dynamic portfolios
though. \citeA{minton_automatically_1996} notes that
\begin{quote}
``\textsc{Multi-tac} turned out to have an unexpected advantage in this arena,
due to the complexity of the task. Unlike our human subjects, \textsc{Multi-tac}
experimented with a wide variety of combinations of heuristics. Our human
subjects rarely had the inclination or patience to try many alternatives, and on
at least one occasion incorrectly evaluated alternatives that they did try.''
\end{quote}
\section{Problem solving with portfolios}\label{sec:solving}
Once an algorithm portfolio has been constructed, the way in which it is to be
used has to be decided. There are different considerations to take into account.
The two main issues are as follows.
\begin{description}
\item[What to select]\hfill\\
Given the full set of algorithms in the portfolio, a subset has to be chosen
for solving the problem. This subset can consist of only a single algorithm
that is used to solve the problem to completion, the entire portfolio with
the individual algorithms interleaved or running in parallel or anything in
between.
\item[When to select]\hfill\\
The selection of the subset of algorithms can be made only once before
solving starts or continuously during search. If the latter is the case,
selections can be made at well-defined points during search, for example at
each node of a search tree, or when the system judges it to be necessary to
make a decision.
\end{description}
Rice's model assumes that only a single algorithm $A \in \mathcal{A}$ is
selected. It implicitly assumes that this selection occurs only once and before
solving the actual problem.
\subsection{What to select}
A common and the simplest approach is to select a single algorithm from the
portfolio and use it to solve the problem completely. This single algorithm has
been determined to be the best for the problem at hand. For example SATzilla
\cite{nudelman_understanding_2004,xu_satzilla-07_2007,xu_satzilla_2008},
\textsc{ArgoSmArT} \cite{nikoli_instance-based_2009}, SALSA
\cite{demmel_self-adapting_2005} and \textsc{Eureka}
\cite{cook_maximizing_1997} do this. The disadvantage of this approach is that
there is no way of mitigating a wrong selection. If an algorithm is chosen that
exhibits bad performance on the problem, the system is ``stuck'' with it and no
adjustments are made, even if all other portfolio algorithms would perform much
better.
An alternative approach is to compute schedules for running (a subset of) the
algorithms in the portfolio. In some approaches, the terms portfolio and
schedule are used synonymously -- all algorithms in the portfolio are selected
and run according to a schedule that allocates time slices to each of them. The
task of Algorithm Selection becomes determining the schedule rather than
selecting algorithms.
\citeA{roberts_directing_2006} rank the portfolio algorithms in order of
expected performance and allocate time according to this ranking.
\citeA{howe_exploiting_1999} propose a round-robin schedule that contains all
algorithms in the portfolio. The order of the algorithms is determined by the
expected run time and probability of success. The first algorithm is allocated a
time slice that corresponds to the expected time required to solve the problem.
If it is unable to solve the problem during that time, it and the remaining
algorithms are allocated additional time slices until the problem is solved or a
time limit is reached.
\citeA{pulina_self-adaptive_2009} determine a schedule according to
three strategies. The first strategy is to run all portfolio algorithms for a
short time and if the problem has not been solved after this, run the predicted
best algorithm exclusively for the remaining time. The second strategy runs all
algorithms for the same amount of time, regardless of what the predicted best
algorithm is. The third variation allocates exponentially increasing time slices
to each algorithm such that the total time is again distributed equally among
them. In addition to the three different scheduling strategies, the authors
evaluate four different ways of ordering the portfolio algorithms within a
schedule that range from ranking based on past performance to random. They
conclude that ordering the algorithms based on their past performance and
allocating the same amount of time to all algorithms gives the best overall
performance.
\citeA{omahony_using_2008} optimise the computed schedule with respect to the
probability that the problem will be solved. They use the past performance data
of the portfolio algorithms for this. However, they note that their approach of
using a simple complete search procedure to find this optimal schedule relies on
small portfolio sizes and that ``for a large number of solvers, a more
sophisticated approach would be necessary''.
\citeA{kadioglu_algorithm_2011} formulate the problem of computing a schedule
that solves most problems in a training set in the lowest amount of time as a
resource constrained set covering integer programme. They pursue similar aims as
\citeA{omahony_using_2008} but note that their approach is more efficient and
able to scale to larger schedules. However, their evaluation concludes that the
approach with the best overall performance is to run the predicted best
algorithm for 90\% of the total available time and distribute the remaining 10\%
across the other algorithms in the portfolio according to a static schedule.
\citeA{petrik_statistically_2005} presents a framework for calculating optimal
schedules. The approach is limited by a number of assumptions about the
algorithms and the execution environment, but is applicable to a wide range of
research in the literature. \citeA{petrik_learning_2006,bougeret_combining_2009}
compute an optimal static schedule for allocating fixed time slices to each
algorithm. \citeA{sayag_combining_2006} propose an algorithm to efficiently
compute an optimal schedule for portfolios of fixed size and show that the
problem of generating or even approximating an optimal schedule is
computationally intractable. \citeA{roberts_learned_2007} explore different
strategies for allocating time slices to algorithms. In a serial execution
strategy, each algorithm is run once for an amount of time determined by the
average time to find a solution on previous problems or the time that was
predicted for finding a solution on the current problem. A round-robin strategy
allocates increasing time slices to each algorithm. The length of a time slice
is based on the proportion of successfully solved training problems within this
time. \citeA{gerevini_automatically_2009} compute round-robin schedules
following a similar approach. Not all of their computed schedules contain all
portfolio algorithms. \citeA{streeter_combining_2007} compute a schedule with
the aim of improving the average-case performance. In later work, they compute
theoretical guarantees for the performance of their schedule
\cite{streeter_new_2008}.
\citeA{wu_portfolios_2007} approach scheduling the chosen algorithms in a
different way and assume a fixed limit on the amount of resources an algorithm
can consume while solving a problem. All algorithms are run sequentially for
this fixed amount of time. Similar to \citeA{gerevini_automatically_2009}, they
simulate the performance of different allocations and select the best one based
on the results of these simulations. \citeA{fukunaga_genetic_2000} estimates the
performance of candidate allocations through bootstrap sampling.
\citeA{gomes_algorithm_1997,gomes_algorithm_2001} also evaluate the performance
of different candidate portfolios, but take into account how many algorithms can
be run in parallel. They demonstrate that the optimal schedule (in this case the
number of algorithms that are being run) changes as the number of available
processors increases. \citeA{gagliolo_towards_2008} investigate how to allocate
resources to algorithms in the presence of multiple CPUs that allow more than
one algorithm to be run in parallel. \citeA{yun_learning_2012} craft portfolios with
the specific aim of running the algorithms in parallel.
\medskip
Related research is concerned with the scheduling of restarts of stochastic
algorithms -- it also investigates the best way of allocating resources. The
paper that introduced algorithm portfolios \cite{huberman_economics_1997} uses a
portfolio of identical stochastic algorithms that are run with different random
seeds. There is a large amount of research on how to determine restart schedules
for randomised algorithms and a survey of this is outside the scope of this
paper. A few approaches that are particularly relevant to Algorithm Selection
and portfolios are mentioned below.
\citeA{horvitz_bayesian_2001} determine the amount of time to allocate to a
stochastic algorithm before restarting it. They use dynamic policies that take
performance predictions into account, showing that such policies can outperform
an optimal fixed policy.
\citeA{cicirello_max_2005} investigate a restart model that allocates
resources to an algorithm proportional to the number of times it has been
successful in the past. In particular, they note that the allocated resources
should grow doubly exponentially in the number of successes. Allocation of fewer
resources results in over-exploration (too many different things are tried and
not enough resources given to each) and allocation of more resources in
over-exploitation (something is tried for too long before moving on to
something different).
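As a concrete reading of this growth rule (an illustration only, not the bandit
model of \citeA{cicirello_max_2005}), a budget that is doubly exponential in the
number of past successes could be computed as follows; the success counts and
the total budget are invented.
\begin{verbatim}
# Illustrative doubly exponential resource allocation: an algorithm with s past
# successes receives a raw budget proportional to 2**(2**s); the raw budgets
# are then normalised to the total budget available for the next round.
successes = {"restart_policy_a": 1, "restart_policy_b": 3, "restart_policy_c": 0}
total_budget = 1000.0  # e.g. seconds for the next round (assumed)

raw = {alg: 2.0 ** (2.0 ** s) for alg, s in successes.items()}
norm = sum(raw.values())
allocation = {alg: total_budget * r / norm for alg, r in raw.items()}
print(allocation)
\end{verbatim}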
\citeA{streeter_restart_2007} compute restart schedules that take the runtime
distribution of the portfolio algorithms into account. They present an approach
that does so statically based on the observed performance on a set of training
problems as well as an approach that learns the runtime distributions as new
problems are solved without a separate training set.
\subsection{When to select}\label{sec:offon}
In addition to whether they choose a single algorithm or compute a schedule,
existing approaches can also be distinguished by whether they operate before the
problem is being solved (offline) or while the problem is being solved (online).
The advantage of the latter is that more fine-grained decisions can be made and
the effect of a bad choice of algorithm is potentially less severe. The price
for this added flexibility is a higher overhead however, as algorithms are
selected more frequently.
Examples of approaches that only make offline decisions include
\citeA{xu_satzilla_2008,minton_automatically_1996,smith_knowledge-based_1992,omahony_using_2008}.
In addition to having no way of mitigating wrong choices, often these will not
even be detected. These approaches do not monitor the execution of the chosen
algorithms to confirm that they conform with the expectations that led to them
being chosen. Purely offline approaches are inherently vulnerable to bad
choices. Their advantage however is that they only need to select an algorithm
once and incur no overhead while the problem is being solved.
Moving towards online systems, the next step is to monitor the execution of an
algorithm or a schedule to be able to intervene if expectations are not met.
\citeA{fink_statistical_1997,fink_how_1998} investigates setting a time
bound for the algorithm that has been selected based on the predicted
performance. If the time bound is exceeded, the solution attempt is abandoned.
More sophisticated systems furthermore adjust their selection if
such a bound is exceeded. \citeA{borrett_adaptive_1996} try to detect behaviour
during search that indicates that the algorithm is performing badly, for example
visiting nodes in a subtree of the search that clearly do not lead to a
solution. If such behaviour is detected, they propose switching the currently
running algorithm according to a fixed replacement list.
\citeA{sakkout_instance_1996} explore the same basic idea. They switch between
two algorithms for solving constraint problems that achieve different levels of
consistency. The level of consistency refers to the amount of search space that
is ruled out by inference before actually searching it. Their approach achieves
the same level of search space reduction as the more expensive algorithm at a
significantly lower cost. This is possible because doing more inference does not
necessarily result in a reduction of the search space in all cases. The authors
exploit this fact by detecting such cases and doing the cheaper inference.
\citeA{stergiou_heuristics_2009} also investigates switching propagation methods
during solving. \citeA{yu_adaptive_2004,yu_adaptive_2006} do not monitor the
execution of the selected algorithm, but instead the values of the features used
to select it. They re-evaluate the selection function when its inputs change.
Further examples of approaches that monitor the execution of the selected
algorithm are \citeA{pulina_self-adaptive_2009,gagliolo_adaptive_2004}, but also
\citeA{horvitz_bayesian_2001} where the offline selection of an algorithm is
combined with the online selection of a restart strategy. An interesting feature
of \citeA{pulina_self-adaptive_2009} is that the authors adapt the model used
for the offline algorithm selection if the actual run time is much higher than
the predicted runtime. In this way, they are not only able to mitigate bad
choices during execution, but also prevent them from happening again.
The approaches that make decisions during search, for example at every node of
the search tree, are necessarily online systems. \citeA{arbelaez_online_2009}
select the best search strategy at checkpoints in the search tree. Similarly,
\citeA{brodley_automatic_1993} recursively partitions the classification problem
to be solved and selects an algorithm for each partition. In this approach, a
lower-level decision can lead to changing the decision at the level above. This
is usually not possible for combinatorial search problems, as decisions at a
higher level cannot be changed easily.
Closely related is the work by
\citeA{lagoudakis_algorithm_2000,lagoudakis_learning_2001}, which partitions the
search space into recursive subtrees and selects the best algorithm from the
portfolio for every subtree. They specifically consider recursive algorithms. At
each recursive call, the Algorithm Selection procedure is invoked. This is a
more natural extension of offline systems than monitoring the execution of the
selected algorithms, as the same mechanisms can be used.
\citeA{samulowitz_learning_2007} also select algorithms for recursively solving
sub-problems.
The PRODIGY system \cite{carbonell_prodigy_1991} selects the next
operator to apply in order to reach the goal state of a planning problem at each
node in the search tree. Similarly, \citeA{langley_learning_1983} learn weights
for operators that can be applied at each search state and select from among
them accordingly.
Most approaches rely on an offline element that makes a decision before search
starts. In the case of recursive calls, this is no different from making a
decision during search however.
\citeA{gagliolo_adaptive_2004,gagliolo_neural_2005,gagliolo_learning_2006} on the
other hand learn the Algorithm Selection model only dynamically while the
problem is being solved. Initially, all algorithms in the portfolio are
allocated the same (small) time slice. As search progresses, the allocation
strategy is updated, giving more resources to algorithms that have exhibited
better performance. The expected fastest algorithm receives half of the total
time, the next best algorithm half of the remaining time and so on.
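The allocation rule described above amounts to a geometric split of the time
budget over the ranked algorithms; a minimal sketch with an assumed ranking is
given below.
\begin{verbatim}
# Geometric split of a time budget over algorithms ranked by expected speed:
# the predicted fastest gets half, the next half of the remainder, and so on;
# the last algorithm receives whatever is left.
ranked = ["alg_fast", "alg_medium", "alg_slow"]  # assumed ranking
budget = 120.0

allocation, remaining = {}, budget
for alg in ranked[:-1]:
    allocation[alg] = remaining / 2
    remaining /= 2
allocation[ranked[-1]] = remaining
print(allocation)  # {'alg_fast': 60.0, 'alg_medium': 30.0, 'alg_slow': 30.0}
\end{verbatim}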
\citeA{armstrong_dynamic_2006} also rely exclusively on a selection model trained
online in a similar fashion. They evaluate different strategies of allocating
resources to algorithms according to their progress during search. All of these
strategies converge to allocating all resources to the algorithm with the best
observed performance.
\section{Portfolio selectors}\label{sec:selectors}
Research on \emph{how} to select from a portfolio in an Algorithm Selection
system has generated the largest number of different approaches within the
framework of Algorithm Selection. In Rice's framework, it roughly corresponds to
the performance mapping $p(A,x)$, although only a few approaches use this exact
formulation. Rice assumes that the performance of a particular algorithm on a
particular problem is of interest. While this is true in general, many
approaches only take this into account implicitly. Selecting the single best
algorithm for a problem for example has no explicit mapping into Rice's
performance measure space $\mathcal{R}^n$ at all. The selection mapping
$S(f(x))$ is also related to the problem of how to select.
There are many different ways a mechanism to select from a portfolio can be
implemented. Apart from accuracy, one of the main requirements for such a
selector is that it is relatively cheap to run -- if selecting an algorithm for
solving a problem is more expensive than solving the problem, there is no point
in doing so. \citeA{vassilevska_confronting_2006} explicitly define the selector
as ``an efficient (polynomial time) procedure''.
There are several challenges associated with making selectors efficient.
Algorithm Selection systems that analyse the problem to be solved, such as
SATzilla, need to take steps to ensure that the analysis does not become too
expensive. Two such measures are the running of a pre-solver and the prediction
of the time required to analyse a problem \cite{xu_satzilla_2008}. The idea
behind the pre-solver is to choose an algorithm with reasonable general
performance from the portfolio and use it to start solving the problem before
starting to analyse it. If the problem happens to be very easy, it will be
solved even before the results of the analysis are available. After a fixed
time, the pre-solver is terminated and the results of the Algorithm Selection
system are used. \citeA{pulina_self-adaptive_2009} use a similar approach and
run all algorithms for a short time in one of their strategies. Only if the
problem has not been solved after that do they move on to the algorithm that was
actually selected.
Predicting the time required to analyse a problem is a closely related idea. If
the predicted required analysis time is too high, a default algorithm with
reasonable performance is chosen and run on the problem. This technique is
particularly important in cases where the problem is hard to analyse, but easy
to solve. As some systems use information that comes from exploring part of the
search space (cf.\ Section~\ref{sec:features}), this is a very relevant concern
in practice. On some problems, even probing just a tiny part of the search space
may take a very long time.
\citeA{gent_learning_2010,gent_machine_2010} report that using the
misclassification penalty as a weight for the individual problems during
training improves the quality of the predictions. The misclassification penalty
quantifies the ``badness'' of a wrong prediction, in this case as the additional
time required to solve a problem. If an algorithm was chosen that is only
slightly worse than the best one, it has less impact than choosing an algorithm
that is orders of magnitude worse. Using the penalty during training is a way of
guiding the learned model towards the problems where the potential performance
improvement is large.
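A simplified way to realise this idea is to pass the misclassification penalty
as a per-example weight when training a classifier. The sketch below assumes
scikit-learn is available and uses made-up features and runtimes; it is not the
model of \citeA{gent_learning_2010}.
\begin{verbatim}
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Made-up training data: problem features and runtimes of two algorithms.
features = np.array([[10, 0.3], [12, 0.7], [50, 0.2], [45, 0.9]])
runtimes = np.array([[5.0, 60.0], [4.0, 3.5], [200.0, 20.0], [30.0, 25.0]])

best = runtimes.argmin(axis=1)                     # label: faster algorithm
penalty = np.abs(runtimes[:, 0] - runtimes[:, 1])  # cost of a wrong choice

# Weight each training example by its misclassification penalty, so problems
# where the choice matters most dominate the learned model.
model = DecisionTreeClassifier(random_state=0)
model.fit(features, best, sample_weight=penalty)
print(model.predict([[11, 0.5]]))
\end{verbatim}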
\medskip
There are many different approaches to how portfolio selectors operate. The
selector is not necessarily an explicit part of the system.
\citeA{minton_automatically_1996} compiles the Algorithm Selection system into a
Lisp programme for solving the original constraint problem. The selection rules
are part of the programme logic.
\citeA{fukunaga_automated_2008,garrido_dvrp_2010} evolve selectors and
combinators of heuristic building blocks using genetic algorithms. The selector
is implicit in the evolved programme.
\subsection{Performance models}
The way the selector operates is closely linked to the way the performance model
of the algorithms in the portfolio is built. In early approaches, the
performance model was usually not learned but given in the form of human expert
knowledge. \citeA{borrett_adaptive_1996,sakkout_instance_1996} use hand-crafted
rules to determine whether to switch the algorithm during solving.
\citeA{allen_selecting_1996} also have hand-crafted rules, but estimate the
runtime performance of an algorithm. Some more recent approaches also rely
solely on human expert knowledge. \citeA{wei_switching_2008} select a local search
heuristic for solving SAT problems by a hand-crafted rule that considers the
distribution of clause weights. \citeA{tolpin_rational_2011} model the
performance space manually using statistical methods and use this hand-crafted
model to select a heuristic for solving constraint problems.
\citeA{vrakas_learning_2003} learn rules automatically, but then filter them
manually.
A more common approach today is to automatically learn performance models
using Machine Learning on training data. The portfolio algorithms are run on a
set of representative problems and based on these experimental results,
performance models are built. This approach is used by
\citeA{xu_satzilla_2008,pulina_multi-engine_2007,omahony_using_2008,kadioglu_isac_2010,guerri_learning_2004},
to name but a few examples. A drawback of this approach is that the training
time is usually large. \citeA{gagliolo_impact_2006} investigate ways of
mitigating this problem by using censored sampling, which introduces an upper
bound on the runtime of each experiment in the training phase.
\citeA{kotthoff_evaluation_2012} also investigate censored sampling where not
all algorithms are run on all problems in the training phase. Their results show
that censored sampling may not have a significant effect on the performance of
the learned model.
Models can also be built without a separate training phase, but while the
problem is solved. This approach is used by
\citeA{gagliolo_learning_2006,armstrong_dynamic_2006} for example. While this
significantly reduces the time to build a system, it can mean that the result is
less effective and efficient. At the beginning, when no performance models have
been built, the decisions of the selector might be poor. Furthermore, creating
and updating performance models while the problem is being solved incurs an
overhead.
The choice of Machine Learning technique is affected by the way the portfolio
selector operates. Some techniques are more amenable to offline approaches
(e.g.\ linear regression models used by \citeR{xu_satzilla_2008}), while others
lend themselves to online methods (e.g.\ reinforcement learning used by
\citeR{armstrong_dynamic_2006}).
Performance models can be categorised by the type of entity whose performance is
modelled -- the entire portfolio or individual algorithms within it. There are
publications that use both of those categories however
\cite<e.g.>{smith-miles_towards_2008}. In some cases, no performance models as
such are used at all.
\citeA{caseau_meta-heuristic_1999,minton_automatically_1996,balasubramaniam_automated_2012}
for example run the candidate algorithms on a set of test problems and select
the one with the best performance.
\citeA{gomes_algorithm_1997,wu_portfolios_2007,gerevini_automatically_2009}
simulate the performance of different selections on training data.
\subsubsection{Per-portfolio models}
One automated approach is to learn a performance model of the entire portfolio
based on training data. Usually, the prediction of such a model is the best
algorithm from the portfolio for a particular problem. There is only a weak
notion of an individual algorithm's performance. In Rice's notation for the
performance mapping $p(A,x)$, $A$ is the (subset of the) portfolio instead of an
individual algorithm, i.e.\ $A\subseteq \mathcal{A}$ instead of Rice's $A\in
\mathcal{A}$.
This is used for example by
\citeA{omahony_using_2008,cook_maximizing_1997,pulina_multi-engine_2007,nikoli_instance-based_2009,guerri_learning_2004}.
Again there are different ways of doing this. Lazy approaches do not learn an
explicit model, but use the set of training examples as a case base. For new
problems, the closest problem or the set of $n$ closest problems in the case
base is determined and decisions made accordingly.
\citeA{wilson_case-based_2000,pulina_multi-engine_2007,omahony_using_2008,nikoli_instance-based_2009,gebruers_making_2004,malitsky_non-model-based_2011}
use nearest-neighbour classifiers to achieve this. Apart from the conceptual
simplicity, such an approach is attractive because it does not try to abstract
from the examples in the training data. The problems that Algorithm Selection
techniques are applied to are usually complex and factors that affect the
performance are hard to understand. This makes it hard to assess whether a
learned abstract model is appropriate and what its requirements and limitations
are.
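A minimal sketch of such a case-based selector is shown below: a plain
nearest-neighbour vote over an invented case base, with feature scaling and more
careful distance measures ignored for brevity.
\begin{verbatim}
import math

# Case base: (problem features, best algorithm observed on that problem).
case_base = [
    ((120, 0.25), "solverA"),
    ((480, 0.40), "solverB"),
    ((100, 0.30), "solverA"),
    ((500, 0.45), "solverB"),
]

def select(features, k=3):
    # k-nearest-neighbour vote over the case base (Euclidean distance on
    # illustrative features; scaling issues are ignored here).
    nearest = sorted(case_base,
                     key=lambda case: math.dist(case[0], features))[:k]
    votes = {}
    for _, alg in nearest:
        votes[alg] = votes.get(alg, 0) + 1
    return max(votes, key=votes.get)

print(select((110, 0.28)))  # -> solverA
\end{verbatim}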
Explicitly-learned models try to identify the concepts that affect performance
for a given problem. This acquired knowledge can be made explicit to improve the
researchers' understanding of the problem domain. There are several
Machine Learning techniques that facilitate this, as the learned models are
represented in a form that is easy to understand by humans.
\citeA{carbonell_prodigy_1991,gratch_composer_1992,brodley_automatic_1993,vrakas_learning_2003}
learn classification rules that guide the selector. \citeA{vrakas_learning_2003}
note that the decision to use a classification rule learner was not so much
guided by the performance of the approach as by the easy interpretability of the
result.
\citeA{langley_learning_1983,epstein_adaptive_2002,nareyek_choosing_2001} learn
weights for decision rules to guide the selector towards the best algorithms.
\citeA{cook_maximizing_1997,guerri_learning_2004,guo_learning-based_2004,roberts_directing_2006,bhowmick_application_2006,gent_learning_2010}
go one step further and learn decision trees. \citeA{guo_learning-based_2004}
again note that the reason for choosing decision trees was not primarily the
performance, but the understandability of the result.
\citeA{pfahringer_meta-learning_2000} show the set of learned rules in the paper
to illustrate its compactness. Similarly, \citeA{gent_learning_2010} show their
final decision tree in the paper.
Some approaches learn probabilistic models that take uncertainty and variability
into account. \citeA{gratch_composer_1992} use a probabilistic model to learn
control rules. The probabilities for candidate rules being beneficial are
evaluated and updated on a training set until a threshold is reached. This
methodology is used to avoid having to evaluate candidate rules on larger
training sets, which would show their utility more clearly but be more
expensive. \citeA{demmel_self-adapting_2005} learn multivariate Bayesian
decision rules. \citeA{carchrae_low-knowledge_2004} learn a Bayesian classifier
to predict the best algorithm after a certain amount of time.
\citeA{stern_collaborative_2010} learn Bayesian models that incorporate
collaborative filtering. \citeA{domshlak_max_2010} learn decision rules using
na\"ive Bayes classifiers.
\citeA{lagoudakis_algorithm_2000,petrik_statistically_2005} learn performance
models based on Markov Decision Processes. \citeA{kotthoff_evaluation_2012} use
statistical relational learning to predict the ranking of the algorithms in the
portfolio on a particular problem. None of these approaches make explicit use of
the uncertainty attached to a decision though.
Other approaches include support vector machines
\cite{hough_modern_2006,arbelaez_online_2009}, reinforcement learning
\cite{armstrong_dynamic_2006}, neural networks \cite{gagliolo_neural_2005},
decision tree ensembles \cite{hough_modern_2006}, ensembles of general
classification algorithms \cite{kotthoff_ensemble_2010}, boosting
\cite{bhowmick_application_2006}, hybrid approaches that combine regression and
classification \cite{kotthoff_hybrid_2012}, multinomial logistic regression
\cite{samulowitz_learning_2007}, self-organising maps
\cite{smith-miles_towards_2008} and clustering
\cite{stamatatos_learning_2009,stergiou_heuristics_2009,kadioglu_isac_2010}.
\citeA{sayag_combining_2006,streeter_combining_2007} compute schedules for
running the algorithms in the portfolio based on a statistical model of the
problem instance distribution and performance data for the algorithms. This is
not an exhaustive list, but focuses on the most prominent approaches and
publications. Within a single family of approaches, such as decision trees,
there are further distinctions that are outside the scope of this paper, such as
the type of decision tree inducer.
\citeA{arbelaez_online_2009} discuss a technical issue related to the
construction of per-portfolio performance models. A particular algorithm often
exhibits much better performance in general than other algorithms on a
particular instance distribution. Therefore, the training data used to learn
the performance model will be skewed towards that algorithm. This can be a
problem for Machine Learning, as always predicting this best algorithm might
have a very high accuracy already, making it very hard to improve on. The
authors mention two means of mitigating this problem. The training set can be
\emph{under-sampled}, where examples on which the best overall algorithm performs
best are deliberately omitted. Alternatively, the set can be \emph{over-sampled}
by artificially increasing the number of examples where another algorithm is
better.
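Both remedies are easy to state in code. The following sketch, on invented
data, balances a skewed label distribution either by over-sampling the minority
label or by under-sampling the majority label.
\begin{verbatim}
import random

random.seed(0)
# Training examples as (features, best algorithm); the label distribution is
# skewed towards "alg1", the overall best algorithm.
data = ([((i,), "alg1") for i in range(90)]
        + [((i,), "alg2") for i in range(10)])

minority = [d for d in data if d[1] == "alg2"]
majority = [d for d in data if d[1] == "alg1"]

# Over-sample the minority class until the classes are balanced.
oversampled = majority + [random.choice(minority) for _ in range(len(majority))]
# Alternatively, under-sample the majority class.
undersampled = random.sample(majority, len(minority)) + minority
print(len(oversampled), len(undersampled))
\end{verbatim}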
\subsubsection{Per-algorithm models}
A different approach is to learn performance models for the individual
algorithms in the portfolio. The predicted performance of an algorithm on a
problem can be compared to the predicted performance of the other portfolio
algorithms and the selector can proceed based on this. The advantage of this
approach is that it is easier to add and remove algorithms from the portfolio --
instead of having to retrain the model for the entire portfolio, it suffices to
train a model for the new algorithm or remove one of the trained models.
Most approaches only rely on the order of predictions being correct. It does not
matter if the prediction of the performance itself is wildly inaccurate as long
as it is correct relative to the other predictions.
This is the approach that is implicitly assumed in Rice's framework. The
prediction is the performance mapping $p(A,x)$ for an algorithm
$A\in\mathcal{A}$ on a problem $x\in\mathcal{P}$. Models for each algorithm in
the portfolio are used for example by
\citeA{xu_satzilla_2008,howe_exploiting_1999,allen_selecting_1996,lobjois_branch_1998,gagliolo_learning_2006}.
A common way of doing this is to use regression to directly predict the
performance of each algorithm. This is used by
\citeA{xu_satzilla_2008,howe_exploiting_1999,leyton-brown_learning_2002,haim_restart_2009,roberts_learned_2007}.
The performance of the algorithms in the portfolio is evaluated on a set of
training problems, and a relationship between the characteristics of a problem
and the performance of an algorithm derived. This relationship usually has the
form of a simple formula that is cheap to compute at runtime.
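A simplified sketch of this scheme, with one linear regression model per
portfolio algorithm and fabricated training data (scikit-learn assumed
available), is given below; the algorithm with the lowest predicted runtime is
selected.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression

# Fabricated training data: problem features and observed runtimes per algorithm.
X = np.array([[10, 2.0], [20, 1.5], [35, 3.0], [50, 2.5]])
runtimes = {
    "solverA": np.array([1.0, 4.0, 9.0, 20.0]),
    "solverB": np.array([6.0, 6.5, 7.0, 8.0]),
}

# One regression model per portfolio algorithm.
models = {alg: LinearRegression().fit(X, y) for alg, y in runtimes.items()}

def select(features):
    predictions = {alg: float(m.predict([features])[0]) for alg, m in models.items()}
    return min(predictions, key=predictions.get), predictions

print(select([40, 2.8]))
\end{verbatim}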
\citeA{silverthorn_latent_2010} on the other hand learn latent class models of
unobserved variables to capture relationships between solvers, problems and run
durations. Based on the predictions, the expected utility is computed and used
to select an algorithm. \citeA{sillito_improvements_2000} surveys sampling
methods to estimate the cost of solving constraint problems.
\citeA{watson_empirical_2003} models the behaviour of local search algorithms
with Markov chains.
Another approach is to build statistical models of an algorithm's performance
based on past observations. \citeA{weerawarana_pythia_1996} use Bayesian belief
propagation to predict the runtime of a particular algorithm on a particular
problem. Bayesian inference is used to determine the class of a problem and the
closest case in the knowledge base. A performance profile is extracted from that
and used to estimate the runtime. The authors also propose an alternative
approach that uses neural nets. \citeA{fink_statistical_1997,fink_how_1998}
computes the expected gain for time bounds based on past success times. The
computed values are used to choose the algorithm and the time bound for running
it. \citeA{brazdil_comparison_2000} compare algorithm rankings based on different
past performance statistics. Similarly, \citeA{leite_using_2010} maintain a
ranking based on past performance. \citeA{cicirello_max_2005} propose a bandit
problem model that governs the allocation of resources to each algorithm in the
portfolio. \citeA{wang_optimizing_2007} also use a bandit model, but furthermore
evaluate a Q-learning approach, where in addition to bandit model rewards, the
states of the system are taken into account.
\citeA{gomes_algorithm_1997,wu_portfolios_2007,gerevini_automatically_2009} use
the past performance of algorithms to simulate the performance of different
algorithm schedules and use statistical tests to select one of the schedules.
\subsubsection{Hierarchical models}\label{hierarchical}
There are some approaches that combine several models into a hierarchical
performance model. There are two basic types of hierarchical models. One type
predicts additional \emph{properties of the problem} that cannot be measured
directly or are not available without solving the problem. The other type makes
\emph{intermediate predictions} that do not inform Algorithm Selection directly,
but rather the final predictions.
\citeA{xu_hierarchical_2007} use sparse multinomial logistic regression to
predict whether a SAT problem instance is satisfiable and, based on that
prediction, use a logistic regression model to predict the runtime of each
algorithm in the portfolio. \citeA{haim_restart_2009} also predict the
satisfiability of a SAT instance and then choose an algorithm from a portfolio.
Both report that being able to distinguish between satisfiable and unsatisfiable
problems enables performance improvements. The satisfiability of a problem is a
property that needs to be \emph{predicted} in order to be useful for Algorithm
Selection. If the property is \emph{computed} (i.e.\ the problem is solved),
there is no need to perform Algorithm Selection anymore.
\citeA{gent_machine_2010} use classifiers to first decide on the level of
consistency a constraint propagator should achieve and then on the actual
implementation of the propagator that achieves the selected level of
consistency. A different publication that uses the same data set does not make
this distinction however \cite{kotthoff_ensemble_2010}, suggesting that the
performance benefits are not significant in practice.
Such hierarchical models are only applicable in a limited number of scenarios,
which explains the comparatively small amount of research into them. For many
application domains, only a single property needs to be predicted and can be
predicted without intermediate steps with sufficient accuracy.
\citeA{kotthoff_hybrid_2012} proposes a hierarchical approach that is
domain-independent. He uses the performance predictions of regression models as
input to a classifier that decides which algorithm to choose and demonstrates
performance improvements compared to selecting an algorithm directly based on
the predicted performance. The idea is very similar to that of \emph{stacking}
in Machine Learning \cite{wolpert_stacked_1992}.
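The following sketch illustrates the general stacking idea on fabricated data
(it is not the system of \citeA{kotthoff_hybrid_2012}): level-0 regression
models predict each algorithm's runtime, and a level-1 classifier maps those
predictions to the final choice. A proper setup would use out-of-fold level-0
predictions to train the level-1 model; this is omitted here for brevity.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

# Fabricated data: features, and runtimes of two algorithms on each problem.
X = np.array([[5, 1.0], [15, 0.5], [30, 2.0], [40, 1.5], [55, 2.5], [60, 0.8]])
runtimes = np.array([[1, 9], [2, 8], [10, 6], [12, 5], [30, 7], [35, 4]],
                    dtype=float)

# Level 0: one runtime regression model per algorithm.
level0 = [LinearRegression().fit(X, runtimes[:, i]) for i in range(2)]
predicted = np.column_stack([m.predict(X) for m in level0])

# Level 1: a classifier that maps predicted runtimes to the final choice.
labels = runtimes.argmin(axis=1)
level1 = DecisionTreeClassifier(random_state=0).fit(predicted, labels)

new_problem = np.array([[45, 1.2]])
new_pred = np.column_stack([m.predict(new_problem) for m in level0])
print("chosen algorithm index:", int(level1.predict(new_pred)[0]))
\end{verbatim}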
\subsubsection{Selection of model learner}
Apart from the different types of performance models, there are different
Machine Learning algorithms that can be used to learn a particular kind of
model. While most of the approaches mentioned here rely on a single way of doing
this, some of the research compares different methods.
\citeA{xu_satzilla_2008} mention that, in addition to the chosen ridge regression
for predicting the runtime, they explored using lasso regression, support vector
machines and Gaussian processes. They chose ridge regression not because it
provided the most accurate predictions, but because it offered the best
trade-off between accuracy and the cost of making the prediction.
\citeA{weerawarana_pythia_1996} propose an
approach that uses neural networks in addition to the Bayesian belief
propagation approach they describe initially. \citeA{cook_maximizing_1997}
compare different decision tree learners, a Bayesian classifier, a nearest
neighbour approach and a neural network. They chose the C4.5 decision tree
inducer because even though it may be outperformed by a neural network, the
learned trees are easily understandable by humans and may provide insight into
the problem domain. \citeA{leyton-brown_learning_2002} compare several versions
of linear and non-linear regression. \citeA{hutter_performance_2006} report
having explored support vector machine regression, multivariate adaptive
regression splines (MARS) and lasso regression before deciding to use the linear
regression approach of \citeA{leyton-brown_learning_2002}. They also report
experimental results with sequential Bayesian linear regression and Gaussian
Process regression. \citeA{guo_algorithm_2003,guo_learning-based_2004} explore
using decision trees, na\"ive Bayes rules, Bayesian networks and meta-learning
techniques. They also chose the C4.5 decision tree inducer because it is one of
the top performers and creates models that are easy to understand and quick to
execute. \citeA{gebruers_using_2005} compare nearest neighbour classifiers,
decision trees and statistical models. They show that a nearest neighbour
classifier outperforms all the other approaches on their data sets.
\citeA{hough_modern_2006} use decision tree ensembles and support vector
machines. \citeA{bhowmick_application_2006} investigate alternating decision
trees and various forms of boosting, while \citeA{pulina_multi-engine_2007} use
decision trees, decision rules, logistic regression and nearest neighbour
approaches. They do not explicitly choose one of these methods in the paper, but
their Algorithm Selection system AQME uses a nearest neighbour classifier by
default. \citeA{roberts_learned_2007} use 32 different Machine Learning
algorithms to predict the runtime of algorithms and probability of success. They
attempt to provide explanations for the performance of the methods they have
chosen in \citeA{roberts_what_2008}. \citeA{silverthorn_latent_2010} compare the
performance of different latent class models. \citeA{gent_machine_2010} evaluate
the performance of 19 different Machine Learning classifiers on an Algorithm
Selection problem in constraint programming. The investigation is extended to
include more Machine Learning algorithms as well as different performance models
and more problem domains in \citeA{kotthoff_evaluation_2012}. They identify
several Machine Learning algorithms that show particularly good performance
across different problem domains, namely linear regression and alternating
decision trees. They do not consider issues such as how easy the models are to
understand or how efficient they are to compute.
Only
\citeA{guo_learning-based_2004,gebruers_using_2005,hough_modern_2006,pulina_multi-engine_2007,silverthorn_latent_2010,gent_machine_2010,kotthoff_evaluation_2012}
quantify the differences in performance of the methods they used. The other
comparisons give only qualitative evidence. Not all comparisons choose one of
the approaches over the other or provide sufficient detail to enable the reader
to do so. In cases where a particular technique is chosen, performance is often
not the only selection criterion. In particular, the ability to understand a
learned model plays a significant role.
\subsection{Types of predictions}
The way of creating the performance model of a portfolio or its algorithms is
not the only choice researchers face. In addition, there are different
predictions the performance model can make to inform the selection of a subset of
the portfolio algorithms. The type of prediction is closely related to the learned
performance model however. The prediction can be
a single categorical value -- the algorithm to choose. This type of prediction
is usually the output of per-portfolio models and used for example in
\citeA{gent_learning_2010,cook_maximizing_1997,pulina_multi-engine_2007,nikoli_instance-based_2009,guerri_learning_2004}.
The advantage of this simple prediction is that it determines the choice of
algorithm without the need to compare different predictions or derive further
quantities. One of its biggest disadvantages however is that there is no
flexibility in the way the system runs or even the ability to monitor the
execution for unexpected behaviour.
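A minimal sketch of this per-portfolio setting (purely illustrative; the feature
and runtime matrices are assumed to be numpy arrays obtained from training runs)
is a single classifier whose label is the index of the best algorithm.
\begin{verbatim}
# Per-portfolio model: one classifier, label = index of the best algorithm.
from sklearn.tree import DecisionTreeClassifier

def train_selector(features, runtimes):   # runtimes: n_instances x n_algorithms
    labels = runtimes.argmin(axis=1)      # fastest algorithm per instance
    return DecisionTreeClassifier().fit(features, labels)

def select_algorithm(selector, instance_features):
    return int(selector.predict([instance_features])[0])
\end{verbatim}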
A different approach is to predict the runtime of the individual algorithms in
the portfolio. This requires per-algorithm models. For example
\citeA{horvitz_bayesian_2001,petrik_statistically_2005,silverthorn_latent_2010}
do this. \citeA{xu_satzilla_2008} do not predict the runtime itself, but the
logarithm of the runtime. They note that,
\begin{quote}
``In our experience, we have found this log transformation of runtime to be very
important due to the large variation in runtimes for hard combinatorial
problems.''
\end{quote}
\citeA{kotthoff_evaluation_2012} also compare predicting the runtime itself and
the log thereof, but find no significant difference between the two.
\citeA{kotthoff_hybrid_2012} however also reports better results with the
logarithm.
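The per-algorithm regression setting with the log transformation can be sketched
as follows; this is illustrative only and not the implementation of any of the
cited systems, and the runtimes are assumed to be strictly positive numpy arrays.
\begin{verbatim}
# One regression model per algorithm, trained on log10 of observed runtime;
# the algorithm with the lowest predicted (log) runtime is selected.
import numpy as np
from sklearn.linear_model import Ridge

def train_per_algorithm(X, runtimes):     # runtimes: n_instances x n_algorithms
    return [Ridge().fit(X, np.log10(runtimes[:, j]))
            for j in range(runtimes.shape[1])]

def select(x, models):
    predicted_log_runtimes = [m.predict([x])[0] for m in models]
    return int(np.argmin(predicted_log_runtimes))
\end{verbatim}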
\citeA{allen_selecting_1996} estimate the runtime by proxy by predicting the
number of constraint checks. \citeA{lobjois_branch_1998} estimate the runtime by
predicting the number of search nodes to explore and the time per node.
\citeA{lagoudakis_algorithm_2000} talk of the \emph{cost} of selecting a
particular algorithm, which is equal to the time it takes to solve the problem.
\citeA{nareyek_choosing_2001} uses the \emph{utility} of a choice to make his
decision. The utility is an abstract measure of the ``goodness'' of an algorithm
that is adapted dynamically. \citeA{tolpin_rational_2011} use the \emph{value of
information} of selecting an algorithm, defined as the amount of time saved by
making this choice. \citeA{xu_satzilla2009_2009} predict the \emph{penalized
average runtime score}, a measure that combines runtime with possible timeouts.
This approach aims to provide more realistic performance predictions when
runtimes are capped.
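This measure can be made concrete as follows; the penalty factor of 10, which
gives the widely used PAR10 score, is a common convention rather than something
prescribed by the cited paper.
\begin{verbatim}
# Penalized average runtime: runs that hit the cutoff are counted as a
# multiple of the cutoff instead of their (unknown) true runtime.
def penalized_average_runtime(runtimes, cutoff, penalty_factor=10):
    scores = [t if t < cutoff else penalty_factor * cutoff for t in runtimes]
    return sum(scores) / len(scores)
\end{verbatim}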
More complex predictions can be made, too. In most cases, these are made by
combining simple predictions such as the runtime performance.
\citeA{brazdil_comparison_2000,soares_meta-learning_2004,leite_using_2010}
produce rankings of the portfolio algorithms. \citeA{kotthoff_evaluation_2012}
use statistical relational learning to directly predict the ranking instead of
deriving it from other predictions.
\citeA{howe_exploiting_1999,gagliolo_adaptive_2004,gagliolo_learning_2006,roberts_directing_2006,omahony_using_2008}
predict resource allocations for the algorithms in the portfolios.
\citeA{gebruers_using_2005,little_capturing_2002,borrett_context_2001} consider
selecting the most appropriate formulation of a constraint problem.
\citeA{smith_knowledge-based_1992,brewer_high-level_1995,wilson_case-based_2000,balasubramaniam_automated_2012}
select algorithms and data structures to be used in a software system.
Some types of predictions require online approaches that make decisions during
search.
\citeA{borrett_adaptive_1996,sakkout_instance_1996,carchrae_low-knowledge_2004,armstrong_dynamic_2006}
predict when to switch the algorithm used to solve a problem.
\citeA{horvitz_bayesian_2001} predict whether to restart an algorithm.
\citeA{lagoudakis_algorithm_2000,lagoudakis_learning_2001} predict the cost to
solve a sub-problem. However, most online approaches make predictions that can
also be used in offline settings, such as the best algorithm to proceed with.
The primary selection criterion and prediction for
\citeA{soares_meta-learning_2004} and \citeA{leite_using_2010} is the quality of
the solution an algorithm produces rather than the time it takes the algorithm
to find that solution. In addition to the primary selection criteria, a number
of approaches predict secondary criteria.
\citeA{howe_exploiting_1999,fink_how_1998,roberts_learned_2007} predict the
probability of success for each algorithm. \citeA{weerawarana_pythia_1996}
predict the quality of a solution.
In Rice's model, the prediction of an Algorithm Selection system is the
performance $p\in\mathcal{R}^n$ of an algorithm. This abstract notion does not
rely on time and is applicable to many approaches. It does not fit techniques
that predict the portfolio algorithm to choose or more complex measures such as
a schedule however. As Rice developed his approach long before the advent of
algorithm portfolios, it should not be surprising that the notion of the
performance of individual algorithms as opposed to sets of algorithms dominates.
The model is sufficiently general to be able to accommodate algorithm portfolios
with only minor modifications to the overall framework however.
\section{Features}\label{sec:features}
The different types of performance models described in the previous sections
usually use features to inform their predictions. Features are an integral part
of systems that do Machine Learning. They characterise the inputs, such as the
problem to be solved or the algorithm employed to solve it, and facilitate
learning the relationship between the inputs and the outputs, such as the time
it will take the algorithm to solve the problem. In Rice's model, features
$f(x)$ for a particular problem $x$ are extracted from the feature space
$\mathcal{F}$.
The selection of the most suitable features is an important part of the design
of Algorithm Selection systems. There are different types of features
researchers can use and different ways of computing these. They can be
categorised according to two main criteria.
First, they can be categorised according to how much background knowledge a
researcher needs to have to be able to use them. Features that require no or
very little knowledge of the application domain are usually very general and can
be applied to new Algorithm Selection problems with little or no modification.
Features that are specific to a domain on the other hand may require the
researcher building the Algorithm Selection system to have a thorough
understanding of the domain. These features usually cannot be applied to other
domains, as they may be non-existent or uninformative in different contexts.
The second way of distinguishing different classes of features is according to
when and how they are computed. Features can be computed \emph{statically},
i.e.\ before the search process starts, or \emph{dynamically}, i.e.\ during
search. These two categories roughly align with the offline and online
approaches to portfolio problem solving described in Section~\ref{sec:solving}.
\citeA{smith-miles_measuring_2012} present a survey that focuses on what
features can be used for Algorithm Selection. This paper categorises the
features used in the literature.
\subsection{Low and high-knowledge features}
In some cases, researchers use a large number of features that are specific to
the particular problem domain they are interested in, but there are also
publications that only use a single, general feature -- the performance of a
particular algorithm on past problems.
\citeA{gagliolo_adaptive_2004,petrik_statistically_2005,cicirello_max_2005,streeter_combining_2007,silverthorn_latent_2010},
to name but a few examples, use this approach to build statistical
performance models of the algorithms in their portfolios. The underlying
assumption is that all problems are similar with respect to the relative
performance of the algorithms in the portfolio -- the algorithm that has done
best in the past has the highest chance of performing best in the future.
Approaches that build runtime distribution models for the portfolio algorithms
usually do not select a single algorithm for solving a problem, but rather use
the distributions to compute resource allocations for the individual portfolio
algorithms. The time allocated to each algorithm is proportional to its past
performance.
Other sources of features that are not specific to a particular problem domain
are more fine-grained measures of past performance or measures that characterise
the behaviour of an algorithm during search. \citeA{langley_learningd_1983} for
example determines whether a search step performed by a particular algorithm is
good, i.e.\ leading towards a solution, or bad, i.e.\ straying from the path to
a solution if the solution is known or revisiting an earlier search state if the
solution is not known. \citeA{gomes_algorithm_1997,gomes_algorithm_2001} use the
runtime distributions of algorithms over the size of a problem, as measured by
the number of backtracks. \citeA{fink_how_1998} uses the past success times of
an algorithm as candidate time bounds on new problems.
\citeA{brazdil_comparison_2000} do not consider the runtime, but the error rate
of algorithms. \citeA{gerevini_automatically_2009} use both computation time and
solution quality.
\citeA{beck_simple_2004,carchrae_low-knowledge_2004,carchrae_applying_2005}
evaluate the performance also during search. They explicitly focus on features
that do not require a lot of domain knowledge. \citeA{beck_simple_2004} note
that,
\begin{quote}
``While existing algorithm selection techniques have shown impressive results,
their knowl\-edge-intensive nature means that domain and algorithm expertise is
necessary to develop the models. The overall requirement for expertise has not
been reduced: it has been shifted from algorithm selection to predictive model
building.''
\end{quote}
They do, like several other approaches, assume \emph{anytime} algorithms --
after search has started, the algorithm is able to return the best solution
found so far at any time. The features are based on how search progresses and
how the quality of solutions is improved by algorithms. While this does not
require any knowledge about the application domain, it is not applicable in
cases when only a single solution is sought.
Most approaches learn models for the performance on particular problems and do
not use past performance as a feature, but to inform the prediction to be made.
Considering problem features facilitates a much more nuanced approach than a
broad-brush general performance model. This is the classic supervised Machine
Learning approach -- given the correct prediction derived from the behaviour on
a set of training problems, learn a model that enables this prediction to be made.
The features that are considered to learn the model are specific to the problem
domain or even a subset of the problem domain to varying extents. For
combinatorial search problems, the most commonly used basic features include,
\begin{itemize}
\item the number of variables,
\item properties of the variable domains, i.e.\ the list of possible
assignments,
\item the number of clauses in SAT, the number of constraints in constraint
problems, the number of goals in planning,
\item the number of clauses/constraints/goals of a particular type (for example
the number of \texttt{alldifferent} constraints, \citeR{gent_machine_2010}),
\item ratios of several of the above features and summary statistics.
\end{itemize}
Such features are used for example in
\citeA{omahony_using_2008,pulina_multi-engine_2007,weerawarana_pythia_1996,howe_exploiting_1999,xu_satzilla_2008}.
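For concreteness, the sketch below computes a few such basic syntactic features
for a hypothetical SAT instance given as a list of clauses over integer literals;
the input representation and the particular feature set are illustrative only.
\begin{verbatim}
# A few basic syntactic features of a SAT instance in clause-list form,
# e.g. [[1, -2, 3], [-1, 2]].
def basic_features(clauses):
    variables = {abs(literal) for clause in clauses for literal in clause}
    clause_lengths = [len(clause) for clause in clauses]
    return {
        "num_variables": len(variables),
        "num_clauses": len(clauses),
        "clause_variable_ratio": len(clauses) / max(len(variables), 1),
        "mean_clause_length": sum(clause_lengths) / max(len(clauses), 1),
        "max_clause_length": max(clause_lengths) if clauses else 0,
    }
\end{verbatim}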
Other sources of features include
the generator that produced the problem to be
solved \cite{horvitz_bayesian_2001}, the runtime environment
\cite{armstrong_dynamic_2006}, structures derived from the problem such as the
primal graph of a constraint problem
\cite{gebruers_making_2004,guerri_learning_2004,gent_learning_2010}, specific
parts of the problem model such as variables \cite{epstein_collaborative_2001},
the algorithms in the portfolio themselves \cite{hough_modern_2006} or
the domain of the problem to be solved \cite{carbonell_prodigy_1991}.
\citeA{gerevini_automatically_2009} rely on the problem domain as the only
problem-specific feature and select based on past performance data for the
particular domain. \citeA{beck_dynamic_2000} consider not only the values of
properties of a problem, but also the changes of those values while the problem is
being solved. \citeA{smith_knowledge-based_1992} consider features of abstract
representations of the algorithms. \citeA{yu_adaptive_2004,yu_adaptive_2006} use
features that represent technical details of the behaviour of an algorithm on a
problem, such as the type of computations done in a loop.
Most approaches use features that are applicable to all problems of the
application domain they are considering. However, \citeA{horvitz_bayesian_2001}
use features that are not only specific to their application domain, but also to
the specific family of problems they are tackling, such as the variance of
properties of variables in different columns of Latin squares. They note that,
\begin{quote}
``\ldots{}the inclusion of such domain-specific features was important in
learning strongly predictive models.''
\end{quote}
\subsection{Static and dynamic features}
In most cases, the approaches that use a large number of domain-specific
features compute them \emph{offline}, i.e.\ before the solution process starts
(cf.\ Section~\ref{sec:offon}). Examples of publications that only use such
static features are
\citeA{leyton-brown_learning_2002,pulina_multi-engine_2007,guerri_learning_2004}.
An implication of using static features is that the decisions of the Algorithm
Selection system are only informed by the performance of the algorithms on past
problems. Only dynamic features make it possible to take the performance on the current
problem into account. This has the advantage that remedial actions can be taken
if the problem is unlike anything seen previously or the predictions are wildly
inaccurate for another reason.
A more flexible approach than to rely purely on static features is to
incorporate features that can be determined statically, but try to estimate the
performance on the current problem. Such features are computed by probing the
search space. This approach relies on the performance probes being sufficiently
representative of the entire problem and sufficiently comparable across the different
evaluated algorithms. If an algorithm is evaluated on a part of the search space
that is much easier or harder than the rest, a misleading impression of its true
performance may result.
Examples of systems that combine static features of the problem to be solved
with features derived from probing the search space are
\citeA{xu_satzilla_2008,gent_learning_2010,omahony_using_2008}. There are also
approaches that use only probing features. We term this \emph{semi-static}
feature computation because it happens before the actual solving of the problem
starts, but parts of the search space are explored during feature extraction.
Examples include
\citeA{allen_selecting_1996,beck_simple_2004,lobjois_branch_1998}.
The idea of probing the search space is related to \emph{landmarking}
\cite{pfahringer_meta-learning_2000}, where the performance of a set of initial
algorithms (the \emph{landmarkers}) is linked to the performance of the set of
algorithms to select from. The main consideration when using this technique is
to select landmarkers that are computationally cheap. Therefore, they are
usually versions of the portfolio algorithms that have either been simplified or
are run only on a subset of the data the selected algorithm will run on.
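A minimal sketch of the landmarking idea is given below; the landmarker
functions, the time limit and the way their results are turned into features are
assumptions made purely for illustration.
\begin{verbatim}
# Landmarking / probing features: run cheap, time-limited versions of
# solvers and use the outcomes of those probes as additional features.
import time

def probe_features(instance, landmarkers, time_limit=1.0):
    features = {}
    for name, run_landmarker in landmarkers.items():
        start = time.time()
        # each landmarker is assumed to return e.g. the best objective value
        # found or the number of search nodes explored within the time limit
        result = run_landmarker(instance, time_limit)
        features[name + "_result"] = result
        features[name + "_time"] = time.time() - start
    return features
\end{verbatim}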
While the work done during probing explores part of the search space and could
be used to speed up subsequent search by avoiding revisiting known areas,
almost no research has been done into this. \citeA{beck_simple_2004} run all
algorithms in their (small) portfolio on a problem for a fixed time and select
the one that has made the best progress. The chosen algorithm resumes its
earlier work, but no attempt is made to avoid duplicating work done by the
other algorithms. To the best of our knowledge, there exist no systems that
attempt to avoid redoing work performed by a different algorithm during the
probing stage.
For successful systems, the main source of performance improvements is the
selection of the right algorithm using the features computed through probing. As
the time to compute the features is usually small compared to the runtime
improvements achieved by Algorithm Selection, using the results of probing
during search to avoid duplicating work does not have the potential to achieve
large additional performance improvements.
The third way of computing features is to do so \emph{online}, i.e.\ while
search is taking place. These dynamic features are computed by an execution
monitor that adapts or changes the algorithm during search based on its
performance. Approaches that rely purely on dynamic features are for example
\citeA{borrett_adaptive_1996,nareyek_choosing_2001,stergiou_heuristics_2009}.
There are many different features that can be computed during search.
\citeA{minton_automatically_1996} determines how closely a generated heuristic
approximates a generic target heuristic by checking the heuristic choices at
random points during search. He selects the one with the closest match.
Similarly, \citeA{nareyek_choosing_2001} learn how to select heuristics during
the search process based on their performance. \citeA{armstrong_dynamic_2006} use
an agent-based model that rewards good actions and punishes bad actions based on
computation time. \citeA{kuefler_using_2008} follow a very similar
approach that also takes success or failure into account.
\citeA{carchrae_low-knowledge_2004,carchrae_applying_2005} monitor the solution
quality during search. They decide whether to switch the current algorithm based
on this by changing the allocation of resources. \citeA{wei_switching_2008}
monitor a feature that is specific to their application domain, the distribution
of clause weights in SAT, during search and use it to decide whether to switch a
heuristic. \citeA{stergiou_heuristics_2009} monitors propagation events in a
constraint solver to a similar aim. \citeA{caseau_meta-heuristic_1999} evaluate
the performance of candidate algorithms in terms of number of calls to a
specific high-level procedure. They note that in contrast to using the runtime,
their approach is machine-independent.
\subsection{Feature selection}
The features used for learning the Algorithm Selection model are crucial to its
success. Uninformative features might prevent the model learner from recognising
the real relation between problem and performance or the most important
feature might be missing. Many researchers have recognised this problem.
\citeA{howe_exploiting_1999} manually select the most important features. They
furthermore take the unique approach of learning one model per feature for
predicting the probability of success and combine the predictions of the models.
\citeA{leyton-brown_learning_2002,xu_satzilla_2008} perform automatic feature
selection by greedily adding features to an initially empty set. In addition to
the basic features, they also use the pairwise products of the features.
\citeA{pulina_multi-engine_2007} also perform automatic greedy feature
selection, but do not add the pairwise products.
\citeA{kotthoff_evaluation_2012} automatically select the most important subset
of the original set of features, but conclude that in practice the performance
improvement compared to using all features is not significant.
\citeA{wilson_case-based_2000} use genetic algorithms to determine the
importance of the individual features. \citeA{petrovic_case-based_2002} evaluate
subsets of the features they use and learn weights for each of them.
\citeA{roberts_what_2008} consider using a single feature and automatic
selection of a subset of all features. \citeA{guo_learning-based_2004} and
\citeA{kroer_feature_2011} also use techniques for automatically determining the
most predictive subset of features. \citeA{kotthoff_hybrid_2012} compares the
performance of ten different sets of features.
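The greedy forward selection used in several of these approaches can be sketched
as follows; the cross-validated error function is a placeholder, and
\citeA{xu_satzilla_2008} additionally add pairwise products of features to the
candidate set before selecting.
\begin{verbatim}
# Greedy forward feature selection: repeatedly add the candidate feature
# that most improves cross-validated error, stop when no feature helps.
def greedy_forward_selection(candidate_features, cv_error):
    selected = []
    best_error = float("inf")
    improved = True
    while improved:
        improved = False
        for feature in candidate_features:
            if feature in selected:
                continue
            error = cv_error(selected + [feature])   # placeholder evaluation
            if error < best_error:
                best_error, best_feature = error, feature
                improved = True
        if improved:
            selected.append(best_feature)
    return selected
\end{verbatim}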
It is not only important to use informative features, but also features that are
cheap to compute. If the cost of computing the features and making the decision
is too high, the performance improvement from selecting the best algorithm might
be eroded. \citeA{xu_satzilla2009_2009} predict the feature computation time for
a given problem and fall back to a default selection if it is too high to avoid
this problem. They also limit the computation time for the most expensive
features as well as the total time allowed to compute features.
\citeA{bhowmick_towards_2009} consider the computational complexity of
calculating problem features when selecting the features to use. They show that
while achieving comparable accuracy to the full set of features, the subset of
features selected by their method is significantly cheaper to compute.
\citeA{gent_learning_2010} explicitly exclude features that are expensive to
compute.
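The fallback mechanism of \citeA{xu_satzilla2009_2009} described above can be
sketched as follows; all names are placeholders introduced for illustration.
\begin{verbatim}
# Cost-aware selection: skip feature computation (and use a default solver)
# if the predicted feature-computation time exceeds the budget.
def cost_aware_select(instance, predict_feature_time, compute_features,
                      selector, default_algorithm, budget):
    if predict_feature_time(instance) > budget:
        return default_algorithm
    features = compute_features(instance)
    return selector(features)
\end{verbatim}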
\section{Application domains}\label{sec:domains}
The approaches for solving the Algorithm Selection Problem that have been
surveyed here are usually not specific to a particular application domain,
within combinatorial search problems or otherwise. Nevertheless this survey
would not be complete without a brief exposition of the various contexts in
which Algorithm Selection techniques have been applied.
Over the years, Algorithm Selection systems have been used in many different
application domains. These range from Mathematics, e.g.\ differential equations
\cite{kamel_odexpert_1993,weerawarana_pythia_1996}, linear algebra
\cite{demmel_self-adapting_2005} and linear systems
\cite{bhowmick_application_2006,kuefler_using_2008}, to the selection of
algorithms and data structures in software design
\cite{smith_knowledge-based_1992,cahill_knowledge-based_1994,brewer_high-level_1995,wilson_case-based_2000}.
A very common application domain are combinatorial search problems such as SAT
\cite{xu_satzilla_2008,lagoudakis_learning_2001,silverthorn_latent_2010},
constraints
\cite{minton_automatically_1996,epstein_adaptive_2002,omahony_using_2008},
Mixed Integer Programming \cite{xu_hydra-mip_2011},
Quantified Boolean Formulae
\cite{pulina_self-adaptive_2009,stern_collaborative_2010}, planning
\cite{carbonell_prodigy_1991,howe_exploiting_1999,vrakas_learning_2003},
scheduling \cite{beck_dynamic_2000,beck_simple_2004,cicirello_max_2005},
combinatorial auctions
\cite{leyton-brown_learning_2002,gebruers_making_2004,gagliolo_learning_2006},
Answer Set Programming \cite{gebser_portfolio_2011},
the Travelling Salesperson Problem \cite{fukunaga_genetic_2000}
and general search algorithms
\cite{langley_learningd_1983,cook_maximizing_1997,lobjois_branch_1998}.
Other domains include Machine Learning
\cite{soares_meta-learning_2004,leite_using_2010}, the most probable
explanation problem \cite{guo_learning-based_2004}, parallel reduction
algorithms \cite{yu_adaptive_2004,yu_adaptive_2006} and simulation
\cite{wang_optimizing_2007,ewald_selecting_2010}. It should be noted that a
significant part of Machine Learning research is concerned with developing
Algorithm Selection techniques; the publications listed in this paragraph are
the most relevant that use the specific techniques and framework surveyed here.
Some publications consider more than one application domain.
\citeA{stern_collaborative_2010} choose the best algorithm for Quantified
Boolean Formulae and combinatorial auctions.
\citeA{allen_selecting_1996,kroer_feature_2011} look at SAT and constraints.
\citeA{gomes_algorithm_2001} consider SAT and Mixed Integer Programming. In
addition to these two domains, \citeA{kadioglu_isac_2010} also investigate set
covering problems. \citeA{streeter_new_2008} apply their approach to SAT,
Integer Programming and planning.
\citeA{gagliolo_algorithm_2011,kotthoff_evaluation_2012,kotthoff_hybrid_2012}
compare the performance across Algorithm Selection problems from constraints,
Quantified Boolean Formulae and SAT.
In most cases, researchers take some steps to adapt their approaches to the
application domain. This is usually done by using domain-specific features, such
as the number of constraints and variables in constraint programming. In
principle, this is not a limitation of the proposed techniques as those features
can be exchanged for ones that are applicable in other application domains.
While the overall approach remains valid, the question of whether the
performance would be acceptable arises. \citeA{kotthoff_evaluation_2012}
investigate how specific techniques perform across several domains with the aim
of selecting the one with the best overall performance. There are approaches
that have been tailored to a specific application domain to such an extent that
the technique cannot be used for other applications. This is the case, for
example, for the hierarchical models for SAT
\cite{xu_hierarchical_2007,haim_restart_2009}.
\section{Current and future directions}\label{sec:directions}
Research into the Algorithm Selection Problem is ongoing. Many aspects of
Algorithm Selection in various contexts have been explored already. Current
research is extending and refining existing approaches, as well as exploring new
directions. Some of them are listed below, in no particular order.
\subsection{Use of more sophisticated Machine Learning techniques}
Most of the research to date has focused on predicting either the best algorithm
in a portfolio or the performance of an algorithm on a particular problem. In
some cases, these simple predictions are used to generate more complex outputs,
such as a schedule according to which to run the algorithms.
\citeA{kotthoff_evaluation_2012} have started exploring Machine Learning
techniques to predict such complex outputs more directly, but their results are
not competitive with other approaches.
A related direction is to explore the use of generic Machine Learning techniques
that can be applied to many approaches to improve performance.
\citeA{kotthoff_hybrid_2012} for example explores this.
\citeA{xu_evaluating_2012} analyse the performance of a portfolio and the
contributions of its constituent algorithms. The results of such an analysis
could be used to inform the choice of suitable Machine Learning techniques.
\citeA{smith-miles_measuring_2012} focus on identifying features that are
suitable for Machine Learning in Algorithm Selection.
This raises the question of what type of Machine Learning to use in general.
While this has long been a research topic in Machine Learning, there is
almost no research that applies such knowledge to Algorithm Selection. This
problem is particularly interesting as the authors of the SATzilla system
decided to fundamentally change the type of Machine Learning they use in a
recent publication \cite{xu_hydra-mip_2011}.
\subsection{Exploitation of parallelism}
Many researchers acknowledge at least implicitly that their approaches can be
parallelised across the many cores that modern computers provide. Current
research has started to focus on explicitly exploiting parallelism
\cite<e.g.>{gagliolo_towards_2008,yun_learning_2012,hutter_parallel_2012}. Apart
from technical considerations, one of the main issues is that the composition of
a good algorithm portfolio changes with the number of processors available to
run those algorithms.
There remain challenges that have been largely ignored so far however. As an
example, some portfolio algorithms may be able to take advantage of specialised
processing units such as GPUs while others cannot. This would place
restrictions on how the algorithms can be run in parallel. Given the current
trend to have more powerful GPUs with increasing numbers of processing elements
in off-the-shelf computers, we expect this direction of research to become more
prominent.
\subsection{Application to new domains}
Even though Algorithm Selection techniques have been applied to many domains,
especially in Artificial Intelligence, there remain many more that might benefit
from them. Recently, Algorithm Selection techniques have been applied to
Answer Set Programming for example \cite{gebser_portfolio_2011}. An increasing
number of research communities are becoming aware of Algorithm Selection
techniques and the potential benefits for their domain.
Related research explores how Algorithm Selection techniques can be used in the
construction of software
\cite{balasubramaniam_automated_2012,hoos_programming_2012}. This is not just
the application in a new problem domain, but the deployment of techniques in a
new context that has the potential for much higher performance improvements.
While at the moment Algorithm Selection is somewhat of a specialised subject,
the integration of relevant techniques into mainstream programming languages and
software development systems will stimulate further research in this direction.
\section{Summary}\label{sec:conclusion}
Over the years, there have been many approaches to solving the Algorithm
Selection Problem. Especially in Artificial Intelligence and for combinatorial
search problems, researchers have recognised that using Algorithm Selection
techniques can provide significant performance improvements with relatively
little effort. Most of the time, the approaches involve some kind of Machine
Learning that attempts to learn the relation between problems and the
performance of algorithms automatically. This is not a surprise, as the
relationship between an algorithm and its performance is often complex and hard
to describe formally. In many cases, even the designer of an algorithm does not
have a general model of its performance.
Despite the theoretical difficulty of Algorithm Selection, dozens of systems
have demonstrated that it can be done in practice with great success. In some
sense, this mirrors achievements in other areas of Artificial Intelligence.
Satisfiability is NP-complete and believed to be computationally intractable in the worst case, yet
researchers have come up with ways of solving very large instances of
satisfiability problems with very few resources. Similarly, some Algorithm
Selection systems have come very close to always choosing the best algorithm.
This survey presented an overview of the Algorithm Selection research that has
been done to date with a focus on combinatorial search problems. A
categorisation of the different approaches with respect to fundamental criteria
that determine Algorithm Selection systems in practice was introduced. This
categorisation abstracts from many of the low level details and additional
considerations that are presented in most publications to give a clear view of
the underlying principles. We furthermore gave details of the many different
ways that can be used to tackle Algorithm Selection and the many techniques that
have been used to solve it in practice.
On a high level, the approaches surveyed here can be summarised as follows.
\begin{itemize}
\item Algorithms are chosen from portfolios, which can be statically
constructed or dynamically augmented with newly constructed algorithms as
problems are being solved. Portfolios can be engineered such that the
algorithms in them complement each other (i.e.\ are as diverse as possible),
by automatically tuning algorithms on a set of training problems or by using
a set of algorithms from the literature or competitions. Dynamic portfolios
can be composed of algorithmic building blocks that are combined into
complete algorithms by the selection system. Compared to tuning the
parameters of algorithms, the added difficulty is that not all combinations
of building blocks may be valid.
\item A single algorithm can be selected from a portfolio to solve a problem to
	completion, or a larger subset can be selected and run in parallel
	or according to a schedule. Another approach is to select a single algorithm
to start with and then decide if and when to switch to another algorithm.
Some approaches always select the entire portfolio and vary the resource
allocation to the algorithms.
\item Algorithm Selection can happen offline, without any interaction with the
Algorithm Selection system after solving starts, or online. Some approaches
monitor the performance of the selected algorithm and take action if it does
not conform to the expectations or some other criteria. Others repeat the
selection process at specific points during the search (e.g.\ every node in
the search tree), skew a computed schedule towards the best performers or
decide whether to restart stochastic algorithms.
\item Performance can be modelled and predicted either for a portfolio as a
whole (i.e.\ the prediction is the best algorithm) or for each algorithm
independently (i.e.\ the prediction is the performance). A few approaches
use hierarchical models that make a series of predictions to facilitate
selection. Some publications make secondary predictions (e.g.\ the quality
of a solution) that are taken into account when selecting the most suitable
algorithm, while others make predictions that the desired output is derived
from instead of predicting it directly. The performance models are usually
learned automatically using Machine Learning, but a few approaches use
hand-crafted models and rules. Models can be learned from separate training
data or incrementally while a problem is being solved.
\item Learning and using performance models is facilitated by features of the
algorithms, problems or runtime environment. Features can be
domain-independent or specific to a particular set of problems. Similarly,
features can be computed by inspecting the problem before solving or while
it is being solved. The use of feature selection techniques that
automatically determine the most important and relevant features is quite
common.
\end{itemize}
Given the amount of relevant literature, it is infeasible to discuss every
approach in detail. The scope of this survey is necessarily limited to
a detailed description of high-level characteristics and a summary overview of
low-level traits. Work in related areas that is not immediately relevant to
Algorithm Selection for combinatorial search problems has been pointed to, but
cannot be explored in more detail.
A tabular summary of the literature organised according to the criteria
introduced here can be found at
\url{http://4c.ucc.ie/~larsko/assurvey/}.
\acks
Ian Miguel and Ian Gent provided valuable feedback that helped shape this paper.
We also thank the anonymous reviewers of a previous version of this paper whose
detailed comments helped to greatly improve it. This work was supported by an
EPSRC doctoral prize.
\bibliographystyle{theapa}
\medskip
\REF\topquark{F. Abe \etal\ [CDF Collaboration], {\sl Phys. Rev. Lett.}
{\bf 74} (1995) 2626;
S. Abachi \etal\ [D0 Collaboration], {\sl Phys. Rev. Lett.}
{\bf 74} (1995) 2632.}
\REF\lepglobal{M.G. Alviggi \etal\ [LEP Electroweak Working Group],
LEPEWWG/95-01 (1995); M. Calvi, invited talk at this meeting.}
\REF\rhoparm{M. Veltman, {\sl Nucl. Phys.} {\bf B123} (1977) 89.}
\REF\chw{M. Carena, H.E. Haber and C.E.M. Wagner, CERN preprint in
preparation.}
Recently, the CDF and D0 Collaborations have announced the discovery
of the top quark at the Tevatron,\refmark\topquark\ with a measured
mass of $m_t=176\pm 8\pm 10$~GeV and
$m_t=199^{+19}_{-21}\pm 22$~GeV, respectively. Both measurements
are in excellent agreement with the top quark mass deduced by the LEP
global analysis of precision electroweak
measurements.\refmark\lepglobal\ The LEP determination of $m_t$ is based
on the sensitivity of electroweak observables in $Z$ decay
to virtual top quark exchange, which
enters in two distinct ways. First, top quark loops
in gauge boson self-energies (the so-called oblique corrections)
can directly effect the properties of the $Z$. The most
famous of the oblique corrections is the top-quark contribution to
the electroweak $\rho$ parameter,\refmark\rhoparm\ which is given by
$\rho=1+\delta\rho$, where
$\delta\rho\simeq 3G_F m_t^2/8\pi^2\sqrt{2}$.
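For a rough numerical orientation, taking $G_F\simeq 1.17\times 10^{-5}\,{\rm GeV}^{-2}$ gives
$$\delta\rho\simeq\cases{0.010,&for $m_t\simeq 176$~GeV;\cr
0.002,&for $m_t\simeq m_W\simeq 80$~GeV,\cr}$$
which illustrates the strong quadratic sensitivity to the top quark mass.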
Second, virtual top quark exchange can
contribute to certain vertex radiative corrections.
For example, the one-loop correction to $Z\rightarrow b\bar b$
is also quadratically sensitive to the top quark mass.
The LEP global fit yields $m_t=176\pm 10^{+17}_{-19}$~GeV,
where the second set of
errors corresponds to varying the Higgs mass between 60~GeV and 1~TeV
(with a central value of 300~GeV).
Clearly a heavy top quark mass has been confirmed.
But is there an alternative interpretation?
In this paper, I present a model constructed in
collaboration with Marcela Carena and Carlos Wagner,\refmark\chw\
in which we
explore the possibility of circumventing the apparent
ironclad conclusion that $m_t\gg m_W$.
\chapter{A Four Generation Supersymmetric Model with a Light Top Quark}
\medskip
Consider that the LEP measured rate for
$Z\rightarrow b\bar b$ differs from the Standard Model prediction by
2.4$\sigma$. Defining $R_b\equiv\Gamma(Z\rightarrow
b\bar b)/\Gamma(Z\rightarrow{\rm hadrons})$,\refmark\lepglobal\
$$R_b=\cases{0.2204\pm 0.0020,&LEP global fit;\cr
0.2157,&Standard Model prediction.\cr}
\eqn\zbbnumbers$$
Clearly, one does not give up on the Standard Model because of a
2.4$\sigma$ discrepancy. Nevertheless, it is amusing to note that if
one extracts the top quark mass from this
measurement alone, one would conclude that $m_t<m_W$! We proceed by
fixing $m_t\simeq m_W$ in what follows. Of course, with such a light
top quark mass, we must address three obvious questions:
\pointbegin
Would not a top quark with $m_t\simeq m_W$ have already been
discovered at hadron colliders?
\point
What is the particle recently announced by CDF and D0 which is
observed to decay into $bW$?
\point
What is the nature of the new physics that contributes to the oblique
corrections and simulates the heavy top quark inferred by the LEP
experiments?
\REF\wwidth{B. Klima, invited talk at this meeting.}
\REF\dzerotop{S. Abachi \etal\ [D0 Collaboration], {\sl Phys. Rev. Lett.}
{\bf 72} (1994) 2138.}
\noindent
If the top quark were
sufficiently light, then $W^+\rightarrow t\bar b$ would be kinematically
allowed; this would modify the total width of the $W$. But
$\Gamma_W$ can be measured at hadron colliders indirectly by
studying the ratio of production cross section times
leptonic branching ratio of the
$W$ and $Z$. The most recent analysis of this kind, reported at this
meeting by the D0 collaboration,\refmark\wwidth\ finds $m_t>62$~GeV.
Direct searches for the top quark at hadron colliders assume that an
observable fraction of top quark decays results in a final state
lepton. For example, in ref.~\dzerotop,
the D0 collaboration ruled out the mass range
$m_W+m_b\lsim m_t<131$~GeV, assuming that the decay $t\rightarrow bW$ is
not unexpectedly suppressed.\refmark\dzerotop\
Previous top quark searches at hadron
colliders are able to close the window between 62 and 85~GeV,
assuming that $t\rightarrow bW^\star$ is the dominant top-quark decay mode.
However, in this case the final state is three-body since $W^\star$
is virtual. If the top quark were to possess any two-body decay modes (due
to new physics processes), and if these modes rarely produced
leptons, then a top quark in this mass region would not have been
detected in any experiment.
\REF\pdg{Limits on supersymmetric particle masses
are summarized in L. Montanet \etal\ [Particle Data Group],
{\sl Phys. Rev.} {\bf D50} (1994) 1173.}
\REF\stopsearch{A. White, invited talk given at the SUSY-95 Conference,
15--19 May 1995, Palaiseau, France.}
An example of such a scenario occurs in supersymmetric models
in which the decay
$t\rightarrow\widetilde t\widetilde\chi^0_1$ is kinematically allowed
(where $\widetilde t$ is the top squark and
$\widetilde\chi^0_1$ is the lightest neutralino). Experimental searches for
both $\widetilde t$ and $\widetilde\chi^0_1$ place constraints on their
masses, but do not rule out the possibility of $M_{\tilde t}+
M_{\tilde\chi_1^0}<m_W$.
In particular, the LEP neutralino and chargino searches\refmark\pdg\
obtain a limit on the lightest neutralino mass which typically lies
between 20 and 25 GeV. Using this result and the limits on the top squark
mass from searches at LEP and at the Tevatron,\refmark\stopsearch\
one finds that the mass region
$42\lsim M_{\tilde t}\lsim 60$~GeV cannot be excluded.
\REF\rudaz{I.I. Bigi and S. Rudaz, {\sl Phys. Lett.} {\bf 153B} (1985)
335.}
To be definite, we choose $m_t\simeq m_W$, $M_{\tilde t}\simeq 50$~GeV and
$M_{\tilde\chi^0_1}\simeq 25$~GeV. Then, the dominant decay chain is
$t\rightarrow\widetilde t\widetilde\chi^0_1$ followed by
$\widetilde t\rightarrow c\widetilde\chi^0_1$ through a one-loop
process,\refmark\rudaz\ which rarely produces a hard isolated lepton.
Hence, these events would not have been detected at hadron colliders.
But now we must
reconsider the recent CDF and D0 discoveries and the LEP
``measurement'' of $m_t$. We propose to account for these results
by introducing a fourth generation of quarks (and leptons) plus
their supersymmetric partners. Then, $t^\prime\rightarrow bW^+$ can be the
source of the CDF and D0 events, while the effects of
the third and fourth generation quarks and squarks contributing to
the oblique corrections are large enough
to be consistent with LEP precision electroweak
data.
\chapter{Phenomenological and Theoretical Constraints}
\medskip
The model parameters are determined by imposing the phenomenological
and theoretical constraints listed below.
1. In order that the $t^\prime$ be consistent with the CDF and
D0 ``top-quark'' events, its dominant decay must be $t^\prime\rightarrow
bW^+$. This means that $t^\prime\rightarrow b^\prime W^\star$ must be a
three-body decay. Furthermore, the $t^\prime$--$b$ mixing angle
($V_{t^\prime b}$) must not be too small; otherwise, the latter decay
will dominate. We find:
$${\Gamma(t^\prime\rightarrow b^\prime W^\star)\over\Gamma(t^\prime\rightarrow b W)}
={9G_Fm_{t^\prime}^2\over \pi^2\sqrt{2}|V_{t^\prime b}|^2}
\int_0^{1-2\sqrt{x}+x}\,{z(1-z+x)\sqrt{(1-z+x)^2-4x}\over
\left(1-{zm_{t^\prime}^2/m_W^2}\right)^2}\,dz\,,
\eqn\gammarats$$
where $x\equiv m_{b^\prime}^2/m_{t^\prime}^2$.
Since the rate of the CDF and D0 ``top-quark'' events is consistent
with the QCD prediction for $t\bar t$ production under the assumption
that $BR(t\rightarrow bW^+)=$ 100\%, a reinterpretation of these events as
$t^\prime \bar {t^\prime}$ production (followed by $t^\prime\rightarrow bW^+$)
requires $BR(t^\prime\rightarrow bW^+)$ to be near 1. We assume that $V_{t^\prime b}$
lies between $V_{cb}=0.04$ and $V_{ud}=0.2$; for definiteness, we
choose $V_{t^\prime b}=0.1$. Then, if we require $BR(t^\prime\rightarrow bW^+)
\gsim 0.75$, it follows that we must take $m_{b^\prime}\geq 105$~GeV.
\REF\irfps{C.T. Hill, {\sl Phys. Rev.} {\bf D24} (1981) 691;
C.T. Hill, C.N. Leung and S. Rao, {\sl Nucl. Phys.} {\bf B262} (1985)
517.}
2. In low-energy
supersymmetric model building, it is common practice to require
that all couplings of the model stay perturbative up to very high energies.
Here, we shall insist that the Higgs-quark Yukawa couplings
do not blow up below the grand unification (GUT) scale. Then, if we wish to
have the $t^\prime$ and $b^\prime$ masses as large as possible, it
follows that the corresponding Yukawa couplings will be forced to lie close to
their quasi-infrared fixed points.\refmark\irfps\
For example, if we take $m_{t^\prime}
\geq 170$~GeV, then we find that $m_{b^\prime}\leq 110$~GeV.
Combined with point 1, we see that the mass of the $b^\prime$ is
essentially fixed. Moreover, since we are at the infrared fixed point
values of the Yukawa couplings, which depend on the corresponding
masses and the ratio of Higgs vacuum expectation values, $\tan\beta$,
it follows that $\tan\beta$ is also fixed. In this work, we choose
$m_{t^\prime}=170$~GeV and $m_{b^\prime}=110$~GeV; for these values
$\tan\beta\simeq 1.6$. One can also add in the requirement that the
fourth generation leptons lie at their quasi-infrared fixed points
(in order to maximize their masses). We assume that the fourth
generation neutrino ($N$) is a Dirac fermion. Then, the resulting lepton
masses are: $m_{\tau^\prime}\simeq 50$~GeV and $m_N
\simeq 80$~GeV. Remarkably, these masses lie above the corresponding
bounds from LEP. In addition, it is amusing to note that the above
masses are consistent with the unification of
all {\it four} fermion-Higgs Yukawa couplings at the GUT scale!
\endpage
\REF\poketal{M. Carena, M. Olechowski, S. Pokorski, and C.E.M. Wagner,
{\sl Nucl. Phys.} {\bf B419} (1994) 213; {\bf B426} (1994) 269.}
3. In order that $M_{\tilde t}<m_t$, there must be substantial
$\widetilde t_L$--$\widetilde t_R$ mixing. The squared
mass of $\widetilde t$ is given by the smallest eigenvalue of the
matrix
$$\pmatrix{M_{\widetilde Q}^2+m_t^2+c_L m_Z^2 & m_t(A_t-\mu\cot\beta)\cr
m_t(A_t-\mu\cot\beta)& M_{\widetilde U}^2+m_t^2+c_R m_Z^2\cr}\,,
\eqn\stopmatrix$$
where $c_L\equiv ({1\over 2}-{2\over 3}\sin^2\theta_W)\cos2\beta$,
$c_R\equiv {2\over 3}\sin^2\theta_W\cos2\beta$, $M_{\widetilde Q}$,
$M_{\widetilde U}$, and $A_t$ are soft-supersymmetry-breaking parameters,
and $\mu$ is the supersymmetric Higgs mass parameter. Large mixing
requires that the off-diagonal terms above are of the same order as
the diagonal terms. If there is large mixing in the third generation
squark sector, why not in the fourth generation squark sector as well?
In fact, if $A_{t^\prime}\simeq A_t$, the mixing in the fourth
generation squark sector would be too large, driving the smallest
eigenvalue of the ${\widetilde t}^\prime_L$--${\widetilde t}^\prime_R$
squared-mass matrix negative. Remarkably, this does not occur
due to the infrared fixed-point behavior of the fourth generation.
$A_{t^\prime}$ is driven to a fixed point that is independent of
its high energy value.\refmark\poketal\
Roughly, $A_{t^\prime}\simeq -2m_{1/2}$
where $m_{1/2}$ is the high-energy (GUT-scale) value of the gaugino Majorana
mass. In contrast, the top quark is not controlled by the infrared fixed point
(since in our model $m_t$ is not large enough); hence, $A_t$ can be chosen
large.
Moreover, choosing $\mu$ negative enhances the third generation
squark mixing while it somewhat suppresses the fourth generation
squark mixing.
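Note that, for the symmetric squared-mass matrix [eq.~\stopmatrix] with positive
diagonal entries, the smaller eigenvalue is negative precisely when the
determinant is negative, that is, when
$$m_t^2\left(A_t-\mu\cot\beta\right)^2>
\left(M_{\widetilde Q}^2+m_t^2+c_L m_Z^2\right)
\left(M_{\widetilde U}^2+m_t^2+c_R m_Z^2\right)\,,$$
and the analogous condition with the corresponding fourth generation quantities
is what must be avoided in the fourth generation squark sector.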
4. If gaugino Majorana mass parameters are unified with a common GUT-scale
mass given by $m_{1/2}$,
then the gluino, chargino and neutralino masses are
determined by $m_{1/2}$, $\mu$, and $\tan\beta$.
Our model prefers the region of parameter space where
$m_{1/2}\ll|\mu|$ (with $\mu$ negative).
Then, our choice of $M_{\tilde\chi_1^0}\simeq 25$~GeV
fixes $m_{1/2}\simeq 55$~GeV.
Typical values for the masses of the other light chargino and
neutralino states are
$M_{\tilde\chi_1^\pm}\simeq M_{\tilde\chi_2^0}\simeq 60$~GeV.
The choice of $m_{1/2}$ also fixes the gluino mass;
we find $M_{\tilde g}\simeq 3m_{1/2}
\simeq 165$~GeV. The dominant decay of this gluino would be
$\widetilde g\rightarrow \widetilde t\bar t$ (or its charge-conjugated state).
Such a gluino cannot be ruled out by present Tevatron limits.
\REF\cleo{M.S. Alam \etal\ [CLEO Collaboration], {\sl Phys. Rev. Lett.}
{\bf 74} (1995) 2885.}
We have checked that virtual effects of
the light supersymmetric particles do not generate new conflicts
with experimental data. For example, because the light chargino
is nearly a pure gaugino, the chargino--top squark loop has a
negligible effect on the rate for $Z\rightarrow b\bar b$. Our model then
predicts $R_b=0.2184$, which is within one standard deviation of the measured
LEP value [eq.~\zbbnumbers]. The improvement over the Standard Model
result is due to the fact that $m_t\simeq m_W$.
As a second example, one of the
most sensitive tests of the model is to check that its prediction for
$b\rightarrow s\gamma$ is consistent with $1.0\times 10^{-4}\lsim
BR(b\rightarrow s\gamma)\lsim 4\times 10^{-4}$, as required by the CLEO
measurement.\refmark\cleo\
The predictions of our model live comfortably within this bound.
\REF\radcorr{H.E. Haber and R. Hempfling, {\sl Phys. Rev. Lett.} {\bf 66}
(1991) 1815; {\sl Phys. Rev.} {\bf D48} (1993) 4280;
Y. Okada, M. Yamaguchi and T. Yanagida, {\sl Prog. Theor. Phys.}
{\bf 85} (1991) 1; {\sl Phys. Lett.} {\bf B262} (1991) 54;
J. Ellis, G. Ridolfi and F. Zwirner, {\sl Phys. Lett.}
{\bf B257} (1991) 83; {\bf B262} (1991) 477; R. Barbieri, M. Frigeni,
and F. Caravaglios {\sl Phys. Lett.} {\bf B258} (1991) 167.}
5. The mass of the lightest CP-even Higgs boson should lie above the
LEP lower limit. For $\tan\beta=1.6$, the tree-level {\it upper bound}
on the light Higgs mass is $m_{\hl}\leq m_Z|\cos{2\beta}|=40$~GeV, which would
have been detected at LEP. However, radiative corrections can raise the
upper bound substantially.\refmark\radcorr\
The bound increases with increasing
values of the soft-supersymmetry-breaking parameters which appear in
the squark squared-mass matrix [eq.~\stopmatrix]. We find as a typical
range of values that $m_{\hl}\simeq 65$--70~GeV, above the present LEP limits.
6. The Tevatron may be able to rule out the existence of the $b^\prime$
with mass $m_{b^\prime}\simeq 110$~GeV.
If kinematically allowed, the decay $b^\prime\rightarrow\widetilde
t\widetilde\chi_1^-$ would be the dominant decay mode. If disallowed,
there would be a competition between $b^\prime\rightarrow Wc$ (a change of two
generations) and $b^\prime\rightarrow W^\star t$ (a change of one generation, but
suppressed by three-body phase space). If necessary, one can choose
$|V_{b^\prime c}|\ll |V_{t^\prime b}|$ to remove the possibility of
$b^\prime\rightarrow Wc$. Then, all $b^\prime$ decays would result in
$W^\star c\widetilde\chi_1^0\widetilde\chi_1^0$.
There are no published limits that exclude such a
$b^\prime$. However, a dedicated search at the Tevatron should be
able to discover or exclude such events.
7. Perhaps the most difficult requirement for our model is to
reproduce the oblique electroweak radiative corrections inferred from
the precision measurements at LEP. Consider the contributions to
$\delta\rho$. Since in our model,
$m_t$ is less than half of its standard value,
the contribution of the $t$--$b$ doublet to $\delta\rho$ is reduced
by a factor of 4. This cannot be made up entirely by the contribution
of the fourth generation fermions, since the mass of the $b^\prime$ is
not negligible. We find that the contributions of the third and
fourth generation fermions make up only half the observed $\delta\rho$.
The remainder must come from the third and fourth generation squarks.
This requirement places severe restrictions on the squark
parameters [eq.~\stopmatrix].
One must maximize the off-diagonal squark mixing while keeping
the diagonal squark mass parameters as small as possible. However,
the latter cannot be too small; otherwise the radiative corrections to
the light Higgs mass will be reduced leading to a value of $m_{\hl}$ below
the current LEP bound.
\REF\peskin{M.E. Peskin and T. Takeuchi, {\sl Phys. Rev. Lett.}
{\bf 65} (1990) 964; {\sl Phys. Rev.} {\bf D46} (1992) 381.}
\REF\langacker{J. Erler and P. Langacker, UPR-0632-T (1994)
[hep-ph 9411203]; P. Langacker, private communication.}
\REF\stable{J. Ellis, D.V. Nanopoulos and K. Tamvakis, {\sl Phys. Lett.}
{\bf 121B} (1983) 123; L. Iba\~nez and C. Lopez, {\sl Phys. Lett.}
{\bf 126B} (1983) 54; L. Alvarez-Gaum\'e, J. Polchinski and M. Wise,
{\sl Nucl. Phys.} {\bf B221} (1983) 495.}
It is convenient to parameterize the oblique radiative corrections in terms
of the Peskin-Takeuchi variables\refmark\peskin\
$S$, $T$ and $U$. Here $T\equiv \alpha^{-1}
\delta\rho$ (where $\alpha^{-1}\simeq 137$) is the most sensitive
(although some interesting restrictions can be obtained by considering $S$).
Langacker has performed a global analysis of precision electroweak
data,\refmark\langacker\
assuming that $m_t=80$~GeV and $m_{\hl}=65$~GeV, and extracts values
for the oblique parameters. He finds $T_{\rm new}= 0.70\pm 0.21$, which
in our model must arise from the contribution of the fourth
generation fermions and the third and fourth generation squarks.
(The contributions from other supersymmetric particles are negligible.)
We find that the fourth generation fermions yield a contribution of 0.2
to $T_{\rm new}$. The contributions of the third and fourth generation
squarks depend sensitively on the squark parameters as noted above;
a range of parameters can be found that yields
a total squark contribution to $T_{\rm new}$
that lies between 0.3 and 0.4. This would bring us within
one standard deviation of Langacker's value for $T_{\rm new}$.
To achieve such a value for the squark contribution to
$T_{\rm new}$ requires substantial
$\widetilde q_L$--$\widetilde q_R$ mixing in the third
generation, which is uncomfortably large and may cause
stability problems\refmark\stable\
for the complete scalar potential of the model.
Non-negligible mixing in the fourth generation also enhances the
fourth generation squark contributions to $T_{\rm new}$. The maximum
effect is limited phenomenologically by
a lower bound on the mass of $\widetilde b^\prime$.
In order that $t^\prime\rightarrow bW^+$ remain the dominant decay, one
must kinematically forbid $t^\prime\rightarrow\widetilde b^\prime\widetilde
\chi_1^+$. Given $M_{\widetilde\chi_1^\pm}\simeq 60$~GeV,
a value of $M_{\tilde b^\prime}\simeq 120$~GeV
is a comfortable choice. All the phenomenological
constraints have now forced
the parameters of the model into a very narrow corner of parameter space.
\chapter{Conclusions}
\medskip
It is still possible that $m_t\simeq m_W$, despite the recent announcement
of the top quark discovery by the CDF and D0 collaborations. A
model has been exhibited that satisfies all phenomenological constraints
and is not ruled out by published data. The most
theoretically troubling feature of the model is the large mixing among
the third generation squarks that is necessary to ensure a viable
prediction for the electroweak $\rho$-parameter.
The model possesses a rich spectrum of new particles that will be accessible
to LEP-II and the Tevatron. In particular, eight new particles of this
model could be discovered at LEP-II: the $t$-quark, the fourth generation
leptons ($\tau^\prime$ and $N$), the light Higgs
boson ($h^0$), and four supersymmetric particles ($\widetilde\chi_1^0$,
$\widetilde\chi_2^0$, $\widetilde\chi_1^\pm$, and $\widetilde t$).
Note that even at the initial run of LEP-II at $\sqrt{s}=140$~GeV planned
for the fall of 1995, all four supersymmetric particles listed above
(and the $\tau^\prime$) should be discovered, or else the model would be
excluded.
\REF\pois{J.F. Gunion, D.W. McKay and H. Pois, {\sl Phys. Lett.}
{\bf B334} (1994) 339.}
Thus, the fate of this model may be decided before these Proceedings
appear in print. Nevertheless, this exercise was useful in
demonstrating the difficulty of constructing four-generation models
of low-energy supersymmetry.
In a previous work, Gunion, McKay and Pois\refmark\pois\
attempted to construct four-generation models in the context of minimal
low-energy supergravity. They identified the top quark as the
state discovered by CDF and D0. In order to keep Higgs-quark
Yukawa couplings perturbative up to the GUT scale, they
were forced to try to hide the $b^\prime$ and $t^\prime$ in a mass region
below $m_t\simeq 175$~GeV. The resulting models were contrived and
phenomenologically unappealing. Our approach represents the logical
alternative for four-generation low-energy supersymmetric models.
If these models are excluded, one will finally be able to state with
confidence that in the low-energy supersymmetric approach the number
of generations is indeed three!
\endpage
\centerline{\bf Acknowledgments}
\medskip
I would like to
thank Marcela Carena and Carlos Wagner for an enjoyable and
rewarding collaboration. I am also grateful to Jean-Marie Fr\`ere
for his kind invitation to speak at the Moriond meeting.
Finally, I send
a special appreciation to Jo\"elle Raguideau, whose encouragements
were a great help to a painful knee. This work was supported in
part by a grant from the U.S. Department of Energy.
\bigskip
\refout
\bye
\section*{Abstract}
Large whole-genome sequencing projects have provided access
to much of the rare variation in human populations, which is
highly informative about population structure and recent
demography. Here, we show how the age of rare variants can be
estimated from patterns of haplotype sharing and how these ages can be
related to historical relationships between populations.
We investigate the distribution of the age of
variants occurring exactly twice ($f_2$ variants) in a
worldwide sample sequenced by the 1000 Genomes Project, revealing
enormous variation across populations. The median age of haplotypes
carrying $f_2$ variants is 50 to 160 generations across populations
within Europe or Asia, and 170 to 320 generations within Africa. Haplotypes shared between
continents are much older with median ages for haplotypes shared
between Europe and Asia ranging from 320 to 670 generations. The distribution of the
ages of $f_2$ haplotypes is informative about their
demography, revealing recent bottlenecks, ancient splits, and more
modern connections between populations. We see the signature of
selection in the observation that functional variants are
significantly younger than nonfunctional variants of the same
frequency. This approach is relatively insensitive to mutation rate
and complements other nonparametric methods for demographic inference.
\section*{Author Summary}
In this paper we describe a method for estimating the age of rare
genetic variants. These ages are highly informative about the extent
and dates of connections between populations. Variants in closely related populations generally arose more
recently than variants of the same frequency in more diverged
populations. Therefore, comparing the ages of variants shared across different
populations allows us to infer the dates of demographic events like
population splits and bottlenecks. We also see that rare functional
variants shared within populations tend to have more recent
origins than nonfunctional variants, which is likely to be the
signature of natural selection.
\section*{Introduction}
The recent availability of large numbers of fully sequenced human
genomes has allowed, for the first time, detailed investigation of
rare genetic variants. These are highly differentiated
between populations \cite{bustamante2011,nelson2012}, may make an
important contribution to genetic susceptibility to disease
\cite{nejentsev2009,johansen2010,mcclellan2010,rivas2011,beaudoin2013},
and provide information about both demographic history, and fine-scale
population structure \cite{gravel2011,mathieson2012}. While patterns
of rare variant sharing are informative in themselves, knowing
the age of the variants allows us to observe changes in structure over
time, and thus to infer the dates of demographic events.
Rare variants are typically more recent than common variants
and in fact, the age of a variant can be estimated directly from its frequency
\cite{kimura1973,griffiths1998,fu2012}. However, there are two problems
with this approach. First, using only the frequency information means
that we cannot distinguish differences between the ages of variants
at the same frequency, differences which, as we demonstrate here, can be both large and
important. Second, in order to use this approach, we have to know
the demographic history of the populations involved. In this article, we describe an
alternative approach which uses the fact that the lengths of shared
haplotypes around variants are informative about their ages
\cite{palamara2012,ralph2013,harris2012}.
Specifically, we estimate the time to the most recent common ancestor
(TMRCA) for $f_2$ haplotypes, which are regions where two chromosomes
are each other's unique closest relative in a sample. To find these
regions, we look for variants which occur exactly twice in the
sample ($f_2$ variants, or doubletons). We then use nearby variation
to estimate the extent of the $f_2$ haplotype and use
the length of, and number of mutations on, this haplotype to infer its age,
and therefore a lower bound for the age of the variant. Every $f_2$ variant identifies
an $f_2$ haplotype, but we do not detect all $f_2$ haplotypes because
not all of them carry mutations. This approach
is fast, robust, and finds shared haplotypes directly from
genotype data, which avoids the need for statistical phasing. We apply this method to the
1000 Genomes phase 1 dataset \cite{1000genomes2012},
to quantify the distribution of the ages of variants
shared within and between populations, and between variants in
different functional classes. We demonstrate dramatic differences
between the ages of variants shared across different populations,
and reveal the signatures of both demography and selective constraint.
\section*{Results}
\begin{figure}[]
\begin{center}
\includegraphics{Figure1}
\end{center}
\caption{
{\bf{Algorithm and model for haplotypes.}} {\bf{A}}:
Algorithm for detecting $f_2$ haplotypes. For each $f_2$ variant
in the sample (green), we scan left and right until we find
inconsistent homozygote genotypes (red), record the physical and
genetic length of this region (blue), and the number of singletons
(purple). {\bf{B}}: Model for haplotype age $t$. Consider the 4
chromosomes (grey) of the two individuals sharing an $f_2$ haplotype
(blue). We model the total genetic length of the inferred haplotype, $L_g$, as the sum of the true
genetic length $L_g^*$ and an error $\Delta_g$. Similarly, we model
the number of singletons $S$ as the sum of the number on the shared
chromosome ($S^*$) and the number on the unshared chromosomes,
$\Delta_S$. We ignore the fact that we overestimate $L_p$ and
therefore that some of the singletons might lie in the unshared part
of the chromosome.
}
\label{Fig1}
\end{figure}
We first give a brief outline of our approach (Figures \ref{Fig1}, S\ref{FigS1},
{\bf{Methods}}). Given a sample of individual
genotypes, we find all $f_2$ variants. That is, variants which
have exactly two copies (in different individuals) in the sample.
This tells us that, in the absence of repeat mutations and assuming
that the $f_2$ variant is derived, those individuals must share an
$f_2$ haplotype at that
position. We then scan left and right along the genome, until we reach
a point where the two individuals have inconsistent homozygote
genotypes (0 and 2, Figure \ref{Fig1}A).
Using both the genetic and physical lengths of the region, and the
number of singletons, we compute an approximate likelihood for the age
of the haplotype (Figure \ref{Fig1}B). We use the data to estimate error
terms to take into account the fact that the algorithm described above
does not find the shared haplotypes precisely. Then, for each
haplotype, we find the maximum likelihood estimate (MLE) of the age of
each haplotype. We investigate the distribution of these MLEs for
different classes of $f_2$ variants, for example those shared within
or between specific populations.
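To make the scanning step concrete, the following Python sketch illustrates how
the end points of an $f_2$ haplotype can be located. It is purely illustrative
(it is not the code released with this paper); it assumes a genotype matrix of
minor-allele counts (0, 1 or 2) with one row per site and one column per
individual, and all function and variable names are ours.
\begin{verbatim}
import numpy as np

def find_f2_sites(G):
    # Sites where the minor allele occurs exactly twice, carried as
    # heterozygotes by two different individuals.
    alt = G.sum(axis=1)
    mac = np.minimum(alt, 2 * G.shape[1] - alt)
    two_hets = (G == 1).sum(axis=1) == 2
    return np.where((mac == 2) & two_hets)[0]

def f2_haplotype_span(G, positions, site, ind_a, ind_b):
    # Scan out from the f2 variant at `site`, shared by individuals
    # ind_a and ind_b, until the first pair of inconsistent homozygote
    # genotypes (0 vs 2) on either side; return the physical end points.
    def inconsistent(s):
        a, b = G[s, ind_a], G[s, ind_b]
        return (a == 0 and b == 2) or (a == 2 and b == 0)
    left = site
    while left > 0 and not inconsistent(left - 1):
        left -= 1
    right = site
    while right < G.shape[0] - 1 and not inconsistent(right + 1):
        right += 1
    return positions[left], positions[right]
\end{verbatim}
The genetic length, physical length and singleton count of the detected region
then feed into the likelihood described in the {\bf{Methods}}.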
\subsection*{Simulation results}
\begin{figure}
\begin{center}
\includegraphics{Figure2}
\end{center}
\caption{
{\bf{Estimating $f_2$ age from simulated data.}}
We simulated whole genomes
for 100 individuals (200 chromosomes), with $N_e=14,000$,
$\mu=1.2\times10^{-8}$ and HapMap 2 recombination rates. {\bf{A}}: Estimated
age against true age. The grey dots are the MLEs for each detected
haplotype. The blue line is a quantile-quantile (qq) plot for the MLEs
(from the 1$^{st}$ to 99$^{th}$ percentile). {\bf{B}}-{\bf{D}} Power to detect
$f_2$ haplotypes as a function of {\bf{B}}: genetic length, {\bf{C}}:
physical length and {\bf{D}}: haplotype age; in each case the darker
line represents the power to detect $f_2$ haplotype with 100\% power
to detect $f_2$ variants, and the lighter line the power with 66\%
power to detect variants.
}
\label{Fig2}
\end{figure}
To test our approach, we ran whole genome simulations for a sample of
100 diploid individuals with MaCS\cite{chen2009a}, using the combined
HapMap 2 recombination maps \cite{hapmap2}, and a mutation rate
($\mu$) of $1.2\times 10^{-8}$ per-base per-generation, assuming a constant
effective population size ($N_e$) of 14,000; chosen to reflect
parameters relevant to human genetic variation. We investigated both our power to detect the $f_2$
haplotypes and how accurately we could estimate
the distribution of $f_2$ ages (Figure \ref{Fig2}). We detected around 26\% of all $f_2$
haplotypes. Unsurprisingly, we have more power to detect very long
haplotypes, but we detected many small haplotypes as well: 19\% of our
total had true genetic length less than 0.1cM. Having imperfect
power to detect $f_2$ variants does not have a large effect on our power
to detect $f_2$ haplotypes since most haplotypes carry more than one
$f_2$ variant. We have higher power for more recent haplotypes because
they are longer but, at least for a population of constant size,
this effect is cancelled to some extent
for older haplotypes because the branches above them tend to be longer
and therefore more likely to carry mutations.
There is high uncertainty in the age of any individual haplotype
(Figure \ref{Fig2}A). However, we can
compute well-calibrated confidence intervals (Figure S\ref{FigS2}). In this
example, the median MLE of the age of the detected haplotypes is 179
generations and the true median is 192 generations. The median width
of the 95\% confidence interval is 730 generations. Information about the ages comes
mainly from the genetic length, and the principal
advantage of the singleton information is for very old haplotypes
where the length-based estimator is otherwise biased (Figure S\ref{FigS3}).
In addition, we ran simulations to check that the model was robust to
more complicated demographic scenarios including splits, bottlenecks
and expansions, as well as mis-specification of $N_e$ (Figure S\ref{FigS4}).
We also investigated the effect that these scenarios had
on the distribution of the ages of the $f_2$ haplotypes, demonstrating
that we could detect the signatures of demographic events. For example,
population bottlenecks lead to a high density of $f_2$ haplotypes
during the bottleneck and, following a population split haplotypes
shared between populations have median age roughly equal to the split
time (Figure S\ref{FigS5}).
\subsection*{1000 Genomes data}
We applied our estimator to the phase 1 data release of the 1000
Genomes Project \cite{1000genomes2012}, which consists of whole-genome
variant calls for 1,092 individuals drawn from 14 populations (Table
\ref{Tab1}). We used two of the 1000 Genomes callsets;
one made from sequence data, and one made using a dense genotyping array.
Restricting our analysis to the autosomes,
we extracted $f_2$ variants from the sequence data callset,
and then detected haplotype lengths around them (that is, the distances to
incompatible homozygotes), using only the array data, to minimise the effect of genotyping
errors. We then counted $f_1$ variants on these haplotypes from the
sequence data callset. From 4,066,530 $f_2$ variants we detected 1,893,391 $f_2$ haplotypes, with
median genetic and physical lengths of 0.7cM and 600kb
respectively. The median number of singletons per haplotype was 3. Of the 1.9
million $f_2$ haplotypes, 0.7 million were shared within populations
and 1.5 million were shared within continents. Sharing
of $f_2$ variants largely reflects expected patterns of
relatedness on a population level, and also reveals substructure in
some populations, notably GBR (Figure S\ref{FigS6}).
\begin{table}[]
\begin{tabular}{r|ll}
Abbreviation & Sample size & Description\\
\hline
ASW & 61 &African Ancestry in SW USA\\
LWK & 97 & Luhya in Webuye, Kenya\\
YRI & 88 & Yoruba in Ibadan, Nigeria\\
CLM & 60 & Colombians in Medell\'{i}n, Colombia\\
MXL & 66 & Mexican Ancestry in Los Angeles, CA, USA\\
PUR & 55 & Puerto Ricans in Puerto Rico\\
CHB & 97 & Han Chinese in Beijing, China\\
CHS & 100 & Han Chinese South\\
JPT & 89 & Japanese in Tokyo, Japan\\
CEU & 85 & Utah residents with ancestry from northern and western
Europe\\
FIN & 93 & Finnish in Finland\\
GBR & 89 & British from England and Scotland\\
IBS & 14 & Iberian Populations in Spain\\
TSI & 98 & Toscani in Italy\\
\end{tabular}
\caption{Short descriptions of the 1000 Genomes populations.}
\label{Tab1}
\end{table}
We used the combined recombination rate map from HapMap 2 to determine
genetic lengths, and assumed a mutation rate of $0.4\times10^{-8}$
per-base per-generation (reflecting a true mutation rate of $1.2\times
10^{-8}$ multiplied by an estimated power of $\frac{1}{3}$ for detecting
singletons \cite{scally2012,1000genomes2012}). We estimated $N_e=185,000$ (from
the number of singletons in the dataset, {\bf{Methods}}). We then computed
MLEs for the ages of all the $f_2$ haplotypes shared
between every pair of populations (Figures \ref{Fig3}, S\ref{FigS7},
Tables S\ref{TabS13}-S\ref{TabS17}).
We estimated that on most chromosomes the
median overestimate in the haplotype lengths is 0.1-0.15cM (but
more on chromosomes 1, 9 and 15), and that $\theta$
(estimated from singletons) was around $3.7\times10^{-3}$ and
$2.4\times10^{-3}$ per-base for African and Non-African populations
respectively (Table S\ref{TabS18}).
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Figure3}
\end{center}
\caption{
{\bf{The estimated age distribution of $f_2$ haplotypes.}} {\bf{A}}: The
distribution of the MLE of the ages of haplotypes shared within each
population. {\bf{B}}-{\bf{F}}: The distribution of the MLE of the ages of haplotypes shared
between one population and all other populations, shown for each of GBR, JPT,
LWK, ASW, and PUR. Populations are described in Table \ref{Tab1}.
Density estimates are computed in $\log_{10}$ space, using the
base R ``density'' function with a Gaussian kernel.
}
\label{Fig3}
\end{figure}
For haplotypes shared within populations (Figure \ref{Fig3}A), the MLEs of
haplotypes within most European and Asian
populations are clustered around 100 generations ago. For
example, the median age of GBR-GBR haplotypes is 90 generations.
PUR and, to a lesser extent, CLM have many
very recent haplotypes (peaking around 11 generations
ago), consistent with a historical bottleneck in these
populations 300-350 years ago. FIN haplotypes peak around 14
generations (400-450 years) ago. African populations have many recent haplotypes but
also a much longer tail than the other populations, with
ancestry apparently extending back for thousands of generations.
For example the median age of LWK-LWK haplotypes is
320 generations, but the 95\% quantile is 8,500 generations.
Between-population sharing is largely consistent with
the historical relationships among populations (Figure \ref{Fig3}B-D).
Within continents, sharing within Asia or Europe has a median
of 50-160 generations, depending on the populations,
and sharing within Africa 170-340 generations. Sharing between
continents is much older, with median Asian-European sharing 320-670 generations
old, and Asian-African sharing rather older, with a median around
2,300 generations ago for LWK and 1,700 generations ago for YRI.
The age of European-African sharing varies between
populations, from 1,000 to 2,000 generations ago, but is
more recent than Asian-African sharing, perhaps
suggesting greater subsequent migration between these continents. We
discuss these figures in the context of split times and migrations in
the {\bf{Discussion}}.
Admixed populations have age distributions that are combinations of
the distributions of the admixing populations (Figure \ref{Fig3}E-F).
Even in these populations we can see signs of more subtle
history. For example, GBR-CLM haplotypes have an age distribution
which looks more like GBR-TSI or GBR-IBS than GBR-CEU, presumably representing the
fact that the major contribution to European admixture in the Americas
is from southern Europe (Figure S\ref{FigS8}).
\begin{SCfigure}
\includegraphics[width=8.3cm]{Figure4}
\caption{
{\bf{The ages of haplotypes around $f_2$ variants with
different functional annotations.}} Density is indicated by the width of the
shape, and horizontal bars show the median.
We show separately the densities for $f_2$ variants
shared within a population (left, blue), and $f_2$ variants shared
between populations (right, red). Numbers in brackets show the number
of variants in each class. Bars show the pairwise differences in
means, and $t$ test p-values for a difference in log means between groups.
}
\label{Fig4}
\end{SCfigure}
We also looked at the distribution of the ages of $f_2$ variants
broken down by functional annotation (Figure \ref{Fig4},
{\bf{Methods}}). We found that for variants shared within a single
population, loss-of-function (LOF) variants are younger than coding
variants, which are younger than functional noncoding variants, and
all annotated variants are younger than unannotated variants. The
median ages of these variants are 58, 83, 112 and 125 generations for
LOF, coding, functional noncoding and unannotated variants
respectively. This is presumably because purifying selection against
damaging mutations means that functional variants are less likely to
become old (though positive selection for beneficial mutations would
have the same effect). This effect has previously been both predicted
and observed \cite{maruyama1974,kiezun2013}. However, it is not
strictly true for variants shared between different populations and, in fact, the
effect is partially reversed (median ages are 176, 205, 186 and 195
generations for LOF, coding, functional noncoding and unannotated
variants). A likely explanation is that functional variants surviving
long enough to be shared between populations are selectively
neutral or recessive and thus unaffected by selection at low
frequency. This suggests that studies looking for disease-causing rare
variants should concentrate on variants private to a single
population, since variants shared across populations are unlikely to
have large phenotypic effects.
\subsection*{Robustness}
This analysis requires us to estimate several parameters, and in this
section, we investigate how robust it is to varying them.
The parameter $k$ is related to the probability of
discovering $f_2$ haplotypes. We know that $1\leq k \leq2$. $k=1$ implies that the
probability that we discover a haplotype is independent of its length
while $k=2$ implies that this probability increases linearly with
length. We chose $k=1.5$ based on simulations, but it may be the case that this is not the
optimal $k$ for real data. To test how much of an impact this might have, we
re-ran the analysis of the 1000 Genomes data using $k=1$ and $k=2$.
Larger values of $k$ increase our age estimates. For example, the median
CEU-CHB age is 403, 481 and 560 generations using $k=1$, 1.5 and 2
respectively. Overall, setting $k=2$ increases the median age
estimates by between 6 and 30\%, depending on population,
with more recent ages more sensitive to $k$.
The parameters $k_e$ and $\lambda_e$ are the shape and
rate of the (gamma) distribution of the overestimate of haplotype lengths
({\bf{Methods}}). We estimated these parameters separately from the
array data for each
chromosome (Table S\ref{TabS18}). We noticed that chromosomes 1, 5
and 9 had estimated parameters that implied a greater overestimate
(larger $k_e$, smaller $\lambda_e$), presumably due to the density of
markers on the array for those chromosomes.
In addition, these chromosomes had older estimated haplotype
ages, for example we estimated that the median age of $f_2$ haplotypes
on chromosome 1 was 16\% higher than the median age of haplotypes on
chromosome 2, suggesting that our error model is not fully robust to variation
in marker density.
\section*{Discussion}
We described an approach to estimate the age of $f_2$
haplotypes, without making any prior assumptions about population
structure or history. Though the age of any individual haplotype is uncertain,
major features of the distribution of haplotype ages are detected,
demonstrating qualitative differences between populations that are
almost certainly due to past demographic events. The next important
question is to what extent we can use these distributions as quantitative
estimates of the ages of demographic events.
\begin{figure}
\begin{center}
\vskip-1cm
\includegraphics[width=17.35cm]{Figure5}
\end{center}
\vskip-1cm
\caption{
{\bf{Comparison with MSMC, and the effect of estimating haplotypes with
sequence data.}} {\bf{A}}: The age distribution of $f_2$ haplotypes
shared between CHB and CEU estimated with array, sequence and
``clean'' sequence (with indels and low
complexity regions removed; {\bf{Methods}}). Coloured dashed lines show
the medians of each distribution. The grey stepped line shows
relative cross-population coalescence rates estimated by MSMC (S. Schiffels,
personal communication), and the grey dashed line shows the earliest
point where this rate is less than 0.5. In both cases, we assume 30 years per
generation and $\mu=1.25\times 10^{-8}$. {\bf{B}}: As in {\bf{A}} but
for $f_2$ haplotypes shared between CHB and MXL, restricted to
haplotypes where the MXL individual is inferred to be homozygous for
Native American ancestry. {\bf{C-D}}: Age
distributions inferred using ``clean'' sequence data, comparable to
Figure 3A-B (Note the extended x-axis).
}
\label{Fig5}
\end{figure}
Consider the split between European and East Asian
populations. Model based estimates of this split time have
ranged from 14 to 40 thousand years ago (kya) \cite{keinan2007,gutenkunst2009,gronau2011}.
However, these are likely to be too low because they
assumed a mutation rate, $\mu$, of $2-2.5\times10^{-8}$ per-base per-generation,
now thought to be an overestimate
\cite{scally2012}, and so a more reasonable range of estimates might be 22-80kya.
The nonparametric PSMC approach\cite{li2011}
estimated a split time of around 22kya (if a lower mutation rate of
$1.25\times 10^{-8}$ is used, 11kya with the higher rate),
and a similar method, MSMC, estimates a split time of 20-40kya
(S. Schiffels, personal communication; Figure \ref{Fig5}A).
Simulations suggest that, under a clean split model, the median of our
estimated ages is close to or slightly below the split time, at least
for recent splits (less than 1,000 generations; Figures S\ref{FigS5} and S\ref{FigS9}).
Comparing CEU to each of CHB, CHS and JPT, taking the median of our
haplotype ages, and assuming a generation time of 30
years\cite{fenner2005}, would imply split times of 14, 17 and 18kya
respectively. Other European populations give different estimates, but mostly between 15
and 20kya.
Similarly, when we looked at $f_2$ variants shared between East Asia
and America (CHB-MXL, but restricting to regions homozygous for
Native American ancestry in MXL; {\bf{Methods}}), we found that the
median age was around 10kya, substantially more recent than the split
time estimated using MSMC (S. Schiffels, personal communication;
Figure \ref{Fig5}B). This seems low, given geological evidence that the
Bering land bridge was submerged by 11-13kya, although a seasonal or
maritime route likely remained open after that time
\cite{brigham-grette2004,keigwin2006,meltzer2009}.
Our dates are all around or below the low end of published
estimates, even after we take into account the fact that the median
might be lower than the split time (we estimate about 11\% lower for a
500-generation old split; Figure S\ref{FigS5}D).
There are several non-exclusive explanations for
this observation. First, post-split gene flow could explain the
discrepancy. As we have greater power to detect $f_2$ haplotypes if they are more
recent, when the split is not clean many of the haplotypes we observe will derive from the
post-split gene flow rather than from before the initial
split (Figure S\ref{FigS9}B). In this scenario, we would be detecting the most
recent haplotypes, and our dates would be closer to the most recent
date of contact, rather than the initial split date.
An alternative explanation might be systematic errors in our estimates.
As we described in the {\bf{Results}}, the approach
is sensitive to the estimated parameters $k$, $k_e$ and
$\lambda_e$. At the extreme, increasing $k$ from 1.5 to its maximum value of 2 would increase
the median age of CEU-CHB haplotypes from 14kya to 17kya.
To investigate sensitivity to $k_e$ and $\lambda_e$,
we repeated the analysis, but using sequence data rather than
array data to find the length of the haplotypes (Figure \ref{Fig5}). We note that
when we estimated $k_e$ and $\lambda_e$ using sequence
data they vary very little across chromosomes (Table S\ref{TabS18}). The ages
estimated using sequence data were older than those estimated using
array data (Median age of CEU-CHB haplotypes 23kya, Figure \ref{Fig5}A-B).
We might expect that sequence data, being more dense
than array data, would find haplotypes more accurately.
However we would also expect that genotype errors, more
common in sequence than array data, would make all haplotypes look
older, by incorrectly breaking haplotypes. Removing indels and low
complexity regions (LCRs; {\bf{Methods}}) thought to be enriched for
genotyping errors from the sequence data reduced the difference
(median CEU-CHB age of 19kya), suggesting
that around half the increase in age is driven by
errors. Further, the haplotype ages estimated from sequence data do
not contain the very young (long) haplotypes within CLM, FIN and PUR, which we
independently believe to be correct (Figure \ref{Fig5}C), and also contain a
long tail of extremely old haplotypes which seems unlikely (Figure
\ref{Fig5}D).
Another source of systematic errors could be the use of incorrect
mutation or recombination rates. There is
considerable uncertainty about the mutation rate in humans, but
our approach is relatively insensitive to this, so if the true rate is higher than
$\mu=1.25\times 10^{-8}$ per-base per-generation then mutational clock
based methods which scale linearly with mutation rate
will overestimate the ages of events, thus reducing the discrepancy.
On the other hand, our approach might be
sensitive to errors in the recombination map. We tested
this by running simulations with a different
genetic map to the HapMap map that we used to determine genetic
length. We tested a population-based African
American map\cite{hinch2011}, a map derived from an Icelandic pedigree
\cite{kong2002} and a chimpanzee map from a small
population\cite{auton2012}, but none of these made a substantial
difference to the results and we conclude that the length of the
haplotypes we investigate is sufficiently large that they are robust
to the uncertainty in the recombination map (Figure S\ref{FigS10}).
Finally, systematic errors might occur due to homoplasy
(where the same mutation occurs independently on two
different lineages). While this rate is expected to be low, it may be
locally high in some parts of the genome, for example in CpG islands
which have an order of magnitude higher mutation rate than the genomic
background. If such false positives do occur, they would appear as very
short haplotypes that we would infer to be very old, so they cannot
explain our systematically lower ages. On the other hand, it is likely
that some of the very old haplotypes we see are, in fact, due to
repeat mutations and, in particular, this might explain some of the
very old haplotypes discovered with sequence data.
However, while systematic biases in our estimates might explain some
of the difference between our estimated ages and independent split
time estimates, they cannot explain the
observation that the age distributions vary greatly between
different pairs of populations. This strongly suggests that there is
variation in the extent of gene flow. For example, Asian-FIN sharing
seems to be more recent than other Asian-European sharing, suggesting relatively recent
contact between East Asian and Finnish populations, compared to other
European populations. It seems likely that worldwide demographic
history is sufficiently complicated that trying to estimate a single
Asian-European (or African-Non African) split time is futile,
and that a complex model of many splits, migrations and admixtures
is required to explain the relationship between different populations.
Ultimately, we would like to be able to make explicit estimates of
parameters like historical effective population size, and the dates of
demographic events. Though the approach we describe here is limited
in this respect, there is a clear path to extend it to do so.
We could first use a similar approach to estimate the
age of variants at frequency three and higher. Then, treating the
estimates of haplotype ages as estimates of coalescent times, we could
use the empirical distribution of coalescent times to estimate
population sizes and cross-population migration rates as a function of
time. Another improvement would be to use information from the full likelihood surface for each
haplotype, rather than just the point estimate of the age as we do
here. Since, for large samples, we would have good estimates of recent
coalescent rates, we expect that this approach would be very accurate
at inferring recent history, making it a complementary approach to
sequential Markovian coalescent based methods which are typically accurate in the ancient past,
but less so for very recent history.
\section*{Methods}
\subsection*{Definitions}
Suppose we have a sample of size $N$ of genotypes from a
single genetic region. Define
an $f_2$ variant to be one which occurs exactly twice in the sample in
different individuals. That is, either two individuals have genotype 1
and all the others have genotype 0, or two individuals have genotype 1 and
the others have genotype 2. We assume that the minor allele is
the derived allele. Under the neutral coalescent, for a sample of $2N$
chromosomes, an $f_2$ minor allele will be the derived allele with probability
$\frac{2N-1}{2N+1}\approx 1$ for large $N$ so this is a reasonable assumption for large
samples.
Define an $f_2$ haplotype shared between chromosomes $a$ and $b$ to be
a region satisfying the following two conditions: 1) The time to the most
recent common ancestor (TMRCA) of $a$ and $b$ does not change over the
region. 2) At one or more sites in the region, $a$ and $b$ coalesce
with each other before either of them coalesce with any other
chromosome. In other words, they are unique genealogical nearest
neighbours (Figure S1). We call the TMRCA of
$a$ and $b$ the age of the haplotype. Additionally, we say that individuals $i$ and $j$ ($i\neq
j$) share an $f_2$ haplotype if $a$ is one of $i$'s two chromosomes and $b$ is one
of $j$'s two chromosomes.
The problem we solve is to find the $f_2$ haplotypes and then
estimate their ages. Since each $f_2$ variant must lie in an $f_2$ haplotype, the variants provide a
simple way of detecting the haplotypes. We use the algorithm described
in the main text to find regions which should be larger
than the $f_2$ haplotypes. The next problem is to determine the
likelihood of the age. We describe our approximate likelihood below
but first, as an example, we describe exact inference in the absence of
confounding factors.
\subsection*{Exact case}
Suppose we knew the exact genetic and physical lengths of an
$f_2$ haplotype and the number of singletons it carries. Call these
quantities $L_g^*,L_p^*$ and $S^*$. Let the age of this
haplotype be $t$ generations, or $\tau$ in coalescent time
($\tau=\frac{t}{2N_e}$). Then, for a randomly chosen $f_2$ haplotype
(but not a haplotype at a randomly chosen position, discussed in the
next section),
$L_g^*$ has an exponential distribution with parameter $4N_e\tau$ and
$S^*$ has a Poisson distribution with parameter $\theta L_p^* \tau$
where $\theta=4N_e\mu$ and $\mu$ is the per-base per-generation
mutation rate. Therefore (ignoring terms that do not depend
on $\tau$), the
log-likelihood of $\tau$ given $L_g^*,L_p^*$ and $S^*$ is
\begin{equation*}
\ell \left( \tau ; l_g^*,l_p^*,s^*\right)=\left(1+s^*\right)\log\left(\tau\right)-4N_e\tau
l_g^* -\theta l_p^* \tau
\end{equation*}
and the maximum likelihood estimator of $t$ is therefore
\begin{equation*}
\hat{t}=\frac{1+s^*}{2\left( l_g^*+\mu l_p^*\right)}.
\end{equation*}
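As an illustration (with $l_g^*$ in Morgans, $l_p^*$ in bases and $\mu$ per base
per generation; the function below and the example values are ours), this
estimator is trivial to evaluate:
\begin{verbatim}
def exact_age_mle(l_g, l_p, s, mu=1.2e-8):
    # MLE of the haplotype age in generations under the exact model:
    # t_hat = (1 + s) / (2 * (l_g + mu * l_p)).
    return (1.0 + s) / (2.0 * (l_g + mu * l_p))

# Example: a haplotype of 0.7 cM, 600 kb and 3 singletons gives
# exact_age_mle(0.007, 600000, 3), about 141 generations.
\end{verbatim}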
\subsection*{Approximate likelihood for genetic length}
There are two corrections to the likelihood for genetic length. The
first relates to the ascertainment process of the haplotypes, and the
second to the overestimate in the length due to the way we detect the
endpoints.
The ascertainment problem is as follows. Suppose we pick a haplotype
at random, then its length is exponentially distributed (i.e. gamma
with shape parameter 1). However, if we pick a point on the sequence
at random then the distribution of the length of the haplotype in
which it falls is gamma distributed with shape parameter 2. This is an
example of the ``inspection paradox'' and arises because, in
the second case, we are effectively sampling haplotypes weighted by
their length. In our case, we detect haplotypes if they contain one or
more $f_2$ variants. Therefore the probability that we find a
haplotype is increasing with its physical length (because longer haplotypes are
more likely to carry $f_2$ variants), but sub-linearly. The probability
also increases with genetic length, but in a complex way that
depends on the variation of recombination and mutation rate along the
genome, the age of the haplotype and the demographic history of the
population. For example, in a constant sized population, older
haplotypes are likely to have longer branches
above them, and therefore to have more $f_2$ variants, but in an expanding
population the opposite may be true. Rather than trying to take all of these effects into
account, we made the simplifying assumption that we could model the
genetic length $L_g^*$ as a gamma distribution with shape parameter $k$ where
$1<k<2$ and rate $4N_e\tau$. Simulations suggested that $k$ around 1.5
was optimal (Figure S11), and we used this value
throughout.
The second correction involves the overestimate of
genetic length. We tried to detect the ends of the haplotype by
looking for inconsistent homozygote genotypes, but of course in
practice, after the end of the $f_2$ haplotype, there will be some
distance before reaching such a site. This (genetic) distance $\Delta_g$ is the
amount by which we overestimate the length of the haplotype. We
estimate the distribution of $\Delta_g$ for a given sample by sampling
pairs of genotype vectors, then sampling sites at random and computing
the sum of genetic distance to the first inconsistent homozygote site
on either side. We then fit a gamma distribution with (shape, rate)
parameters $(k_e, \lambda_e)$ to this distribution, for each
chromosome. The likelihood of
$\tau$ is given by the convolution density of $L_g^*$ and
$\Delta_g$,
\begin{equation}
L(\tau;l_g)=\int_0^{l_g} f_{\gamma}\left(x; \left(k,
4N_e\tau\right)\right) f_{\gamma}\left(l_g-x; \left(k_e,
\lambda_e\right)\right) dx
\label{ll_Lg}
\end{equation}
where $f_{\gamma}\left(x;
(\kappa,\lambda)\right)=\frac{1}{\Gamma(\kappa)}\lambda^\kappa x^{\kappa-1} e^{-\lambda x}$ is
the density of a gamma distribution with (shape, rate) parameters $(\kappa,
\lambda)$. This integral, and therefore the loglikelihood
$\ell(\tau;l_g)=\log\left[L(\tau;l_g)\right]$ can be expressed in
terms of the confluent hypergeometric function $\,_1F_1$ (ignoring
terms that do not depend on $\tau$),
\begin{equation}
\ell(\tau;l_g)=k\log(\tau)+\log\left[\,_1F_1\left(k, k+k_e, l_g\left(\lambda_e-4N_e\tau\right)\right)\right].
\label{ll_Lg2}
\end{equation}
\noindent Where, recall, we assume $k=1.5$. Note that if we replace
$2N_e\tau$ with $t$, and drop constant terms, then we get an expression for the likelihood of
$t$ that does not depend on $N_e$, so our estimate of time in
generations does not depend on $N_e$.
\begin{equation}
\ell(t;l_g)=k\log(t)+\log\left[\,_1F_1\left(k, k+k_e, l_g\left(\lambda_e-2t\right)\right)\right].
\label{ll_Lg3}
\end{equation}
\noindent Finally, note that the rate
at which recombination events occur on the
branch connecting the two shared haplotypes is $4N_e\tau$. We
assume that the first such event marks the end of the
haplotype. However, there is a non-zero probability that a
recombination event occurring on this branch does not change the MRCA
of $a$ and $b$. Simulations suggest that for large numbers of
chromosomes, this probability is extremely small (Figure S12)
and so we assume it is 0. In practice, for small samples, this
might be a non-negligible effect.
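A direct numerical sketch of the log-likelihood in Equation \ref{ll_Lg3} is given
below (illustrative only; $k_e$ and $\lambda_e$ must be supplied, for example the
per-chromosome estimates of Table S\ref{TabS18}, and $l_g$ is in Morgans):
\begin{verbatim}
import numpy as np
from scipy.special import hyp1f1

def loglik_genetic_length(t, l_g, k_e, lam_e, k=1.5):
    # Log-likelihood (up to an additive constant) of the age t in
    # generations given the observed genetic length l_g; (k_e, lam_e)
    # are the gamma parameters of the length overestimate.
    return k * np.log(t) + np.log(hyp1f1(k, k + k_e,
                                         l_g * (lam_e - 2.0 * t)))
\end{verbatim}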
\subsection*{Approximate likelihood for singleton count}
Recall that the physical length of the shared haplotype is $L_p$
bases. We assume that we can find this exactly. Then assuming a
constant mutation rate $\mu$ per base per generation, the sum of the number of
singletons on the shared haplotypes, $S^*$ has a Poisson distribution
with parameter $\theta L_p \tau$, where $\theta=4N_e\mu$.
Now consider the distribution of singletons on the unshared
haplotypes. To approximate this distribution, we make the
following three assumptions: 1) There is no
recombination on the unshared haplotypes over the region. 2) No other
lineage coalesces with the shared haplotype before it is broken. 3) The
distribution of the time to first coalescence of the unshared haplotypes is
exponential with parameter $N$ (Recall that $N$ is the number of
sampled individuals). In fact the true distribution is a mixture of
exponentials but the approximation at least matches the
correct mean, $\frac{1}{N}$ \cite{blum2005}. The variance is too small
because of the first assumption, however.
Consider one of the unshared haplotypes. Conditional on the time ($\tau_1$) at
which it first coalesces with any other haplotype, the number of
singleton mutations it carries is Poisson with
parameter $\theta L_p \tau_1$ and so, using the assumptions above, the
unconditional distribution is geometric (on $0, 1 \dots$) with parameter
$\frac{1}{1+\frac{\theta L_p}{2N}}$. Therefore the distribution of the number
of mutations on both unshared haplotypes, $\Delta_S$, is the sum of two geometric
distributions which is negative binomial with parameters
$\left(2, \frac{\theta L_p}{\theta L_p+2N}\right)$. The
density of the total number of singletons, $S$ is the convolution of
these two densities
\begin{equation}
L(\tau; l_p, s)=\sum_{x=0}^s f_{Po}\left(x; \theta l_p \tau\right)
f_{NB}\left(s-x; \left(2, \frac{\theta l_p}{\theta l_p+2N}\right)\right)
\label{ll_S}
\end{equation}
where $f_{Po}\left(x; \lambda\right)=\frac{\lambda^x e^{-\lambda}}{x!}$ is the density of a Poisson
distribution with parameter $\lambda$ and $f_{NB}\left(x;
\left(n,p\right) \right)=\binom{x+n-1}{x}(1-p)^np^x$ is the density of a negative binomial
distribution with parameters $(n,p)$. As with the genetic length, we
can write this in terms of $t$, the haplotype age in generations,
\begin{equation}
L(t; l_p, s)=\sum_{x=0}^s f_{Po}\left(x; 2\mu l_p t\right)
f_{NB}\left(s-x; \left(2, \frac{\theta l_p}{\theta l_p+2N}\right)\right)
\label{ll_S2}
\end{equation}
In practice we assume $\mu$ is known and estimate $\theta$
separately for each individual, for each chromosome, by counting the
number of singletons, multiplying by the number of chromosomes in the
sample, and dividing by the chromosome length.
Then for each pair, we use the average of these values in Equation \ref{ll_S2}.
A more accurate approach would be to
compute the likelihood as a double convolution over the distribution
of both haplotypes with different values for $\theta$. An extension
would be to estimate $\theta$ separately for different regions of the
genome.
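A corresponding numerical sketch of Equation \ref{ll_S2} is given below
(illustrative only; note that the negative binomial in \texttt{scipy} is
parameterized by $1-p$ relative to the convention used above):
\begin{verbatim}
import numpy as np
from scipy.stats import poisson, nbinom

def lik_singletons(t, l_p, s, n_ind, theta, mu=1.2e-8):
    # Likelihood of the age t (generations) given the physical length
    # l_p (bases) and total singleton count s; n_ind is the number of
    # sampled individuals and theta the per-base scaled mutation rate.
    x = np.arange(s + 1)
    shared = poisson.pmf(x, 2.0 * mu * l_p * t)
    unshared = nbinom.pmf(s - x, 2,
                          2.0 * n_ind / (theta * l_p + 2.0 * n_ind))
    return float(np.sum(shared * unshared))
\end{verbatim}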
\subsection*{Approximate full likelihood}
We can now write the approximate log-likelihood for $t$ as the sum
of Equation \ref{ll_Lg3} and the log of Equation \ref{ll_S2}, assuming
that the recombination process is independent of the mutational process,
\begin{equation}
\ell(t; l_g, l_p, s)=\ell(t;l_g) +\log\left[L(t;l_p, s)\right].
\end{equation}
We maximise it numerically with respect to $t$ in order to find the maximum likelihood
estimate (MLE). It is possible for this likelihood to be bimodal,
in which case we might find a local but not global optimum. However, this
seems to be rare.
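Putting the two pieces together, a minimal numerical maximization (reusing the
two sketch functions above; the age bounds and the underflow guard are arbitrary
choices of ours) could look as follows:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar

def age_mle(l_g, l_p, s, n_ind, theta, k_e, lam_e, t_max=1e5):
    # Maximize the combined approximate log-likelihood over t.  A
    # bounded scalar search may return a local optimum if the surface
    # is bimodal, which (as noted above) is rare in practice.
    def neg_loglik(t):
        ll = loglik_genetic_length(t, l_g, k_e, lam_e)
        ll += np.log(lik_singletons(t, l_p, s, n_ind, theta) + 1e-300)
        return -ll
    res = minimize_scalar(neg_loglik, bounds=(1.0, t_max),
                          method="bounded")
    return res.x
\end{verbatim}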
\subsection*{1000 Genomes Data}
The 1000 Genomes data was obtained from
\texttt{ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/}.
The phase 1 release sequence data is in
\texttt{phase1/analysis\_results/integrated\_call\_sets},
and the array data is in
\texttt{phase1/analysis\_results/supporting/omni\_haplotypes}. In
order to generate the ``clean'' sequence data, we removed any sites
that fell in the list of low complexity regions found in
\texttt{technical/working/20140224\_low\_complexity\_regions/hs37d5-LCRs.txt}.
Functional annotations are in
\texttt{phase1/analysis\_results/functional\_annotation}.
Detailed explanations of the annotations can be found there, but briefly the
classifications are as follows:
\begin{itemize}
\item
Loss-of-function: Includes premature stop codons, and essential splice
site disruptions.
\item
Coding: Variants in coding regions.
\item
Functional noncoding: Including variants in noncoding RNAs, promoters,
enhancers and transcription factor binding sites.
\item
Unannotated: Any variant not included in any of the above categories.
\end{itemize}
We included haplotypes in more than one of these categories if
they contained multiple variants.
\subsection*{Code}
All the code we used to run simulations and analyse the 1000 Genomes
data is available from www.github.com/mathii/f2.
\section*{Acknowledgments}
Part of this work was completed while
I.M. was a research fellow at the Simons Institute for the Theory
of Computing at UC Berkeley. We thank Stephan Schiffels for extensive discussion, and providing
MSMC results. We also thank Richard Durbin, Alexander Kim and David Reich for helpful
suggestions.
\newpage
\section{Motivation and Introduction}
\label{sec:introduction}
The interactions between the building blocks of matter typically
involve potentials with a power law dependence. For charged particles
this is the long-range Coulomb potential ($\propto \frac{1}{r}$) \cite{Jackson} whereas for neutral
constituents such as atoms \cite{Friedrich} or molecules \cite{Stone} their interaction
at large distances can be of permanent dipolar character
($\propto \frac{1}{r^3}$) or of induced dipolar origin, i.e. van der Waals interaction
($\propto \frac{1}{r^6}$). The importance of these interaction potentials is closely connected
to the fact that they describe the forces occurring in nature. This
allows us to understand the structures and properties as well as
dynamics of few- to many-body systems via a bottom-up approach.
Complementary to the above
the development and analysis of more abstract models of interacting few- and many-body
systems possesses a rich history. These models are motivated, for
example, by the request for a thorough understanding of integrability versus
nonintegrability \cite{Sirker,Giamarchi}, the mechanisms of the transition from few- to many-body systems
\cite{Zinner}, and the emergence of thermodynamical behaviour in the particle number to
infinity limit \cite{Samaj}. A particularly striking and impactful paradigm
is a system of contact interacting particles in one spatial dimension for
which the interaction among the particles is contracted to a single point
providing corresponding boundary conditions. This leads to an intricate
relationship between impenetrable bosons and fermions in one dimension
\cite{Busch,Girardeau}. Many years after their discovery and initial investigation,
these models are nowadays used extensively to describe the physics of ultracold
quantum gases and Bose-Einstein condensates \cite{Pethick}. Due to the separation
of length scales in dilute gases for which the range of the collisional interactions
is typically much smaller than the distance between the particles as well as the overall
size of the atomic cloud the model of contact interacting atoms provides a valid
description of the structure and properties as well as dynamics of these many-body
systems \cite{Pethick,Pitaevskii}.
While many naturally occurring interactions involve power law potentials with
a constant exponent, the properties and dynamics of models with so-called superexponential
interactions have been explored very recently \cite{Schmelcher1,Schmelcher2,Schmelcher3,Schmelcher4}.
Rendering the exponent time-dependent one arrives at a periodically driven power-law oscillator \cite{Schmelcher1}.
Covering weak and strong confinement during a single driving period, the resulting classical phase
space comprises not only regular and chaotic bounded motion but exhibits also a tunable exponential
Fermi acceleration. Note that the fundamental mechanisms of exponential acceleration
and their applications have come into the focus of research in nonlinear dynamics
in the past ten years \cite{Turaev1,Shah1,Liebchen,Shah2,Turaev2,Turaev3,Batistic,Pereira}.
A major step forward in the direction of superexponential dynamics is provided by the
so-called superexponential self-interacting oscillator \cite{Schmelcher2}. The potential
of this oscillator takes on the unusual form $V = |q|^q$ where the exponent
depends on the spatial coordinate $q$ of the oscillator. The exponentially varying nonlinearity
leads to a crossover in the period of the oscillator from a linearly decreasing to a nonlinearly
increasing behaviour with increasing energy. This oscillator potential possesses
a hierarchy of (derivative) singularities at its transition point $q=0$ which are responsible for this crossover
and lead to a focusing of trajectories in phase space. The spectral and eigenstate properties
of the corresponding quantum superexponential oscillator \cite{Schmelcher3} do reflect this
transition equally: the ground state shows a metamorphosis of decentering, asymmetrical squeezing
and emergence of a tail. Signatures of the crossover can be seen in the excited states by analyzing
e.g. their central moments which show a transition from an exponentially decaying to an increasing
behaviour.
A major step forward on the route to superexponentially interacting many-body systems is represented
by the very recently explored two-body case \cite{Schmelcher4}. The latter represents a fundamental
building block for many-body systems and is therefore a key ingredient to the present work. The
underlying Hamiltonian contains the superexponential interaction potential $V = |q_2|^{q_1}$
which couples the degrees of freedom $q_1$ and $q_2$ in an exponential manner. The resulting potential
landscape exhibits two distinct regions: a region where motion takes place in a confining channel (CC) with
varying transversal anharmonicity and a region with asymptotically free motion. These regions are connected
via two saddle points allowing for a deconfinement transition between the confined and free motion.
In ref.\cite{Schmelcher4} the dynamics and in particular scattering functions have been analyzed in
depth for this peculiar interaction potential thereby demonstrating the impact of the dynamically
varying nonlinearity on the scattering properties.
On the basis of the understanding gained for the fundamental two-body system, it is now
a natural next step to investigate many-body superexponentially interacting systems
which we shall pursue here. We thereby focus on systems with a single exponent ($q_1$)
and many base ($q_i, i=2,...,N$) degrees of freedom with interaction terms of the form $\propto |q_i|^{q_1}$.
We provide a comprehensive study of the many-body scattering dynamics thereby analyzing
the mechanisms of the collisional dynamics in the CC with increasing energy.
Due to the presence of many transversal channel degrees of freedom
a plethora of energy transfer processes are enabled. As a consequence, the incoming
longitudinal $q_1$ scattering motion undergoes, in the low-energy regime,
a transition from a step-like to a smooth behaviour.
While two-body scattering at energies below the saddle points allows only for a monotonic
behaviour of the incoming and outgoing motion, we show that many-body processes
lead to an intricate combination of backscattering and recollision events. This includes a
highly oscillatory behaviour with multiple turning points emanating from
the saddle point region and reaching out into the CC. This oscillatory
and intermittent scattering motion exhibits strongly fluctuating amplitudes, a feature which is
absent in the case of two-body scattering.
Our analysis comprises the energy-dependent behaviour of individual trajectories as well as the
statistical behaviour of ensembles including an analysis via momentum-time and turning point maps.
This work is structured as follows. In section \ref{sec:hamiltonian} we introduce the
Hamiltonian and discuss the underlying interaction potential landscape as well as the
classification of the dynamics in terms of invariant subspaces. Section \ref{sec:dynamics1}
contains a detailed discussion of the individual many-body trajectories in the low-, intermediate and
high energy regime. Section \ref{sec:dynamics2} provides an analysis of the statistical
ensemble properties with a focus on the reflection time distribution. Section \ref{sec:sac} presents
our summary and conclusions including a brief outlook.
\section{The Superexponential Hamiltonian and Potential Landscape}
\label{sec:hamiltonian}
This section is dedicated to the introduction of the Hamiltonian and a discussion
of the landscape of its interaction potential. We will also provide
the invariant subspaces of the dynamics. Our superexponential Hamiltonian
takes on the following appearance
\begin{equation}
{\cal{H}} = {\cal{T}} + {\cal{V}} = \sum_{i=1}^{N} \frac{p_i^2}{2} + \sum_{k=2}^{N} |q_k|^{q_1}
\label{eq:hamiltonian1}
\end{equation}
where ($q_i,p_i, i=1,...,N$) are the canonically conjugate
coordinates and momenta of our 'effective particles or entities' respectively,
and in this sense we refer to the above Hamiltonian as a many-body Hamiltonian. Equally, the term
'interaction' is employed here to indicate that the individual potential terms $\propto |q_k|^{q_1}$
depend on the coordinates of two particles. Note that both the base degrees of
freedom (dof) ($q_k,k=2,...,N$) as well as the exponent dof $q_1$ possess a corresponding
kinetic energy and therefore evolve dynamically. The individual $N-1$ interaction terms $|q_k|^{q_1}$ share
a single exponent dof and therefore the interaction between the dof $q_k$ takes place indirectly
via the dof $q_1$. In other words the dof $q_1$ could be seen as a common dof shared by all
the base dof ($q_k,k=2,...,N$). The superexponential potential ${\cal{V}}= \sum_{k=2}^{N} |q_k|^{q_1}$
(SEP) mediates the interaction among the dof $q_k, k=1,...,N$.
Obviously, the above model Hamiltonian does not exhibit well-established symmetries
such as a translation invariance. It possesses however an exchange symmetry with respect to
the base dof $q_k,k=2,...,N$ since they all couple in the same manner to the exponent dof $q_1$. This will
allow us to conclude upon invariant dynamical subspaces (see below). We remark that
the Hamiltonian \ref{eq:hamiltonian1} is a specific choice out of many possible superexponential Hamiltonians
(see discussion in the conclusions section \ref{sec:sac}) which is motivated by the appearance
of only a single exponent dof which promises a more straightforward interpretation of the resulting
many-body dynamics.
In ref.\cite{Schmelcher4} the superexponentially interacting
two-body system containing a single interaction term has been explored and analyzed in detail.
To keep the presentation self-contained and to set the stage for the many-body case, we briefly summarize in the following the
main properties of the potential landscape for a single interaction term $|q_2|^{q_1}$. It shows
(see Figure \ref{fig1}) for $q_1 > 0$ (region I) a CC leading to a bounded motion w.r.t. the
coordinate $q_2$ and an unbounded motion for the dof $q_1$. The transversal confinement of this channel
illustrated by the intersection curves $V(q_1=\mathrm{const},q_2)$ (see inset of Figure \ref{fig1})
continuously changes with increasing values of $q_1$: the cusp for $q_1 < 1$ turns into a linear confinement
for $q_1 = 1$, a quadratic one for $q_1=2$ and finally into a steep wall anharmonic confinement for $q_1 \gg 2$.
For $q_1 \rightarrow \infty$ the channel confinement is that of a box with infinite walls.
The channel region I is connected via two saddle points at energy $E=1$ to regions II and III which
exhibit asymptotically ($q_1 \rightarrow -\infty, q_2 \rightarrow \pm \infty$) free motion.
Regions II and III are separated by a repulsive potential barrier with a (singular) maximum at $q_2=0$.
In region II both particles move in a correlated manner in the same direction ($p_1,p_2<0$)
and in region III they move in opposite directions ($p_1<0,p_2>0$). To conclude, while the appearance
of the SEP is very simple it shows an interesting geometrical structure. Let us now turn back
to the many-body problem.
\begin{figure}
\parbox{15cm}{\includegraphics[width=14cm,height=8cm]{fig1-pap.jpg}}
\caption{The potential energy landscape of a single interaction term
$V(q_1,q_2)=|q_2|^{q_1}$. The CC
(region I) as well as the two regions of asymptotically free motion, regions II and III, are indicated.
The inset shows intersections of the potential energy along the $q_2$ coordinate i.e.
$V(q_1=\mathrm{const},q_2)$ for $q_1=0.1,1,2,16$ corresponding to the curves from top to bottom.}
\label{fig1}
\end{figure}
The SEP ${\cal{V}}(q_1,q_2,...,q_N)=\sum_{k=2}^{N} |q_k|^{q_1}$
in eq.(\ref{eq:hamiltonian1}) possesses stationary points, i.e.
zero derivatives $\frac{\partial {\cal{V}}}{\partial q_i}=0, \forall i=1,...,N$,
at the positions $q_1=0,q_i=\pm 1, i=2,...,N$. The resulting Hessian possesses a zero determinant, but a
more detailed analysis shows that the extrema have unstable and stable directions, i.e. they are saddle
points. The energies of the extrema are $E={\cal{V}}(q_1=0,\{q_i=\pm 1 \} )=(N-1)$.
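The saddle character of these stationary points can be checked numerically. The following Python sketch (our own illustration, not part of the original analysis; the choice $N=4$ and the finite-difference step are arbitrary) evaluates the eigenvalues of the Hessian of ${\cal{V}}$ at $q_1=0,q_i=+1$; the vanishing eigenvalues imply the zero determinant, while the negative and positive ones correspond to the unstable and stable directions:
\begin{verbatim}
import numpy as np

def sep(x):
    # superexponential potential V(q_1,...,q_N) = sum_{k>=2} |q_k|^{q_1}
    q1, qk = x[0], x[1:]
    return np.sum(np.abs(qk) ** q1)

def num_hessian(f, x0, h=1.0e-4):
    # central finite-difference approximation of the Hessian
    n = len(x0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            xpp = x0.copy(); xpp[i] += h; xpp[j] += h
            xpm = x0.copy(); xpm[i] += h; xpm[j] -= h
            xmp = x0.copy(); xmp[i] -= h; xmp[j] += h
            xmm = x0.copy(); xmm[i] -= h; xmm[j] -= h
            H[i, j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4.0 * h * h)
    return H

N = 4                                        # small example system
x_star = np.array([0.0] + [1.0] * (N - 1))   # stationary point q_1=0, q_i=+1
evals = np.linalg.eigvalsh(num_hessian(sep, x_star))
print(np.round(evals, 5))   # one negative, one positive, remaining eigenvalues zero
\end{verbatim}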
Since the Hamiltonian equations of motion belonging to the Hamiltonian (\ref{eq:hamiltonian1})
possess a singularity for $q_1<0,q_i=0,i=2,...,N$, we introduce a regularization parameter
$\epsilon > 0$ for the SEP which now reads ${\cal{V}}_{reg}(q_1,q_2,...,q_N;\epsilon)=\sum_{k=2}^{N}
(\sqrt{q_k^2+\epsilon})^{q_1}$. This facilitates the numerical integration of the corresponding
Hamiltonian equations of motion which read
\begin{eqnarray}
{\dot{q}}_i = p_i \hspace*{1cm} i=1,...,N \label{eq:heom1}\\
{\dot{p}}_1 = - \sum_{k=2}^{N} \left(\sqrt{q_k^2+\epsilon} \right)^{q_1} \ln \left( \sqrt{q_k^2+\epsilon} \right)
\label{eq:heom2}\\
{\dot{p}}_i = - \left(\sqrt{q_i^2+\epsilon} \right)^{q_1-2} q_1 q_i \hspace*{1cm} i=2,...,N \label{eq:heom3}
\end{eqnarray}
Typical values chosen for the numerical simulations are $\epsilon = 10^{-8}$.
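As an illustration of how eqs.(\ref{eq:heom1}-\ref{eq:heom3}) can be integrated in practice, a minimal Python sketch is given below (our own, non-authoritative example; the SciPy integrator, the tolerances and the initial conditions, chosen such that the total energy lies well below the saddle point energy, are not taken from the original production code):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

EPS = 1.0e-8                      # regularization parameter

def rhs(t, y, N):
    # y = (q_1,...,q_N, p_1,...,p_N); regularized equations of motion
    q, p = y[:N], y[N:]
    r = np.sqrt(q[1:] ** 2 + EPS)                  # sqrt(q_k^2 + eps), k = 2..N
    dp = np.empty(N)
    dp[0] = -np.sum(r ** q[0] * np.log(r))         # equation for p_1
    dp[1:] = -(r ** (q[0] - 2.0)) * q[0] * q[1:]   # equations for p_i, i >= 2
    return np.concatenate([p, dp])

N = 10
q0 = np.array([30.0] + [0.0] * (N - 1))      # start in the outer channel region
p0 = np.array([-np.sqrt(2 * 0.1)] + [np.sqrt(2 * 0.02)] * (N - 1))  # E_k1=0.1, E_ki=0.02
sol = solve_ivp(rhs, (0.0, 400.0), np.concatenate([q0, p0]), args=(N,),
                rtol=1e-10, atol=1e-12, dense_output=True)
print("minimal value of q_1 along the trajectory:", sol.y[0].min())
\end{verbatim}
The kinetic and potential energies discussed below follow directly from the stored phase space coordinates, e.g. $E_{k1}(t)=p_1^2(t)/2$.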
The equation of motion for $p_1(t)$ depends symmetrically on all $q_k, k=2,...,N$ due to the above-mentioned
exchange symmetry. Note the appearance of the logarithm which will be of major importance
for the dynamics observed later on. The equation of motion of $p_i(t),i=2,...,N$
depends only on the coordinates $q_i$ and $q_1$, and these equations ($i=2,...,N$) are structurally identical
due to the exchange symmetry. This means that for equal initial conditions (ICs) of all $(q_i,p_i),i=2,...,N$
at $t=t_0=0$ the dynamics of all $q_i(t),i=2,...,N$ will be identical.
Let us elaborate on this in some more detail since it allows us to identify a hierarchy of invariant
subspaces that classify the dynamics. The exchange symmetry among the $N-1$ particles with respective coordinates
and momenta $(q_i,p_i),i=2,...,N$ can be either (i) completely broken, (ii) partially broken, or (iii) fully
maintained by the corresponding ICs. We therefore partition the complete phase space
of ICs into subspaces as follows. We divide the $(2N-2)$-dimensional total phase space ${\cal{P}}$
of the dof $q_i,i=2,...,N$
into dynamically invariant subspaces ${\cal{C}}_i$ of identical ICs which
lead consequently to an identical dynamics (trajectories).
Here the invariance refers to the exchange of (initial) phase space coordinates in the corresponding
subspace ${\cal{C}}_i$. These subspaces represent a classification of the dynamics.
More specifically, we define a series of positive integers $\{n_i\}= n_1,....,n_k$ with $\sum_{i=1}^{k}n_i=(N-1)$
where $n_i$ is the maximal dimension of the subspace ${\cal{C}}_i$ with identical initial phase space coordinates.
A complete set of ICs (and resulting trajectories) is then given by the
decomposition $\cup_{i=1}^{k} {\cal{C}}_i = {\cal{P}}$. This set involves, per definition, $k$ different
classes of identical trajectories, the $i$-th class containing $n_i$ identical phase space coordinates.
A remark concerning the resulting combinatorics is in order.
For a single subset of $l$ identical ICs only, there are $\binom{N-1}{l}$
possible configurations or subspaces with $l \le (N-1)$. For $r$ subsets, each one with $k$ identical ICs,
the number of possibilities is $\sum_{i=0}^{r-1} \binom{N-ik-1}{k}$. This generalizes to
the case of an arbitrary number of subspaces of properly chosen dimensions with identical ICs.
\section{Dynamics: Individual Many-Body Trajectories}
\label{sec:dynamics1}
This section is devoted to the exploration of the many-body dynamics by analyzing individual
trajectories which illustrate the relevant collisional processes. We note that these trajectories
are representative and show the typical observed behaviour.
The general procedure is as follows. We will simulate the dynamics in the CC (region I) for
incoming ($p_1 <0$) trajectories starting at $t=0$ in the outer part of the channel at $q_1=30$.
At this value of $q_1$ the transverse profile of the channel, represented by the intersections of the individual
interaction potential terms ${\cal{V}}(q_1=\mathrm{const},q_2)$, is already very similar to a box confinement.
We will then study the dynamics with increasing total energy and for different subspaces of identical
ICs.
\subsection{Low energy scattering} \label{dyn:lse}
Let us start by assuming that all ICs of the coordinates and momenta $(q_i,p_i),i=2,...,N$ are identical.
Since then (see discussion in section \ref{sec:hamiltonian}) all dynamical evolutions $q_i(t),p_i(t)$ are
identical, this case is similar to the case of the corresponding superexponentially interacting two-body
system \cite{Schmelcher4}. Let us summarize the main features and characteristics of the
dynamics for the two-body case (where the total potential reads ${\cal{V}}=|q_2|^{q_1}$)
for reasons of comparison to the actual many-body case. Since the exponent dof $q_1$
provides the confinement for the dof $q_2$, the time evolution of $q_2(t)$ shows bounded oscillations
in the channel (see Figure \ref{fig3}(a) for a specific case of the many-body system).
For large values of $q_1$ this confinement is strongly anharmonic and close to a box-like
confinement: as a consequence the channel is approximately flat for $-1 \lesssim q_2 \lesssim +1$ and energy exchange
processes (between particles but also from kinetic to potential energy for a single dof) happen only close to the
turning points of the $q_2$ oscillations. In contrast, the $q_1$-motion is not oscillatory and is unbounded.
This can be argued as follows. Inspecting eq.(\ref{eq:heom2}) and specializing it to the case
of a single base dof $q_2$ one realizes that the r.h.s. is positive ($\epsilon = 0$) as long as the logarithm
is negative, which implies $q_2 < 1$. A necessary condition for ${\dot{p_1}} < 0$ to happen
is then given by the occurrence of $q_2 > 1$, which implies that the total energy obeys $E > 1$. The
latter is however the energy of the saddle points in the two-body problem. To conclude, this
means that for energies below the saddle point energies the two-body scattering in the
CC involves a time evolution $q_1(t)$ with exclusively ${\ddot{q}}_1>0$, i.e.
the incoming $q_1(t)$ trajectory possesses a single turning point! As a consequence,
$q_1(t)$ cannot perform an oscillatory bounded motion but describes simply a direct in-out
scattering process finally escaping asymptotically to $q_1 \rightarrow \infty$.
In this sense multiple scattering processes are not encountered and scattering is not
chaotic, i.e. there is not even a transient dynamics with nonzero Lyapunov exponents.
This situation changes when considering the many-body case. Here the saddle point
energy is given by $E_s=(N-1)$ and the dynamics of $(q_1,p_1)$ is determined (see
eqs.(\ref{eq:heom1},\ref{eq:heom2})) by the sum over all forces involving the dof $q_i,i=2,...,N$.
This sum has to become overall positive (as a combination of the appearing logarithms and their
'above threshold' $q_k >1, k \in \{2,...,N\}$ arguments) in order to enable ${\dot{p_1}} < 0$
and to provide multiple turning points as well as an oscillatory dynamics: it is an inherent many-body process.
Figure \ref{fig2}(a) shows the kinetic energies $E_{k1}=\frac{p_1^2}{2}$ and $E_{ki}=\frac{p_i^2}{2}$ as well as the
corresponding potential energies $E_{pi}=|q_i|^{q_1}$ (see inset) as a function of time for the
scattering process of a system of $N=10$ particles with the total energy $E=0.28$.
Here all ICs of the base dof are identical, i.e. the particle exchange symmetry among the dof $q_i,i=2,...,N$
is fully maintained and their dynamics is the same. Therefore, we expect that the
above described properties of the two-body scattering dynamics should also appear here.
\begin{figure}
\hspace*{-6cm} \parbox{12cm}{\includegraphics[width=18cm,height=12cm]{fig2-pap.jpg}}
\caption{The kinetic energies $E_{k1}$ (blue solid curve),
$E_{ki}$ (dotted, dashed and dot-dashed curves), and
potential energies $E_{pi}$ (see inset), belonging to the dof $q_1,q_i$, respectively,
as a function of time $t$ for individual trajectories. Initial conditions are $q_1=30,q_i=0,i=2,...,N$.
(a) Total energy $E=0.28$ and initial conditions $E_{k1}=0.1,E_{ki}=0.02$. Note that all curves
$E_{ki},i=2,...,N$ are identical due to identical ICs. Similar statements hold for (b,c).
(b) Total energy $E=0.29$ and initial conditions $E_{k1}=0.1$, $E_{ki}=0.01, i=2-5$, $E_{kj}=0.03,j=6-10$.
(c) Total energy $E=0.37$ and initial conditions $E_{k1}=0.1$, $E_{ki}=0.01, i=2-4$, $E_{kj}=0.03,j=5-7$,
$E_{kl}=0.05, l=8-10$, (d) Total energy $E=0.95$ and initial conditions $E_{k1}=0.5$,
$E_{ki}=0.01 \cdot (i-1),i=2-10$. All simulations involve $N=10$ particles.}
\label{fig2}
\end{figure}
Indeed, the initial kinetic energy $E_{k1}(t=0)=0.1$ belonging to the subsequent time evolution $(q_1(t),p_1(t))$
decreases monotonically to zero and subsequently increases in the course of the scattering
process (see Figure \ref{fig2}(a)).
$E_{k1}(t)$ exhibits a sequence of plateaus which correspond (see discussion above) to the
traversal of $q_i(t)$ of the bottom of the CC, while the phases of rapid changes of $E_{k1}(t)$ between
two plateaus are caused by the dynamics in the vicinity of the potential walls. These facts
are consistent with the behaviour of the potential energy $E_{pi}$ (see inset of Figure \ref{fig2}(a)) which exhibits
pronounced peaks during these collisions with the potential walls. Due to energy conservation
the kinetic energies $E_{ki}(t)$ then show corresponding dips.
As a next step let us break the (total) exchange symmetry among the base dof by first inspecting the case of
two sets of identical ICs. Figure \ref{fig2}(b) shows the kinetic $E_{k1},E_{ki}$ and the potential energies
$E_{pi}$ for a total energy $E=0.29$ and ICs $(E_{ki}=0.01,i=2-5);(E_{kj}=0.03, j=6-10)$. Since we have now
two different sets of identical dynamics, namely $(q_i(t),i=2-5);(q_j(t),j=6-10)$, a partial exchange symmetry
remains. The corresponding time evolution $E_{k1}(t)$ carries now the signatures of two different transversal motions
$(q_i(t),q_j(t))$: the overall decrease and subsequent increase due to the collision process exhibits now
a 'superposition' of plateau-like structures. Correspondingly, there are two different kinds of time evolution
of kinetic energies $E_{ki}(t),E_{kj}(t)$ which show sharp dips at the time instants where the kinetic
energy $E_{k1}(t)$ varies rapidly in between two plateaus. The associated potential energies $E_{pi}(t),E_{pj}(t)$
(see inset of Figure \ref{fig2}(b)) show pronounced peaks at the time instants of collisions with
the potential walls which correspond to the time instants of the previously mentioned dips of $E_{ki}(t),E_{kj}(t)$.
Figure \ref{fig3}(a) shows the channel dynamics for the base and exponent dof $q_1,q_i,i=2,...,N$
and Figure \ref{fig2}(c) the corresponding kinetic and potential energies for a total energy $E=0.37$
for the case of three sets of identical ICs. The dynamics of $E_{k1}(t)$ shows a larger number of
plateaus which, due to their partial overlap, gradually become washed-out.
This becomes even more pronounced for the case of no identical ICs and a total energy $E=0.95$ shown
in Figure \ref{fig2}(d): here the time evolution $E_{k1}(t)$ becomes almost smoothly decreasing
and subsequently increasing, i.e. without any pronounced plateau-like structures. In Figures \ref{fig2}(c,d)
the time evolutions of the kinetic energies $E_{ki}(t)$ show an increasing number of dips and
in case of the potential energies $E_{pi}(t)$ an increasing number of peak structures (see corresponding insets).
In Figure \ref{fig2}(d) there exists already a rather dense accumulation of peaks ($E_{pi}(t)$, see
inset) and dips ($E_{ki}(t)$) due to the
many collisions of the particles with dof $q_i(t)$ with the walls of the interaction potential ${\cal{V}}$.
\begin{figure}
\parbox{8cm}{\includegraphics[width=7.6cm,height=6.6cm]{fig3a-pap.jpg}}
\parbox{8cm}{\includegraphics[width=7.3cm,height=6.3cm]{fig3b-pap.jpg}}
\parbox{8cm}{\includegraphics[width=7.0cm,height=6.3cm]{fig3c-pap.jpg}}
\caption{(a) A $(q_1,q_i)$ graph of scattering trajectories in the CC with initial
conditions $q_1=30,q_i=0,i=2,...,N$ and $E_{k1}=0.1,(E_{ki}=0.01,i=2-4),(E_{kj}=0.03,j=5-7),(E_{kl}=0.05,l=8-10)$
and for a total energy $E=0.37$. Clearly visible are three types of transversal $q_i(t)$ oscillations and
the reflection process at the minimal value of $q_1$.
(b) Time evolution $q_1(t),(q_i(t),i=2,...,N)$ of a scattering trajectory with total energy $E=10.6$ via the
CC. Initial conditions for the coordinates are the same as in (a), and $E_{ki}=4.5,1,0.4,0.7,0.9,1.1,
0.5,0.8,0.6,0.1$ corresponding to $i=1,...,10$. An oscillatory behaviour in the saddle point region is
clearly visible. (c) Same as in (b) concerning the parameters and ICs. Shown are the kinetic energies
$E_{k1}(t),E_{ki}(t)$ for a few selected particles to get a representative view. Multiple oscillations
and inelastic processes are evident.}
\label{fig3}
\end{figure}
\subsection{Intermediate energy scattering} \label{dyn:ise}
We remind the reader of the fact that the saddle point threshold energy
is $E_{s}=(N-1)$ which amounts to $E_s=9$ for our prototypical
10 particle system. As discussed above (see section \ref{dyn:lse}) the two-body case as well as the
case of identical ICs for all base dof (in the many particle case)
show only a single turning point, i.e. a direct in-out scattering
behaviour, for the exponent dof $q_1(t)$ for energies $E<E_s$.
This statement holds also for the low energy scattering $E \ll E_s$ discussed in the previous subsection
where a transition of the dynamics $E_{k1}(t)$ from plateau-dominated to a smooth behaviour has been
observed with increasing number of different ICs.
Let us now increase the total energy available in the scattering process for non-identical
initial conditions. A necessary condition for
further turning points to occur in the dynamics of $q_1(t)$ is (see corresponding discussion in
section \ref{dyn:lse}) the positivity of the logarithmic terms in the equation of motion (\ref{eq:heom2})
which implies that $q_i > 1$ has to occur for some particles such that the
overall sum becomes positive. Consequently certain interaction potential contributions obey $E_{pi}>1$.
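This sign condition can be made explicit with a small sketch (our own; the sample values of $q_1$ and of the base coordinates are arbitrary) that evaluates the r.h.s. of eq.(\ref{eq:heom2}):
\begin{verbatim}
import numpy as np

def dp1_dt(q1, q_base, eps=1.0e-8):
    # r.h.s. of eq. (heom2): -sum_k (sqrt(q_k^2+eps))^{q1} ln(sqrt(q_k^2+eps))
    r = np.sqrt(np.asarray(q_base, dtype=float) ** 2 + eps)
    return -np.sum(r ** q1 * np.log(r))

print(dp1_dt(0.5, [0.3, 0.6, 0.9]))   # > 0: all dof below threshold, pure deceleration
print(dp1_dt(0.5, [0.3, 0.6, 2.5]))   # < 0: one dof above threshold flips the sign
\end{verbatim}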
Figure \ref{fig3}(b) shows for an energy $E=10.6$ the dynamics $q_1(t),(q_i(t), i=2,...,N)$
of an example trajectory with no identical ICs. Here it is clearly visible that the dof $q_1(t)$ enters in the
course of the scattering process from the CC to the saddle point region and
performs thereafter an oscillation followed by an escape back into the CC.
The dof $q_i(t),i=2,...,N$ show an increase of the amplitude of oscillations during the
dynamics in the saddle point region. Figure \ref{fig3}(c) shows the kinetic energy $E_{k1}(t)$
and exemplarily two of the kinetic energies $E_{ki}(t), i \ne 1$ for the same trajectory.
In accordance with the oscillation of $q_1(t)$ in the saddle point region, $E_{k1}(t)$ shows
an oscillation with three zeros. Inspecting the incoming and outgoing $E_{k1}(t),E_{ki}(t), i \ne 1$,
the inelasticity of this scattering event for intermediate energies becomes visible:
$E_{k1}$ and one of the $E_{ki}$ lose energy in the course of the scattering whereas
the other $E_{ki}$ component gains energy. While this example trajectory possesses an
energy $E>E_s$, the fact that an oscillatory dynamics of $q_1(t)$ now becomes possible
is by no means restricted to an energy above the saddle point energy. This is impressively
demonstrated in Figure \ref{fig4}(a,b,c).
\begin{figure}
\parbox{8cm}{\includegraphics[width=7.6cm,height=6.6cm]{fig4a-pap.jpg}}
\parbox{8cm}{\includegraphics[width=8.2cm,height=6.5cm]{fig4b-pap.jpg}}
\parbox{8cm}{\includegraphics[width=8.0cm,height=6.3cm]{fig4c-pap.jpg} \vspace*{-0.3cm}}
\parbox{8cm}{\includegraphics[width=7.6cm,height=6.6cm]{fig4d-pap.jpg}}
\caption{(a) Time evolution $q_1(t)$ of a scattering trajectory closely approaching the saddle point
region and showing oscillations of largely different amplitudes. Initial conditions are
$q_1=30,q_i=0,i=2,...,N$ and $p_{i}=-0.53,0.65,1.08,0.58,1.92,2.15,0.35,2.57,0.53,0.06$ and the total energy is $E=8.8$.
(b) and (c) show the specific kinetic $E_{k1}(t)$ and potential $E_{p4}(t)$ energies.
(d) The time $T_{at}$ spent above the threshold $q_2=1 \leftrightarrow E=1$ within the single
particle dynamics as a function of $q_1$. The curves from top to bottom correspond to the
energies $E=2.0,...,1.1$ in steps of $0.1$.}
\label{fig4}
\end{figure}
Figure \ref{fig4}(a) shows the time evolution of $q_1(t)$ of a scattering trajectory emerging
from $q_1(t=0)=30$ and traveling towards the saddle point region. Reaching the latter we observe
a series of oscillations until, at time $t\approx 1700$, backscattering into the CC
takes place with no further turning points occurring. The many oscillations taking place possess
very different amplitudes. Indeed, the first oscillation has its turning point at $q_1 \approx 18$
followed by a large number of oscillations with a significantly smaller amplitude. At $t \approx 600$
a huge amplitude oscillation with a turning point deep inside the CC is observed. Subsequently a series of
small amplitude oscillations occurs until the final escape into the CC happens. We emphasize that
such an intermittent behaviour involving backscattering and recollision events
is completely absent for the corresponding two-body system but is an inherent feature
of the many-body case. Although the phase space is high-dimensional, we could show for selected examples that
this highly oscillatory behaviour traces unstable periodic orbits which occur in the saddle point region.
This means that once the trajectory gets close to one of those orbits it stays temporarily in its vicinity,
i.e. it temporarily shadows the unstable periodic motion.
A few remarks are in order. As emphasized above the corresponding two-body system shows only
simple backscattering into the CC. Scattering trajectories of the many-body system can, however, show backscattering
into the CC followed by recollision events. Once the system recollides it dwells in the saddle
point regime and finally gets backscattered into the CC. Of course, since oscillations take place
also for small amplitudes, this is only a crude picture of what actually happens. According to the
analysis in section \ref{dyn:lse} a necessary condition for the occurrence of a recollision event
is the surpassing of the threshold value $q_i=1$ for some dof $i$. A closer inspection reveals
that there are generically several transversal channel dof from the set $q_i, i=2,...,N$
involved in this process: it is the sum on the r.h.s. of eq.(\ref{eq:heom2}) which has to change sign in order
to introduce the possibility of a recollision event. Indeed, the surpassing of the threshold
value leads to a deceleration
and finally a turning point in the dynamical evolution. Note that this process of repeated backscattering
and recollision does not require fine tuning but happens generically in the regime of
intermediate energies below (and above, see next section \ref{dyn:hes}) the saddle point threshold
energy $E_s$. We remind the reader of the fact that this oscillatory behaviour is a pure dynamical
interaction effect and there are no stable equilibria of the potential landscape that would be responsible for
these processes.
To get a simple measure for the probability that our dynamical system resides in the above-threshold
regime $q_i > 1$, which enables a pronounced deceleration dynamics and finally leads to an oscillatory behaviour,
we take the following approach. We focus on the case of a single particle in the one-dimensional potential
$V(q_2;q_1)=|q_2|^{q_1}$ with a constant value for the parameter $q_1 > 0$. Assuming $q_2 > 1$ implies for the
energy $E > 1$. The time which the particle spends in this regime $q_2 > 1$ in the course of a
positive half-period of its oscillation reads as follows
\begin{equation}
T_{at} = \sqrt{2} \int_{1}^{q_t} \frac{dq}{\sqrt{E-|q|^{q_1}}}
\end{equation}
where $q_t=E^{\frac{1}{q_1}}$ is the outer turning point. Figure \ref{fig4}(d) shows $T_{at}$ as a function of $q_1$ which represents the
power of the potential $V(q_2;q_1)$ for varying energy $E=1.1-2.0$ in steps of $0.1$. Obviously, $T_{at}$
is very small for large $q_1$ due to the box-like confinement and the steep walls which lead to a very
short time spent in the course of the dynamics in the region $q_2 > 1$. $T_{at}$ increases strongly
with decreasing value of $q_1$: this increase is neither a power law nor an exponential one but of
superexponential character. It reflects the flattening of the increase of the potential $V$ for
$q_2 > 1$ in particular for $q_1 < 1$. With increasing energy the dependence of $T_{at}$ on $q_1$
becomes more pronounced. This analysis provides an intuitive explanation of the observation
that the oscillations of the trajectories of the many-body system, i.e. the backscattering
and recollision events, emanate from the saddle point region for which $q_1 < 1$ and where
the particles possess a large dwell time in the 'reactive zone' $q_i>1$.
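The quantity $T_{at}$ can be evaluated by a straightforward numerical quadrature; the following Python sketch (our own) uses a substitution that removes the integrable square-root singularity at the turning point $q_t$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def T_at(E, q1):
    # T_at = sqrt(2) * int_1^{q_t} dq / sqrt(E - q^{q1}) with q_t = E^{1/q1};
    # substituting q = q_t - (q_t - 1) u^2 removes the singularity at q = q_t
    qt = E ** (1.0 / q1)
    def integrand(u):
        q = qt - (qt - 1.0) * u ** 2
        return 2.0 * (qt - 1.0) * u / np.sqrt(qt ** q1 - q ** q1)   # E = qt**q1
    val, _ = quad(integrand, 0.0, 1.0)
    return np.sqrt(2.0) * val

for q1 in (16.0, 2.0, 1.0, 0.1):
    print(q1, T_at(1.5, q1))   # T_at grows strongly as q1 decreases (cf. Figure 4(d))
\end{verbatim}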
Let us now return to our superexponential many-body system.
Figure \ref{fig4}(b) presents the kinetic energy $E_{k1}(t)$ belonging to this heavily
oscillating scattering trajectory. We observe that small amplitude oscillations involve
high frequency energy exchange processes whereas large amplitude excursions into the CC
involve low frequency oscillations of the kinetic energy. Since small amplitude oscillations (see $q_1(t)$
in Figure \ref{fig4}(a)) are interspersed between large amplitude oscillations, we correspondingly
observe in Figure \ref{fig4}(b) bursts of high frequency kinetic energy oscillations interspersed between
intervals of smooth variations. Correspondingly a representative of the potential energy $E_{p4}$ is
shown in Figure \ref{fig4}(c) which peaks whenever a collision with the confining walls takes place.
\subsection{High energy scattering} \label{dyn:hes}
We now turn to a discussion of the dynamics for energies above the saddle point threshold $E_s=(N-1)$.
Due to the structure of our Hamiltonian (\ref{eq:hamiltonian1}) which possesses many base dof but only
a single exponential dof $q_1$, the dynamics $q_1(t)$ determines whether backscattering into the CC
or transmission to the regions II and III of asymptotically free motion happens. Indeed, either all
particles are backscattered or transmitted: a splitting into partial
backscattering and partial transmission is not possible.
\begin{figure}
\parbox{8cm}{\includegraphics[width=7.6cm,height=6.6cm]{fig5a-pap.jpg}}
\parbox{8cm}{\includegraphics[width=8.2cm,height=6.5cm]{fig5b-pap.jpg}}
\parbox{8cm}{\includegraphics[width=8.0cm,height=6.3cm]{fig5c-pap.jpg}}
\caption{Transmitting trajectories in the $(q_1,q_i)$-plane above the saddle point energy $E_s=9$ for $N=10$
particles. ICs are $q_1=30,q_i=0,i=2,...,N$ and (a) $p_1=-2.45,(p_i=1.411;i=2-5),(p_j=1.55;j=6-10)$
for a total energy $E=13$ (b) $p_i=-2.45,1.18,1.41,1.27,1.18,1.61,1.48,1.61,1.48,1.00; i=1-10$
for a total energy $E=9.1$ as well as (c) $p_i=-2.82,1.45,1.48,1.52,1.55,1.58,1.61,1.64,1.67,1.70; i=1-10$
for a total energy $E=15.2$. From (a) to (c) the distribution of the particles onto the regions
II and III of the potential landscape varies significantly.}
\label{fig5}
\end{figure}
Figure \ref{fig5}(a) shows a many-body trajectory in the $(q_1,q_i)$-planes for a total energy $E=13$, i.e.
well above the saddle point energy $E_s=9$, and for two sets of identical ICs. Consequently two
scattering processes are observed in Figure \ref{fig5}(a): one set of identical ICs is scattered
to region II and the other set to region III (see Figure \ref{fig1}). Figure \ref{fig5}(b) shows
a trajectory in the $(q_1,q_i)$-planes for an energy $E=9.1$ slightly above the saddle point energy $E_s$ and for
non-identical ICs except three sets of two identical ICs. In this case four scattering paths
go to region II whereas two enter region III, while overall transmission takes place.
Finally Figure \ref{fig5}(c) shows a trajectory with energy $E=15.2$ with no identical ICs
and as a result nine distinct paths can be observed. Eight of them go to region II and one to region III.
The above clearly demonstrates that particles can be arbitrarily distributed, after passing the saddle
point region, onto the regions II and III of asymptotic freedom. Dof with identical ICs, of course, show
identical paths.
\section{Dynamics: Statistical Properties}
\label{sec:dynamics2}
Let us now explore the statistical properties, i.e. the behaviour of an ensemble of trajectories
scattering in the CC of the superexponential potential landscape. Initial conditions are $q_1=30, q_i=0, i=2,...,N$,
as in the case of the individual trajectories analyzed in the previous section, and we choose the
kinetic energies $E_{ki},i=2,...,N$ randomly from a uniform distribution with the constraint to match the energy shell.
First we analyze the case of identical ICs for the momenta $p_{2},...,p_{N}$,
followed by the case of two sets of identical ICs and finally
the case of all ICs being different. This way the particle exchange symmetry of the Hamiltonian
is broken to an increasing extent by the chosen ICs. The main observables of our analysis are
the so-called reflection time distribution (RTD) and the momentum-time map (MTM). The reflection time
is the time interval a scattering trajectory needs to travel back to its starting-point in the CC
at $q_1=30$. The RTD then represents a histogram of the distribution of these reflection times
with varying initial conditions from the chosen ensemble. The MTM shows the intricate connection
between the initial momentum $p_1$ and the reflection time for corresponding ensembles.
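A schematic Python sketch of how such an ensemble and the two observables can be generated is given below (our own construction for illustration only; ensemble size, integrator settings and the event handling are chosen ad hoc and do not reproduce the production runs underlying the figures):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

EPS, N, Q1_START = 1.0e-8, 10, 30.0

def rhs(t, y):
    q, p = y[:N], y[N:]
    r = np.sqrt(q[1:] ** 2 + EPS)
    dp = np.concatenate(([-np.sum(r ** q[0] * np.log(r))],
                         -(r ** (q[0] - 2.0)) * q[0] * q[1:]))
    return np.concatenate([p, dp])

def returned(t, y):                 # back at the starting point q_1 = 30
    return y[0] - Q1_START
returned.terminal, returned.direction = True, 1.0    # outgoing crossings only

def reflection_event(E, rng):
    # random kinetic energies on the energy shell (potential energy ~ 0 at q_1 = 30)
    w = rng.random(N)
    Ek = E * w / w.sum()
    p0 = np.concatenate(([-np.sqrt(2 * Ek[0])], np.sqrt(2 * Ek[1:])))
    y0 = np.concatenate(([Q1_START], np.zeros(N - 1), p0))
    sol = solve_ivp(rhs, (0.0, 500.0), y0, events=returned, rtol=1e-9, atol=1e-11)
    t_ref = sol.t_events[0][0] if sol.t_events[0].size else np.nan
    return t_ref, p0[0]

rng = np.random.default_rng(1)
data = np.array([reflection_event(1.0, rng) for _ in range(200)])   # small toy ensemble
rtd, edges = np.histogram(data[:, 0][np.isfinite(data[:, 0])], bins=40)   # RTD
mtm = data[:, ::-1]                  # pairs (initial p_1, reflection time): the MTM
\end{verbatim}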
\begin{figure}
\parbox{15cm}{\includegraphics[width=15cm,height=6.5cm]{fig6-pap.jpg}}
\caption{Reflection time distribution for scattering in the CC (region I).
ICs are $q_1(t=0)=30,q_i(t=0)=0$. All further ICs for $p_i,i=2,...,N$ are identical. (a) Parameters
are $E=1,N=10$. The ensemble consists of $4 \cdot 10^{5}$ trajectories with randomly chosen kinetic energies.
Inset: The corresponding momentum-time map which provides the initial momentum $p_1$
versus the reflection time. (b) $E=8.8$ and the ensemble consists of $10^5$ trajectories with randomly chosen
kinetic energies. Inset: The corresponding momentum-time map.}
\label{fig6}
\end{figure}
\subsection{Ensemble properties: Identical initial conditions}
\label{sec:iic}
As discussed in section \ref{dyn:lse} the case of identical ICs w.r.t. the momenta $p_i,i=2,...,N$
for the many-body scattering dynamics
is reminiscent of the corresponding behaviour of the two-body superexponential scattering dynamics
as discussed in detail in ref.\cite{Schmelcher4}. Nevertheless, for reasons of comparison to the
generic symmetry-broken case of non-identical ICs we summarize here the main characteristics of this
case. Figure \ref{fig6}(a,b) show the RTD and MTM for a low energy $E=1$ (a) and an energy
$E=8.8$ (b) close to the saddle point energy $E_s=9$. The most striking observation in Figure \ref{fig6}(a)
is the appearance of two plateaus. For the first plateau given by the range $0<t\lesssim 42.5$ the typical values
of the RTD are by several orders of magnitude smaller as compared to the corresponding values in the range
$42.5 < t < 87.5$ of the second plateau. Finally a prominent peak occurs at $t \approx 87.5$.
The second plateau exhibits a broad valley towards this dominant peak.
The origin of the above-described features of the RTD can be understood by inspecting the
corresponding MTM which is shown in the inset of Figure \ref{fig6}(a). The appearance of the
MTM, i.e. whether it is e.g. a (single-valued) curve or a spread-out point pattern, is not determined a priori.
The inset of Figure \ref{fig6}(a) shows that the MTM for the present case is a well-defined curve.
For reflection times $0 < t \lesssim 42.5$, this curve is single-valued whereas for $42.5 < t < 87.5$
it is double-valued, i.e. there appear two momentum branches of the MTM.
These two regimes correspond to the first and the second plateau of the RTD (see main figure \ref{fig6}(a)).
The time instant of the appearance of the second branch in the MTM with increasing reflection time
is the time of the appearance of trajectories that travel to the origin of the SEP in the saddle point
region and back. The lower branch for strongly negative values of the momentum $p_2$ provides the
dominant contribution to the RTD for $t > 42.5$, with much larger values as compared to the contribution
of the upper branch for $t< 42.5$. The prominent peak at $t \approx 87.5$ can be understood
by the observation that the MTM possesses at this maximal reflection time a vertical tangent: the integrated
contribution to the RTD is therefore particularly large. This explains the overall appearance of the RTD.
For more details we refer the reader to ref.\cite{Schmelcher4}.
Figure \ref{fig6}(b) shows the RTD and MTM (see inset) for an energy $E=8.8$ close to, but still below,
the saddle point energy. For $0 < t < 13.6$ the RTD is strongly suppressed.
It shows for $t \gtrsim 13.6$ a series of peaks followed by a smooth decay up to $t \approx 63$.
These peaks stem from the small scale oscillations present in the MTM (see inset) near the onset
of its second branch.
\begin{figure}
\parbox{8cm}{\includegraphics[width=7.6cm,height=6.6cm]{fig7a-pap.jpg}}
\parbox{8cm}{\includegraphics[width=8.2cm,height=6.8cm]{fig7b-pap.jpg}}
\caption{Reflection time distribution for scattering in the CC (region I).
ICs are $q_1(t=0)=30,q_i(t=0)=0$. All further ICs of the dof $q_i,i=2,...,N$ belong to two classes of
identical ICs. (a) Parameters are $E=1,N=10$.
The ensemble consists of $10^{5}$ trajectories with randomly chosen kinetic energies.
Inset: The corresponding momentum-time map which provides the initial momentum $p_1$
versus the reflection time. (b) Same as (a) but for $E=6$. Inset: The corresponding momentum-time map.}
\label{fig7}
\end{figure}
\subsection{Ensemble properties: Two classes of initial conditions}
\label{sec:tsoic}
Let us now analyze the RTD for a random ensemble of trajectories that possess
two sets of identical ICs for $p_i,i=2,...,N$, i.e. the particle exchange symmetry of the Hamiltonian
is partially broken. The ICs of the coordinates of these trajectories
obey $q_1=30,q_i=0,i=2,...,N$ as in section \ref{sec:iic}. Figure \ref{fig7}(a) shows the RTD for an energy $E=1$.
Again two plateaus can be observed: the first one for $0<t \lesssim 43$ with a very low probability
and a second plateau for $43 \lesssim t < 104$. In contrast to the case of all identical ICs,
the increase from the first to the second plateau as well as the decrease
following the main peak are much smoother. The second plateau is essentially flat and possesses no undulation
(compare to Figure \ref{fig6}(a)). These changes can be traced back to the corresponding changes in the MTM
which is shown as an inset in Figure \ref{fig7}(a). We remind the reader that the MTM for all identical ICs concerning
$p_i,i=2,...,N$ represented a curve with two branches (see inset of Figure \ref{fig6}(a)).
The present MTM shows a similar overall structure but the branches possess now a finite width which
increases with increasing reflection time. Again, the appearance of the second branch is responsible for
the onset of the second plateau, but now the continuous increase of the widths of the branches leads to the
observed smoothened behaviour of the RTD. Equally the smooth decay following the main peak of the RTD at $t \approx 88$
is due to the substantial extension of the MTM following the contact of the
two distinct branches, i.e. for reflection times $t > 88$.
Figure \ref{fig7}(b) shows the RTD and MTM (see inset) for a significantly larger energy $E=6$.
Compared to the case $E=1$ (Figure \ref{fig7}(a)) a major reshaping of the RTD has taken place.
The two regions of reflection times with largely different probabilities (plateaus) are still present,
but the second plateau has become a highly asymmetric, broad and dominant peak with a maximum
at $t \approx 25$. The peak at $t \approx 60$ where the two branches of the MTM fuse (see inset of Figure \ref{fig7}(b))
has decreased significantly. The features of the RTD can again be interpreted in terms of the significantly
changed shape of the MTM: the onset of the second branch possesses a very steep slope and this branch exhibits
for increasing reflection times $t \gtrsim 25$ a series of 'spread transversal oscillations'. This adds up to the broad
asymmetric peak of the RTD.
\subsection{Ensemble properties: Mutually different initial conditions}
\label{sec:adic}
Let us now address the statistics of an ensemble for which all ICs of $p_i,i=2,...,N$ are different, which
corresponds to the case of a completely broken particle exchange symmetry of the Hamiltonian.
Figure \ref{fig8}(a) shows the RTD and in the inset the corresponding MTM for an energy $E=1$.
The plateau-like structure observed in sections \ref{sec:iic} and \ref{sec:tsoic} for the unbroken
and partially exchange symmetry-broken cases, respectively, is now absent and is replaced by a single
strongly asymmetric peak centered at $t \approx 90$.
With increasing reflection times the RTD shows an accelerated increase culminating in the one central
peak while decreasing rapidly thereafter. The underlying MTM (see inset of Figure \ref{fig8}(a))
shows the typical boomerang-like structure with two broadened branches. The first branch is widening
systematically from its start at $t=0$ which is responsible for the substantial increase of the RTD for
low reflection times.
\begin{figure}
\parbox{16cm}{\includegraphics[width=16cm,height=7cm]{fig8-pap.jpg}}
\caption{Reflection time distribution for scattering in the CC (region I).
ICs are $q_1(t=0)=30,q_i(t=0)=0$. All further ICs (kinetic energies)
of the dof $q_i,i=2,...,N$ are different from each other. (a) Parameters are $E=1,N=10$.
The ensemble consists of $10^{5}$ trajectories with randomly chosen kinetic energies.
Inset: The momentum-time map. (b) Same as (a) but for $E=6$. Inset: The momentum-time map.}
\label{fig8}
\end{figure}
\begin{figure}
\parbox{18cm}{\includegraphics[width=17cm,height=11cm]{fig9-pap.jpg}}
\caption{Reflection time versus number of turning points of the $q_1(t)$-motion for
$E=1,6,8.8$ in (a,b,c), respectively, for an ensemble of $10^5$ trajectories. ICs
as in Figure \ref{fig8}.}
\label{fig9}
\end{figure}
Figure \ref{fig8}(b) shows the RTD and MTM for an energy $E=6$. The main differences
compared to the case $E=1$ are the reshaping of the asymmetric peak and the emergence of a very
dilute tail for large reflection times. This is reflected in the strongly distorted MTM shown
in the inset of Figure \ref{fig8}(b). The steep rise of the peak of the RTD for low reflection times
emerges again from the large slope of the second branch of the MTM. The diffuse tail of the RTD
has a corresponding counterpart in the MTM for large reflection times.
\subsection{Ensemble properties: Turning point distributions}
\label{sec:tpd}
In section \ref{sec:dynamics1} we have investigated our superexponential many-body Hamiltonian
by analyzing the dynamics in terms of individual trajectories. The underlying basic two-body system
\cite{Schmelcher4} shows a scattering dynamics without oscillatory behaviour w.r.t. the exponential
dof, i.e. $q_1(t)$ possesses for energies below the saddle point energy only a single turning point
which occurs at the minimal distance of the trajectories from the center of the SEP at $q_1=0$.
In section \ref{dyn:ise} we have shown that a major novelty in the many-body case is the
oscillating structure with largely fluctuating amplitudes of trajectories experiencing the saddle
point region or, physically speaking, the occurrence of multiple backscattering and recollision events.
Let us now analyze the map between the reflection time of a trajectory
and its number of turning points, which we call the RTPM, for the case of mutually different ICs w.r.t.
the momenta $p_i,i=2,...,N$.
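Operationally, the number of turning points of $q_1(t)$ can be extracted from the sign changes of $p_1(t)$ along the numerically integrated trajectory; a minimal Python sketch (ours, with a toy test signal) reads:
\begin{verbatim}
import numpy as np

def turning_points(p1_t):
    # number of turning points of q_1(t) = number of sign changes of p_1(t)
    s = np.sign(p1_t)
    s = s[s != 0]                    # discard (rare) exact zeros
    return int(np.sum(s[1:] != s[:-1]))

t = np.linspace(0.0, 10.0, 2001)
print(turning_points(np.cos(t)))     # toy signal with exactly three sign changes -> 3
\end{verbatim}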
Figure \ref{fig9}(a) shows the RTPM for the energy $E=1$ for the scattering dynamics in the CC.
Clearly, all trajectories and scattering events exhibit only a single turning point, and no
oscillatory dynamics is encountered. The corresponding reflection times vary continuously from zero
up to a maximal value $t \approx 114$. Increasing the energy to $E=6$ Figure \ref{fig9}(b) presents
the corresponding RTPM which shows now a large number of events up to $9$ turning points and
a few further events up to $25$ turning points. Note that the number of turning points is
always odd due to the fact that scattering takes place in the CC parametrized by the coordinate $q_1$.
As a general tendency one observes that the reflection time increases with the number of turning points
which is natural due to the fact that the dwell time in the saddle point region increases with increasing
number of oscillations taking place in or traversing this region. Finally Figure \ref{fig9}(c) shows
the RTPM for the energy $E=8.8$ rather close to the threshold energy $E_s=9$. As compared to the case of
$E=6$ the number of turning points possible now extends even up to approximately $90$, while the
vast majority of events lies below $21$ turning points.
\section{Summary and conclusions}
\label{sec:sac}
Model systems with superexponential interaction represent a peculiar type of dynamical systems with uncommon
properties. Already for a two-body system the potential landscape shows a crossover from a confining channel (CC)
with a strongly varying transversal profile via two saddle points to a region of asymptotic freedom. The scattering
dynamics in the CC is intricate but at the same time restricted in the sense that it is a direct
in-out scattering with a single turning point of the longitudinal channel coordinate $q_1$. This situation changes
fundamentally when passing to many-body systems. In the present approach we have chosen a model system
with a single exponent degree of freedom $q_1$ for the superexponential interactions and many base degrees
of freedom $q_i,i=2,...,N$. The exponential dof $q_1$ might be considered as a 'background' or a 'guiding'
dof that determines the potential felt by the base dof. Each of the interaction terms $|q_i|^{q_{1}}$
shows the above-described geometrical crossover from channel confinement to asymptotic freedom. The many-body
Hamiltonian exhibits a particle exchange symmetry of the dof $q_i,i=2,...,N$ which can be respected, partially
broken, or completely broken by the initial conditions.
Simulating the dynamics of the many-body system we have revealed a number of important differences to the
two-body case. For low energies in the CC the $q_1(t)$ dynamics shows a transition from a step-like
behaviour due to the spatially localized energy transfer processes to a smooth in-out scattering transition.
Increasing the energy the trajectories incoming from the CC exhibit an oscillatory behaviour emanating
from the saddle point region and possessing largely fluctuating amplitudes. This oscillatory dynamics
comprised of backscattering and recollision events becomes increasingly more pronounced with
increasing energy. It represents an inherent many-body effect since, generically,
all of the dof $q_i,i=2,...,N$ contribute to this process. We have analyzed this on the level of individual
trajectories but also for the case of statistical ensembles. Here the reflection time distribution shows
a characteristic transition from a two-plateau structure to a single asymmetric peak behaviour. The latter
has been analyzed by inspecting the so-called momentum-time map which shows a transition from a one-dimensional
curve with two branches to a spatially two-dimensional distribution with a characteristic shape.
There are several directions for possible future research on superexponential few- and many-body systems.
The generalization of the interaction potential to higher spatial dimensions might lead to an even
more intricate potential landscape with novel properties. The present case of a single exponent degree of
freedom and many base degrees of freedom is certainly a specific choice, and it is an intriguing perspective
to explore the case of several exponent degrees of freedom. An intriguing topic is the
statistical mechanics of our many-body system in the thermodynamical limit, where one could pose
the question whether superexponential systems relax to a stationary state of thermal equilibrium.
Finally quantum superexponentially interacting systems resulting from a canonical quantization of the many-body Hamiltonian
might show interesting scattering properties in particular due to the
squeezing channel structure and the saddle point crossover.
\section{Acknowledgments}
The author thanks F.K. Diakonos for helpful discussions and B. Liebchen for a careful reading of the
manuscript.
|
1,108,101,564,237 | arxiv |
\section{Introduction}
For $a\in {\mathbb R}^2$, let $P_a$ be the radial projection from $a$:
$$
P_a:\ {\mathbb R}^2 \setminus \{a\} \longrightarrow S^1,\ \ \ P_a(x) =
\frac{(x-a)}{|x-a|}\,.
$$
\input{abra2.tex}
A special case of our theorem asserts that the ``four corner Cantor
set" of contraction ratio $1/4$ has radial projection of zero length
from all points $a\in \mathbb{R}^2$. See Figure \ref{zeroth} where
we show the second-level approximation of the four corner Cantor set
and the radial projection of some of its points.
Denote by ${\mathcal H}^1$ the one-dimensional Hausdorff measure.
A Borel set $\Lambda$ is a 1-{\em set} if $0 < {\mathcal H}^1(\Lambda) <
\infty$. It is said to be {\em invisible from $a$} if $P_a(\Lambda\setminus\{a\})$
has zero length.
\begin{theorem} \label{th-main}
Let $\Lambda$ be a self-similar 1-set in ${\mathbb R}^2$ satisfying
the Open Set Condition, which is not on a line.
Then $\Lambda$ is invisible from every $a\in {\mathbb R}^2$.
\end{theorem}
Recall that a nonempty compact set $\Lambda$ is self-similar if
$\Lambda = \bigcup_{i=1}^m S_i(\Lambda)$ for some contracting similitudes $S_i$.
This means that
$$
S_i(x) = \lambda_i{\mathcal O}_i x + b_i,
$$
where $0<\lambda_i<1$, ${\mathcal O}_i$ is an orthogonal transformation
of the plane, and $b_i
\in {\mathbb R}^2$. The Open Set Condition holds if there
exists an open set $V\ne \emptyset$ such that $S_i(V) \subset V$ for all $i$ and
$S_i(V)\cap S_j(V)=\emptyset$ for all $i\ne j$.
For a self-similar set satisfying the Open Set Condition, being a 1-set
is equivalent to $\sum_{i=1}^m \lambda_i =1$.
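For concreteness, the four corner Cantor set of contraction ratio $1/4$ mentioned above can be generated with the following small Python sketch (our own illustrative normalization, which places the set in the unit square; the code simply iterates the four similitudes):
\begin{verbatim}
import numpy as np

ratio = 0.25     # contraction ratio; 4 * (1/4) = 1, so the attractor is a 1-set
shifts = np.array([[0.0, 0.0], [0.75, 0.0], [0.0, 0.75], [0.75, 0.75]])

def level_n_points(n):
    # images S_{i_1...i_n}(0) of the origin under all level-n compositions
    pts = np.zeros((1, 2))
    for _ in range(n):
        pts = np.concatenate([ratio * pts + b for b in shifts])
    return pts

pts = level_n_points(6)              # 4**6 = 4096 points approximating the attractor
print(pts.shape, pts.min(axis=0), pts.max(axis=0))
\end{verbatim}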
A Borel set $\Lambda$ is {\em purely unrectifiable} (or {\em
irregular}) if ${\mathcal H}^1(\Lambda\cap \Gamma) = 0$ for every rectifiable curve $\Gamma$.
A set $\Lambda$ satisfying the assumptions of Theorem~\ref{th-main} is
purely unrectifiable by Hutchinson \cite{Hutch} (see also \cite{Mat2}).
A classical theorem of
Besicovitch \cite{besi} (see also \cite[Theorem 6.13]{falc}) says that a purely
unrectifiable 1-set has orthogonal projections of zero length on almost
every line through the origin. We use it in our proof.
In \cite[Problem 12]{matsurv} (see also \cite[10.12]{mattila})
Mattila raised the following question: Let $\Lambda$ be a Borel set in
${\mathbb R}^2$ with ${\mathcal H}^1(\Lambda) < \infty$. Is it true that for ${\mathcal H}^1$ almost all
$a\in \Lambda$, the intersection $\Lambda \cap L$ is a finite set for almost all
lines $L$ through $a$? If $\Lambda$ is purely unrectifiable, is it true
that $\Lambda \cap L = \{a\}$ for almost all lines through $a$?
Our theorem implies a positive answer
for a purely unrectifiable self-similar 1-set $\Lambda$
satisfying the Open Set Condition.
The general case of a purely unrectifiable set remains open.
On the other hand, M. Cs\"ornyei and D. Preiss proved recently
that the answer to the first part of the question is negative
[personal communication].
Note that we prove a stronger property for our class of sets, namely, that
the set is invisible from {\em every} point $a\in {\mathbb R}^2$. It is easy
to construct examples of non-self-similar purely unrectifiable
1-sets for which this
property fails. Marstrand \cite{mars} has an example of a
purely unrectifiable 1-set which is visible from a set of dimension one.
We do not discuss here other results and problems related to
visibility; see \cite[Section 6]{mattila} for a recent survey.
We only mention a result of Mattila \cite[Th.5.1]{mattila2}:
if a set $\Lambda$ has projections of zero length
on almost every line (where possibly ${\mathcal H}^1(\Lambda) = \infty$), then the
set of points $\Xi$ from which $\Lambda$ is visible is a purely unrectifiable set
of zero 1-capacity. A different proof of this and a characterization of
such sets $\Xi$ is due to Cs\"ornyei \cite{marianna}.
\section{Preliminaries}
We have $S_i(x):=\lambda _i\mathcal{O}_ix+b_i$, where
$0<\lambda_i<1$,
$$\mathcal{O}_i=\left[\begin{array}{cr}
\cos(\varphi_i) & -\varepsilon_i\sin(\varphi_i) \\
\sin(\varphi_i) & \varepsilon_i\cos(\varphi_i) \\
\end{array}
\right],$$ $\varphi _i\in [0,2\pi )$, and $\varepsilon _i\in
\left\{-1,1\right\}$ shows whether $\mathcal{O}_i$ is a rotation through the
angle $\varphi _i$ or a reflection about the line through the origin
making the angle $\varphi_i /2$ with the $x$-axis.
Let $\Sigma :=\left\{1,\dots ,m\right\}^\mathbb{N}$ be the symbolic
space. The natural projection $\Pi:\,\Sigma \to \Lambda$ is defined by
\begin{equation} \label{eq-nat}
\Pi({\bf i}) = \lim_{n\to \infty} S_{i_1\ldots i_n} (x_0), \ \ \ \mbox{where}\ \
{\bf i} = (i_1 i_2 i_3\ldots) \in \Sigma,
\end{equation}
and $S_{i_1\ldots i_n} = S_{i_1} \circ \cdots \circ S_{i_n}$.
The limit in (\ref{eq-nat}) exists and does not depend on $x_0$.
Denote $\lambda_{i_1\dots i_n} = \lambda_{i_1}\cdots \lambda_{i_n}$ and
${\varepsilon}_{i_1\ldots i_k} = {\varepsilon}_{i_1} \cdots {\varepsilon}_{i_k}$.
We can write
$$
S_{i_1\dots i_n}(x)=\lambda _{i_1\dots i_n}{\mathcal O}_{i_1\dots
i_n}x+ b_{i_1\dots i_n},
$$
where
$$
{\mathcal O}_{i_1\dots i_n}:= {\mathcal O}_{i_1} \circ \cdots \circ {\mathcal O}_{i_n} =
\left[\begin{array}{cr}
\cos(\varphi _{i_1\dots i_n}) & -{\varepsilon}_{i_1\ldots i_n}\sin(\varphi _{i_1\dots i_n}) \\
\sin(\varphi _{i_1\dots i_n}) & {\varepsilon}_{i_1\ldots i_n}\cos(\varphi _{i_1\dots i_n}) \\
\end{array}\right],
$$
$$
\varphi_{i_1\ldots i_n} := \varphi_{i_1} + {\varepsilon}_{i_1} \varphi_{i_2} +
{\varepsilon}_{i_1 i_2} \varphi_{i_3} + \cdots + {\varepsilon}_{i_1\ldots i_{n-1}}\varphi_{i_n},
$$
and
$$
b_{i_1\dots i_n} = b_{i_1} + \lambda_{i_1} {\mathcal O}_{i_1} b_{i_2} + \cdots +
\lambda_{i_1\ldots i_{n-1}} {\mathcal O}_{i_1\ldots i_{n-1}} b_{i_n}.
$$
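These composition rules can be verified directly; the following Python sketch (ours, with an arbitrarily chosen toy system) composes the maps numerically and checks that the linear part equals $\lambda_{i_1\dots i_n}{\mathcal O}_{i_1\dots i_n}$ with the angle $\varphi_{i_1\ldots i_n}$ and sign ${\varepsilon}_{i_1\ldots i_n}$ given above:
\begin{verbatim}
import numpy as np

def O(phi, eps):
    # rotation through phi (eps = +1) or reflection (eps = -1)
    return np.array([[np.cos(phi), -eps * np.sin(phi)],
                     [np.sin(phi),  eps * np.cos(phi)]])

def compose(word, lam, phi, eps, b):
    # S_{i_1...i_n} = S_{i_1} o ... o S_{i_n}, returned as (linear part, translation)
    A, t = np.eye(2), np.zeros(2)
    for i in word:
        A, t = A @ (lam[i] * O(phi[i], eps[i])), A @ b[i] + t
    return A, t

lam = [0.4, 0.3, 0.3]                               # toy similarity ratios
phi = [0.3, 1.1, 0.0]
eps = [1, -1, 1]
b = [np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0])]

word = [0, 1, 2, 1]
A, t = compose(word, lam, phi, eps, b)

lam_u = np.prod([lam[i] for i in word])
eps_u = np.prod([eps[i] for i in word])
phi_u, e = 0.0, 1
for i in word:                     # phi_u = sum of eps_{i_1..i_{k-1}} * phi_{i_k}
    phi_u += e * phi[i]
    e *= eps[i]
print(np.allclose(A, lam_u * O(phi_u, eps_u)))      # -> True
\end{verbatim}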
\begin{sloppypar}
\noindent
Since $\sum_{i=1}^m \lambda_i =1$, we can consider the probability product measure
$\mu = (\lambda_1,\ldots,\lambda_m)^{{\mathbb N}}$ on the symbolic space $\Sigma$
and define the {\em natural measure} on $\Lambda$:
$$
\nu = \mu \circ \Pi^{-1}.
$$
By a result of Hutchinson \cite[Theorem 5.3.1(iii)]{Hutch}, as a consequence
of the Open Set Condition we have
\begin{equation} \label{name}
\nu = c{\mathcal H}^1|_\Lambda,\ \ \ \mbox{where}\ \ c = ({\mathcal H}^1(\Lambda))^{-1}.
\end{equation}
\end{sloppypar}
To $\theta\in [0,\pi)$ we associate the unit vector $e_\theta = (\cos\theta,\sin\theta)$,
the line $L_\theta = \{te_\theta:\ t\in {\mathbb R}\}$, and the orthogonal projection
onto $L_\theta$ given by $x\mapsto (e_\theta\cdot x)e_\theta$. It is more convenient
to work with the signed distance of the projection to the origin, which
we denote by $p_\theta$:
$$
p_\theta:\, {\mathbb R}^2\to {\mathbb R},\ \ \ p_\theta x = e_\theta\cdot x.
$$
Denote ${\mathcal A}:= \{1,\ldots,m\}$ and let ${\mathcal A}^* = \bigcup_{i=1}^\infty {\mathcal A}^i$
be the set of all finite words over the alphabet ${\mathcal A}$.
For $u = u_1\ldots u_k \in {\mathcal A}^k$ we define the corresponding ``symbolic''
cylinder set by
$$
[u] = [u_1\ldots u_k] := \{{\bf i} \in \Sigma:\ i_\ell = u_\ell, \, 1 \le \ell \le k
\}.
$$
We also let
$$
\Lambda_u = S_u(\Lambda) = \lambda_u {\mathcal O}_u \Lambda + b_u
$$
and call $\Lambda_u$ the cylinder set of $\Lambda$ corresponding to the word $u$.
Let $d_\Lambda$ be the diameter of $\Lambda$; then ${\rm diam}(\Lambda_u) = \lambda_u d_\Lambda$.
For $\rho>0$ consider the ``cut-set''
$$
{\mathcal W}(\rho) = \{u\in {\mathcal A}^*:\ \lambda_u\le \rho,\ \lambda_{u'} >\rho\}
$$
where $u'$
is obtained from $u$ by deleting the last symbol.
Observe that for every $\rho>0$,
\begin{equation} \label{decom}
\Lambda = \bigcup_{u\in {\mathcal W}(\rho)} \Lambda_u.
\end{equation}
In view of (\ref{name}), we have $\nu(\Lambda_u \cap \Lambda_v) = 0$ for distinct
$u,v\in {\mathcal W}(\rho)$, hence
$$
\nu(\Lambda_u) = \lambda_u\ \ \ \mbox{for all}\ u\in {\mathcal A}^*.
$$
Denote $\lambda_{\min} := \min\{\lambda_i:\ i\le m\}$; then
$\nu(\Lambda_u) = \lambda_u \in (\rho \lambda_{\min},\rho]$ for $u\in {\mathcal W}(\rho)$.
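The cut-set can be enumerated by a simple breadth-first search over finite words; the following Python sketch (ours, with toy contraction ratios summing to one) illustrates the decomposition (\ref{decom}) and the bound $\lambda_u\in (\rho\lambda_{\min},\rho]$:
\begin{verbatim}
def cut_set(lambdas, rho):
    # W(rho): words u with lambda_u <= rho and lambda_{u'} > rho
    active, out = [((), 1.0)], []
    while active:
        next_active = []
        for u, lam in active:
            for i, li in enumerate(lambdas):
                v, lam_v = u + (i,), lam * li
                if lam_v <= rho:
                    out.append((v, lam_v))
                else:
                    next_active.append((v, lam_v))
        active = next_active
    return out

W = cut_set([0.5, 0.3, 0.2], 0.1)                 # toy ratios with sum 1
print(len(W), sum(lam for _, lam in W))           # the weights lambda_u sum to 1
print(min(lam for _, lam in W), max(lam for _, lam in W))   # all in (rho*lambda_min, rho]
\end{verbatim}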
We identify the unit circle $S^1$ with $[0,2\pi)$ and use additive notation
$\theta_1 + \theta_2$ understood mod $2\pi$ for points on the circle.
For a Radon measure $\eta$ on the line or on $S^1$,
the upper density of $\eta$ with respect to ${\mathcal H}^1$ is defined by
$$
\overline{D}(\eta,t) = \limsup_{r\to 0} \frac{\eta([t-r,t+r])}{2r}\,.
$$
The open ball of radius $r$ centered at $x$ is denoted by $B(x,r)$.
\section{Proof of the main theorem}
In the proof of Theorem~\ref{th-main}
we can assume, without loss of generality, that $a\not\in
\Lambda$, and
\begin{equation} \label{ass2}
\mbox{$P_a(\Lambda)$ is contained in an arc of length less than $\pi$.}
\end{equation}
Indeed, $\Lambda\setminus \{a\}$ can be written as
a countable union of self-similar sets
$\Lambda_u$ for $u \in {\mathcal A}^*$, of arbitrarily small diameter. If each of them
is invisible from $a$, then $\Lambda$ is invisible from $a$.
Let
$$
\Omega:= \{{\bf i} \in \Sigma:\ \forall\,u\in {\mathcal A}^*\ \exists\,n\ \mbox{such that}\
\sigma^n{\bf i} \in [u]\},
$$
that is, $\Omega$ is the set of sequences which contain each finite word over
the alphabet ${\mathcal A}= \{1,\ldots,m\}$. It is clear that every ${\bf i} \in \Omega$
contains each finite word infinitely many times and $\mu(\Sigma\setminus \Omega) =0$.
\begin{lemma}[Recurrence Lemma] \label{lem-rec}
For every ${\bf i} \in \Omega,\ \delta>0$, and $j_1,\ldots,j_k \in \{1,\ldots,m\}$,
there are infinitely many $n\in {\mathbb N}$ such that
\begin{equation} \label{eq-rec}
\phi_{i_1\ldots i_n} \in [0,\delta],\ {\varepsilon}_{i_1\ldots i_n} =1,\ \ \mbox{and}
\ \ \sigma^n{\bf i} \in [j_1\ldots j_k].
\end{equation}
\end{lemma}
If the similitudes have no rotations or reflections, that is, $\phi_i = 0$
and ${\varepsilon}_i = 1$
for all $i\le m$ (as in the case of the four corner Cantor set),
then the conditions on $\phi$ and ${\varepsilon}$ in (\ref{eq-rec}) hold
automatically and
the lemma is true by the definition of $\Omega$. The proof in the
general case is not difficult, but requires a detailed
case analysis, so we postpone
it to the next section.
Let
$$
\Theta := \{\theta \in [0,\pi):\ {{\mathcal H}^1}(p_\theta(\Lambda))=0\}\ \ \ \mbox{and}\ \ \
\Theta' := (\Theta + \pi/2) \cup (\Theta + 3\pi/2)
$$
(recall that addition is considered mod $2\pi$).
Since $\Lambda$ is purely unrectifiable, ${{\mathcal H}^1}([0,\pi)\setminus \Theta') = 0$
by Besicovitch's Theorem \cite{besi}.
The following proposition is the key step of the proof.
We need the following measures:
$$
\nu_a := \nu\circ P_a^{-1}\ \ \ \mbox{and}\ \ \ \nu_\theta:= \nu\circ p_\theta^{-1},
\ \theta \in [0,\pi).
$$
We also denote $ \Lambda' = \Pi(\Omega)$.
\begin{prop} \label{prop-dens}
If $\theta' \in P_a(\Lambda') \cap \Theta'$, then
$\overline{D}(\nu_a,\theta') = \infty$.
\end{prop}
\begin{sloppypar}
{\em Proof of Theorem~\ref{th-main} assuming Proposition~\ref{prop-dens}.}
By Proposition~\ref{prop-dens} and \cite[Lemma 2.13]{mattila} (a
corollary of the Vitali covering theorem), we obtain that
${{\mathcal H}^1}(P_a(\Lambda') \cap \Theta') = 0$. As noted above,
$\Theta'$ has full ${{\mathcal H}^1}$ measure in $S^1$. On the other hand,
$$
\mu(\Sigma\setminus \Omega) = 0\ \Rightarrow\ \nu(\Lambda \setminus \Lambda') = 0
\ \Rightarrow\ {\mathcal H}^1(\Lambda \setminus \Lambda') = 0
\ \Rightarrow\ {{\mathcal H}^1}(P_a(\Lambda \setminus \Lambda')) = 0,
$$
and we conclude that ${{\mathcal H}^1}(P_a(\Lambda))=0$, as desired. \qed
\end{sloppypar}
\medskip
{\em Proof of Proposition~\ref{prop-dens}.}
Let $x\in \Lambda'$ and $\theta' = P_a(x) \in \Theta'$. Let
$\theta := \theta' - \pi/2$ mod $[0,\pi)$. By the definition of $\Theta'$ we have
${{\mathcal H}^1}(p_{\theta}(\Lambda)) = 0$.
\smallskip
First we sketch the idea of the proof. Since ${{\mathcal H}^1}(p_{\theta}(\Lambda)) = 0$, we
have $\nu_\theta\perp {{\mathcal H}^1}$, and this implies that for every $N\in {\mathbb N}$ there
exist $N$ cylinders of $\Lambda$ of approximately the same diameter (say,
$\sim r$), such that their projections to $L_\theta$ are $r$-close to each other.
Then there is a line parallel to the segment $[a,x]$, whose
$Cr$-neighborhood contains all $\Lambda_{u^{(j)}}, j=1,\ldots,N$.
By the definition of $\Lambda' = \Pi(\Omega)$, we can find similar copies of this
picture near $x\in \Lambda'$ at arbitrarily small scales. The Recurrence Lemma
\ref{lem-rec} guarantees that these copies can be chosen with a small
relative rotation. This will give $N$ cylinders of $\Lambda$
of diameter $\sim r_0 r$ contained in a $C'r_0r$-neighborhood of the
ray obtained by extending $[a,x]$. Since $a$ is assumed to be separated
from $\Lambda$, we will conclude that
$\overline{D}(\nu_a,\theta') \ge C''N$, and the proposition will follow.
Now we make this precise. The proof is illustrated in Figure 2.
\input{abra.tex}
\smallskip
{\sc Claim.}
{\em For each $N\in {\mathbb N}$ there exist $r>0$
and distinct $u^{(1)},\ldots, u^{(N)} \in {\mathcal W}(r)$ such that}
\begin{equation} \label{eq-pro1}
|p_{\theta}(b_{u^{(j)}}-b_{u^{(i)}})| \le r,\ \ \forall\,i,j\le N.
\end{equation}
Indeed, for every $u\in {\mathcal A}^*$,
$$
\Lambda_u = \lambda_u {\mathcal O}_u \Lambda + b_u\ \Rightarrow\
\Lambda_u \subset B(b_u,d_\Lambda \lambda_u),
$$
hence for every interval $I\subset {\mathbb R}$ and $r>0$,
$$
\nu_{\theta} (I) \le \sum_{u\in {\mathcal W}(r)}\{\lambda_u:\ {\rm dist}(p_\theta(b_u),I)
\le d_\Lambda r\}.
$$
If the claim does not hold, then there exists $N\in {\mathbb N}$ such that
for every $t\in {\mathbb R}$ and $r>0$,
$$
\nu_{\theta} ([t-r,t+r]) \le N(2(1+d_\Lambda)+1)r.
$$
Then $\nu_{\theta}$ is absolutely continuous with respect to ${{\mathcal H}^1}$,
which is a contradiction. The claim is verified. \qed
\smallskip
We are given that $x \in \Lambda' = \Pi(\Omega)$, which means that
$x = \Pi({\bf i})$ for an infinite sequence ${\bf i}$ containing all finite words.
We fix $N\in {\mathbb N}$ and find $r>0$, $u^{(1)},\ldots, u^{(N)} \in {\mathcal W}(r)$ from
the Claim.
Then we apply Recurrence Lemma~\ref{lem-rec} with $j_1\ldots j_k:= u^{(1)}$
and $\delta = r$ to obtain infinitely many $n\in {\mathbb N}$ satisfying
(\ref{eq-rec}). Fix such an $n$. Denote
$$
w:= i_1\ldots i_n\ \ \ \mbox{and}\ \ v^{(j)}=wu^{(j)},\ j=1,\ldots,N.
$$
Observe that ${\bf i}$ starts with $v^{(1)}$, so $x=\Pi({\bf i}) \in \Lambda_{v^{(1)}}$,
hence
\begin{equation} \label{eq-pro2}
|p_{\theta}(x-b_{v^{(1)}})|\le |x-b_{v^{(1)}}|
\le d_\Lambda \lambda_{v^{(1)}} \le d_\Lambda \lambda_w r.
\end{equation}
Here we used that $u^{(1)} \in {\mathcal W}(r)$, so $\lambda_{v^{(1)}} =
\lambda_w\lambda_{u^{(1)}}\le \lambda_w r$.
We have for $z\in {\mathbb R}^2$,
$$
\lambda_{v^{(j)}} {\mathcal O}_{v^{(j)}} z + b_{v^{(j)}} = S_{v^{(j)}}(z) =
S_w\circ S_{u^{(j)}}(z) =
\lambda_w{\mathcal O}_w(\lambda_{u^{(j)}} {\mathcal O}_{u^{(j)}} z + b_{u^{(j)}}) + b_w,
$$
hence
$$
b_{v^{(j)}} = \lambda_w{\mathcal O}_w b_{u^{(j)}} + b_w.
$$
It follows that
$$
p_{\theta}(b_{v^{(i)}} - b_{v^{(j)}}) = \lambda_w p_{\theta} {\mathcal O}_w (b_{u^{(i)}}-
b_{u^{(j)}}).
$$
By (\ref{eq-rec}), we have ${\varepsilon}_w=1$ and $\phi:= \phi_w \in [0,r)$;
therefore, ${\mathcal O}_w=R_\phi$ is the rotation through the angle $\phi$.
One can check that $p_\theta R_\phi = p_{\theta-\phi}$,
which yields
\begin{equation} \label{eq-pro3}
|p_{\theta}(b_{v^{(i)}} - b_{v^{(j)}})|= \lambda_w |p_{\theta-\phi}(b_{u^{(i)}}-
b_{u^{(j)}})|.
\end{equation}
Clearly, $\|p_\theta - p_{\theta-\phi}\|\le |\phi|\le r$, where $\|\cdot\|$
is the operator norm,
so we obtain from (\ref{eq-pro1}) and (\ref{eq-pro3}) that
$$
|p_{\theta}(b_{v^{(i)}} - b_{v^{(j)}})|\le \lambda_w(|b_{u^{(i)}}-b_{u^{(j)}}|r + r)
\le\lambda_w(d_\Lambda+1)r.
$$
Recall that ${\bf i}$ starts with $v^{(1)}$, so $x=\Pi({\bf i}) \in \Lambda_{v^{(1)}}$, hence
for each $j\le N$, for every $y\in \Lambda_{v^{(j)}}$,
\begin{eqnarray}
|p_{\theta}(x-y)| & \le & |x-b_{v^{(1)}}|+|p_{\theta}(b_{v^{(1)}}- b_{v^{(j)}})| +
|b_{v^{(j)}}-y| \nonumber \\
& \le & d_\Lambda (\lambda_{v^{(1)}} +\lambda_{v^{(j)}}) + \lambda_w (d_\Lambda+1)r\le
\lambda_w (3d_\Lambda + 1) r. \label{eq-pro4}
\end{eqnarray}
Now we need a simple geometric fact: given that
$$
P_a(x)= \theta',\ \ \theta = \theta' + \pi/2\ {\rm mod}\ [0,\pi),\ \
|p_\theta(x-y)| \le \rho,\ \ |y-a| \ge c_1,\ \ \mbox{and (\ref{ass2}) holds},
$$
we have
$$
|P_a(y)-\theta'| = |P_a(y) -P_a(x)| =
\arcsin\frac{|p_\theta(y-x)|}{|y-a|} \le
\frac{\pi}{2c_1} \rho.
$$
This implies, in view of (\ref{eq-pro4}), that for $c_2=
\pi(3d_\Lambda + 1)/(2c_1)$,
$$
\nu_a([\theta'-c_2 \lambda_w r, \theta'+c_2 \lambda_w r]) \ge \sum_{j=1}^N
\nu(\Lambda_{v^{(j)}})
= \sum_{j=1}^N \lambda_{v^{(j)}} = \lambda_w \sum_{j=1}^N \lambda_{u^{(j)}} \ge
\lambda_w N \lambda_{\min}r,
$$
where $\lambda_{\min} = \min\{\lambda_1,\ldots,\lambda_m\}$, by the definition of
${\mathcal W}(r)$.
Recall that $n$ can be chosen arbitrarily large, so $\lambda_w$ can be
arbitrarily small, and we obtain that
$$
\overline{D}(\nu_a,\theta') \ge c_2^{-1}\lambda_{\min} N.
$$
Since $N\in {\mathbb N}$ is arbitrary, the proposition follows. \qed
\section{Proof of the recurrence lemma~\ref{lem-rec}}
Let $K\in
\left\{0,\dots ,m\right\}$ be the number of $i$ for which
$\varphi _i\not\in \pi \mathbb{Q}$.
Without loss of generality we may assume the following:
if $K\ge 1$ then $\varphi _1,\dots ,\varphi _K\not\in \pi{\mathbb Q}$.
\smallskip
We distinguish the following cases:
\begin{description}
\item[A] $\varphi _i \in \pi{\mathbb Q}$ for all $i\le m$.
\item[B] there exists $i$ such that $\varphi _i\not\in \pi \mathbb{Q}$
and $\varepsilon _i=1$.
\item[C] $K\geq 1$ and $\varepsilon _i=-1$ for all $i\le K$.
\begin{description}
\item[C1] there exist $i,j\leq K$ such that
$\varphi _i-\varphi _j\not\in \pi \mathbb{Q}$.
\item[C2] there exists $r_i\in \mathbb{Q}$ such that
$\varphi _i=\varphi _1+r_i\pi $ for $1\leq i\leq K$.
\begin{description}
\item[C2a] $K<m$ and there exists $j\geq K+1$
such that $\varepsilon _j=-1$.
\item[C2b] $K<m$ and for all $j\geq K+1$
we have $\varepsilon _j=1$.
\item[C2c] $K=m$.
\end{description}
\end{description}
\end{description}
Denote by $R_\phi$ the rotation through the angle $\phi$. We call it
an irrational rotation if $\phi\not\in \pi{\mathbb Q}$.
Consider the semigroup generated by ${\mathcal O}_i,\ i\le m$, which we denote by
${\mathcal S}$. We begin with the following observation.
\smallskip
{\sc Claim.} {\em Either ${\mathcal S}$ is finite, or ${\mathcal S}$ contains an
irrational rotation.}
\smallskip
The semigroup ${\mathcal S}$ is clearly finite in Case A and contains an
irrational rotation in Case B. In Case C1 we have ${\mathcal O}_i{\mathcal O}_j =
R_{\phi_i-\phi_j}$, which is an irrational rotation. In Case C2a we
also have that ${\mathcal O}_i{\mathcal O}_j = R_{\phi_i-\phi_j}$ is an irrational
rotation, since $\phi_i\not\in \pi{\mathbb Q}$ and $\phi_j\in \pi{\mathbb Q}$. We claim
that in the remaining Cases C2b and C2c the semigroup is finite. This
follows easily: in these cases ${\mathcal S}$ is generated by one irrational reflection
together with finitely many rational rotations, and such a semigroup is finite.
\medskip
{\em Proof of Lemma~\ref{lem-rec} when ${\mathcal S}$ is finite.} A finite
semigroup of invertible transformations is necessarily a group. Let
${\mathcal S} = \{s_1,\ldots,s_t\}$. By the definition of the semigroup ${\mathcal S}$
we have $s_i = {\mathcal O}_{w^{(i)}}$ for some $w^{(i)} \in {\mathcal A}^*$,
$i=1,\ldots,t$. For every $v\in {\mathcal A}^*$ we can find $\widehat{v}\in
{\mathcal A}^*$ such that ${\mathcal O}_{\widehat{v}} = {\mathcal O}_v^{-1}$. Fix $u =
j_1\ldots j_k$ from the statement of the lemma. Consider the
following finite word over the alphabet ${\mathcal A}$:
$$
\omega := \tau_1\ldots \tau_t,\ \ \ \mbox{where}\ \ \tau_j = (w^{(j)}
u)\, \widehat{(w^{(j)}u)},\ j=1,\ldots,t.
$$
Note that ${\mathcal O}_{\tau_j} = I$ (the identity).
By the definition of $\Omega$, the sequence ${\bf i}\in \Omega$ contains $\omega$
infinitely many times. Suppose that $\sigma^\ell{\bf i} \in [\omega]$. Since
${\mathcal O}_{{\bf i}|\ell} \in {\mathcal S}$, there exists $w^{(j)}$ such that ${\mathcal O}_{w^{(j)}} =
{\mathcal O}_{{\bf i}|\ell}^{-1}$. Then the occurrence of $u$ in $\tau_j$, the $j$th
factor of $\omega$, will be at the position $n$ such that ${\mathcal O}_{{\bf i}|n}= I$,
so we will have $\phi_{{\bf i}|n} = 0 \in [0,\delta]$ and
${\varepsilon}_{{\bf i}|n} = 1$, as desired.
\medskip
{\em Proof of Lemma~\ref{lem-rec} when ${\mathcal S}$ is infinite.}
By the claim above, there exists $w\in {\mathcal A}^*$ such that $\phi_w \not\in\pi
{\mathbb Q}$ and ${\varepsilon}_w= 1$. Fix $u = j_1\ldots j_k$ from the statement of the lemma.
Let
$$
v := \left\{\begin{array}{ll} uu, & \mbox{if}\ \phi_u\not\in\pi{\mathbb Q};\\
uuw, & \mbox{if}\ \phi_u\in\pi{\mathbb Q}.\end{array}
\right.
$$
Observe that $\phi_v\not\in\pi{\mathbb Q}$ and ${\varepsilon}_v =1$. Let $v^k=v\ldots v$
(the word $v$ repeated $k$ times).
Since $\phi_v/\pi$ is irrational, there exists an $N$ such that
every orbit of $R_{\phi_v}$ of length $N$
contains a point in every subinterval
of $[0,2\pi)$ of length $\delta$. Put
$$
\omega:= \left\{\begin{array}{ll} v^N, & \mbox{if}\ {\varepsilon}_i =1,\ \forall\, i\le m;
\\ v^N j^* v^N, & \mbox{if}\ \exists\,j^*\ \mbox{such that}\ {\varepsilon}_{j^*} = -1.
\end{array}
\right.
$$
By the definition of $\Omega$, the sequence ${\bf i}\in \Omega$ contains $\omega$
infinitely many times. Let $\ell\in {\mathbb N}$ be such
that $\sigma^\ell{\bf i} \in [\omega]$.
Suppose first that ${\varepsilon}_{{\bf i}|\ell} = 1$.
Then we have, denoting the length of $v$ by $|v|$,
\begin{equation} \label{rot}
\sigma^{\ell + k|v|}{\bf i} \in [u],\ \ \ \ \phi_{{\bf i}|(\ell+k|v|)} = \phi_{{\bf i}|\ell} +
k\phi_v\ (\mbox{mod}\ 2\pi),\ \ \ \ {\varepsilon}_{{\bf i}|(\ell+k|v|)} =1,
\end{equation}
for $k=0,\ldots, N-1$.
By the choice of $N$, we
can find $k\in \{0,\ldots,N-1\}$ such that $\phi_{{\bf i}|(\ell+k|v|)}\in
[0,\delta]$, then $n=\ell + k|v|$ will be as desired.
If ${\varepsilon}_{{\bf i}|\ell} = -1$, then we replace $\ell$ by $\ell^*:= \ell +
N|v|+1$ in (\ref{rot}), that is, we consider the occurrences of $u$ in
the second factor $v^N$. The orientation will be switched by
${\mathcal O}_{j^*}$ and we can find the desired $n$ analogously.
\qed
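As a side remark, for any concrete irrational angle the integer $N$ used above can be found by brute force. The following Python sketch (ours; the angle and $\delta$ are arbitrary choices) checks the sufficient condition that all gaps between $N$ consecutive orbit points are smaller than $\delta$, which does not depend on the starting point of the orbit.
\begin{verbatim}
import numpy as np

def dense_orbit_length(phi, delta):
    # smallest N (found by brute force) such that every orbit segment
    # {x, x+phi, ..., x+(N-1)phi} (mod 2*pi) meets every arc of length delta;
    # it suffices that all gaps between the points 0, phi, ..., (N-1)phi
    # (mod 2*pi) are smaller than delta, a condition independent of x
    N = 1
    while True:
        pts = np.sort(np.mod(phi * np.arange(N), 2 * np.pi))
        gaps = np.diff(np.append(pts, pts[0] + 2 * np.pi))
        if gaps.max() < delta:
            return N
        N += 1

print(dense_orbit_length(np.sqrt(2), 0.1))   # phi = sqrt(2) is not in pi*Q
\end{verbatim}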
\section{Concluding remarks}
Consider the special case when the self-similar set $\Lambda$ is of the
form
\begin{equation} \label{eq-ss2}
\Lambda = \bigcup_{i=1}^m (\lambda_i \Lambda + b_i),\ \ \ b_i\in {\mathbb R}^2.
\end{equation}
In other words, the contracting similitudes have no rotations or reflections,
as for the four corner Cantor set.
Then the projection $\Lambda^\theta:=p_\theta(\Lambda)$ is itself a self-similar set on the
line:
$$
\Lambda^\theta = \bigcup_{i=1}^m (\lambda_i \Lambda^\theta +
p_\theta(b_i)),\ \ \mbox{ for}\ \theta\in [0,\pi).
$$
Let $\Lambda^\theta_i = \lambda_i \Lambda^\theta + p_\theta(b_i)$. As above, $\nu$ is the natural
measure on $\Lambda$. Let $\nu_\theta$ be the natural measure on $\Lambda^\theta$,
so that $\nu_\theta = \nu \circ p_\theta^{-1}$.
\begin{corollary} \label{cor-pro}
Let $\Lambda$ be a self-similar set of the form (\ref{eq-ss2}) that is not
on a line, such that $\sum_{i=1}^m \lambda_i \le 1$.
If $\Lambda$ satisfies the Open Set Condition, then
$$
\nu_\theta(\Lambda^\theta_i \cap \Lambda^\theta_j) = 0,\ i\ne j,\ \ \ \mbox{for a.e.}
\ \theta\in [0,\pi).
$$
\end{corollary}
{\em Proof.} Let $s>0$ be such that $\sum_{i=1}^m \lambda_i^s = 1$.
By assumption, we have $s\le 1$. This number is known as the similarity
dimension of $\Lambda$ (and also of $\Lambda^\theta$ for all $\theta$). Suppose first
that $s=1$. Then we are in the situation covered by Theorem~\ref{th-main},
and $\nu$ is just the normalized restriction of ${\mathcal H}^1$ to $\Lambda$.
Consider the product measure $\nu\times {\mathcal L}$, where ${\mathcal L}$ is the Lebesgue
measure on $[0, \pi)$. Theorem~\ref{th-main} implies that
$$
(\nu\times {\mathcal L})\{(x,\theta)\in \Lambda\times [0,\pi):\ \exists\,y\in \Lambda,\ y\ne x,
\ p_\theta(x) = p_\theta(y)\} = 0.
$$
By Fubini's Theorem, it follows that for ${\mathcal L}$ a.e.\ $\theta$,
for $\nu_\theta$ a.e.\ $z\in \Lambda^\theta$, we have that $p_\theta^{-1}(z)\cap\Lambda$ is a
single point. This proves the desired statement, in view of the fact that
$\nu(\Lambda_i \cap \Lambda_j) = 0$ for $\Lambda$ satisfying the
Open Set Condition.
In the case when $s<1$ we can use \cite[Proposition 1.3]{PSS}, which
implies that the packing measure ${\mathcal P}^s(\Lambda^\theta)$ is positive and finite
for ${\mathcal L}$ a.e.\ $\theta$. By self-similarity and the properties of ${\mathcal P}^s$
(translation invariance and scaling), we
have ${\mathcal P}^s(\Lambda^\theta_i\cap \Lambda^\theta_j) = 0$ for $i\ne j$.
Then we use \cite[Corollary 2.2]{PSS},
which implies that $\nu_\theta$ is the normalized restriction of
${\mathcal P}^s$ to $\Lambda^\theta$, to complete the proof. \qed
\medskip
\noindent {\bf Remark.}
In \cite[Proposition 2]{BG} it is claimed that if a self-similar set
${\mathcal K} = \bigcup_{i=1}^m {\mathcal K}_i$ in ${\mathbb R}^d$ has the
Hausdorff dimension equal to the similarity dimension, then the natural
measure of the ``overlap set'' $\bigcup_{i\ne j}({\mathcal K}_i\cap {\mathcal K}_j)$
is zero. This would imply Corollary~\ref{cor-pro},
since the Hausdorff dimension of
$\Lambda^\theta$ equals $s$ for ${\mathcal L}$ a.e.\ $\theta$ by Marstrand's Projection
Theorem. Unfortunately, the proof in \cite{BG} contains an error,
and it is still unknown whether
the result holds [C. Bandt, personal communication].
(It should be noted that \cite[Proposition 2]{BG}
was not used anywhere in \cite{BG}.)
\medskip
\noindent
{\bf Acknowledgment.} We are grateful to M. Cs\"ornyei, E. J\"arvenp\"a\"a,
and M. J\"arvenp\"a\"a for helpful discussions. This work was done while
K. S. was visiting the University of Washington.
\section{Introduction}
Quantum entanglement and nonclassicality of quantum correlations long seemed to be two sides of the same coin. However, some discrepancies from this picture were found. First, it was shown that there exist weakly entangled states that do not give rise to nonclassical correlations \cite{Werner89}. Later it turned out that two- and three-qutrit states reveal maximal nonclassicality for non-maximally entangled states \cite{CGLMPorigin, AcinChen04, LRZ14, Gruca12}. In the first case the discrepancy between nonclassicality of correlations and entanglement was reduced by showing that any bipartite entangled state gives rise to nonclassical correlations if properly extended by attaching a classically correlated state \cite{Masanes08}. In the second case the discrepancy has been questioned by suggesting that the maximal violation of an optimal Bell inequality is not a proper measure of maximal nonclassicality \cite{CGLMPvolume}. In this work we show that the latter discrepancy disappears if we translate optimal quantum correlations of qutrits into a qubit representation. To this end we introduce a new way of analyzing the maximal quantum violation of Bell inequalities by many-qutrit states. Namely, we represent the optimal qutrit measurement operators by means of symmetric two-qubit operators.
The nonclassical nature of quantum correlations in the case of two three-level systems (qutrits) was first demonstrated numerically by Kaszlikowski et al.\ \cite{Kaszlikowski00}. However, the first analytical form of a Bell inequality for qutrit states was found a few years later by Collins et al. In \cite{CGLMPorigin} they proposed a set of Bell inequalities (called further CGLMP inequalities) for bipartite correlations, with two settings per observer and an arbitrary number $d$ of outcomes, which are violated by quantum $d$-level systems. A little later the paradoxical nature of two-qutrit nonclassicality was revealed: although the CGLMP inequalities are optimal \cite{Masanes02}, they are maximally violated by non-maximally entangled two-qutrit states \cite{CGLMPacin}. What is more, the discrepancy between the CGLMP violation for maximally and non-maximally entangled states increases with the system's dimension \cite{CGLMPacin, CGLMPmax08}.
Later a similar effect was found in the case of three qutrits: Acin et al.\ \cite{AcinChen04} found a generalization of a CGLMP inequality to the three-qutrit case, which is tight and maximally violated by a non-maximally entangled state.
Although this discrepancy between maximal nonclassicality and maximal entanglement has been thoroughly studied from the geometrical perspective \cite{Spengler11, CGLMPvolume}, it is still lacking a deeper understanding.
In this work we present a new approach to the analysis of a qutrit nonclassicality --- namely we analyse the form of the Bell operator \cite{Braunstein92} corresponding to the qutrit Bell inequalities in two different local operator bases, completely different from the ones used in \cite{CGLMPmax08}: the spin-$1$ basis in $3$-dimensional representation \cite{SpinSqueezed93} and the spin-1 basis in 4-dimensional representation, which corresponds to the symmetric subspace of two qubits \cite{Kurzynski16}.
Expressing qutrit Bell operators in the local bases of symmetric two-qubit operators allows us to translate the analysis of maximal qutrit nonclassicality into the analysis of many-qubit nonclassicality, a topic which is much better understood and more intuitive.
Using this method we show that the CGLMP Bell operator in the four-qubit symmetric subspace is a composition of correlations corresponding to the CHSH \cite{CHSH} and Mermin \cite{Mermin90} inequalities; therefore the optimal state for its violation is a superposition of the states maximizing the violations of the CHSH and Mermin inequalities respectively, that is, a superposition of a product of two two-qubit Bell states and the four-qubit GHZ state. The maximally entangled state of two qutrits in the four-qubit representation is also of this form, however with slightly different superposition coefficients.
Moreover, using the four-qubit representation we show that the maximal quantum violation of the CGLMP inequality (known as the Tsirelson bound) can be derived from the complementarity of quantum correlations, a property which was previously known only for correlations between many qubits \cite{BellComplementarity}.
Further, we analyze the maximal violation of a three-qutrit inequality \cite{AcinChen04} from the perspective of its corresponding six-qubit Bell operator. We find a similar structure of maximally nonclassical states for this inequality, although the form of the inequality itself is much more complicated in this case.
\section{Single qutrit operators as symmetric two-qubit operators}
The linear space of qutrit operators, that is the space of matrices from $M_3(\mathbb C)$, has the same (complex) dimension of $9$ as the space of symmetric two-qubit operators, namely the operators from $\mathrm{Sym}(M_2(\mathbb C)\otimes M_2(\mathbb C))$. This shows that the two spaces are isomorphic, and their elements are in one-to-one correspondence. This is a well known fact which is commonly used in the quantum theory of angular momentum. To get a deeper understanding of the isomorphism, let us proceed within a physically motivated approach. When discussing the operators of some quantum system, one is often interested in expressing them in a Hermitian operator basis. For finite dimensional quantum systems such a canonical Hermitian basis is the one consisting of Gell-Mann matrices \cite{Krammer08}, the Hermitian generators of the special unitary group $\mathrm{SU}(N)$, extended by the identity matrix. It turns out that in the case of qutrits, each Gell-Mann matrix can be expressed in a more convenient way as a function of the spin-$1$ matrices $\tilde{S_x},\tilde{S_y},\tilde{S_z}$, their squares $\tilde{S_x^2},\tilde{S_y^2},\tilde{S_z^2}$, and anticommutators $\{\tilde{S_x},\tilde{S_y}\}, \{\tilde{S_y},\tilde{S_z}\}, \{\tilde{S_x},\tilde{S_z}\}$ \cite{Krammer08},\cite{LM13} (from now on we will denote any qutrit operators and states with a tilde, to distinguish them from the qubit ones). The relation between Gell-Mann matrices and spin operators can be reversed, and one can get a Hermitian basis consisting of the spin-$1$ operators \cite{Kurzynski16} $\tilde{S_x},\tilde{S_y},\tilde{S_z}$,
shifted squares of the spin operators:
\begin{eqnarray}
\tilde{S_x^2} &=& \openone - (\tilde{S_x})^2,\nonumber\\
\tilde{S_y^2} &=& \openone - (\tilde{S_y})^2,\nonumber\\
\tilde{S_z^2} &=& \openone - (\tilde{S_z})^2,
\label{Spin32}
\end{eqnarray}
and all possible anticommutators:
\begin{eqnarray}
\tilde{A_x} &=& \tilde{S_z} \tilde{S_y} + \tilde{S_y} \tilde{S_z},\nonumber\\
\tilde{A_y} &=& \tilde{S_x} \tilde{S_z} + \tilde{S_z} \tilde{S_x},\nonumber\\
\tilde{A_z} &=& \tilde{S_x} \tilde{S_y} + \tilde{S_y} \tilde{S_x}.
\label{Spin33}
\end{eqnarray}
Note that the spin basis is not unique --- one can take as $\tilde{S_x},\tilde{S_y},\tilde{S_z}$ any set of Hermitian matrices fulfilling the spin commutation relations $[\tilde{S_x},\tilde{S_y}]=i \tilde{S_z}$ (and cyclic permutations of $x,y,z$). Two typical choices, to which we further refer are:
\begin{eqnarray}
&\tilde{S_x}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}
0 & 1 & 0\\
1 & 0 & 1\\
0 & 1 & 0\\
\end{array}
\right),
\tilde{S_y}=\frac{1}{\sqrt{2}}\left(\begin{array}{ccc}
0 & -i & 0\\
i & 0 & -i\\
0 & i & 0\\
\end{array}
\right),&\nonumber\\
&\tilde{S_z}=\left(\begin{array}{ccc}
1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & -1\\
\end{array}
\right)&,
\label{SpinBasis1}
\end{eqnarray}
and:
\begin{eqnarray}
&\tilde{S_x}=\left(\begin{array}{ccc}
0 & 0 & 0\\
0 & 0 & -i\\
0 & i & 0\\
\end{array}
\right),
\tilde{S_y}=\left(\begin{array}{ccc}
0 & 0 & -i\\
0 & 0 & 0\\
i & 0 & 0\\
\end{array}
\right),&\nonumber\\
&\tilde{S_z}=\left(\begin{array}{ccc}
0 & i & 0\\
-i & 0 & 0\\
0 & 0 & 0\\
\end{array}
\right)&.
\label{SpinBasis2}
\end{eqnarray}
For the sake of convenience let us rename the spin basis elements as follows:
\begin{eqnarray}
\begin{array}{lll}
\gamma_1=\tilde{S_x}, & \gamma_2=\tilde{S_y}, & \gamma_3=\tilde{S_z},\\
\gamma_4=\openone - (\tilde{S_x})^2, & \gamma_5=\openone - (\tilde{S_y})^2, & \gamma_6=\openone - (\tilde{S_z})^2,\\
\gamma_7=\tilde{A_x}, & \gamma_8=\tilde{A_y}, & \gamma_9=\tilde{A_z}.\\
\end{array}
\label{SpinG}
\end{eqnarray}
Then any $n$-qutrit operator $\hat B\in M_3(\mathbb C)^{\otimes n}$ can be decomposed into a tensor form by finding its coefficients in the product spin basis \eqref{SpinG} as follows:
\begin{eqnarray}
&&\hat B=\sum_{i_1,\ldots ,i_n}B_{i_1,\ldots , i_n}(\gamma_{i_1}\otimes\ldots\otimes\gamma_{i_n}) \nonumber\\
&&B_{i_1,\ldots , i_n}=\frac{\mathrm{Tr}(\hat B (\gamma_{i_1}\otimes\ldots\otimes\gamma_{i_n}))}{\mathrm{Tr}\left((\gamma_{i_1}\otimes\ldots\otimes\gamma_{i_n})^2\right)},
\label{BtoSpin1}
\end{eqnarray}
where the denominator compensates the fact that the trace norms of the basis elements may be different.
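As a quick sanity check of \eqref{BtoSpin1} (a sketch added by us; the variable names and the random test operator are purely illustrative), one can build the nine operators \eqref{SpinG} in the representation \eqref{SpinBasis1}, verify their mutual orthogonality with respect to the Hilbert-Schmidt inner product, and reconstruct an arbitrary qutrit operator from its coefficients:
\begin{verbatim}
import numpy as np

s = 1/np.sqrt(2)
Sx = s*np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = s*np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = np.diag([1, 0, -1]).astype(complex)
I3 = np.eye(3)
anti = lambda P, Q: P @ Q + Q @ P

gamma = [Sx, Sy, Sz,                                   # gamma_1..3
         I3 - Sx @ Sx, I3 - Sy @ Sy, I3 - Sz @ Sz,     # gamma_4..6
         anti(Sy, Sz), anti(Sx, Sz), anti(Sx, Sy)]     # gamma_7..9

# Hilbert-Schmidt orthogonality of the basis elements
gram = np.array([[np.trace(g @ h) for h in gamma] for g in gamma])
print(np.allclose(gram, np.diag(np.diag(gram))))       # True

# decompose a random Hermitian qutrit operator (n = 1) and reconstruct it
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
B = M + M.conj().T
coeff = [np.trace(B @ g) / np.trace(g @ g) for g in gamma]
print(np.allclose(B, sum(c*g for c, g in zip(coeff, gamma))))   # True
\end{verbatim}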
However, the spin-$1$ operators can be represented in another way, which comes from the composition of two spin-half systems. From the point of view of the (non-relativistic) symmetries of a physical system, the spin of a particle determines how its state vector reacts on rotations of the physical space $\mathbb R^3$, or saying more mathematically --- under which representation of the rotation group the state transforms. If we compose two particles of spin-half, therefore each transforming under rotation as a two-dimensional spinor, the state space of the composite system contains two invariant subspaces with respect to three dimensional rotations: the fully symmetric space (of complex dimension $3$), the vectors of which transform under rotations as if they correspond to spin-$1$ particle, and the invariant one dimensional subspace spanned by the \emph{singlet state}. From the composition rule for generators of the tensor product of arbitrary transformations, it follows that the effective spin-$1$ operators acting on the symmetric subspace of two-qubits have the following direct-sum form:
\begin{eqnarray}
\tilde{S_x} \mapsto S_x &=& \frac{1}{2} (\openone \otimes X + X \otimes \openone )\equiv \delta_1, \nonumber \\
\tilde{S_y} \mapsto S_y &=& \frac{1}{2} (\openone \otimes Y + Y \otimes \openone )\equiv \delta_2, \nonumber \\
\tilde{S_z} \mapsto S_z &=& \frac{1}{2} (\openone \otimes Z + Z \otimes \openone )\equiv \delta_3,
\label{Spin43}
\end{eqnarray}
where $X,Y,Z$ are standard qubit Pauli matrices (or their unitarily rotated equivalents), and $\delta_i$ is a shorthand notation analogous to \eqref{SpinG}. The other spin-$1$ basis operators \eqref{Spin32}--\eqref{Spin33} are transformed as follows:
\begin{eqnarray}
\tilde{S_x^2} \mapsto S_x^2 &=&\delta_4= \frac{1}{4} (\openone - X \otimes X + Y \otimes Y + Z \otimes Z) \nonumber\\
&=& \left| \Phi^- \right\rangle \left\langle \Phi^-\right|, \nonumber \\
\tilde{S_y^2} \mapsto S_y^2 &=&\delta_5= \frac{1}{4} (\openone + X \otimes X - Y \otimes Y + Z \otimes Z) \nonumber\\
&=& \left| \Phi^+ \right\rangle \left\langle \Phi^+\right|, \nonumber \\
\tilde{S_z^2} \mapsto S_z^2 &=&\delta_6= \frac{1}{4} (\openone + X \otimes X + Y \otimes Y - Z \otimes Z) \nonumber\\
&=& \left| \Psi^+ \right\rangle \left\langle \Psi^+\right|, \nonumber \\
\tilde{A_x} \mapsto A_x &=&\delta_7= \frac{1}{2} (Y \otimes Z + Z \otimes Y), \nonumber \\
\tilde{A_y} \mapsto A_y &=&\delta_8= \frac{1}{2} (X \otimes Z + Z \otimes X), \nonumber \\
\tilde{A_z} \mapsto A_z &=&\delta_9= \frac{1}{2} (X \otimes Y + Y \otimes X),
\label{Spin44}
\end{eqnarray}
where we used the standard notation $\ket{\Phi^-},\ket{\Phi^+},\ket{\Psi^+}$ for the symmetric Bell states of two qubits.
Here, the $S_k^2$ operators are obtained by using the formula:
\begin{equation}
S_k^2 = \openone_{\textrm{sym}} - (S_k)^2,
\end{equation}
where $\openone_{\textrm{sym}} = \openone - \left| \Psi^- \right\rangle \left\langle \Psi^-\right|$ is the identity matrix on the symmetric subspace of two qubits and $(S_k)^2 = \frac{1}{2} (\openone \otimes \openone + K \otimes K)$ (where $K$ is the $k$-th Pauli matrix).
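This correspondence is easy to verify directly. In the following sketch (ours, for illustration) the operators \eqref{Spin43} are assembled from Pauli matrices and checked to satisfy the spin-$1$ commutation relation, and the shifted square $S_x^2$ is confirmed to coincide with the projector onto $\left|\Phi^-\right\rangle$ and with the Pauli expression in \eqref{Spin44}:
\begin{verbatim}
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
I2 = np.eye(2)
kron = np.kron

Sx = 0.5*(kron(I2, X) + kron(X, I2))
Sy = 0.5*(kron(I2, Y) + kron(Y, I2))
Sz = 0.5*(kron(I2, Z) + kron(Z, I2))
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j*Sz))          # spin-1 commutation relation

singlet = np.array([0, 1, -1, 0], dtype=complex)/np.sqrt(2)
one_sym = np.eye(4) - np.outer(singlet, singlet.conj())
phi_minus = np.array([1, 0, 0, -1], dtype=complex)/np.sqrt(2)

print(np.allclose(Sx @ Sx, 0.5*(np.eye(4) + kron(X, X))))              # (S_x)^2
print(np.allclose(one_sym - Sx @ Sx, np.outer(phi_minus, phi_minus.conj())))
print(np.allclose(one_sym - Sx @ Sx,
                  0.25*(np.eye(4) - kron(X, X) + kron(Y, Y) + kron(Z, Z))))
\end{verbatim}
All four checks print \texttt{True}.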
Using the above defined representation of a spin-$1$ basis, we can decompose any $n$-qutrit operator as a symmetric $2n$-qubit operator:
\begin{eqnarray}
\hat B=\sum_{i_1,\ldots ,i_n}B_{i_1,\ldots , i_n}(\delta_{i_1}\otimes\ldots\otimes\delta_{i_n}),
\label{BtoSpin2}
\end{eqnarray}
where the expansion coefficients are the same as in \eqref{BtoSpin1}.
So far we have discussed the relations between the spin-$1$ operators; however, the transformation rule for states is equally important. Let us denote some fixed qutrit orthonormal basis as $\{\tilde{\ket{0}},\tilde{\ket{1}},\tilde{\ket{2}}\}$. If we map the qutrit operators to symmetric two-qubit operators, the corresponding qutrit basis states are mapped to a basis $\{e_0,e_1,e_2\}$ consisting of some symmetric states of two qubits. These states have to transform in the same way under the action of the corresponding spin operators as the $\{\tilde{\ket{0}},\tilde{\ket{1}},\tilde{\ket{2}}\}$. If we fix the operator representations for $\{\tilde{S_x},\tilde{S_y},\tilde{S_z}\}=\{\gamma_1,\gamma_2,\gamma_3\}$, the basis for Pauli matrices in \eqref{Spin43}, and the qutrit standard basis $\{\tilde{\ket{0}},\tilde{\ket{1}},\tilde{\ket{2}}\}$, then the new qutrit basis $e_i$ can be derived from the equality of the matrix elements:
\begin{eqnarray}
\forall_{i,k=0,1,2}\,\,\forall_{j=1,\ldots,9}\,\,\, \tilde{\bra{i}}\gamma_j\tilde{\ket{k}}=\bra{e_i}\delta_j\ket{e_k}.
\label{SpinState}
\end{eqnarray}
Let us now fix the standard representation for Pauli matrices in \eqref{Spin43} and the standard qutrit basis for $\{\tilde{\ket{0}},\tilde{\ket{1}},\tilde{\ket{2}}\}$. Then the choice of spin-$1$ operators in the form \eqref{SpinBasis1} implies the following transformation rules for states:
\begin{eqnarray}
&&\tilde{\ket{0}}\mapsto \ket{00},\nonumber\\
&&\tilde{\ket{1}}\mapsto\frac{1}{\sqrt{2}}(\ket{01}+\ket{10}),\nonumber\\
&&\tilde{\ket{2}}\mapsto \ket{11},
\label{NEW3to2}
\end{eqnarray}
whereas the choice of the set \eqref{SpinBasis2} implies the following:
\begin{eqnarray}
&&\tilde{\ket{0}}\mapsto\frac{i}{\sqrt{2}}(\ket{00}-\ket{11}),\nonumber\\
&&\tilde{\ket{1}}\mapsto\frac{1}{\sqrt{2}}(\ket{00}+\ket{11}),\nonumber\\
&&\tilde{\ket{2}}\mapsto\frac{i}{\sqrt{2}}(\ket{01}+\ket{10}).
\label{OLD3to2}
\end{eqnarray}
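The condition \eqref{SpinState} can be confirmed numerically for the correspondence \eqref{NEW3to2}. The sketch below (ours) compares all matrix elements of the nine qutrit operators, taken in the representation \eqref{SpinBasis1}, with those of their two-qubit counterparts in the basis $\ket{00}$, $(\ket{01}+\ket{10})/\sqrt{2}$, $\ket{11}$:
\begin{verbatim}
import numpy as np

# qutrit side: the spin basis built from the matrices (3)
s = 1/np.sqrt(2)
Sx3 = s*np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy3 = s*np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz3 = np.diag([1, 0, -1]).astype(complex)
I3 = np.eye(3)
anti = lambda P, Q: P @ Q + Q @ P
gamma = [Sx3, Sy3, Sz3, I3 - Sx3 @ Sx3, I3 - Sy3 @ Sy3, I3 - Sz3 @ Sz3,
         anti(Sy3, Sz3), anti(Sx3, Sz3), anti(Sx3, Sy3)]

# two-qubit side: the operators delta_1..delta_9 built from Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
I2, kron = np.eye(2), np.kron
Sx = 0.5*(kron(I2, X) + kron(X, I2))
Sy = 0.5*(kron(I2, Y) + kron(Y, I2))
Sz = 0.5*(kron(I2, Z) + kron(Z, I2))
singlet = np.array([0, 1, -1, 0], dtype=complex)/np.sqrt(2)
one_sym = np.eye(4) - np.outer(singlet, singlet.conj())
delta = [Sx, Sy, Sz, one_sym - Sx @ Sx, one_sym - Sy @ Sy, one_sym - Sz @ Sz,
         0.5*(kron(Y, Z) + kron(Z, Y)), 0.5*(kron(X, Z) + kron(Z, X)),
         0.5*(kron(X, Y) + kron(Y, X))]

# images of the qutrit basis states under the map above
e0 = np.array([1, 0, 0, 0], dtype=complex)
e1 = np.array([0, 1, 1, 0], dtype=complex)/np.sqrt(2)
e2 = np.array([0, 0, 0, 1], dtype=complex)
e = np.vstack([e0, e1, e2])

print(all(np.allclose(g, e.conj() @ d @ e.T) for g, d in zip(gamma, delta)))  # True
\end{verbatim}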
\section{CGLMP Bell operator in the symmetric two-qubit operators representation}
The CGLMP inequality in its simplest version \cite{CGLMPorigin, CGLMPacin} is a Bell inequality for two observers $\mathcal A$ and $\mathcal B$, each having two measurement settings $\{A_1,A_2\}$, $\{B_1,B_2\}$ with $3$ outcomes, labeled as $\{0,1,2\}$. The inequality is originally presented as a constraint on a linear function for two-outcome probabilities \cite{CGLMPacin}:
\begin{eqnarray}
&&I_3=P(A_1=B_1)+P(A_2+1=B_1)+P(A_2=B_2)+\nonumber\\
&&P(A_1=B_2)-P(A_1=B_1-1)-P(A_2=B_1)\nonumber\\
&&-P(A_2=B_2-1)-P(A_1-1=B_2).
\label{originCGLMP}
\end{eqnarray}
In the case of classical probabilities (admitting a joint probability distribution) the above value is bounded from both sides as follows \cite{Chen06}:
\begin{eqnarray}
-4&\leq& I_3\leq 2.
\label{CGLMPbound}
\end{eqnarray}
The effective way to discuss a maximal quantum violation of a Bell inequality by some quantum system is the method of a Bell operator \cite{Braunstein92}. This method relies on evaluation of the value of the body of a Bell inequality (in our case $I_3$ \eqref{originCGLMP}) for a given quantum state $\rho$ and given measurement settings $\{\hat A_1,\hat A_2,\hat B_1,\hat B_2\}$ by the Born rule:
\begin{eqnarray}
I_3(\rho)=\mathrm{Tr}(\hat B(\hat A_1,\hat A_2,\hat B_1,\hat B_2) \rho).
\label{BellOp}
\end{eqnarray}
The Bell operator $\hat B$ can be found by summing the single-run probabilities:
\begin{eqnarray}
P(A_k=i,B_m=j)&=&\mathrm{Tr}((\proj{i}\otimes\proj{j})\rho),
\label{BellOp1}
\end{eqnarray}
using the relation $P(A=B+k)=\sum_{j=0}^{2}P(A=j+k\textrm{ mod } 3,B=j)$ \cite{CGLMPorigin}.
The maximal quantum violation of a Bell inequality for a given set of settings equals the largest eigenvalue of the Bell operator, and the optimal state is its corresponding eigenstate.
The CGLMP \eqref{originCGLMP} Bell operator for the optimal settings has the following form \cite{CGLMPacin}:
\begin{equation}
\hat B = \left(\begin{array}{ccccccccc}
0& 0& 0& 0& \frac{2}{\sqrt{3}}& 0& 0& 0& 2 \\
0& 0& 0& 0& 0& \frac{2}{\sqrt{3}}& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& \frac{2}{\sqrt{3}}& 0 \\
\frac{2}{\sqrt{3}}& 0& 0& 0& 0& 0& 0& 0& \frac{2}{\sqrt{3}}\\
0& \frac{2}{\sqrt{3}}& 0& 0& 0& 0& 0& 0& 0 \\
0& 0& 0& 0& 0& 0& 0& 0& 0 \\
0& 0& 0& \frac{2}{\sqrt{3}}& 0& 0& 0& 0& 0 \\
2& 0& 0& 0& \frac{2}{\sqrt{3}}& 0& 0& 0& 0\\
\end{array}
\right).
\label{BellOpC}
\end{equation}
The highest eigenvalue of the Bell operator equals $1+\sqrt{\frac{11}{3}}\approx 2.915$ and the corresponding eigenstate is:
\begin{eqnarray}
\ket{\psi_{max}}=a \tilde{|00 \rangle} + b \tilde{|11 \rangle} + a \tilde{|22\rangle},
\label{CGLMPmaxState}
\end{eqnarray}
where $a = \frac{5 \sqrt{3} + 3 \sqrt{11}}{\sqrt{462 + 78 \sqrt{33}}} \approx 0.617$ and $b = \frac{ 9 + \sqrt{33}}{\sqrt{462 + 78 \sqrt{33}}} \approx 0.489$.
In the case of a maximally entangled two-qutrit state $\ket{\psi_{ME}}=\frac{1}{\sqrt{3}}(\tilde{\ket{00}}+\tilde{\ket{11}}+\tilde{\ket{22}})$ the inequality \eqref{CGLMPbound} is violated slightly less: $I_3(\ket{\psi_{ME}})\approx 2.873$, leading to an inconsistency between maximal quantum entanglement and maximal nonclassicality in terms of violation of the optimal Bell inequality.
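These numbers are easily reproduced numerically; the following sketch (our own check, not part of the original derivation) diagonalizes the matrix \eqref{BellOpC} and evaluates it on the maximally entangled state:
\begin{verbatim}
import numpy as np

r = 2/np.sqrt(3)
B = np.zeros((9, 9))
for i, j, v in [(0, 4, r), (0, 8, 2), (1, 5, r), (3, 7, r), (4, 8, r)]:
    B[i, j] = B[j, i] = v

vals, vecs = np.linalg.eigh(B)
print(vals[-1], 1 + np.sqrt(11/3))        # both ~2.9149
print(np.round(np.abs(vecs[:, -1]), 3))   # ~0.617 on |00>,|22>, ~0.489 on |11>

psi_me = np.zeros(9)
psi_me[[0, 4, 8]] = 1/np.sqrt(3)
print(psi_me @ B @ psi_me)                # ~2.8727 for the maximally entangled state
\end{verbatim}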
In order to resolve the paradox, we transform the Bell operator \eqref{BellOpC} to spin-$1$ bases. Using the transformation \eqref{BtoSpin1} we obtain the following tensor form of the CGLMP Bell operator:
\begin{eqnarray}
&&\hat B=\frac{2}{\sqrt{3}} (\tilde{S_x} \otimes \tilde{S_x} - \tilde{S_y} \otimes \tilde{S_y})+\tilde{S_x^2} \otimes \tilde{S_x^2}\nonumber\\
&&+ \tilde{S_y^2} \otimes \tilde{S_y^2} - \tilde{S_x^2} \otimes \tilde{S_y^2}- \tilde{S_y^2} \otimes \tilde{S_x^2}-\tilde{A_z} \otimes \tilde{A_z}.
\label{BellOpSpin1}
\end{eqnarray}
Note that since $\tilde{A_z}$ is defined as the anticommutator of $\tilde{S_x}$ and $\tilde{S_y}$, the above operator is built solely with the spin operators corresponding to the $x-y$ plane. Further we transform the Bell operator to the symmetric qubit basis using \eqref{BtoSpin2}:
\begin{eqnarray}
\hat B=\frac{1}{4} \bigg(\frac{2}{\sqrt{3}} \Big\{
X\otimes \openone\otimes X\otimes \openone
&-& Y\otimes \openone\otimes Y\otimes \openone \nonumber\\
+ X\otimes \openone\otimes \openone\otimes X
&-& Y\otimes \openone\otimes \openone\otimes Y \nonumber\\
+ \openone\otimes X\otimes X\otimes \openone
&-& \openone\otimes Y\otimes Y\otimes \openone \nonumber\\
+ \openone\otimes X\otimes \openone\otimes X
&-& \openone\otimes Y\otimes \openone\otimes Y\Big\} \nonumber\\
+ X\otimes X\otimes X\otimes X &+& Y\otimes Y\otimes Y\otimes Y \nonumber\\- Y\otimes Y\otimes X\otimes X &-&
X\otimes X\otimes Y\otimes Y \nonumber\\
- Y\otimes X\otimes Y\otimes X &-& Y\otimes X\otimes X\otimes Y \nonumber\\ - X\otimes Y\otimes X\otimes Y &-&
X\otimes Y\otimes Y\otimes X\bigg).\nonumber\\
\label{BellOp2q}
\end{eqnarray}
The above operator, as a $4$-qubit operator, can now be related to known Bell operators for qubit Bell inequalities. Indeed, the first part of the operator (enclosed in $\{\}$ braces) corresponds to Bell operators for CHSH inequalities \cite{CHSH, Braunstein92} for all four pairs of qubits, whereas the second part corresponds to a Bell operator of a four-qubit Mermin inequality \cite{Mermin90}. The structure of the Bell operator \eqref{BellOp2q} is schematically presented in Fig.~\ref{BellOperators}.
\begin{figure}
\includegraphics[width=0.80\columnwidth]{M-CHSH.eps}
\caption{Schematic presentation of a CGLMP Bell operator \eqref{BellOp2q} in the $4$-qubit representation. The operator consists of five parts, the one corresponding to a $4$-qubit Mermin's inequality, and $4$ corresponding to CHSH inequalities for all pairs of qubits.}
\label{BellOperators}
\end{figure}
Since the Mermin inequality is maximally violated by a GHZ state, whereas CHSH inequalities are maximally violated by two-qubit Bell states, one can expect that the eigenvector corresponding to the largest eigenvalue of \eqref{BellOp2q} is a superposition of a GHZ state and two Bell states (due to the monogamy of violation of CHSH inequalities \cite{Toner06}, only two of the four CHSH inequalities, corresponding to separate pairs of qubits, can be violated maximally at the same time):
\begin{equation}
|\psi (p) \rangle = \sqrt{p} |GHZ\rangle + \sqrt{1-p} |\psi^+\rangle|\psi^+\rangle,
\label{supGHZBell}
\end{equation}
for which:
\begin{eqnarray}
{\rm Tr}(\hat B |\psi(p)\rangle\langle\psi(p)|) = 2p + 4 \sqrt{\frac{2 p (1-p)}{3}}.
\label{EigB2q}
\end{eqnarray}
It turns out that the quantity \eqref{EigB2q} is maximized for $p_{\textrm{max}}=\frac{1}{22}(11+\sqrt{33})\approx 0.761$, and $\ket{\psi(p_{\textrm{max}})}$ is the representation, in the symmetric two-qubit basis, of the state \eqref{CGLMPmaxState} which maximally violates the CGLMP inequality. Indeed, by taking the set of transformations \eqref{NEW3to2} one easily obtains the state \eqref{CGLMPmaxState} from $\ket{\psi(p_{\textrm{max}})}$. The maximally entangled state of two qutrits corresponds via transformations \eqref{NEW3to2} to the state $\ket{\psi(\frac{2}{3})}$:
\begin{eqnarray}
\label{psi4}
&&|\psi\left(\tfrac{2}{3}\right)\rangle=\sqrt{\frac{2}{3}}|GHZ\rangle + \sqrt{\frac{1}{3}} |\psi^+\rangle |\psi^+\rangle\nonumber\\
&&\equiv \frac{1}{\sqrt{3}}(\tilde{|00\rangle}+\tilde{|11\rangle}+\tilde{|22\rangle}).
\end{eqnarray}
The symmetric two-qubit form of a CGLMP Bell operator \eqref{BellOp2q} explains, why the maximally entangled state of two qutrits does not give rise to a maximal violation: the CGLMP inequality is violated by the state from the family \eqref{supGHZBell}, and the optimal $p$, which maximizes the violation \eqref{EigB2q}, is determined by the structure constants of the operator \eqref{BellOp2q}. In this representation the maximally entangled state of two-qutrits seems to be suboptimal.
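The one-parameter optimization in \eqref{EigB2q} is elementary and can be confirmed, for instance, with the following short sketch (ours):
\begin{verbatim}
import numpy as np

p = np.linspace(0, 1, 200001)
val = 2*p + 4*np.sqrt(2*p*(1 - p)/3)
k = np.argmax(val)
print(p[k], (11 + np.sqrt(33))/22)              # both ~0.7611
print(val[k], 1 + np.sqrt(11/3))                # both ~2.9149
print(2*(2/3) + 4*np.sqrt(2*(2/3)*(1/3)/3))     # ~2.8727 at p = 2/3
\end{verbatim}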
One additional comment is necessary here. The Bell operator \eqref{BellOp2q}, which represents a CGLMP inequality, does not give rise to a four-qubit Bell inequality. Namely, the corresponding correlation-based Bell inequality for $4$ parties, $2$ settings and $2$ outcomes:
\begin{eqnarray}
&&1-\frac{4}{\sqrt{3}}\leq \frac{1}{4}\bigg(\frac{2}{\sqrt{3}} \Big(A_1 C_1 - A_2 C_2 + A_1 D_1 - A_2 D_2 \nonumber\\
&&+ B_1 C_1 - B_2 C_2 + B_1 D_1 - B_2 D_2\Big) \nonumber \\
&&+ A_1 B_1 C_1 D_1 + A_2 B_2 C_2 D_2 - A_1 B_1 C_2 D_2 - A_1 B_2 C_1 D_2 \nonumber \\
&&- A_2 B_1 C_1 D_2 - A_1 B_2 C_2 D_1 - A_2 B_1 C_2 D_1 -
A_2 B_2 C_1 D_1\bigg) \nonumber \\ &&\leq 1+ \frac{4}{\sqrt{3}},
\label{BellIn4q}
\end{eqnarray}
is not violated by any quantum state. Therefore, all the above considerations of the Bell operator \eqref{BellOp2q} within the symmetric-two-qubit representation must be treated as a tool for analyzing the physical properties of qutrits.
Finally we show that the symmetric-two-qubit representation of a CGLMP Bell operator allows for deriving the maximal quantum violation (the Tsirelson bound) from the complementarity of quantum correlations.
Let us first introduce the following notation:
\begin{eqnarray}
\alpha &=& \langle X\otimes \openone\otimes X\otimes \openone \rangle
=\langle X\otimes \openone\otimes \openone\otimes X\rangle \\ \nonumber
&=&\langle \openone\otimes X\otimes X\otimes \openone \rangle
=\langle \openone\otimes X\otimes \openone\otimes X \rangle \\
\beta &=& \langle Y\otimes \openone\otimes \openone\otimes Y \rangle
=\langle Y\otimes \openone\otimes Y\otimes \openone \rangle \\ \nonumber
&=&\langle \openone\otimes Y\otimes Y\otimes \openone \rangle
=\langle \openone\otimes Y\otimes \openone\otimes Y \rangle \\
\tau &=& \langle Y\otimes X\otimes Y\otimes X \rangle
=\langle Y\otimes X\otimes X\otimes Y\rangle\\ \nonumber
&=&\langle X\otimes Y\otimes X\otimes Y\rangle
=\langle X\otimes Y\otimes Y\otimes X \rangle \\
\epsilon &=& \langle X\otimes X\otimes Y\otimes Y\rangle=\langle Y\otimes Y\otimes X\otimes X\rangle \\
1 &=& \langle X\otimes X\otimes X\otimes X \rangle= \langle Y\otimes Y\otimes Y\otimes Y \rangle
\end{eqnarray}
Then the mean value of the Bell operator \eqref{BellOp2q} reads:
\begin{equation}
\langle B \rangle = \frac{1}{4} \left(\frac{8}{\sqrt{3}} (\alpha - \beta) - 4 \tau -
2 \epsilon + 2\right)
\end{equation}
We follow the approach of \cite{BellComplementarity}, in which one finds sets of mutually maximally anticommuting operators. It is shown there that the sums of squares of mean values of such operators are upper-bounded by $1$ due to the complementarity of correlations. It can be easily shown that the operator sets $\{\alpha,\beta,\epsilon\}$, $\{\alpha,\tau\}$ and $\{\beta,\tau\}$ are maximally anticommuting; therefore the following constraints are valid:
\begin{eqnarray}
&&\alpha^2 + \beta^2 + \epsilon^2 \leq 1 ,\nonumber\\
&&\alpha^2 + \tau^2 \leq 1,\nonumber\\
&&\beta^2 + \tau^2 \leq 1.
\label{cons1}
\end{eqnarray}
We add the following constraint:
\begin{eqnarray}
\epsilon - 2 \tau \leq 1,
\label{cons2}
\end{eqnarray}
which follows from the nonnegativity of the expectation value of any projector. Note that the following expression:
\begin{eqnarray}
\Pi&=&2(\openone^{\otimes 4}) - X \otimes X \otimes X \otimes X - X \otimes X \otimes Y \otimes Y \nonumber \\ &+& Y \otimes X \otimes Y \otimes X + Y \otimes X \otimes X \otimes Y,
\end{eqnarray}
is an operator with eigenvalues $4$ and $0$, therefore it is proportional to a projector. The expectation value of this expression:
$\langle\Pi\rangle=1-\epsilon+2\tau$ is greater than zero and hence we get (\ref{cons2}).
We maximize the mean of the Bell operator under the constraints \eqref{cons1} and \eqref{cons2}:
\begin{equation}
\max_{\alpha, \beta, \tau, \epsilon } \langle B \rangle = \frac{1}{4} \left(4 + 4 \sqrt{\frac{11}{3}}\right) \approx 2.915,
\end{equation}
which exactly gives the maximal quantum violation.
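The constrained maximization can also be performed numerically. The sketch below is our own illustration; it relies on SciPy's SLSQP routine with a few random restarts (the feasible region is convex and the objective linear, so any local maximum is global) and recovers the same bound:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def neg_mean_B(v):
    a, b, t, e = v
    return -(8/np.sqrt(3)*(a - b) - 4*t - 2*e + 2)/4

constraints = [
    {'type': 'ineq', 'fun': lambda v: 1 - v[0]**2 - v[1]**2 - v[3]**2},
    {'type': 'ineq', 'fun': lambda v: 1 - v[0]**2 - v[2]**2},
    {'type': 'ineq', 'fun': lambda v: 1 - v[1]**2 - v[2]**2},
    {'type': 'ineq', 'fun': lambda v: 1 - (v[3] - 2*v[2])},
]

rng = np.random.default_rng(0)
best = -np.inf
for _ in range(20):
    x0 = rng.uniform(-1, 1, size=4)
    res = minimize(neg_mean_B, x0, method='SLSQP',
                   bounds=[(-1, 1)]*4, constraints=constraints)
    if res.success:
        best = max(best, -res.fun)
print(best, 1 + np.sqrt(11/3))     # both ~2.9149
\end{verbatim}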
\section{Maximal entanglement \emph{vs} maximal non-classicality beyond CGLMP}
The two-qutrit CGLMP inequality \cite{CGLMPorigin} found its direct generalization to the case of three higher dimensional parties \cite{AcinChen04, Chen08}. Especially interesting is the case of a three-qutrit inequality \cite{AcinChen04}:
\begin{eqnarray}
&&P(A_1+B_1+C_1=0)+P(A_1+B_2+C_2=1)+\nonumber\\
&&P(A_2+B_1+C_2=0)+P(A_2+B_2+C_1=1)+\nonumber\\
&&2P(A_2+B_2+C_2=0)-P(A_2+B_1+C_1=2)-\nonumber\\
&&P(A_1+B_2+C_1=2)-P(A_1+B_1+C_2=2)\leq 3,\nonumber\\
\label{3CGLMP}
\end{eqnarray}
the features of which are very similar to those of the original CGLMP inequality: it is tight, and its maximal violation of $4.372$ arises for a slightly non-maximally entangled state analogous to \eqref{CGLMPmaxState}:
\begin{eqnarray}
\ket{\psi_{max}}=a \tilde{|000 \rangle} + b \tilde{|111 \rangle} + a \tilde{|222\rangle},
\label{MaxState3q}
\end{eqnarray}
where $a = \frac{5 \sqrt{3} + 3 \sqrt{11}}{\sqrt{462 + 78 \sqrt{33}}} \approx 0.617$ and $b = \frac{ 9 + \sqrt{33}}{\sqrt{462 + 78 \sqrt{33}}} \approx 0.489$.
For a maximally entangled three-qutrit state one obtains the violation of $4.333$. Using techniques analogous to the CGLMP case we can derive the Bell operator for this inequality and translate it to the two-qubit symmetric basis \eqref{BtoSpin2}. Although the form of the Bell operator in the spin-$1$ basis is very complicated in this case (see Appendix \ref{App1}), we can easily discuss the form of an optimal state giving the maximal violation in the $6$-qubit representation. In full analogy to \eqref{supGHZBell}, the maximally entangled state of three qutrits translates under transformations \eqref{NEW3to2} to:
\begin{eqnarray}
&&\frac{1}{\sqrt{3}}(\ket{\tilde{0}}^{\otimes 3}+\ket{\tilde{1}}^{\otimes 3}+\ket{\tilde{2}}^{\otimes 3})\mapsto \sqrt{\frac{2}{3}} |GHZ\rangle + \sqrt{\frac{1}{3}} |\psi^+\rangle^{\otimes 3}.\nonumber\\
\end{eqnarray}
If we optimize the violation over all possible superpositions of $6$-qubit GHZ state and $3$ Bell states:
\begin{eqnarray}
\ket{\psi(p)}=\sqrt{p} |GHZ\rangle + \sqrt{1-p} |\psi^+\rangle^{\otimes 3},
\end{eqnarray}
we obtain the maximal value of $4.345$ for $p\approx 0.845$, which is slightly larger, but still suboptimal. It turns out that in order to find the maximal violation we have to search over the following family of states:
\begin{eqnarray}
\ket{\psi(p,\theta)}=\sqrt{p} \left(\sin(\theta)|0\rangle^{\otimes 6}+\cos(\theta)|1\rangle^{\otimes 6}\right) + \sqrt{1-p} |\psi^+\rangle^{\otimes 3},\nonumber\\
\end{eqnarray}
which is a superposition of a generalized GHZ state (with unequal weights) and the product of $3$ Bell states. The maximal violation is attained for $\theta\approx 0.870$ and $p\approx 0.841$, which reproduces the state \eqref{MaxState3q}.
In this case the interpretation of the optimal form of a $6$-qubit equivalent of a $3$-qutrit state is not so straightforward as in the case of CGLMP.
\section{Conclusions}
In this work we discussed various aspects of a nonclassicality of qutrit states in terms of violation of tight Bell inequalities. We introduced a new method of analyzing the maximal violation of a Bell inequality by transforming its Bell operator to a local basis of symmetric two-qubit operators. In this way the analysis of a Bell inequality for $n$ qutrits is translated into the analysis of a corresponding Bell inequality for $2n$ qubits. Using this method in the case of a CGLMP inequality we resolved the paradox of a maximal violation by a non-maximally entangled two-qutrit state. Moreover, we were able to derive the Tsirelson bound for the CGLMP inequality solely from the complementarity of correlations, which has never been observed before for correlations between qutrits.
\section{Acknowledgements}
MM, PK, WL and AK are supported by NCN Grant No. 2014/14/M/ST2/00818. KK is supported by NCN Grant No. 2012/05/E/ST2/02352.
\section{Introduction}
Let $C$ be a non-empty finite set, and
$\Gamma$ a subgroup of the symmetric group $S(C)$.
Given a bijection $f:A \times C \to B \times C$,
the problem of \emph{$\Gamma$-equivariant division} is to
find a \emph{quotient bijection} $h:A \to B$
respecting whatever symmetries $f$ may have under
the action of $S(A) \times S(B) \times \Gamma$.
Specifically,
given
\[
(\alpha,\beta,\gamma) \in S(A) \times S(B) \times \Gamma
,
\]
let
\[
f_{\alpha,\beta,\gamma} = (\alpha^{-1} \times \gamma^{-1}) \lhd f \lhd (\beta \times \gamma)
,
\]
and
\[
h_{\alpha,\beta} = \alpha^{-1} \lhd h \lhd \beta
,
\]
where the symbol $\lhd$, pronounced `then',
represents the composition of functions
in the natural order, with first things first:
\[
(p \lhd q)(x)=q(p(x))
.
\]
We say that $h$ is a \emph{$\Gamma$-equivariant quotient} of $f$
if whenever $f_{\alpha,\beta,\gamma} = f$ we have $h_{\alpha,\beta} = h$.
$\Gamma$ is \emph{fully cancelling} if
every bijection $f:A \times C \to B \times C$
has a $\Gamma$-equivariant quotient,
and \emph{finitely cancelling} if this is true providing $A,B$ are finite.
Feldman and Propp
\cite{fp}
looked at the finite case.
They showed that
the subgroup $S(C,\star)$ fixing a designated basepoint $\star \in C$
is finitely cancelling,
but unless $C$ is a singleton, the full group $S(C)$ is
not.
Going further, they gave a beautiful proof that
$\Gamma$ is finitely cancelling just if it has a globally fixed point.
Here we are interested in the infinite case.
The general problem of division is to produce
from $f:A \times C \to B \times C$
\emph{any} quotient bijection $h:A \to B$,
equivariant or not.
Known division methods
that eschew the Axiom of Choice
(cf. \cite{conwaydoyle:three,doyleqiu:four,schwartz:four})
produce quotients
that respect any symmetries under the action of $S(A) \times S(B)$,
so they are at least $S_0(C)$-equivariant, where $S_0(C)$ is the
trivial subgroup of $S(C)$.
But these methods depend on fixing an ordering of $C$, suggesting that
this is the most equivariance we can hope for.
And indeed, we will show that $\Gamma$ is fully cancelling just if it
is the trivial subgroup $S_0(C)$.
\section{Finitely cancelling}
For starters,
Feldman and Propp showed that if you specify a base point $* \in C$,
the subgroup $S(C,*)$ of $S(C)$ that fixes $*$
is finitely cancelling.
Here's the argument.
For $c \in C$ define a map (not generally a bijection)
\[
f\row{c}:A \to B
,
\]
\[
f\row{c}(a) = (f \lhd \pi_1)(a,c)
,
\]
where
\[
\pi_1((x,y)) = x
.
\]
Let
\[
p(a) = f\row{*}(a) = (f \lhd \pi_1)(a,*)
\]
and
\[
q(b) = f^{-1}\row{*}(b) = (f^{-1} \lhd \pi_1)(b,*)
.
\]
Because $A$ is finite, the composition $p \lhd q$ has some cycles.
Let $X \subset A$ be the union of all these cycles.
The restriction $p|X$ is a partial bijection from $A$ to $B$.
Subtract $p|X \times {\mbox{id}}_C$ from $f$
(cf. \cite{conwaydoyle:three,doyleqiu:four,fp})
to get a bijection from $(A-X) \times C$ to
$(B-p(X)) \times C$.
Proceed by recursion to get a bijection $\mathrm{FP}(f,*): A \to B$.
To sum up:
\begin{prop}[Feldman-Propp] \label{star}
If some $* \in C$ is fixed by every
$g \in \Gamma$, $\Gamma$ is finitely cancelling.
\end{prop}
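To make the recursion concrete, here is a small Python prototype (ours; the helper names and the toy random instance are purely illustrative). It follows the steps above: read off the basepoint rows $p$ and $q$, keep the elements lying on cycles of $p \lhd q$, subtract the resulting partial bijection, and recurse.
\begin{verbatim}
import random

def subtract(f, g):
    # Subtract the partial bijection g from the bijection f: for a point outside
    # the domain of g, follow f, and while the value lies in the range of g,
    # reroute through g^{-1} and f again; this stops after finitely many steps.
    g_inv = {v: k for k, v in g.items()}
    h = {}
    for s in f:
        if s in g:
            continue
        t = f[s]
        while t in g_inv:
            t = f[g_inv[t]]
        h[s] = t
    return h

def fp_divide(f, A, B, C, star):
    # Feldman-Propp quotient of a bijection f : A x C -> B x C (a dict keyed by
    # pairs), using the basepoint star in C.
    A, B = set(A), set(B)
    h = {}
    while A:
        f_inv = {v: k for k, v in f.items()}
        p = {a: f[(a, star)][0] for a in A}      # row of f at the basepoint
        q = {b: f_inv[(b, star)][0] for b in B}  # row of f^{-1} at the basepoint
        X = set(A)                               # union of cycles of a -> q(p(a)):
        for _ in range(len(A)):                  # the eventual image of that map
            X = {q[p[x]] for x in X}
        h.update({x: p[x] for x in X})
        g = {(x, c): (p[x], c) for x in X for c in C}
        f = subtract(f, g)
        A -= X
        B -= {p[x] for x in X}
    return h

# toy instance: a random bijection A x C -> B x C with |A| = |B| = 4, |C| = 3
random.seed(0)
A, B, C = ['a0', 'a1', 'a2', 'a3'], ['b0', 'b1', 'b2', 'b3'], [0, 1, 2]
targets = [(b, c) for b in B for c in C]
random.shuffle(targets)
f = dict(zip([(a, c) for a in A for c in C], targets))
h = fp_divide(f, A, B, C, star=0)
print(sorted(h.items()))
print(sorted(h.values()) == sorted(B))           # h is a bijection from A onto B
\end{verbatim}
The subtraction step is the only delicate point: an element outside the cycles is rerouted through $g^{-1}$ and $f$ until it lands outside the range of $g$, which always happens after finitely many steps when $A$ is finite.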
We can collect the various bijections $\mathrm{FP}(f,c)$ for $c \in C$
into a new bijection
\[
\bar{f}:A \times C \to B \times C
,
\]
\[
\bar{f}((a,c)) = (\mathrm{FP}(f,c)(a),c)
.
\]
This new bijection $\bar{f}$ satisfies
\[
\bar{f}((a,c)) = (\bar{f}\row{c}(a),c)
.
\]
We will call any bijection that preserves the second coordinate in this
way a \emph{parallel bijection}.
By combining all the bijections $\mathrm{FP}(f,c)$ in this parallelization $\bar{f}$,
we obviate the need to choose a basepoint, so Proposition \ref{star}
implies (and follows from):
\begin{prop} \label{parallel}
To a finite bijection
$f: A \times C \to B \times C$
we can associate in a fully equivariant manner a new bijection $\bar{f}$ with
\[
\bar{f}((a,c)) = (\bar{f}_c(a),c)
\]
where $\bar{f}_c:A \to B$ is a bijection for each $c \in C$.
\end{prop}
In light of Proposition \ref{parallel},
$\Gamma \subset S(C)$ is finitely cancelling just if
any finite parallel bijection has a $\Gamma$-equivariant quotient.
Indeed, to any finite $f$ we can associate its parallelization $\bar{f}$;
if $\bar{f}$ has a $\Gamma$-equivariant quotient then so does $f$;
if it does not, then $\Gamma$ is not cancelling.
This does not necessarily mean that
in every finite division problem we can safely parallelize
$f$ as our first step.
It could be that $f$ has a $\Gamma$-equivariant quotient
while its parallelization $\bar{f}$
does not.
(See \ref{prob:parallel}.)
Proposition \ref{parallel} fails in the infinite case;
this fact underlies the counterexamples we will produce there.
\section{Not finitely cancelling}
We begin with counterexamples in the finite case,
all obtained using the method of Feldman and Propp.
The simplest case is $C=\{a,b\}$.
Take $A = \{x,y\}$, $B=\{1,2\}$, and
\[
\begin{gathered}
f =
\begin{array}{l|ll}
&x&y\\
\hline
a&1a&2a\\
b&2b&1b
\end{array}
\\
(a,b)(1,2)
\end{gathered}
\]
Here
$A \times C$ is the set of locations in a matrix with rows indexed by $C$
and columns indexed by $A$.
An entry $1a$ represents $(1,a) \in B \times C$, etc.
The $(1,2)(a,b)$ underneath indicates a symmetry of $f$,
obtained by taking
$\alpha$ to be the identity, $\beta=(1,2)$, and $\gamma=(a,b)$.
Performing these substitutions yields
\[
f_{\alpha,\beta,\gamma}=
\begin{array}{l|ll}
&x&y\\
\hline
b&2b&1b\\
a&1a&2a
\end{array}
\]
This is just a different representation of $f$,
as we see by swapping the rows,
so $f_{\alpha,\beta,\gamma}=f$.
But we can't have $h_{\alpha,\beta}=h$:
since $\alpha$ is the identity, this would force $h = h \lhd \beta$,
which is impossible because $\beta=(1,2)$ moves every point of $B$.
So this $f$ has no $S(C)$-equivariant quotient,
hence $S(C)$ is not finitely cancelling.
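For readers who like to see this checked mechanically, the following Python sketch (ours) enumerates all symmetries of this $f$ and all candidate quotients, and confirms that no quotient respects every symmetry.
\begin{verbatim}
from itertools import permutations

A, B, C = ['x', 'y'], [1, 2], ['a', 'b']
f = {('x', 'a'): (1, 'a'), ('y', 'a'): (2, 'a'),
     ('x', 'b'): (2, 'b'), ('y', 'b'): (1, 'b')}

def transformed(f, alpha, beta, gamma):
    # f_{alpha,beta,gamma} = (alpha^{-1} x gamma^{-1}) then f then (beta x gamma)
    ai = {v: k for k, v in alpha.items()}
    gi = {v: k for k, v in gamma.items()}
    return {(a, c): (beta[f[(ai[a], gi[c])][0]], gamma[f[(ai[a], gi[c])][1]])
            for a in A for c in C}

def transformed_h(h, alpha, beta):
    # h_{alpha,beta} = alpha^{-1} then h then beta
    ai = {v: k for k, v in alpha.items()}
    return {a: beta[h[ai[a]]] for a in A}

def perms(S):
    return [dict(zip(S, p)) for p in permutations(S)]

symmetries = [(al, be, ga) for al in perms(A) for be in perms(B)
              for ga in perms(C) if transformed(f, al, be, ga) == f]
quotients = [dict(zip(A, p)) for p in permutations(B)]
good = [h for h in quotients
        if all(transformed_h(h, al, be) == h for al, be, ga in symmetries)]
print(len(symmetries), good)     # 4 symmetries, but no equivariant quotient: []
\end{verbatim}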
We can simplify the display of this example as follows:
\[
\begin{array}{l|ll}
a&1&2\\
b&2&1
\end{array}
\]
\[
(a,b)(1,2)
\]
We don't need column labels as these aren't being permuted;
leaving out the labels from $C$ in the table entries
indicates this is a parallel bijection.
The example extends in an obvious way to show that $S(C)$ is not
finitely cancelling if $|C|>1$.
For example, take $C=\{a,b,c\}$, and
\[
\begin{gathered}
\begin{array}{l|lll}
a&1&2&3\\
b&2&3&1\\
c&3&1&2
\end{array}
\\
(a,b,c)(1,2,3)
\end{gathered}
\]
These examples come from the regular representation of a cyclic group.
A similar construction works for any finite group $G$.
(Cf. \ref{regrep} below.)
While we don't need it for what is to follow, we pause to illustrate
the construction in the case of the
noncyclic group $C_2 \times C_2$,
whose regular representation is the Klein 4-group
$\{(a)(b)(c)(d),(a,b)(c,d),(a,c)(b,d),(a,d)(b,c)\}$:
\[
\begin{gathered}
\begin{array}{l|llll}
a&1&2&3&4\\
b&2&1&4&3\\
c&3&4&1&2\\
d&4&3&2&1
\end{array}
\\
(a,b)(c,d)(1,2)(3,4)
\\
(a,c)(b,d)(1,3)(2,4)
\end{gathered}
\]
This bijection is more symmetrical than we need to show this $\Gamma$ is
not cancelling,
because $\Gamma$ contains the two-element subgroup generated by
$(a,b)(c,d)$,
and to show that this subgroup is noncancelling we can just duplicate our first example
above:
\[
\begin{gathered}
\begin{array}{l|ll}
a&1&2\\
b&2&1\\
c&1&2\\
d&2&1
\end{array}
\\
(a,b)(c,d)(1,2)
\end{gathered}
\]
By now it is clear how to handle any nontrivial permutation
all of whose cycles have the same length.
Such permutations are called \emph{semiregular}.
A permutation group is semiregular just if every non-trivial
element is semiregular.
(Such groups are also called `fixed point free', but this invites
confusion with groups with no globally fixed point.)
To sum up:
\begin{prop}[Feldman-Propp] \label{bad}
No permutation group
containing a nontrivial semiregular subgroup
is finitely cancelling.
\end{prop}
Going further, Feldman and Propp give a beautiful algebraic proof of
the following:
\begin{theorem}
[Feldman-Propp]
\label{finitejustif}
A permutation group is finitely cancelling just if it has a globally
fixed point.
\end{theorem}
For further discussion, see \ref{finitecase} below.
For now, we're set:
We already have the tools to dispose of the infinite case.
\section{Not fully cancelling} \label{generalcase}
When $A$ and hence $B$ may be infinite,
known division methods depend on fixing an ordering for $C$.
This raises the suspicion that no nontrivial permutation group
can be fully cancelling.
\begin{theorem} \label{infinite}
A permutation group is fully cancelling
just if it is trivial.
\end{theorem}
In other words, if we demand complete equivariance for
$A$ and $B$, we can't demand any equivariance at all for $C$.
The proof will proceed via a string of examples.
We begin by slightly varying the construction used above in the finite case,
substituting non-parallel bijections.
\begin{itemize}
\item
$(a,b)$
\[
\begin{gathered}
\begin{array}{l|ll}
a&Ka&Kb\\
b&Qb&Qa
\end{array}
\\
(a,b)(K,Q)
\end{gathered}
\]
\item
$(a,b,c)$
\[
\begin{gathered}
\begin{array}{l|lll}
a&Ka&Kb&Kc\\
b&Qb&Qc&Qa\\
c&Jc&Ja&Jb
\end{array}
\\
(a,b,c)(K,Q,J)
\end{gathered}
\]
\item
$(a,b)(c,d)$ (not the simplest example; better for generalization)
\[
\begin{gathered}
\begin{array}{l|llll}
a&Ka&Kb&Kc&Kd\\
b&Qb&Qa&Qd&Qc\\
c&Ja&Jb&Jc&Jd\\
d&Xb&Xa&Xd&Xc
\end{array}
\\
(a,b)(c,d)(K,Q)(J,X)
\end{gathered}
\]
\end{itemize}
Now we jazz up these examples to include fixed points for the action
on $C$,
which we can't do in the finite case.
\begin{itemize}
\item
$(a,b)(c)$
\[
\begin{gathered}
\begin{array}{l|lllllll}
a&Ka&Kb&Kc&1a&2a&3a&\ldots\\
b&Qb&Qa&Qc&1b&2b&3b&\ldots\\
c&1c&2c&3c&4c&5c&6c&\ldots
\end{array}
\\
(a,b)(K,Q)
\end{gathered}
\]
\item
$(a,b,c)(d)$
\[
\begin{gathered}
\begin{array}{l|lllllllll}
a&Ka&Kb&Kc&Kd&1a&2a&3a&4a&\ldots\\
b&Qb&Qc&Qa&Qd&1b&2b&3b&4b&\ldots\\
c&Jc&Ja&Jb&Jd&1c&2c&3c&4c&\ldots\\
d&1d&2d&3d&4d&5d&6d&7d&8d&\ldots
\end{array}
\\
(a,b,c)(K,Q,J)
\end{gathered}
\]
\item
$(a,b,c)(d)(e)$
\[
\begin{gathered}
\begin{array}{l|llllllllll}
a&Ka&Kb&Kc&Kd&Ke&1a&2a&3a&4a&\ldots\\
b&Qb&Qc&Qa&Qd&Qe&1b&2b&3b&4b&\ldots\\
c&Jc&Ja&Jb&Jd&Je&1c&2c&3c&4c&\ldots\\
d&1d&2d&3d&4d&5d&6d&7d&8d&9d&\ldots\\
e&1e&2e&3e&4e&5e&6e&7e&8e&9e&\ldots
\end{array}
\\
(a,b,c)(K,Q,J)
\end{gathered}
\]
\item
$(a,b)(c,d)(e)$
\[
\begin{gathered}
\begin{array}{l|lllllllllll}
a&Ka&Kb&Kc&Kd&Ke&1a&2a&3a&4a&\ldots\\
b&Qb&Qa&Qd&Qc&Qe&1b&2b&3b&4b&\ldots\\
c&Ja&Jb&Jc&Jd&Je&1c&2c&3c&4c&\ldots\\
d&Xb&Xa&Xd&Xc&Xe&1d&2d&3d&4d&\ldots\\
e&1e&2e&3e&4e&5e&6e&7e&8e&9e&\ldots
\end{array}
\\
(a,b)(c,d)(K,Q)(J,X)
\end{gathered}
\]
\end{itemize}
These examples illustrate the method to prove that we can never require
any kind of equivariance for $C$.
The reason is that any nontrivial $\Gamma$ will contain some element that
is a product of one or more disjoint non-trivial cycles of the same length,
together with some fixed points.
\section{More about the regular representation} \label{regrep}
For future reference,
let's look more closely at the construction that we've been using,
based on the regular representation.
Fix a finite group $G$.
Take $A=B=C=G$,
\[
f = \{((x,y),(xy,y))\}
.
\]
(The unbound variables $x$ and $y$ are understood to range over $G$.)
First we observe that any quotient $h$ that is even $S_0(C)$-equivariant
will need to agree with one of the `rows' $f\row{c}$ of $f$.
To see this, fix $g \in G$ and set
\[
\alpha = \beta = \{(x,gx)\}
.
\]
(The unbound variable $x$ is understood to range over $G$; you get the idea.)
Now
\[
f_{\alpha,\beta,{\mbox{id}}}
=
\{((gx,y),(gxy,y))\}
=
\{((x',y),(x'y,y))\}
= f
,
\]
so
\[
h(g) = h_{\alpha,\beta}(g) = \alpha^{-1} \lhd h \lhd \beta(g)
= gh(g^{-1}g)
= gh(1)
= f\row{h(1)}(g)
.
\]
Since this holds for every $g \in G$,
\[
h = f\row{h(1)}
.
\]
Any row of $f$ will do as an $S_0(C)$-equivariant quotient, but
we can't have equivariance for any non-trivial element of $G$ acting on the
right.
Indeed, for any $g \in G$, we can take
\[
\beta = \gamma =
\{(x,xg)\}
,
\]
\[
f_{{\mbox{id}},\beta,\gamma} =
\{((x,yg),(xyg,yg))\}
=
\{((x,g'),(xg',g'))\}
= f
.
\]
So we must have
\[
h = h_{{\mbox{id}},\beta} = h \lhd \beta
,
\]
that is,
\[
h(x) = h(x)g
,
\]
but this is impossible if $g$ is not the identity.
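Both conclusions are easy to confirm by brute force for a small group; the sketch below (ours) does this for $G = {\mathbb Z}_3$, written additively.
\begin{verbatim}
from itertools import permutations

n = 3
G = list(range(n))                              # Z_3, written additively
f = {(x, y): ((x + y) % n, y) for x in G for y in G}

def transformed(alpha, beta, gamma):
    ai = {v: k for k, v in alpha.items()}
    gi = {v: k for k, v in gamma.items()}
    return {(a, c): (beta[f[(ai[a], gi[c])][0]], gamma[f[(ai[a], gi[c])][1]])
            for a in G for c in G}

def transformed_h(h, alpha, beta):
    ai = {v: k for k, v in alpha.items()}
    return {a: beta[h[ai[a]]] for a in G}

perms = [dict(zip(G, p)) for p in permutations(G)]
identity = {g: g for g in G}

# quotients equivariant with respect to all symmetries that fix C pointwise
sym0 = [(al, be) for al in perms for be in perms
        if transformed(al, be, identity) == f]
quotients = [dict(zip(G, p)) for p in permutations(G)]
rows = [h for h in quotients
        if all(transformed_h(h, al, be) == h for al, be in sym0)]
print([sorted(h.items()) for h in rows])        # exactly the three rows g -> g + c

# but no row survives the symmetry with beta = gamma = translation by 1
shift = {g: (g + 1) % n for g in G}
print(transformed(identity, shift, shift) == f,                     # True
      any(transformed_h(h, identity, shift) == h for h in rows))    # False
\end{verbatim}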
\section{Unfinished business}
\subsection{Back to the finite case} \label{finitecase}
Having determined exactly which groups $\Gamma \subset S_C$ are
fully cancelling,
we naturally turn our attention back to the finite case.
We've quoted Feldman and Propp's result
(Theorem \ref{finitejustif})
that $\Gamma$ is finitely cancelling just if it has a globally fixed point.
We've seen that this condition is sufficient,
and shown that
if $\Gamma$
contains a nontrivial fixed-point free subgroup
it is not finitely cancelling.
What about intermediate cases,
like the cyclic group generated by $(a,b,c)(d,e)$,
i.e. the group generated by $(a,b,c)$ and $(d,e)$,
where there is no fixed-point free subgroup?
Or the Klein-like 4-group
\[
\{{\mbox{id}},(a,b)(c,d),(a,b)(e,f),(c,d)(e,f)\}
,
\]
where there are no fixed-point free elements at all?
Feldman and Propp's beautiful algebraic proof does not immediately
provide counterexamples, though it gives a method to produce them.
They ask
\cite[Problem 4]{fp}
for more direct combinatorial arguments.
Let's at least dispose of $(a,b,c)(d,e)$:
\newcommand{\br}[1]{\bar #1}
\[
\begin{gathered}
\begin{array}{l|llllllllllll}
&\br0\br0&\br0\br1&\br0\br2&\br1\br0&\br1\br1&\br1\br2&00&01&02&10&11&12\\
\hline
a&\br00&\br01&\br02&\br10&\br11&\br12&0\br0&0\br1&0\br2&1\br0&1\br1&1\br2\\
b&\br01&\br02&\br00&\br11&\br12&\br10&0\br2&0\br0&0\br1&1\br2&1\br0&1\br1\\
c&\br02&\br00&\br01&\br12&\br10&\br11&0\br1&0\br2&0\br0&1\br1&1\br2&1\br0\\
d&0\br0&0\br1&0\br2&1\br0&1\br1&1\br2&\br00&\br01&\br02&\br10&\br11&\br12\\
e&1\br0&1\br1&1\br2&0\br0&0\br1&0\br2&\br10&\br11&\br12&\br00&\br01&\br02
\end{array}
\\
(a,b,c)
(\br00,\br01,\br02)(\br10,\br11,\br12)
(00,01,02),(10,11,12)\\
(d,e)
(0\br0,1\br0)(0\br1,1\br1)(0\br2,1\br2)
(00,10)(01,11)(02,12)\\
(a,b,c)(d,e)
(\br00,\br01,\br02)(\br10,\br11,\br12)
(0\br0,1\br0)(0\br1,1\br1)(0\br2,1\br2)
(00,11,02,10,01,12)
\end{gathered}
\]
This arises as follows.
Start with bijections
\[
p:X_1 \times \{a,b,c\} \to X_2 \times \{a,b,c\}
;\;
q:Y_1 \times \{d,e\} \to Y_2 \times \{d,e\}
,
\]
\[
\begin{gathered}
p =
\begin{array}{l|lll}
&\br0&\br1&\br2\\
\hline
a&0&1&2\\
b&1&2&0\\
c&2&0&1
\end{array}
\\
(\br0,\br1,\br2)(0,1,2)
\end{gathered}
\]
\[
\begin{gathered}
q =
\begin{array}{l|ll}
&\br0&\br1\\
\hline
d&0&1\\
e&1&0
\end{array}
\\
(\br0,\br1)(0,1)
\end{gathered}
.
\]
The inverses
\[
p^{-1}:X_2 \times \{a,b,c\} \to X_1 \times \{a,b,c\}
;\;
q^{-1}:Y_2 \times \{d,e\} \to Y_1 \times \{d,e\}
\]
are
\[
\begin{gathered}
p^{-1} =
\begin{array}{l|lll}
&0&1&2\\
\hline
a&\br0&\br1&\br2\\
b&\br2&\br0&\br1\\
c&\br1&\br2&\br0
\end{array},
\end{gathered}
\]
\[
\begin{gathered}
q^{-1} =
\begin{array}{l|ll}
&0&1\\
\hline
d&\br0&\br1\\
e&\br1&\br0
\end{array}
\end{gathered}
.
\]
Take the disjoint unions $X = X_1 \cup X_2$ and $Y= Y_1 \cup Y_2$
and augment $p$ and $q$ to involutions
\[
P = p \cup p^{-1} \in S(X \times \{a,b,c\});\;
Q = q \cup q^{-1} \in S(Y \times \{d,e\})
.
\]
Take products with the identity and combine to get an involution
\[
F=
P \times id_Y
\cup
Q \times id_X
\in
S(X \times Y \times \{a,b,c,d,e\})
.
\]
Let
\[
A = X_1 \times Y_1 \cup X_2 \times Y_2
\]
and
\[
B = X_2 \times Y_1 \cup X_1 \times Y_2
.
\]
Separate the involution $F$ into pieces
\[
F = f \cup f^{-1}
,
\]
\[
f : A \times \{a,b,c,d,e\} \to B \times \{a,b,c,d,e\}
.
\]
This checkered Cartesian product construction
can be extended to cover any permutation without fixed points.
Any transitive permutation group contains such an element,
because the average number of fixed points is $1$,
and the identity has more.
So no transitive permutation group (on a set with more than one point) is finitely cancelling.
This construction
also takes care of our Klein-like 4-group.
In fact, it should handle
any subdirect product of nontrivial
cyclic permutation groups
(cf. Hall \cite[p.\ 63]{hall:groups}).
Now (asks Shikhin Sethi),
what about the 6-element group
\[
\Gamma = \{{\mbox{id}},(a,b,c), (a,c,b), (a,b)(d,e), (a,c)(d,e), (b,c)(d,e)\}
?
\]
\subsection{Deducing an ordering from a division method} \label{prob:reading}
A \emph{division method for $C$}
associates to any bijection
\[
f:A \times C \to B \times C
\]
a quotient bijection $Q(f)$ with the property that for any bijections
\[
\alpha:A \to A',\;\beta:B \to B'
,
\]
for the transformed division problem
\[
f_{\alpha,\beta}
=
(\alpha^{-1} \times {\mbox{id}}_C) \lhd f \lhd (\beta \times {\mbox{id}}_C):
A' \times C \to B' \times C
\]
the quotient
\[
Q(f_{\alpha,\beta}): A' \to B'
\]
satisfies
\[
Q(f_{\alpha,\beta})
=
Q(f)_{\alpha,\beta} = \alpha^{-1} \lhd Q(f) \lhd \beta
.
\]
A division method produces $S_0(C)$-equivariant quotients,
as we see by restricting $(\alpha,\beta)$ to $S(A) \times S(B)$,
but more is required.
The method must not only respect symmetries of a particular problem,
it must give the same answer when presented with the same problem
in a different guise.
To see the distinction, consider that for an $f$ with no symmetries,
any bijection $h:A \to B$ is an $S_0(C)$-equivariant quotient,
and if a division method were required merely to respect the symmetries of $f$,
it could return a bijection depending on stupid properties of the set $A$,
like whether it consists entirely of natural numbers.
Once again we distinguish between full and finite division methods.
The method of Feldman and Propp is equivariant,
and yields finite division methods (one for each choice of basepoint in $C$).
In the infinite case we get division methods that depend on fixing an
ordering of $C$, and this dependence on the ordering
seems to be unavoidable.
\begin{problem}
Can we equivariantly associate a total ordering of $C$
to any full division method for $C$?
\end{problem}
In the finite case, we ask:
\begin{problem}
Can we equivariantly associate a single point in $C$
to any finite division method for $C$?
\end{problem}
The equivariance we're asking for here
means that we can't make arbitrary choices that favor one ordering
or point of $C$ over another.
\begin{comment}
Specifically, given a division method $Q$ associating to $f$
the quotient $Q(f)$,
let $P(Q) \in C$ be the point we associate to $Q$.
For any $\gamma \in S(C)$, we get a new division method
\[
Q_\gamma(f) = Q(f_{id_A,id_B,\gamma^{-1}})
.
\]
We require that $P(Q_\gamma) = \gamma(P(Q))$.
\end{comment}
Rather than fuss over the definition, let's consider the
particular case of division by three.
First, a general observation:
If $Q(f) = f\row{c}$ then
$Q(f_{\alpha,\beta}) = f_{\alpha,\beta}\row{c}$.
Indeed, for any $f$ we have
\[
(f\row{c})_{\alpha,\beta}
=
f_{\alpha,\beta}\row{c}
,
\]
so if $Q(f) = f\row{c}$,
\[
Q(f_{\alpha,\beta})
= Q(f)_{\alpha,\beta}
= (f\row{c})_{\alpha,\beta}
= f_{\alpha,\beta}\row{c}
.
\]
Now take $C=\{a,b,c\}$.
Consider the six bijections of the form
\[
\begin{gathered}
f[x,y,z] =
\begin{array}{l|lll}
&\br0&\br1&\br2\\
\hline
x&0&1&2\\
y&1&2&0\\
z&2&0&1
\end{array}
\\
(\br0,\br1,\br2)(0,1,2)
\end{gathered}
,
\]
where we propose to plug in for $x,y,z$
each of the six arrangements of $a,b,c$.
These six problems are really one and the same problem in six
different guises, because
\[
f[x,y,z]_{{\mbox{id}},(0,1,2)}
=
\begin{array}{l|lll}
&\br0&\br1&\br2\\
\hline
x&1&2&0\\
y&2&0&1\\
z&0&1&2\\
\end{array}
=
\begin{array}{l|lll}
&\br0&\br1&\br2\\
\hline
z&0&1&2\\
x&1&2&0\\
y&2&0&1\\
\end{array}
= f[z,x,y]
,
\]
and
\begin{eqnarray*}
f[x,y,z]_{(\br1,\br2),(1,2)}
&=&
\begin{array}{l|lll}
&\br0&\br2&\br1\\
\hline
x&0&2&1\\
y&2&1&0\\
z&1&0&2
\end{array}
\\&=&
\begin{array}{l|lll}
&\br0&\br1&\br2\\
\hline
x&0&1&2\\
y&2&0&1\\
z&1&2&0\\
\end{array}
\\&=&
\begin{array}{l|lll}
&\br0&\br1&\br2\\
\hline
x&0&1&2\\
z&1&2&0\\
y&2&0&1
\end{array}
\\&=&
f[x,z,y]
.
\end{eqnarray*}
A division method must produce a quotient respecting the symmetry
\[
f[x,y,z]_{(\br0,\br1,\br2),(0,1,2)} = f[x,y,z]
,
\]
so it must conjugate the cycle $(\br0,\br1,\br2)$ to the cycle $(0,1,2)$.
There are three ways to do this, corresponding to the three
rows $x,y,z$ in the table,
so (as observed in section \ref{regrep}) the quotient bijection
$Q(f[x,y,z])$
distinguishes one of the three elements of $C$, which we call $*[x,y,z]$:
\[
Q(f[x,y,z]) = f[x,y,z]\row{*[x,y,z]}
.
\]
By the general result above, these six basepoints $*[x,y,z]$ all coincide.
So we can distinguish a basepoint $* \in C$
without making any arbitrary choices
of how to order the elements of $C$.
This is the kind of equivariance we're looking for.
For a finite division method, that's as far as we can go.
In the infinite case,
say that our distinguished basepoint $*$ is $c$.
We continue by presenting the two problems $f[a,b],f[b,a]$, where
\[
\begin{gathered}
f[x,y]=
\begin{array}{l|lllllll}
&\br1&\br2&\br3&\br4&\br5&\br6&\ldots\\
\hline
x&Kx&Ky&Kc&1x&2x&3x&\ldots\\
y&Qy&Qx&Qc&1y&2y&3y&\ldots\\
c&1c&2c&3c&4c&5c&6c&\ldots
\end{array}
\end{gathered}
.
\]
The bijection $f[x,y]$
in effect associates $K$ with $x$ and $Q$ with $y$;
depending on where $K$ and $Q$ wind up
under the quotient bijection $Q(f[x,y])$
(or rather, its inverse),
we can pick $K$ over $Q$, hence $x$ over $y$.
\begin{comment}
We could just look at which comes further to the left.
Better, the columns divide naturally in good columns $1,4,7,\ldots$;
bad columns $2,5,8,\ldots$; indifferent columns $3,6,9,\ldots$.
Prefer $K$ if it comes before $Q$ when we take the columns in the order
in which they've just been mentioned.
\end{comment}
Our preference of $a$ over $b$ will be the same whether we use
$f[a,b]$ or $f[b,a]$,
because these are really the same problem:
\[
f[x,y]_{{\mbox{id}},(K,Q)} = f[y,x]
.
\]
Now, what about division by four? Or five?
\begin{comment}
\subsection{What does it mean to be finite?} \label{prob:finite}
Feldman and Propp's basepoint division method works when $C$ is
Dedekind-finite (IV-finite),
and $A$ (hence $B$) is DCC-finite (II-finite),
a stronger condition meaning
that every chain of subsets has a minimal element.
To nail down the notion that in the infinite case, division depends on
picking an ordering of $C$, we need first to know whether $C$ needs
to be \emph{I-finite},
meaning that there is a bijection between $C$ and some natural number $n$,
as we've been tacitly assuming.
Perhaps the answer is known?
Francois claims that there are Dedekind-finite cardinals with
\[
k < k^2 < k^3 = k^4
,
\]
and cites Jech \cite{jech:choice}, chapter 11, problem 19.
But I don't see that these cardinals are necessarily Dedekind-finite.
\end{comment}
\subsection{Parallelizing a bijection} \label{prob:parallel}
We've already observed that while $\Gamma$ is finitely cancelling
just if every parallel bijection has an equivariant quotient,
if $\Gamma$ is not finitely cancelling there could be special
bijections $f$ which have a $\Gamma$-equivariant quotient,
while their Feldman-Propp parallelizations $\bar{f}$ do not.
\begin{problem}
If a finite bijection $f: A \times C \to B \times C$
has a $\Gamma$-equivariant quotient, must the parallelization
$\bar{f}$ also have a $\Gamma$-equivariant quotient?
\end{problem}
We haven't thought very hard about this one.
\subsection{Special cases}
There are plenty of other questions we could ask,
say concerning restrictions that will guarantee
that $S(C)$-equivariant division is possible.
For example, we might fix $n,k$ and
ask whether $S(C)$-equivariant division is always
possible
when $|A|=|B|=n$ and $|C|=k$.
It is easy to see that in this case we must have $\gcd(k,n!)=1$,
i.e.\ $k$ must have no prime factor $\leq n$.
This condition is sufficient for $n=1,2,3$ and maybe $4$;
the proofs get more involved as $n$ increases.
On the other hand, an example
(thanks to John Voight)
shows that division is not
always possible when $n=8$ and $k=11$.
\section*{Thanks}
Thanks to David Feldman and Shikhin Sethi for crucial advice.
\section{Introduction}\label{sec1}
Over the last 15 years, a lot of progress has been achieved in
high-dimensional statistics where the number of parameters can be much
larger than sample size, covering (nearly) optimal point estimation,
efficient computation and applications in many different areas; see, for
example, the books by \citet{hastetal09}, \citet{pbvdg11} or the review
article by \citet{fanlv10}. The core task of statistical inference
accounting for uncertainty, in terms of frequentist confidence intervals
and hypothesis testing, is much less developed. Recently, a few methods for
assigning $p$-values and constructing confidence intervals have been
suggested
(\citep{WR08};
\citep{memepb09};
\citep{pb13};
\citep{zhangzhang11};
\citep{covtest14};
\citep{vdgetal13};
\citep{jamo13b};
\citep{meins13}).
The current paper has three main pillars: (i) a (selective) review of the
development in frequentist high-dimensional inference methods for $p$-values
and confidence regions; (ii) presenting the first broad, comparative
empirical study among
different methods, mainly for linear models: since the methods are
mathematically justified under
noncheckable and sometimes noncomparable assumptions, a thorough
simulation study should lead to additional insights about reliability and
performance of various procedures; (iii) presenting the \texttt{R}-package
\texttt{hdi} (\emph{h}igh-\emph{d}imensional \emph{i}nference) which
enables easy use of many of the different methods for inference in
high-dimensional generalized linear models. In addition, we include a
recent line of methodology allowing one to detect significant groups of
highly correlated variables which could not be inferred as individually
significant single variables (\cite{meins13}). The review and exposition in
\citet{bumeka13} is vaguely related to points (i) and (iii) above, but
it focuses much more
on an application-oriented viewpoint and covers much less statistical
methodology, theory and computational details.
Our comparative study, point (ii), mentioned above, exhibits interesting
results indicating that more ``stable'' procedures based on
Ridge-estimation or random sample splitting with subsequent
aggregation are somewhat
more reliable for type I error control than asymptotically power-optimal
methods. Such results cannot be obtained by comparing underlying
assumptions of different methods, since these assumptions are often too
crude and far from necessary. As expected, we are unable to
pinpoint a method which is (nearly) best in all considered
scenarios. In view of this, we also want to offer a collection of useful
methods for the community, in terms of our \textrm{R}-package \texttt{hdi}
mentioned in point (iii) above.
\section{Inference for Linear Models}\label{sec.LM}
We consider first a high-dimensional linear model, while extensions are
discussed in Section~\ref{sec.GLM}:
\begin{equation}
\label{mod.lin} Y = \bx\beta^0 + \eps,
\end{equation}
with $n \times p$ fixed or random design matrix $\bx$, $n \times
1$ response and error
vectors $Y$ and $ \eps$, respectively. The errors are assumed to be
independent of $\bx$ (for random design) with i.i.d. entries having
$\EE[\eps_i] = 0$. We allow for high-dimensional settings where $p \gg
n$.
In further development, the active set or the set of relevant variables
\[
S_0 = \bigl\{j;\beta^0_j \neq0, j=1,
\ldots,p\bigr\},
\]
as well as its cardinality $s_0 = |S_0|$, are important quantities. The main
goals of this section are the construction of confidence intervals and
$p$-values for individual regression parameters $\beta^0_j (j=1,\ldots
,p)$ and corresponding multiple testing adjustment. The former is a highly
nonstandard problem in high-dimensional settings, while for the latter we
can use standard well-known techniques. When considering both
goals simultaneously, though, one can develop more powerful multiple testing
adjustments.
The Lasso (\cite{tibs96}) is among the most popular procedures for
estimating the unknown parameter $\beta^0$ in a high-dimensional linear
model. It exhibits desirable or sometimes even optimal properties for point
estimation such as prediction of $\bx\beta^0$ or of a new response
$Y_{\mathrm{new}}$, estimation in terms of $\|\hat{\beta} - \beta^0\|_q$
for $q = 1,2$, and variable selection or screening; see, for example,
the book of \citet{pbvdg11}. For assigning uncertainties in terms of
confidence intervals or hypothesis testing, however, the plain Lasso seems
inappropriate. It is very difficult to characterize the distribution of the
estimator in the high-dimensional setting; \citet{knfu00} derive asymptotic
results for fixed dimension as sample size $n \to\infty$ and already for
such simple situations, the asymptotic distribution of the Lasso has point
mass at zero. This implies, because of noncontinuity of the distribution,
that standard bootstrapping and subsampling schemes are delicate to apply
and uniform convergence to the limit seems hard to achieve. The latter
means that the estimator is exposed to undesirable super-efficiency
problems, as illustrated in Section~\ref{subsec.comparlm}. All the problems
mentioned are
expected to apply not only to the Lasso but to other sparse
estimators as
well.
In high-dimensional settings and for general fixed design $\bx$, the
regression parameter is not identifiable. However, when making some
restrictions on the design, one can ensure that the regression vector is
identifiable. The so-called compatibility condition on the design $\bx$
(\cite{vandeGeer:07a}) is a rather weak assumption (\cite{van2009conditions})
which guarantees identifiability and oracle (near) optimality results for
the Lasso. For the sake of completeness, the compatibility condition is
described in Appendix~\ref{subsec.appadd}.
When assuming the compatibility condition with constant $\phi_0^2$
($\phi_0^2$ is close to zero for rather ill-posed designs, and sufficiently
larger than zero for well-posed designs), the
Lasso has the following property: for Gaussian errors and if $\lambda
\asymp\sqrt{\log(p)/n}$, we have with high probability that
\begin{equation}
\label{lasso-ell1} \bigl\|\hat{\beta} - \beta^0\bigr\|_1 \le4
s_0 \lambda/\phi_0^2.
\end{equation}
Thus, if $s_0 \ll\sqrt{n/\log(p)}$ and $\phi_0^2 \ge M > 0$, we have
$\|\hat{\beta} - \beta^0\|_1 \to0$ and, hence, the parameter $\beta^0$ is
identifiable.
Another often used assumption, although not necessary by any means, is the
so-called beta-min assumption:
\begin{equation}
\label{beta-min} \min_{j \in S_0}\bigl |\beta^0_j\bigr|
\ge\beta_{\mathrm{min}},
\end{equation}
for some choice of constant $\beta_{\mathrm{min}} > 0$.
The result in (\ref{lasso-ell1}) immediately implies the screening
property: if
$\beta_{\mathrm{min}} > 4 s_0 \lambda/\phi_0^2$, then
\begin{equation}
\label{screening} \hat{S} = \{j; \hat{\beta}_j \neq0\} \supseteq
S_0.
\end{equation}
Thus, the screening property holds when assuming the compatibility and
beta-min condition. The power of the screening property is a massive
dimensionality reduction (in the original variables) because $|\hat{S}|
\le
\min(n,p)$; thus, if $p \gg n$, the selected set $\hat{S}$ is
much smaller than the full set of $p$ variables. Unfortunately, the
required conditions are overly restrictive and exact variable screening
seems rather unrealistic in practical applications (\cite{pbmand13}).
\subsection{Different Methods}\label{subsec.lm-methods}
We describe here three different methods for construction of statistical
hypothesis tests or confidence intervals. Alternative procedures are
presented in Sections~\ref{subsec.othermeth} and \ref{subsec.comparlm}.
\subsubsection{Multi sample-splitting}\label{subsec.multisample-split}
A generic way for deriving $p$-values in hypotheses testing is given by
splitting the sample with indices $\{1,\ldots,n\}$ into two equal halves
denoted by $I_1$ and $I_2$, that is,
$I_r \subset\{1,\ldots,n\}\ (r=1,2)$ with $|I_1| = \lfloor n/2 \rfloor$,
$|I_2| = n - \lfloor n/2 \rfloor$, $I_1 \cap I_2 = \varnothing$ and $I_1
\cup
I_2 = \{1,\ldots, n\}$. The idea is to use the first half $I_1$ for variable
selection and the second half $I_2$ with the reduced set of selected
variables (from $I_1$) for statistical inference in terms of $p$-values. Such
a sample-splitting procedure avoids the over-optimism to use the data
twice for selection and inference after selection (without taking the effect
of selection into account).
Consider a method for variable selection based on
the first half of the sample:
\[
\hat{S}(I_1) \subset\{1,\ldots,p\}.
\]
A prime example is the Lasso which selects all the variables whose
corresponding estimated regression coefficients are different from
zero. We then use the second half of the sample $I_2$ for constructing
$p$-values, based on the selected variables $\hat{S}(I_1)$. If the
cardinality $|\hat{S}(I_1)| \le n/2 \le|I_2|$, we can run
ordinary least squares estimation using the subsample $I_2$ and the
selected variables $\hat{S}(I_1)$, that is, we regress $Y_{I_2}$ on
$\bx_{I_2}^{(\hat{S}(I_1))}$ where the sub-indices denote the sample
half and
the super-index stands for the selected variables, respectively. Thereby,
we implicitly assume that
the matrix $\bx_{I_2}^{(\hat{S}(I_1))}$ has full rank $|\hat{S}(I_1)|$. Thus,
from such a procedure, we
obtain $p$-values $P_{t\mbox{-}\mathrm{test},j}$ for testing $H_{0,j}: \beta^0_j
= 0$, for $j \in\hat{S}(I_1)$, from the classical $t$-tests,
assuming Gaussian errors or relying on asymptotic justification by the
central limit theorem. To be more precise, we define (raw) $p$-values
\begin{eqnarray*}
P_{\mathrm{raw},j} = \cases{ P_{t\mbox{-}\mathrm{test},j} \mbox{ based on $Y_{I_2},
\bx_{I_2}^{(\hat{S}(I_1))}$},\vspace*{2pt}\cr
\quad \hspace*{10pt}\mbox{if } j \in\hat{S}(I_1),
\vspace*{2pt}
\cr
1, \quad \mbox{if } j \notin\hat{S}(I_1).}
\end{eqnarray*}
An interesting feature of such a sample-splitting procedure is the
adjustment for multiple testing. For example, if we wish to control the
familywise error rate over all considered hypotheses $H_{0,j}
(j=1,\ldots
,p)$, a naive approach would employ a Bonferroni--Holm correction over the
$p$ tests. This is not necessary: we only need to control over the
considered $|\hat{S}(I_1)|$ tests in $I_2$. Therefore, a Bonferroni
corrected $p$-value for $H_{0,j}$ is given by
\[
P_{\mathrm{corr},j} = \min\bigl(P_{\mathrm{raw},j} \cdot\bigl|\hat{S}(I_1)\bigr|,1
\bigr).
\]
In high-dimensional scenarios, $p \gg n > \lfloor n/2 \rfloor\geq
|\hat{S}(I_1)|$, where the latter inequality is an implicit assumption
which holds for the Lasso (under weak assumptions), and thus, the
correction factor employed here is rather
small.
Such corrected $p$-values control the familywise error rate in multiple
testing when assuming the screening property in (\ref{screening}) for the
selector $\hat{S} = \hat{S}(I_1)$ based on the first half $I_1$ only,
exactly as stated in Fact~\ref{th1} below. The reason is that the
screening property ensures that the reduced model is a correct model, and
hence the result is not surprising. In
practice, the screening property typically
does not hold exactly, but it is not a necessary condition for constructing
valid $p$-values (\cite{pbmand13}).
The idea about sample-splitting and subsequent statistical inference is
implicitly contained in \citet{WR08}. We summarize the whole procedure as
follows:
\emph{Single sample-splitting for multiple testing of $H_{0,j}$ among
$j=1,\ldots,p$}:
\begin{longlist}[1.]
\item[1.] Split (partition) the sample $\{1,\ldots,n\} = I_1 \cup I_2$ with
$I_1 \cap I_2
= \varnothing$ and $|I_1| = \lfloor n/2 \rfloor$ and $|I_2| = n - \lfloor n/2
\rfloor$.
\item[2.] Using $I_1$ only, select the variables $\hat{S} \subseteq\{
1,\ldots
,p\}$. Assume or enforce that $|\hat{S}| \le|I_1| = \lfloor n/2 \rfloor
\le|I_2|$.
\item[3.] Denote the design matrix with the selected set of variables
by $\bx^{(\hat{S})}$. Based on $I_2$ with data
$(Y_{I_2},\bx_{I_2}^{(\hat{S})})$, compute $p$-values $P_{\mathrm
{raw},j}$ for
$H_{0,j}$, for $j \in\hat{S}$, from classical least squares estimation
[i.e., $t$-test which can be used since $|\hat{S}(I_1)| \le|I_2|$]. For $j
\notin\hat{S}$, assign $P_{\mathrm{raw},j} = 1$.
\item[4.] Correct the $p$-values for multiple testing: consider
\[
P_{\mathrm{corr},j} = \min\bigl(P_{\mathrm{raw},j} \cdot|\hat{S}|,1\bigr),
\]
which is an adjusted $p$-value for $H_{0,j}$ for controlling the familywise
error rate.
\end{longlist}
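For illustration, one such split can be coded in a few lines of
\textsf{R}, using the \texttt{glmnet} package for the Lasso selection in
step 2 (a minimal sketch only, assuming the data are stored in
\texttt{x} (the $n \times p$ design) and \texttt{y}, with \texttt{n} and
\texttt{p} defined; in practice one should use the function
\texttt{multi.split} from the \texttt{hdi} package described in
Section~\ref{subsec.hdilin}):
\begin{verbatim}
## sketch of a single sample split:
## select on I1 (Lasso), test on I2 (OLS)
library(glmnet)
I1 <- sample(1:n, floor(n / 2))
I2 <- setdiff(1:n, I1)
fit1 <- cv.glmnet(x[I1, ], y[I1])
b1 <- as.matrix(coef(fit1,
        s = "lambda.min"))[-1, 1]
S <- which(b1 != 0)
p.raw <- rep(1, p)
if (length(S) > 0) {
  ## assumes full rank of the reduced
  ## design, cf. assumption (A2)
  ols <- lm(y[I2] ~ x[I2, S, drop = FALSE])
  p.raw[S] <-
    summary(ols)$coefficients[-1, 4]
}
p.corr <- pmin(p.raw * max(length(S), 1), 1)
\end{verbatim}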
\begin{figure}
\includegraphics{527f01.eps}
\caption{Histogram of $p$-values $P_{\mathrm{corr},j}$ for a single
covariable, in the \texttt{riboflavin} data set, when doing 50
different (random) sample splits. The figure is taken from
B{\"{u}}hlmann, Kalisch and Meier (\citeyear{bumeka13}).}
\label{fig:pval_lottery}
\end{figure}
A major problem of the single sample-splitting method is its sensitivity
with respect to the choice of splitting the entire sample: different sample
splits can lead to
wildly different $p$-values. We call this undesirable phenomenon a $p$-value
lottery, and Figure~\ref{fig:pval_lottery} provides an illustration.
To overcome the ``$p$-value lottery,'' we can run the sample-splitting method
$B$ times, with $B$ large. Thus, we obtain a collection of $p$-values for the
$j$th hypothesis $H_{0,j}$:
\[
P_{\mathrm{corr},j}^{[1]},\ldots,P_{\mathrm{corr},j}^{[B]}\quad (j=1,
\ldots,p).
\]
The task is now to do an aggregation to a single $p$-value. Because of
dependence among $\{P_{\mathrm{corr},j}^{[b]}; b=1,\ldots,B\}$, arising because
all the different half samples are part of the same full sample, an
appropriate aggregation needs to be developed.
A simple solution is to use an empirical $\gamma$-quantile with $0 <
\gamma
< 1$:
\begin{eqnarray*}
&&Q_j(\gamma)\\
&&\quad = \min \bigl(\mbox{emp. $\gamma$-quantile}\bigl
\{P_{\mathrm{corr},j}^{[b]}/\gamma ; b=1,\ldots,B\bigr\},\\
&&\qquad 1 \bigr).
\end{eqnarray*}
For example, with $\gamma= 1/2$, this amounts to taking the sample median
$\{P_{\mathrm{corr},j}^{[b]}; b=1,\ldots,B\}$ and multiplying it with the
factor 2. A bit more sophisticated approach is to choose the best and
properly scaled $\gamma$-quantile in the
range $(\gamma_{\mathrm{min}},1)$ (e.g., $\gamma_{\mathrm{min}} = 0.05$),
leading to the aggregated $p$-value
\begin{eqnarray}
\label{aggreg} P_j = \min \Bigl(\bigl(1 - \log(\gamma_{\mathrm{min}})
\bigr) \inf_{\gamma\in
(\gamma_{\mathrm{min}},1)} Q_j(\gamma), 1 \Bigr)
\nonumber
\\[-8pt]
\\[-8pt]
\eqntext{(j=1,
\ldots,p).}
\end{eqnarray}
Thereby, the factor $(1 - \log(\gamma_{\mathrm{min}}))$ is the price to be
paid for searching for the best $\gamma\in
(\gamma_{\mathrm{min}},1)$. This Multi sample-splitting procedure has been
proposed and analyzed in \citet{memepb09}, and we summarize it below. Before
doing so, we remark that
the aggregation of dependent $p$-values as described above is a general
principle as described in Appendix~\ref{subsec.appadd}.
\emph{Multi sample-splitting for multiple testing of $H_{0,j}$ among
$j=1,\ldots,p$}:
\begin{longlist}[1.]
\item[1.] Apply the single sample-splitting procedure $B$ times,
leading to $p$-values $\{P_{\mathrm{corr},j}^{[b]}; b=1,\ldots
,B\}$. Typical choices are $B=50$ or $B=100$.
\item[2.] Aggregate these $p$-values as in (\ref{aggreg}), leading to
$P_{j}$ which are adjusted $p$-values for $H_{0,j} (j=1,\ldots,p)$,
controlling the familywise error rate.
\end{longlist}
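The aggregation in step 2 is simple to compute. For a single hypothesis,
a minimal \textsf{R} sketch takes the vector \texttt{pvals} of per-split
corrected $p$-values $P_{\mathrm{corr},j}^{[1]},\ldots,
P_{\mathrm{corr},j}^{[B]}$ and approximates the infimum in
(\ref{aggreg}) over a grid of $\gamma$-values:
\begin{verbatim}
aggregate.pval <-
  function(pvals, gamma.min = 0.05) {
    gam <- seq(gamma.min, 1, by = 0.01)
    ## Q_j(gamma) on the grid
    Q <- sapply(gam, function(g)
      min(quantile(pvals, g) / g, 1))
    min((1 - log(gamma.min)) * min(Q), 1)
  }
\end{verbatim}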
The Multi sample-splitting method enjoys the property that the resulting
$p$-values are approximately reproducible and not subject to a ``$p$-value lottery''
anymore, and it controls the
familywise error rate under the following assumptions:
\begin{longlist}[(A1)]
\item[(A1)] The screening\vspace*{1pt} property as in (\ref{screening}) for the
first half of
the sample: $\PP[\hat{S}(I_1) \supseteq S_0] \ge1 - \delta$ for some
$0 < \delta< 1$.
\item[(A2)] The reduced design matrix for the second half of the sample
satisfies
$\mathrm{rank}(\bx_{I_2}^{(\hat{S}(I_1))}) = |\hat{S}(I_1)|$.
\end{longlist}
\begin{theo}[{[\citet{memepb09}]}]\label{th1}
Consider a linear model as in (\ref{mod.lin}) with fixed design $\bx$ and
Gaussian errors. Assume \textup{(A1)--(A2)}.
Then, for a significance level $0 < \alpha< 1$ and denoting by $B$ the
number of sample splits,
\[
\PP\biggl[\bigcup_{j \in S_0^c} I(P_j \le
\alpha)\biggr] \le\alpha+ B \delta,
\]
that is, the familywise error rate (FWER) is controlled up to the
additional (small) value $B \delta$.
\end{theo}
A proof is given in Meinshausen, Meier and B{\"u}hlmann
(\citeyear{memepb09}). We note that the Multi
sample-splitting method can be used in conjunction with any reasonable,
sparse
variable screening method fulfilling (A1) for very small $\delta> 0$ and
(A2); and it does not necessarily rely on the Lasso for variable
screening. See also Section~\ref{subsec.othersparsemeth}.\vspace*{1pt} Assumption (A2) typically holds for the Lasso
satisfying $|\hat{S}(I_1)| \le|I_1| = \lfloor n/2 \rfloor\le|I_2| =
n -
\lfloor n/2 \rfloor$.
\emph{The screening property} (A1). The screening property (A1) with very
small $\delta> 0$ is not a
necessary condition for constructing valid $p$-values and can be replaced by
a zonal assumption requiring the following: there is a gap between large
and small regression coefficients and there are not too many small nonzero
regression coefficients (\cite{pbmand13}). Still, such a zonal assumption
makes a requirement about the unknown $\beta^0$ and the absolute values of
its components: but this is the essence of the question in hypothesis
testing to infer whether coefficients are sufficiently different from zero,
and one would like to do such a test without an assumption on the true
values.
The Lasso satisfies (A1) with $\delta\to0$ when
assuming the compatibility condition (\ref{compat}) on the design $\bx$,
the sparsity assumption $s_0 = o(\sqrt{n/\log(p)})$ [or $s_0 =
o(n/\log(p))$ when requiring a restricted eigenvalue assumption] and a
beta-min
condition (\ref{beta-min}), as shown in
(\ref{screening}). Other procedures also exhibit the screening
property such as the adaptive Lasso (\cite{zou06}), analyzed in detail in
\citet{geer11}, or methods with concave regularization penalty such as
SCAD (\cite{fan2001variable}) or MC$+$ (\cite{zhang2010}). As criticized
above, the required beta-min assumption should be avoided when
constructing a hypothesis test about the unknown components of $\beta^0$.
Fact~\ref{th1} has a corresponding asymptotic formulation
where the dimension $p = p_n$ and the model depends on sample size $n$: if
(A1) is replaced by $\PP[\hat{S}(I_{1;n}) \supseteq
S_{0;n}] \to1$ ($n \to\infty$), then, for a fixed number $B$, $\limsup_{n \to\infty}
\PP[\bigcup_{j \in S_0^c} I(P_j \le\alpha)] \le\alpha$.
In such an asymptotic setting, the Gaussian
assumption in Fact~\ref{th1} can be relaxed by invoking the central
limit theorem (for the low-dimensional part).
The Multi sample-splitting method is very generic: it can be used for many
other models, and its basic assumptions are an approximate screening property
(\ref{screening}) and that the cardinality $|\hat{S}(I_1)| < |I_2|$ so that
we only have to deal with a fairly low-dimensional inference problem. See,
for example, Section~\ref{sec.GLM} for GLMs. An extension for testing group
hypotheses of the form $H_{0,G}: \beta_j = 0$ for all $j
\in G$ is indicated in Section~\ref{subsec.assfree}.
Confidence intervals can be constructed based on the duality with the
$p$-values from equation (\ref{aggreg}). A procedure is described in detail
in Appendix~\ref{subsec.appmssplitci}.
The idea to invert the $p$-value method is to apply a bisection method having
a point in and a point outside of the confidence interval. To verify if a
point is inside the
\emph{aggregated} confidence interval, one looks at the fraction of
confidence intervals from the splits which cover the point.
\subsubsection{Regularized projection: De-sparsifying the
Lasso}\label{subsec.desparslasso}
We describe here a method, first introduced by \citet{zhangzhang11}, which
does not require an assumption about $\beta^0$ except for sparsity.
It is instructive to give a motivation starting with the low-dimensional
setting where $p < n$ and $\mathrm{rank}(\bx) = p$. The $j$th component of
the ordinary least squares estimator $\hat{\beta}_{\mathrm{OLS};j}$ can
be obtained
as follows. Do an OLS regression of $\bx^{(j)}$ versus all other variables
$\bx^{(-j)}$ and denote the corresponding residuals by $Z^{(j)}$. Then
\begin{equation}
\label{proj-ols} \hat{\beta}_{\mathrm{OLS};j} = Y^T Z^{(j)}/
\bigl(\bx^{(j)}\bigr)^T Z^{(j)}
\end{equation}
can be obtained by a linear projection.
In a high-dimensional setting, the residuals $Z^{(j)}$ would be equal to
zero and the projection is ill-posed.
For the high-dimensional case with $p > n$, the idea is to pursue a
regularized projection. Instead of ordinary least squares regression, we
use a Lasso regression of $\bx^{(j)}$ versus $\bx^{(-j)}$ with
corresponding residual vector $Z^{(j)}$: such a penalized regression
involves a
regularization parameter $\lambda_j$ for the Lasso, and hence $Z^{(j)} =
Z^{(j)}(\lambda_j)$. As in (\ref{proj-ols}), we immediately obtain (for any
vector $Z^{(j)}$)
\begin{eqnarray}
\label{proj-lasso} \qquad \frac{Y^T Z^{(j)}}{(\bx^{(j)})^T Z^{(j)}} &=& \beta^0_j + \sum
_{k \neq
j} P_{jk} \beta^0_k +
\frac{\eps^T Z^{(j)}}{(\bx^{(j)})^T Z^{(j)}},
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
P_{jk}&=& \bigl(\bx^{(k)}\bigr)^T
Z^{(j)}/\bigl(\bx^{(j)}\bigr)^T Z^{(j)}.
\end{eqnarray}
We note that in the low-dimensional case with $Z^{(j)}$ being the residuals
from ordinary least squares, due to orthogonality, $P_{jk} = 0$. When using
the Lasso-residuals for $Z^{(j)}$, we do not have exact orthogonality
and a
bias arises. Thus, we make a bias correction by plugging in the Lasso
estimator $\hat{\beta}$ (of the regression $Y$ versus $\bx$): the
bias-corrected estimator is
\begin{equation}
\label{despars-lasso} \hat{b}_j = \frac{Y^T Z^{(j)}}{(\bx^{(j)})^T Z^{(j)}} - \sum
_{k \neq j} P_{jk} \hat{\beta}_k.
\end{equation}
Using (\ref{proj-lasso}), we obtain
\begin{eqnarray*}
\sqrt{n}\bigl(\hat{b}_j - \beta^0_j\bigr)
&= &\frac{n^{-1/2} \eps^T Z^{(j)}}{n^{-1}
(\bx^{(j)})^T Z^{(j)}}\\
&&{} + \sum_{k \neq j} \sqrt{n}
P_{jk}\bigl(\beta_k^0 - \hat{\beta}_k
\bigr).
\end{eqnarray*}
The first term on the right-hand side has a Gaussian
distribution, when assuming Gaussian errors; otherwise, it has an
asymptotic Gaussian distribution assuming that $\EE|\eps_i|^{2 + \kappa}
< \infty$ for $\kappa> 0$ (which suffices for the Lyapunov CLT). We will
argue in Appendix~\ref{subsec.appadd} that the second term is negligible
under the following assumptions:
\begin{longlist}[(B1)]
\item[(B1)] The design matrix $\bx$ has compatibility constant bounded away
from zero, and the sparsity is $s_0 = o(\sqrt{n}/\log(p))$.
\item[(B2)] The rows of $\bx$ are fixed realizations of i.i.d. random
vectors $\sim{\cal N}_p(0,\Sigma)$, and the minimal eigenvalue of
$\Sigma$ is bounded away from zero.
\item[(B3)] The inverse $\Sigma^{-1}$ is row-sparse with $s_j =
\sum_{k \neq j} I((\Sigma^{-1})_{jk} \neq0) =
o(n/\log(p))$.
\end{longlist}
\begin{theo}[(\cite{zhangzhang11};
van~de Geer et al., \citeyear{vdgetal13})]\label{th2}
Consider a linear model as in (\ref{mod.lin}) with fixed design and
Gaussian errors. Assume \textup{(B1)}, \textup{(B2)} and \textup{(B3)} (or an $\ell_1$-sparsity
assumption on the rows of $\Sigma^{-1}$).
Then
\begin{eqnarray*}
\sqrt{n} \sigma_{\eps}^{-1} \bigl(\hat{b} -
\beta^0\bigr) &=& W + \Delta,\quad W \sim{\cal N}_p(0,\Omega),
\\
\Omega_{jk} &=&
\frac{n(Z^{(j)})^T Z^{(k)}}{[(\bx^{(j)})^T Z^{(j)}][(\bx^{(k)})^T
Z^{(k)}]},
\\
\|\Delta\|_{\infty} &=& o_P(1).
\end{eqnarray*}
[We note that this statement holds with probability tending to one, with
respect to the variables $\bx\sim{\cal N}_p(0,\Sigma)$ as assumed in
\textup{(B2)}].
\end{theo}
The asymptotic implications of Fact~\ref{th2} are as follows:
\[
\sigma_{\eps}^{-1} \Omega_{jj}^{-1/2}
\sqrt{n} \bigl(\hat{b}_j - \beta^0_j\bigr)
\Rightarrow{\cal N}(0,1),
\]
from which we can immediately construct a confidence interval or hypothesis
test by plugging in an estimate $\hat{\sigma}_{\eps}$ as briefly discussed
in Section~\ref{subsec.addissues}. From a theoretical perspective, it is more
elegant to use the square root Lasso (\cite{belloni2011square}) for the
construction of $Z^{(j)}$; then one can drop (B3) [or the
$\ell_1$-sparsity version
of (B3)] (\cite{vdg14}). In fact, all that we then need is formula
(\ref{ell1bound})
\[
\bigl\|\hat{\beta} - \beta^0\bigr\|_1 = o_P\bigl(1/
\sqrt{\log(p)}\bigr).
\]
From a practical perspective, it seems to make essentially no difference
whether\vspace*{1pt} one takes the square root or plain Lasso for the construction of
the $Z^{(j)}$'s.
More generally than the statements in Fact~\ref{th2}, the following
holds assuming (B1)--(B3) (\cite{vdgetal13}): the asymptotic variance
$\sigma_{\eps}^2 \Omega_{jj}$
reaches the Cram\'{e}r--Rao lower bound, which equals $\sigma_{\eps}^2
(\Sigma^{-1})_{jj}$ [which is bounded away from zero, due to (B2)], and the
estimator $\hat{b}_j$ is efficient in the sense
of semiparametric inference. Furthermore, the convergence in Fact~\ref{th2} is uniform over the subset of the parameter space where the
number of nonzero coefficients $\|\beta^0\|_0$ is small and, therefore, we
obtain \emph{honest} confidence intervals and tests. In particular,
both of
these results say that all the complications in post-model
selection do not arise (\cite{leebpoetsch03}), and yet $\hat{b}_j$ is
optimal for construction of confidence intervals of a single coefficient
$\beta^0_j$.
From a practical perspective, we need to choose the regularization
parameters $\lambda$ (for the Lasso regression of $Y$ versus $\bx$) and
$\lambda_j$ [for the nodewise Lasso regressions (\cite{mebu06}) of $\bx^{(j)}$
versus all other variables $\bx^{(-j)}$]. Regarding the former, we
advocate a choice using cross-validation; for the latter, we favor a
proposal for a smaller
$\lambda_j$ than the one from CV, and the details are described in Appendix~\ref{subsec.appadd}.
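For a single coordinate $j$, the construction can be sketched in
\textsf{R} as follows (an illustration of (\ref{despars-lasso}) and of
the confidence interval implied by Fact~\ref{th2} only, with CV-tuned
tuning parameters throughout; the index \texttt{j}, the data \texttt{x},
\texttt{y} and an error variance estimate \texttt{sigma.hat} (see
Section~\ref{subsec.addissues}) are assumed to be given, and
\texttt{lasso.proj} in the \texttt{hdi} package implements the actual
procedure):
\begin{verbatim}
library(glmnet)
## Lasso of y versus x (for bias correction)
beta.hat <- as.matrix(coef(cv.glmnet(x, y),
              s = "lambda.min"))[-1, 1]
## nodewise Lasso residuals Z^(j)
fitj <- cv.glmnet(x[, -j], x[, j])
Zj <- x[, j] - as.numeric(predict(fitj,
        x[, -j], s = "lambda.min"))
den <- sum(x[, j] * Zj)
## bias-corrected estimator b_j
bj <- sum(y * Zj) / den -
  sum((x[, -j] %*% beta.hat[-j]) * Zj) / den
## standard error and 95% interval
se.j <- sigma.hat * sqrt(sum(Zj^2)) / abs(den)
ci.j <- bj + c(-1, 1) * qnorm(0.975) * se.j
\end{verbatim}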
Furthermore, for a group $G \subseteq\{1,\ldots
,p\}$, we can test
a group hypothesis $H_{0,G}: \beta^0_j = 0$ for all $j \in G$ by
considering the test-statistic
\[
\max_{j \in G} \sigma_{\eps}^{-1}
\Omega_{jj}^{-1/2} \sqrt{n} |\hat{b}_j| \Rightarrow
\max_{j \in G} \Omega_{jj}^{-1/2}
|W_j|,
\]
where the limit on the right-hand side occurs if the null-hypothesis
$H_{0,G}$ holds true.
The distribution of $\max_{j \in G} |\Omega_{jj}^{-1/2} W_j|$ can be easily
simulated from dependent Gaussian random variables. We also remark that
sum-type statistics for large groups
cannot be easily treated because $\sum_{j \in G} |\Delta_j|$ might get out
of control.
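The simulation is straightforward. A minimal \textsf{R} sketch, assuming
that the matrix \texttt{Omega} from Fact~\ref{th2}, the index set
\texttt{G} and the observed value \texttt{stat.obs} of the test
statistic have already been computed:
\begin{verbatim}
library(MASS)
nsim <- 10000
## draws of W_G ~ N(0, Omega_GG)
W <- mvrnorm(nsim, rep(0, length(G)),
             Omega[G, G])
sdG <- sqrt(diag(Omega)[G])
## max over G of |W_j| / Omega_jj^(1/2)
stat.null <- apply(abs(sweep(W, 2, sdG,
                             "/")), 1, max)
pval.G <- mean(stat.null >= stat.obs)
\end{verbatim}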
\subsubsection{Ridge projection and bias correction}\label{subsec.ridge-proj}
Related to the desparsified Lasso estimator $\hat{b}$ in
(\ref{despars-lasso}) is an approach based on Ridge estimation. We sketch
here the main properties and refer to \citet{pb13} for a detailed
treatment.
Consider
\[
\hat{\beta}_{\mathrm{Ridge}} = \bigl(n^{-1} \bx^T \bx+
\lambda I\bigr)^{-1} n^{-1} \bx^T Y.
\]
A major source of bias occurring in Ridge estimation when $p > n$ comes
from the fact that the Ridge estimator is estimating a projected parameter
\[
\theta^0 = P_{R} \beta^0,\quad P_{R} =
\bx^T \bigl(\bx\bx^T\bigr)^{-}\bx,
\]
where $(\bx\bx^T)^{-}$ denotes a generalized inverse of $\bx\bx^T$. The
minor bias for $\theta^0$ then satisfies
\begin{eqnarray*}
\max_j\bigl|\EE[\hat{\beta}_{\mathrm{Ridge};j}] -
\theta^0_j\bigr| \le\lambda\bigl\| \theta^0
\bigr\|_2 \lambda_{\mathrm{min} \neq0}(\hat{\Sigma})^{-1},
\end{eqnarray*}
where $\lambda_{\mathrm{min} \neq0}(\hat{\Sigma})$ denotes the minimal
nonzero eigenvalue of $\hat{\Sigma}$ (\cite{shadeng11}). The quantity can
be made small by choosing $\lambda$ small. Therefore, for
$\lambda\searrow0^+$ and assuming Gaussian errors, we have that
\begin{equation}
\label{Ridge-distr}\quad \sigma_{\eps}^{-1} \bigl(\hat{
\beta}_{\mathrm{Ridge}} - \theta^0\bigr) \approx W,\quad W \sim{\cal
N}_p(0, \Omega_R),
\end{equation}
where $\Omega_R = (\hat{\Sigma} + \lambda)^{-1} \hat{\Sigma} (\hat
{\Sigma} +
\lambda)^{-1}/n$. Since
\[
\frac{\theta^0_j}{P_{R;jj}} = \beta^0_j + \sum
_{k \neq j} \frac{P_{R;jk}}{P_{R;jj}} \beta^0_k,
\]
the major bias for $\beta^0_j$ can be estimated and corrected with
\[
\sum_{k \neq j} \frac{P_{R;jk}}{P_{R;jj}} \hat{
\beta}_k,
\]
where $\hat{\beta}$ is the ordinary Lasso. Thus, we construct a
bias-corrected Ridge estimator, which addresses the potentially substantial
difference between $\theta^0$ and the target $\beta^0$:
\begin{eqnarray}
\label{corr-Ridge} \hat{b}_{R;j} = \frac{\hat{\beta}_{\mathrm{Ridge};j}}{P_{R;jj}} - \sum
_{k \neq
j} \frac{P_{R;jk}}{P_{R;jj}} \hat{\beta}_k,
\nonumber
\\[-8pt]
\\[-8pt]
\eqntext{j=1,
\ldots,p.}
\end{eqnarray}
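A minimal \textsf{R} sketch of (\ref{corr-Ridge}), with a small fixed
value for $\lambda$ and the Lasso estimate plugged in for the bias
correction (again only an illustration, which ignores the additional
correction term discussed below; the data \texttt{x}, \texttt{y} and
dimensions \texttt{n}, \texttt{p} are assumed to be given, and
\texttt{ridge.proj} in the \texttt{hdi} package implements the complete
procedure):
\begin{verbatim}
library(MASS); library(glmnet)
beta.hat <- as.matrix(coef(cv.glmnet(x, y),
              s = "lambda.min"))[-1, 1]
lamR <- 1 / n              ## small lambda
beta.ridge <- solve(crossprod(x) / n +
                lamR * diag(p),
                crossprod(x, y) / n)
## projection P_R = X^T (X X^T)^- X
PR <- crossprod(x, ginv(tcrossprod(x)) %*% x)
## sum_{k != j} P_R[j,k] * beta.hat[k]
off <- as.numeric(PR %*% beta.hat) -
  diag(PR) * beta.hat
bR <- (as.numeric(beta.ridge) - off) / diag(PR)
\end{verbatim}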
Based on (\ref{Ridge-distr}), we derive in Appendix~\ref{subsec.appadd} that
\begin{eqnarray}
\label{Ridge-repr}&&\sigma_{\eps}^{-1} \Omega_{R;jj}^{-1/2}
\bigl(\hat{b}_{R;j} - \beta^0_j\bigr)\nonumber\\
&&\quad \approx
\Omega_{R;jj}^{-1/2} W_j / P_{R;jj}\nonumber \\
&&\qquad{}+
\sigma_{\eps}^{-1} \Omega_{R;jj}^{-1/2}
\Delta_{R;j},\quad
W \sim{\cal N}_p(0, \Omega_R),
\\
&&|\Delta_{R;j}| \le\Delta_{R\mathrm{bound};j} \nonumber\\
&&\hspace*{22pt}\quad:= \max_{k \neq
j}
\biggl\llvert \frac{P_{R;jk}}{P_{R;jj}}\biggr\rrvert \bigl(\log(p)/n\bigr)^{1/2 - \xi},
\nonumber\end{eqnarray}
with the typical choice $\xi= 0.05$. Sufficient conditions for deriving
(\ref{Ridge-repr}) are assumption (B1) and that the sparsity satisfies $s_0
=O((n/\log(p))^{\xi})$ for $\xi$ as above.
Unlike in Fact~\ref{th2}, the term $\Delta_{R;j}$ is typically not
negligible and we correct the Gaussian part in (\ref{Ridge-repr}) by the
upper bound $\Delta_{R\mathrm{bound};j}$. For example,
for testing $H_{0,j}: \beta^0_j = 0$ we use the upper bound for the $p$-value
\begin{eqnarray*}
2\bigl(1 - \Phi\bigl(\sigma_{\eps}^{-1}\Omega_{R;jj}^{-1/2}
|P_{R;jj}|\bigl(|\hat {b}_{R;j}| - \Delta_{R\mathrm{bound};j}\bigr)_+\bigr)
\bigr).
\end{eqnarray*}
Similarly, for two-sided confidence intervals with coverage $1-\alpha$
we use
\begin{eqnarray*}
& &[\hat{b}_{R;j} -c_j,\hat{b}_{R;j} +
c_j],
\\
& &c_j = \Delta_{R\mathrm{bound};j} + \sigma_{\eps} \Omega
_{R;jj}^{1/2}/|P_{R;jj}| \Phi^{-1}(1-
\alpha/2).
\end{eqnarray*}
For testing a group hypothesis for $G \subseteq\{1,\ldots,p\}$,
$H_{0,G}: \beta^0_j = 0$ for all $j \in G$, we can proceed similarly
as at
the end of Section~\ref{subsec.desparslasso}: under the null-hypotheses
$H_{0,G}$, the statistic $\sigma_{\eps}^{-1}
\max_{j \in G} \Omega_{R;jj}^{-1/2} |\hat{b}_{R;j}|$ has a distribution
which is approximately stochastically upper
bounded by
\begin{eqnarray*}
\max_{j \in G} \bigl(\Omega_{R;jj}^{-1/2}
|W_j| / |P_{R;jj}| + \sigma_{\eps}^{-1}
\Omega_{R;jj}^{-1/2} |\Delta_{R;j}|\bigr);
\end{eqnarray*}
see also (\ref{Ridge-repr}).
When invoking the upper bound $\Delta_{R\mathrm{bound};j} \ge
|\Delta_{R;j}|$ as in (\ref{Ridge-repr}), we can easily simulate this
distribution from dependent Gaussian random variables, which in turn
can be
used to construct a $p$-value; we refer for further details to \citet{pb13}.
\subsubsection{Additional issues: Estimation of the error variance and
multiple testing correction}\label{subsec.addissues}
Unlike the Multi sample-splitting procedure in Section~\ref{subsec.multisample-split}, the desparsified Lasso and Ridge
projection method outlined in
Sections~\ref{subsec.desparslasso}--\ref{subsec.ridge-proj} require
plugging in an estimate of $\sigma_{\eps}$ and adjusting for multiple
testing. The scaled Lasso (\cite{sunzhang11})
leads to a consistent estimate of the error variance: it is a fully automatic
method which does not need any specification of a tuning parameter. In
\citet{reidtibsh13}, an empirical comparison of various
estimators suggests that the estimator based on a residual sum of
squares of
a cross-validated Lasso solution often yields good finite-sample
performance.
Regarding the adjustment when doing many tests for individual regression
parameters or groups thereof, one can use any valid standard
method to correct the $p$-values from the desparsified Lasso or Ridge
projection method. The prime examples are the Bonferroni--Holm procedure for
controlling the familywise error rate and the method from \citet{benyek01}
for controlling the false discovery rate. An approach for
familywise error control which explicitly takes into account the dependence among the
multiple hypotheses is proposed in \citet{pb13}, based on simulations for
dependent Gaussian random variables.
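In \textsf{R}, these standard corrections are directly available through
the function \texttt{p.adjust}; for a vector \texttt{pval} of raw
$p$-values, the Bonferroni--Holm and the \citet{benyek01} corrections are
obtained as follows:
\begin{verbatim}
> p.adjust(pval, method = "holm")
> p.adjust(pval, method = "BY")
\end{verbatim}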
\subsubsection{Conceptual differences between the methods}
We briefly outline here conceptual differences while Section~\ref{subsec.comparlm} presents empirical results.
The Multi sample-splitting
method is very generic and in the spirit of Breiman's appeal for stability
(\citeauthor{brei96}, \citeyear{brei96,brei96b}), it enjoys some kind of stability due to multiple
sample splits and aggregation; see also the discussion in Sections~\ref{subsec.othersparsemeth} and \ref{subsec.mainass}. The disadvantage
is that, in the worst
case, the method needs a beta-min or a weaker zonal assumption on the
underlying regression parameters: this is somewhat unpleasant since a
significance test should
\emph{find out} whether a regression coefficient is sufficiently large or
not.
Neither the desparsified Lasso nor the Ridge projection procedure makes
any assumption on the underlying regression coefficient except
sparsity. The
former is most powerful and asymptotically optimal if the design were
generated from a population distribution whose inverse covariance
matrix is
sparse. Furthermore, the convergence is uniform over all sparse regression
vectors and, hence, the method yields honest confidence regions or tests.
The Ridge projection method does not require any assumption on the
fixed design but does not reach the asymptotic Cram\'{e}r--Rao efficiency
bound. The construction with the additional correction term in
(\ref{delta-bound}) leads to reliable type I error control at the cost of
power.
In terms of computation, the Multi sample-splitting and Ridge projection
method are substantially less demanding than the desparsified Lasso.
\subsubsection{Other sparse methods than the
Lasso}\label{subsec.othersparsemeth}
All the methods described above are used ``in default mode'' in conjunction
with the Lasso (see also Section~\ref{subsec.hdilin}). This is not
necessary, and other estimators can be used.
For the Multi sample-splitting procedure, assumptions (A1) with $\delta
\to
0$ and (A2) are
sufficient for asymptotic correctness; see Fact~\ref{th1}. These
assumptions hold for many reasonable sparse estimators when requiring a
beta-min assumption and some sort of identifiability condition such as the
restricted eigenvalue or the compatibility condition on the design matrix~$\bx$; see also the discussion after Fact~\ref{th1}. It is unclear whether
one could gain substantially by using a different screening method than the
Lasso. In fact, the Lasso has been empirically found to perform rather well
for screening in comparison to the elastic net
(\cite{zou2005regularization}), marginal correlation screening
(\cite{fanlv07}) or thresholded Ridge regression; see \citet{pbmand13}.
For the desparsified Lasso, the error of
the estimated bias correction can be controlled by using a bound for
$\|\hat{\beta} - \beta^0\|_1$. If we require (B2) and (B3) [or an $\ell_1$
sparsity assumption instead of (B3)], the estimation error in the bias
correction, based on an estimator
$\hat{\beta}$ in (\ref{despars-lasso}), is asymptotically negligible if
\begin{equation}
\label{ell1bound} \bigl\|\hat{\beta} - \beta^0\bigr\|_1 =
o_P\bigl(1/\sqrt{\log(p)}\bigr).
\end{equation}
This bound is implied by (B1) and (B2) for the Lasso, but other estimators
exhibit this bound as well, as mentioned below. When using such another
estimator, the wording ``desparsified Lasso'' does not make sense
anymore. Furthermore, when using the
square root Lasso for the construction of $Z^{(j)}$, we only need
(\ref{ell1bound}) to obtain asymptotic normality with the $\sqrt{n}$
convergence rate (\cite{vdg14}).
For the Ridge projection method, a bound for $\|\hat{\beta} - \beta^0\|_1$
is again the only assumption such that the procedure is asymptotically
valid. Thus, for the corresponding bias correction, other methods than the
Lasso can be used.
We briefly mention a few other methods for which we have reasons that (A1)
with very small $\delta> 0$ and (A2), or the bound in (\ref{ell1bound})
hold: the adaptive Lasso
(\cite{zou06}) analyzed in greater detail in \citet{geer11}, the MC$+$
procedure with its high-dimensional mathematical analysis
(\cite{zhang2010}), or methods with concave regularization penalty such as
SCAD (\cite{fan2001variable}) analyzed in broader generality and detail in
\citet{fan2014}. If the assumptions (A1) with small $\delta> 0$ and (A2)
fail for the Multi sample-splitting method, the multiple sample
splitting still allows one to check the
stability of the $p$-values $P_{\mathrm{corr},j}^{[b]}$ across $b$ (i.e.,
across sample splits). If the variable screening is unstable, many of the
$P_{\mathrm{corr},j}^{[b]}$ (across $b$) will be equal to 1; thus, the
aggregation has a tendency to produce small $p$-values only if most of them, each
from a sample split, are stable and small. See also
\citet{manbu13}, Section~5. In connection with the desparsified method, a
failure of the single sufficient condition in (\ref{ell1bound}), when
using, for example, the square root Lasso for construction of the
$Z^{(j)}$'s, might result
in too large a bias. In the absence of resampling or Multi sample
splitting, it
seems difficult to diagnose such a failure (of the desparsified or Ridge
projection method) with real data.
\subsection{\texttt{hdi} for Linear Models}\label{subsec.hdilin}
In the \textsf{R}-package \texttt{hdi}, available on R-Forge (\cite{hdipackage}), we provide implementations for the
Multi sample-splitting, the Ridge projection and the
desparsified Lasso method.
Using the \textsf{R} functions is straightforward:
\begin{verbatim}
> outMssplit <- multi.split(x = x, y = y)
> outRidge   <- ridge.proj(x = x, y = y)
> outLasso   <- lasso.proj(x = x, y = y)
\end{verbatim}
For users that are very familiar with the procedures, we provide flexible
options. For example, we can easily use an alternative model selection
or another
``classical'' fitting procedure using the arguments \texttt{model.selector}
and \texttt{classical.fit} in \texttt{multi.split}. The default options
should be satisfactory for standard usage.
All procedures return $p$-values and confidence intervals. The Ridge and
desparsified Lasso methods return both single testing $p$-values and
multiple testing corrected $p$-values, unlike the Multi sample-splitting
procedure which only returns multiple testing corrected $p$-values. The
confidence intervals are for individual parameters only (corresponding to
single hypothesis testing).
The single testing $p$-values and the multiple testing corrected
$p$-values are extracted from the fit as follows:
\begin{verbatim}
> outRidge$pval
> outRidge$pval.corr
\end{verbatim}
By default, we correct for controlling the familywise error rate for
the $p$-values \texttt{pval.corr}.
Confidence intervals are acquired through the usual \texttt{confint}
interface. Below we extract the 95 \% confidence intervals for those
parameters whose $p$-values are smaller than \texttt{0.05}:
\begin{verbatim}
> confint(outMssplit,
          parm = which(outMssplit$pval.corr <= 0.05),
          level = 0.95)
\end{verbatim}
Due to the fact that the desparsified Lasso method is quite computationally
intensive, we provide the option to parallelize the method on a
user-specified number of cores.
We refer to the manual of the package for more detailed information.
\subsection{Other Methods}\label{subsec.othermeth}
Recently, other procedures have been suggested for construction of
$p$-values and confidence intervals.
Residual-type bootstrap approaches are proposed and analyzed in
\citet{chatter13} and \citet{liuyu13}. A problem with these approaches
is the nonuniform convergence to a limiting distribution and exposure to
the super-efficiency phenomenon, that is, if the true parameter equals
zero, a confidence region might be the singleton $\{0\}$ (due to a finite
amount of bootstrap resampling), while for nonzero true parameter values,
the coverage might be very poor or the confidence interval very long.
The covariance test (\cite{covtest14}) is another proposal which
relies on the solution path of the Lasso and provides $p$-values for
conditional tests that all relevant variables enter the Lasso solution path
first. It is related to post-selection inference, mentioned in Section~\ref{subsec.postsel}.
In \citet{jamo13b}, a procedure was proposed that is very similar to the
one described in Section~\ref{subsec.desparslasso}, with the only
difference being that $Z^{(j)}$ is picked
as the solution of a convex program rather than using the Lasso. The
method is aiming to relax the sparsity assumption (B3) for the design.
A conservative \emph{Group-bound} method which needs no regularity
assumption for the
design, for example, no compatibility assumption (\ref{compat}), has
been proposed
by \citet{meins13}. The method has the capacity to
automatically determine whether a regression coefficient is
identifiable or
not, and this makes the procedure very robust against ill-posed
designs. The
main motivation of the method is in terms of testing groups of correlated
variables, and we discuss it in more detail in Section~\ref{subsec.assfree}.
While all the methods mentioned above are considered in a comparative
simulation study in Section~\ref{subsec.comparlm}, we mention here some
others. The idea of estimating a low-dimensional component of a
high-dimensional parameter is also worked out in
\citet{belloni2012sparse}, \citet{beletal13}, bearing connections to the
approach of
desparsifying the Lasso. Based on stability selection
(\cite{mebu10}), \citet{shah13} propose a version which leads to $p$-values
for testing
individual regression parameters. Furthermore, there are new and
interesting proposals for controlling the false discovery rate, in a
``direct way'' (\citeauthor{bogdan13} \citeyear{bogdan13,bogdan14}; \cite{foygcand14}).
\subsection{Main Assumptions and Violations}\label{subsec.mainass}
We discuss here some of the main assumptions, potential violations and
some corresponding implications calling for caution when aiming for
confirmatory conclusions.
\textit{Linear model assumption}. The first one is that the linear (or
some other) model is
correct. This might be rather unrealistic and, thus, it is important to
interpret the output of software or a certain method. Consider a nonlinear
regression model
\begin{eqnarray*}
&&\mbox{random design}:\quad Y_0 = f^0(X_0) +
\eta_0,
\\
&&\mbox{fixed design}:\quad Y = f^0(\bx) + \eta,
\end{eqnarray*}
where, with some slight abuse of notation, $f^0(\bx) = (f^0(\bx
_1),\ldots,
f^0(\bx_n))^T$. We assume for the random design model, $\eta_0$ is
independent from $X_0$, $\EE[\eta_0] = 0$, $\EE[f^0(X_0)] = 0$, $\EE
[X_0] =
0$, and the data are $n$
i.i.d. realizations of $(X_0,Y_0)$; for the fixed design model, the $n
\times1$ random vector $\eta$ has i.i.d. components with
$\EE[\eta_i]=0$. For the random design model, we consider
\begin{eqnarray}
\label{betaproj} Y_0 &=& \bigl(\beta^0\bigr)^T
X_0 + \eps_0,\nonumber \\
\eps_0 &=& f^0(X_0)
- \bigl(\beta^0\bigr)^T X_0 +
\eta_0,
\\
\beta^0 &=&\argmin_{\beta} \EE\bigl[\bigl(f^0(X_0)
- \beta^T X_0\bigr)^2\bigr]\nonumber
\end{eqnarray}
[where the latter is
unique if $\Cov(X_0)$ is positive definite]. We note that $\EE[\eps_0|X_0]
\neq0$ while $\EE[\eps_0] = 0$ and, therefore, the inference should be
\emph{unconditional} on $\bx$ and is to be interpreted for the projected
parameter $\beta^0$ in (\ref{betaproj}). Furthermore, for correct asymptotic
inference of the projected parameter $\beta^0$, a modified estimator for
the asymptotic variance of the estimator is needed; and then both the
Multi sample-splitting and the desparsified Lasso are asymptotically
correct (assuming similar conditions as if the model were correct). The
Multi sample-splitting method is well suited for the random design case
because the sample splitting (resampling type) is coping well with
i.i.d. data. This is in contrast to fixed design, where the data is not
i.i.d. and the Multi sample-splitting method for a misspecified linear
model is typically not working anymore. The details are given in
\citet{pbvdg15}.
For a fixed design model with $\mathrm{rank}(\bx) = n$, we can always write
\[
Y = \bx\beta^0 + \eps,\quad \eps= \eta
\]
for many solutions $\beta^0$. For ensuring that the inference is valid, one
should consider a sparse $\beta^0$, for example, the basis pursuit
solution from
compressed sensing (\cite{candes2006near}) as one among many
solutions. Thus, inference should be
interpreted for a \emph{sparse} solution $\beta^0$, in the sense that a
confidence interval for the $j$th component would cover this $j$th
component of all sufficiently sparse solutions $\beta^0$. For the
high-dimensional fixed design case,
there is no misspecification with respect to linearity of the model;
misspecification might happen, though, if there is no solution $\beta
^0$ which
fulfills a required sparsity condition. The details are given again in
\citet{pbvdg15}.
The assumption about constant error variance might not hold. We note that
in the random design case of a nonlinear model as above, the error in
(\ref{betaproj}) has nonconstant variance when conditioning on $\bx$, but,
unconditionally, the noise is homoscedastic. Thus, as outlined, the
inference for a random design linear model is asymptotically valid
(unconditional on $\bx$) even though the conditional error distribution
given $\bx$ has nonconstant variance.
\textit{Compatibility or incoherence-type assumption}.
The methods in Section~\ref{subsec.lm-methods} require an identifiability
assumption such as the compatibility condition on the design matrix $\bx$
described in (\ref{compat}). The procedure in Section~\ref{subsec.assfree}
does not require such an assumption: if a component of the regression
parameter is not identifiable, the method
will not claim significance. Hence, some robustness against
nonidentifiability is offered with such a method.
\textit{Sparsity.}
All the described methods require some sparsity assumption of the parameter
vector $\beta^0$ [if the model is misspecified, this concerns the parameter
$\beta^0$ as in (\ref{betaproj}) or the basis pursuit solution]; see the
discussion of (A1) after Fact~\ref{th1} or assumption (B1). Such sparsity
assumptions can be somewhat relaxed to require weak sparsity in terms of
$\|\beta^0\|_r$ for some $0 < r < 1$, allowing that many or all regression
parameters are nonzero but sufficiently small (cf. \cite{vdg15}; \citep{pbvdg15}).
When the truth (or the linear approximation of the true model) is
nonsparse, the methods are expected to break down. With the Multi
sample-splitting procedure, however, a violation of sparsity might be
detected,
since for nonsparse problems, a sparse variable screening method will be
typically unstable with the consequence that the resulting aggregated
$p$-values are typically not small; see also Section~\ref{subsec.othersparsemeth}.
Finally, we note that for the desparsified Lasso, the sparsity assumption
(B3) or its weaker version can be dropped when using the square root Lasso;
see the discussion after Fact~\ref{th2}.
\textit{Hidden variables}.
The problem of hidden variables is most prominent in the area of causal
inference (cf. \cite{pearl00}). In the presence of hidden variables, the
presented techniques need to be adapted, adopting ideas from, for
example, the
framework of EM-type estimation (cf. \cite{dempster1977maximum}), low-rank
methods (cf. \cite{chandrasekaran2012}) or the FCI technique from causal
inference (cf. \cite{sgs00}).
\subsection{A Broad Comparison}\label{subsec.comparlm}
We compare a variety of methods on the basis of multiple testing corrected
$p$-values and single testing confidence intervals. The methods we look at
are the multiple sample-splitting method \emph{MS-Split} (Section~\ref{subsec.multisample-split}), the desparsified Lasso method
\emph{Lasso-Pro} (Section~\ref{subsec.desparslasso}), the Ridge
projection method \emph{Ridge} (Section~\ref{subsec.ridge-proj}), the covariance test \emph{Covtest} (Section~\ref{subsec.othermeth}), the method by Javanmard and Montanari
\emph{Jm2013} (Section~\ref{subsec.othermeth}) and the two bootstrap procedures mentioned in Section~\ref{subsec.othermeth} [\emph{Res-Boot} corresponds to
\citet{chatter13} and \emph{liuyu} to \citet{liuyu13}].
\subsubsection{Specific details for the methods}
For the estimation of the error variance, for the Ridge projection or the
desparsified Lasso method, the scaled Lasso is used as mentioned in
Section~\ref{subsec.addissues}.
For the choice of tuning parameters for the nodewise Lasso regressions
(discussed in Section~\ref{subsec.desparslasso}), we consider two
alternatives: cross-validation, or the procedure we favor (denoted
by Z\&Z) discussed in Appendix~\ref{subsec.appadd}.
We do not consider the bootstrap procedures in connection with multiple
testing adjustment because the number of bootstrap samples required to
reach far enough into the tails of the distribution becomes prohibitively
large; some additional importance sampling might help
to address this issue.
Regarding the covariance test, the procedure does not directly provide
$p$-values for the
hypotheses we are interested in. For the sake of comparison though, we use
the interpretation as in \citet{covtestpblmvdg14}.
This interpretation is not backed by theoretical reasoning and
functions more as a heuristic.
Thus, the results of the covariance test
procedure should be interpreted with caution.
For the method \emph{Jm2013}, we used our own implementation instead of the
code provided by the authors: we had already
implemented our own version when we discovered that their code was available, and
our version was (by orders of magnitude) better in terms of error
control. Faced with the dilemma of a fair comparison, we kept the
best-performing alternative.
\subsubsection{Data used}\label{subsubsec.data}
For the empirical results, simulated design matrices as well as design
matrices from real data are used. The simulated design matrices are
generated $\sim\mathcal{N}_p(0,\Sigma)$ with covariance matrix $\Sigma
$ of
the following three types:
\begin{eqnarray*}
&&\mbox{Toeplitz:}\quad \Sigma_{j,k} = 0.9^{|j-k|},
\\
&&\mbox{Exp.decay:}\quad \bigl(\Sigma^{-1}\bigr)_{j,k} =
0.4^{|j-k|/5},
\\
&&\mbox{Equi.corr:}\quad \Sigma_{j,k} \equiv0.8 \quad\mbox{for all } j \neq k,
\\
&&\hspace*{56pt}\Sigma_{j,j} \equiv1\quad \mbox{ for all } j.
\end{eqnarray*}
The sample size and dimension are fixed at $n=100$ and $p=500$,
respectively. We note that the Toeplitz type has a banded inverse
$\Sigma^{-1}$,
and, vice-versa, the Exp.decay type exhibits a banded $\Sigma$.
The design matrix RealX from real gene expression data of Bacillus subtilis
($n=71,p=4088$) was
kindly provided by DSM (Switzerland) and is publicly available
(\cite{bumeka13}). To make the
problem somewhat comparable in difficulty to the simulated designs, the
number of variables is reduced to $p=500$ by taking the variables with
highest empirical variance.
The cardinality of the active set is picked to be one of two levels $s_0
\in\{3,15\}$.
For each of the active set sizes, we look at 6 different ways of picking
the sizes of the nonzero coefficients:
\begin{eqnarray*}
&&\mbox{Randomly generated}:\quad U(0,2), U(0,4), U(-2,2),
\\
&&\mbox{A fixed value}:\quad 1, 2 \mbox{ or } 10.
\end{eqnarray*}
The positions of the nonzero coefficients as columns of the design
$\mathbf X$ are picked at random. Results where the nonzero
coefficients were positioned to be the first $s_0$ columns of
$\mathbf X$ can be found in the supplemental article
(\cite{supplement}).
Once we have the design matrix $\mathbf X$ and coefficient vector
$\beta^0$,
the responses $Y$ are generated according to the linear model equation with
$\eps\sim\mathcal{N}(0,1)$.
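As an illustration (and not the exact code used for our simulations), one way to generate a single such data set in \textsf{R}, here for the Toeplitz design with $s_0=3$ and $U(0,2)$ coefficients, is the following sketch; the variable names are our own:
\begin{verbatim}
> library(MASS)  # for mvrnorm()
> n <- 100; p <- 500; s0 <- 3
> Sigma <- toeplitz(0.9^(0:(p - 1)))
> x <- mvrnorm(n, rep(0, p), Sigma)
> beta0 <- rep(0, p)
> S0 <- sample(1:p, s0)  # random positions
> beta0[S0] <- runif(s0, 0, 2)
> y <- x %*% beta0 + rnorm(n)
\end{verbatim}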
\begin{figure*}
\includegraphics{527f02.eps}
\caption{Familywise error rate (FWER), average number of false
positive [AVG(V)] and power
for multiple testing based on various methods for a linear model. The
desired control
level for the FWER is
$\alpha=0.05$. The average number of false positives AVG(V) for each
method is shown in the middle. The design matrix is of type
\emph{Toeplitz}, with the active set size being
$s_0=3$ (top) and $s_0=15$ (bottom).}
\label{fig:lintoeplitz}
\end{figure*}
\begin{figure*}[b]
\includegraphics{527f03.eps}
\caption{See caption of Figure \protect\ref{fig:lintoeplitz} with the only
difference being the type of design matrix. In this plot, the design
matrix type is \emph{Exp.decay}.}
\label{fig:linexpdecay}
\end{figure*}
\subsubsection{$p$-values}\label{subsubsec.pvals}
We investigate multiple testing corrected $p$-values for two-sided
testing of the null hypotheses $H_{0,j}: \beta^0_j = 0$ for $j=1,\ldots
,p$.
We report the power and the familywise error rate (FWER) for each method:
\begin{eqnarray*}
\mbox{Power}& =& \sum_{j \in S_0} \PP[H_{0,j}\mbox{
is rejected}]/s_0,
\\
\mbox{FWER} &=& \PP\bigl[\exists j \in S_0^c :
H_{0,j}\mbox{ is rejected}\bigr].
\end{eqnarray*}
We calculate
empirical versions of these quantities based on fitting 100 simulated
responses $Y$ coming from newly generated $\eps$.
For every design type, active set size and coefficient type combination we
obtain 50 data points of the empirical versions of ``Power'' and ``FWER,''
from 50 independent simulations. Thereby, each data point has a newly generated
$X$, $\beta^0$ (if not fixed) and active set positions $S_0 \subseteq\{1,\ldots,
p\}$; thus, the 50 data points indicate the variability with respect to the
three quantities in the data generation (for the same covariance model of
the design, the same model for the regression parameter and its active set
positions). The data points are grouped in plots by design type and active
set size.
We also report the average number of false positives \texttt{AVG(V)} over
all data points per method next to the FWER plot.
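For concreteness, suppose a logical matrix \texttt{reject} of dimension $100 \times p$ records in entry $(r,j)$ whether $H_{0,j}$ was rejected for the $r$th simulated response, and \texttt{S0} holds the active set positions; these are hypothetical helper objects, not part of the \texttt{hdi} package. The empirical versions can then be computed along the following lines:
\begin{verbatim}
> power <- mean(colMeans(reject[, S0]))
> fwer <- mean(rowSums(reject[, -S0]) > 0)
> avgV <- mean(rowSums(reject[, -S0]))
\end{verbatim}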
The results, illustrating the performance for various methods,
can be found in Figures~\ref{fig:lintoeplitz},
\ref{fig:linexpdecay}, \ref{fig:linequi} and \ref{fig:linrealx}.
\begin{figure*}
\includegraphics{527f04.eps}
\caption{See caption of Figure \protect\ref{fig:lintoeplitz} with the only
difference being the type of design matrix. In this plot, the design
matrix type is \emph{Equi.corr}.}
\label{fig:linequi}
\end{figure*}
\begin{figure*}[b]
\includegraphics{527f05.eps}
\caption{See caption of Figure \protect\ref{fig:lintoeplitz} with the only
difference being the type of design matrix. In this plot, the design
matrix type is \emph{RealX}.}
\label{fig:linrealx}
\end{figure*}
\begin{figure*}
\includegraphics{527f06.eps}
\caption{Confidence intervals and their coverage rates for 100 realizations of
a linear model with fixed design of dimensions $n=100$, $p=500$. The
design matrix was of type Toeplitz and the active set was of size
$s_0=3$. The nonzero coefficients were chosen by sampling once from
the uniform distribution $U[0,2]$. For each method, 18 coefficients are
shown from left to right with the 100 estimated 95\%-confidence
intervals drawn for each coefficient. The first 3 coefficients are the
nonzero coefficients in descending order of value. The other 15
coefficients, to the right of the first 3, were chosen to be those
coefficients with the worst coverage. The size of each coefficient is
illustrated by the height of a black horizontal bar. To illustrate the
coverage of the confidence intervals, each confidence interval is
either colored red or black depending on the inclusion of the true
coefficient in the interval. Black means the true coefficient was
covered by the interval. The numbers written above the coefficients
are the number of confidence intervals, out of 100, that covered the
truth. All confidence intervals are on the same scale such that one
can easily see which methods have wider confidence intervals. To
summarize the coverage for all zero coefficients $S_0^c$ (including
those not shown on the plot), the rounded average coverage of those
coefficients is given to the right of all coefficients.}
\label{fig:lincitoeplitz}
\end{figure*}
\subsubsection{Confidence intervals}
We investigate confidence intervals for the one particular setup of the
Toeplitz design, active set size $s_0=3$ and coefficients $\beta^0_j
\sim
U[0,2]\ (j \in S_0)$. The active set positions are chosen to be the first
$s_0$ columns of $\mathbf X$. The results we show will correspond
to a
single data point in the $p$-value results.
In Figure~\ref{fig:lincitoeplitz}, 100 confidence intervals are plotted for
each coefficient for each method. These confidence intervals are the
results of fitting 100 different responses Y resulting from newly generated
$\eps$ error terms.
For the Multi sample-splitting method from Section~\ref{subsec.multisample-split}, if a variable did not get selected often
enough in the sample splits, there is not enough information to draw a
confidence interval for it. This is represented in the plot by drawing a
confidence interval only when the variable was selected often enough. If the (uncheckable) beta-min
condition (\ref{beta-min}) were fulfilled, we would know that those
confidence intervals cover zero.
For the bootstrapping methods, an invisible confidence
interval is the result of the coefficient being set to zero in all
bootstrap iterations.
\subsubsection{Summarizing the empirical results}
As a first observation, the impact of the sparsity of the problem on
performance cannot be denied. The power clearly gets worse for $s_0=15$ for
the Toeplitz and Exp.decay setups. The FWER becomes too high for quite a
few methods for $s_0=15$ in the cases of Equi.corr and RealX.
For the sparsity $s_0=3$, the Ridge projection method manages to control
the FWER as desired for all setups. In the case of $s_0=15$, it is the
Multi sample-splitting method that comes out best in comparison to the
other methods. Generally speaking, good error control tends to be
associated with a
lower power, which is not too surprising since we are dealing with the
trade-off between type I and type II errors. The desparsified Lasso method
turns out to be a less conservative alternative with imperfect but
reasonable FWER control as long as the problem is sparse
enough ($s_0=3$). The method has a slightly too high
FWER for the Equi.corr and RealX setups, but FWER around 0.05
for Toeplitz and Exp.decay designs. Doing the Z\&Z tuning procedure
helps the error control, as can be seen most clearly in the Equi.corr
setup.
The results for the simulations where the positions for the nonzero
coefficients were not randomly chosen, presented in the supplemental
article (\cite{supplement}), largely give the same
picture. In comparison to the results presented before,
the Toeplitz setup is easier while the Exp.decay setup is
more challenging. The Equi.corr results are very similar to the ones
from before, which is to be expected from the covariance structure.
Looking into the confidence interval results, it
is clear that the confidence intervals of the Multi sample-splitting
method and the Ridge projection method are wider than the rest.
For the bootstrapping methods, the super-efficiency phenomenon
mentioned in Section~\ref{subsec.othermeth} is visible. It is important to
note here that the smallest nonzero coefficient, the third
column, has very poor coverage from these methods.
We can conclude that the coverage of the zero coefficients is decent
for all methods and that the coverage of the nonzero coefficients is
in line with the error rates for the $p$-values.
Confidence interval results for many other setup combinations are provided
in the supplemental article (\cite{supplement}). The observations are
to a large extent the same.
\section{Generalized Linear Models}\label{sec.GLM}
Consider a generalized linear model
\begin{eqnarray*}
& &Y_1,\ldots,Y_n\quad \mbox{independent},
\\
& &g\bigl(\EE[Y_i|X_i = x]\bigr) = \mu^0 +
\sum_{j=1}^p \beta^0_j
x^{(j)},
\end{eqnarray*}
where $g(\cdot)$ is a real-valued, known link function. As before, the goal
is to construct confidence intervals and statistical tests for the unknown
parameters $\beta^0_1,\ldots,\beta^0_p$, and maybe $\mu^0$ as well.
\subsection{Methods}\label{subsec.GLMmethods}
The Multi sample-splitting method can be modified for GLMs in an obvious
way: the variable screening step using the first half of the data can be
based on the $\ell_1$-norm regularized MLE, and $p$-values and confidence
intervals using the second half of the sample are constructed from the
asymptotic distribution of the (low-dimensional) MLE. Multiple testing
correction and aggregation of the $p$-values from multiple sample splits are
done exactly as for linear models in Section~\ref{subsec.multisample-split}.
A desparsified Lasso estimator for GLMs can be constructed as follows
(\cite{vdgetal13}): The
$\ell_1$-norm regularized MLE $\hat{\theta}$ for the parameters $\theta
^0 =
(\mu^0,\beta^0)$ is desparsified with a method based on the
Karush--Kuhn--Tucker (KKT) conditions for $\hat{\theta}$, leading to an
estimator with an asymptotic Gaussian distribution. The Gaussian
distribution can then be used to construct confidence intervals and
hypothesis tests.
\subsection{Weighted Squared Error Approach}\label{subsec.GLMweighted}
The problem can be simplified in such a way that we can apply the
approaches for the linear model from Section~\ref{sec.LM}. This can be done
for all types of generalized linear models (as shown in Appendix~\ref{subsec.app.general.wsqerr}), but we restrict ourselves in this section
to the specific case of logistic regression. Logistic regression is
usually fitted
by applying the iteratively reweighted least squares (IRLS)
algorithm where at every iteration one solves a weighted least squares
problem (\cite{hastetal09}).
The idea is now to apply a standard $\ell_1$-penalized fit of the model, build
up the weighted least squares problem at the $\ell_1$-solution and then apply
our linear model methods to this problem.
We use the notation $\hat{\pi}_i, i = 1, \ldots,n$ for the
estimated probability of the binary outcome. $\hat{\pi}$ is the vector of
these probabilities.
\begin{figure*}
\includegraphics{527f07.eps}
\caption{Familywise error rate (FWER) and power
for multiple testing based on various methods for logistic regression.
The desired control
level for the FWER is $\alpha=0.05$. The design matrix is of type
\emph{Toeplitz} in the top plot and \emph{Equi.corr} in the bottom
plot. If the method name contains a capital \texttt{G}, it is the
modified glm version, otherwise the linear model methods are using
the weighted squared error approach.}
\label{fig:glmsimul}
\end{figure*}
From \citet{hastetal09}, the adjusted response variable becomes
\[
Y_{\mathrm{adj}} = \mathbf X \hat{\beta} + \mathbf W^{-1}(Y-\hat{
\pi}),
\]
and the weighted least squares problem is
\[
\hat{\beta}_{\mathrm{new}} = \argmin_{\beta} (Y_{\mathrm{adj}} - \mathbf
X \beta)^T \mathbf W (Y_{\mathrm{adj}} - \mathbf X \beta),
\]
with weights
\[
\mathbf W =
\pmatrix{ \hat{\pi}_1(1-\hat{
\pi}_1) & 0 & \ldots& 0\vspace*{2pt}
\cr
0 & \hat{
\pi}_2(1-\hat{\pi}_2) & \ddots& \vdots\vspace*{2pt}
\cr
\vdots& \ddots& \ddots& 0\vspace*{2pt}
\cr
0 & \ldots& 0 & \hat{
\pi}_n(1-\hat{\pi}_n) }
\hspace*{-0.5pt}.
\]
We rewrite $Y_{w} = \sqrt{\mathbf W} Y_{\mathrm{adj}}$ and $X_w =
\sqrt{\mathbf W} \mathbf X$ to get
\[
\hat{\beta}_{\mathrm{new}} = \argmin_{\beta} (Y_w - \mathbf
X_w \beta)^T(Y_w - \mathbf X_w
\beta).
\]
The linear model methods can now be applied to $Y_{w}$ and
$\mathbf X_{w}$, where the estimate $\hat{\sigma}_{\eps}$ has to
be set to the value
1. We note that in the low-dimensional case, the resulting $p$-values (with
unregularized residuals $Z_j$) are very similar to the $p$-values
provided by
the standard \texttt{R}-function \texttt{glm}.
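To make this concrete, a minimal \textsf{R} sketch of the transformation (our own illustration, assuming \texttt{glmnet} is used for the initial $\ell_1$-penalized logistic fit; this is not the implementation inside the \texttt{hdi} package) reads:
\begin{verbatim}
> library(glmnet)
> fit <- cv.glmnet(x, y,
  family = "binomial")
> eta <- as.numeric(predict(fit, newx = x,
  s = "lambda.min"))  # X betahat
> pihat <- 1 / (1 + exp(-eta))
> w <- pihat * (1 - pihat)  # diag of W
> yadj <- eta + (y - pihat) / w
> yw <- sqrt(w) * yadj
> xw <- sqrt(w) * x
\end{verbatim}
The linear model methods are then applied to \texttt{yw} and \texttt{xw} with the error variance estimate fixed at the value 1, as described above.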
\subsection{Small Empirical Comparison}
We provide a small empirical comparison of the methods mentioned in
Sections~\ref{subsec.GLMmethods} and \ref{subsec.GLMweighted}. When applying the linear
model procedures, we use the naming from Section~\ref{subsec.comparlm}. The new
GLM-specific methods from Section~\ref{subsec.GLMmethods} are referred to by their
linear model names with a capital G added to them.
For simulating the data, we use a subset of the variations presented in Section~\ref{subsubsec.data}.
We only look at Toeplitz and Equi.corr and an active set size of
$s_0=3$. The number of variables is fixed at $p=500$, but the sample
size is
varied $n\in\{100,200,400\}$.
The coefficients were randomly generated:
\begin{eqnarray*}
\mbox{Randomly generated}:\quad U(0,1), U(0,2), U(0,4).
\end{eqnarray*}
The nonzero coefficient positions are chosen randomly in one case and
fixed as the first $s_0$ columns of $\mathbf X$ in the other.
For every combination (of type of design, type of coefficients,
sample size and coefficient positions), 100 responses $Y$ are simulated to
calculate empirical versions of the ``Power'' and ``FWER'' described in
Section~\ref{subsubsec.pvals}.
In contrast to the $p$-value results from Section~\ref{subsubsec.pvals},
there is only one resulting data point per setup combination (i.e., no
additional replication with new random covariates, random coefficients and
random active set). For each
method, there are 18 data points, corresponding to 18 settings, in each plot.
The results can be found in Figure~\ref{fig:glmsimul}.
Both the modified GLM methods and the weighted squared error
approach work adequately. The Equi.corr setup does prove to be
challenging for \emph{Lasso-ProG}.
\subsection{\texttt{hdi} for Generalized Linear Models}
In the \texttt{hdi} \textsf{R}-package (\cite{hdipackage}) we also provide
the option to use the Ridge projection method and the desparsified Lasso
method with the weighted squared error approach.
We provide the option to specify the \texttt{family} of the response
$Y$ as
done in the \textsf{R}-package \texttt{glmnet}:
\begin{verbatim}
> outRidge
<- ridge.proj(x = x, y = y,
family = "binomial")
> outLasso
<- lasso.proj(x = x, y = y,
family = "binomial")
\end{verbatim}
$p$-values and confidence intervals are extracted in the exact same way
as for the linear model case; see Section~\ref{subsec.hdilin}.
\section{Hierarchical Inference in the Presence of Highly Correlated
Variables}\label{sect.hierinf}
The previous sections and methods assume in some form or another that
the effects are strong enough to enable accurate estimation of the
contribution of \emph{individual variables}.
Variables are often highly correlated for high-dimensional
data. Working with a small sample size, it is impossible to attribute
any effect to
an individual variable if the correlation between a block of variables
is too high. Confidence intervals for individual
variables are then very wide and uninformative. Asking for confidence
intervals for individual variables thus leads to poor power of all
procedures considered so far. Perhaps even worse, under high correlation
between variables the coverage of some procedures will also be
unreliable as the necessary conditions for correct coverage (such as
the compatibility assumption) are violated.
In such a scenario, the individual effects are not granular enough
to be resolved. However, it might still be possible to attribute an
effect to a group
of variables. The groups can arise naturally due to a specific
structure of
the problem, such as in applications of the \emph{group
Lasso} (\cite{yuan06}).
Perhaps more often, the groups are derived
via hierarchical clustering (\cite{hartigan1975clustering}), using the
correlation structure or some
other distance between the variables.
The main idea is as
follows. A hierarchy ${\cal T}$ is a set
of clusters or groups $\{{\cal C}_k; k\}$ with ${\cal C}_k \subseteq
\{1,\ldots,p\}$. The root node (cluster) contains all variables
$\{1,\ldots,p\}$. For any two clusters ${\cal C}_k, {\cal C}_{\ell}$,
either one cluster is a subset of the other or they have an empty
intersection. Usually, a hierarchical clustering has an additional notion
of a level such that, on each level, the corresponding clusters build a
partition of $\{1,\ldots,p\}$. We consider a hierarchy ${\cal T}$ and
first test the root node cluster
${\cal C}_0
= \{1,\ldots,p\}$ with
hypothesis $H_{0,{\cal C}_0}: \beta_1 = \beta_2 = \cdots= \beta_p =
0$. If this hypothesis is rejected, we test the next clusters ${\cal C}_k$
in the hierarchy (all clusters whose supersets are the root node
cluster ${\cal
C}_0$ only): the corresponding cluster hypotheses are $H_{0,{\cal C}_k}:
\beta_j = 0$ for all $j \in{\cal C}_k$. For the hypotheses which can be
rejected, we consider all smaller clusters whose only supersets are
clusters which have been rejected by the method before, and we continue to
go down the tree hierarchy until no more cluster hypotheses can be
rejected.
With the hierarchical scheme in place, we still need a test for the
null hypothesis $H_{0,{\cal C}}$ of a cluster of variables. The tests
have different properties. For example, whether a multiplicity
adjustment is necessary will depend on the chosen test.
We will describe below some methods that are useful for testing
the effect of a group of variables and which can be used in such a
hierarchical approach. The
nice and interesting feature of the procedures is that they adapt
automatically to the level of the hierarchical tree: if a signal of a small
cluster of variables is strong, and if that cluster is sufficiently uncorrelated
with all other variables or clusters, the cluster will be detected as
significant.
Vice-versa, if the signal is weak or if the cluster has too high a
correlation with other variables or clusters, the cluster will not
become significant. For example, a single variable
cannot be detected as significant if it has too much correlation to
other variables or clusters.
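Schematically, the top-down scheme amounts to a simple loop over a queue of clusters. The following \textsf{R} sketch is purely illustrative: \texttt{test.cluster} (returning a $p$-value for $H_{0,{\cal C}}$) and \texttt{children} (returning the child clusters of a cluster in ${\cal T}$) are hypothetical placeholders, and the multiplicity adjustment, which depends on the chosen test, is omitted.
\begin{verbatim}
> hier.test <- function(root, children,
  test.cluster, alpha = 0.05) {
  rejected <- list()
  queue <- list(root)  # start at root
  while (length(queue) > 0) {
    C <- queue[[1]]
    queue <- queue[-1]
    if (test.cluster(C) <= alpha) {
      # rejected: descend to the children
      rejected <- c(rejected, list(C))
      queue <- c(queue, children(C))
    }
  }
  rejected
}
\end{verbatim}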
\subsection{Group-Bound Confidence Intervals Without Design
Assumptions}\label{subsec.assfree}
The \emph{Group-bound} proposed in \citet{meins13} gives confidence
intervals for the $\ell_1$-norm $\|\beta^0_{{\cal C}_k}\|_1$ of a
group ${{\cal C}_k}\subseteq\{1,\ldots,p\}$ of variables. If the
lower-bound of the $1-\alpha$ confidence interval is larger than 0,
then the null hypothesis $\beta^0_{{\cal C}_k}\equiv0$ can be rejected
for this group. The method combines a few properties:
\begin{longlist}[(iii)]
\item[(i)] The confidence intervals are valid without an assumption
like the compatibility condition (\ref{compat}). In general, they
are conservative, but if the compatibility condition holds, they have
good ``power'' properties (in terms of length) as well.
\item[(ii)] The test is hierarchical. If a set of variables can be
rejected, all
supersets will also be rejected. And vice-versa, if a group of
variables cannot be rejected, none of its subsets can be rejected.
\item[(iii)] The estimation accuracy has an optimal detection rate
under the so-called group effect compatibility condition,
which is weaker than the compatibility condition necessary to
detect the effect of individual variables.
\item[(iv)] The power of the test is unaffected by adding highly or
even perfectly correlated variables in ${\cal C}_k $ to the
group. The compatibility condition would fail to yield a
nontrivial bound, but the group effect compatibility
condition is unaffected by the addition of perfectly correlated
variables to a group.
\end{longlist}
The price to pay for the assumption-free nature of the bound is a weaker
power than with previously discussed approaches when the goal is to detect
the effect of individual variables. However, for groups of highly
correlated variables, the approach can be much more powerful than simply
testing all variables in the group.
\begin{figure*}
\includegraphics{527f08.eps}
\caption{A visualization of the hierarchical testing
scheme as described in the beginning of Section~\protect\ref{sect.hierinf}, for the examples described in
Section~\protect\ref{subsec.illustrations}. One moves top-down through
the output of a hierarchical clustering scheme, starting at the root
node. For each cluster encountered, the null hypothesis that all the
coefficients
of that particular cluster are 0 is tested. A rejection is visualized
by a red
semi-transparent circle at a vertical position that corresponds to
the size of the cluster. The chosen significance level was $\alpha=0.05$.
The children of significant clusters in the
hierarchy are connected by a black line. The process is repeated by
testing the null hypotheses for all
those children clusters until no more hypotheses could
be rejected.
The ordering of the hierarchy in the horizontal direction has
no meaning and was chosen for a clean separation of children hierarchies.
The hierarchical clustering and orderings are the same for all 6
plots since the design matrix was the same. Two different examples
were looked at (corresponding to top and bottom row, resp.) and
four different methods were applied to these
examples. The desparsified Lasso and the Ridge method gave identical
results and were grouped in the two plots on the left, while
results from the hierarchical Multi sample-splitting method are
presented in the middle column and the results for the Group-bound
method are
shown in the right column. In example 1,
the responses were simulated with 2 clusters of
highly correlated variables of size 3 having coefficients different
from zero. In example 2, the responses were
simulated with 2 clusters of highly correlated variables of sizes 11
and 21 having coefficients different from zero. More details about
the examples can be found in Section \protect\ref{subsec.illustrations}.}
\label{fig:treeridge}
\end{figure*}
We remark that previously developed tests can be adapted to the context of
hierarchical testing of groups with hierarchical adjustment for
familywise error control
(\cite{Meins08}); for the Multi sample-splitting
method, this is described next.
\subsection{Hierarchical Multi Sample-Splitting}\label{subsec.mssplitgroup}
The Multi sample-splitting method (Section~\ref{subsec.multisample-split}) can
be adapted to the context of
hierarchical testing of groups by using hierarchical adjustment of\vadjust{\goodbreak}
familywise error control (\cite{Meins08}).
When testing a cluster hypothesis $H_{0,{\cal C}}$, one can use a modified
form of the
partial $F$-test for high-dimensional settings; and the multiple testing
adjustment due to the multiple cluster hypotheses considered can be taken
care of by a hierarchical adjustment scheme proposed in \citet
{Meins08}. A
detailed description of the method, denoted here by \emph{Hier. MS-Split},
together with theoretical guarantees is given in \citet{manbu13}.
\subsection{Simultaneous Inference with the Ridge or Desparsified Lasso
Method}\label{subsec.simulcovridgelasso}
Simultaneous inference for all possible groups can be achieved by considering
$p$-values $P_j$ of individual hypotheses $H_{0,j}: \beta^0_j = 0$
($j=1,\ldots,p$) and adjusting them for simultaneous coverage, namely,
$P_{\mathrm{adjusted},j} = P_j \cdot p$. The individual $p$-values $P_j$ can
be obtained by the Ridge or desparsified Lasso method in
Section~\ref{sec.LM}.
We can then test any group hypothesis $H_{0,G}: \beta_j^0 = 0$ for all $j
\in G$ by simply checking whether $\min_{j \in G} P_{\mathrm{adjusted},j}
\le
\alpha$, and we can consider as many group hypotheses as we want without
any further multiple testing adjustment.
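In \textsf{R}, this amounts to a one-line adjustment followed by a minimum over the group. The following sketch is our own illustration and assumes that \texttt{pval} contains the raw individual $p$-values $P_j$:
\begin{verbatim}
> p.adj <- pmin(pval * length(pval), 1)
> G <- 1:10  # any group of interest
> reject.G <- min(p.adj[G]) <= 0.05
\end{verbatim}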
\subsection{Illustrations}\label{subsec.illustrations}
A semi-real data example is shown in Figure~\ref{fig:treeridge}, where the
predictor variables are taken from the Riboflavin data set (\cite{bumeka13})\vadjust{\goodbreak}
($n=71, p=4088$) and the
coefficient vector is taken to have entries 0,
except for 2 clusters of highly correlated variables. In example 1, the
clusters both have size 3 with nonzero coefficient sizes equal to 1 for all
the variables in the clusters and Gaussian noise level
$\sigma=0.1$. In example 2, the clusters are bigger and have different sizes
11 and 21; the coefficient sizes for all the variables in the clusters are
again 1, but the Gaussian noise level here is
chosen to be $\sigma=0.5$.
In the first example, 6 out of the 6 relevant
variables are discovered as individually significant by the
\emph{Lasso-Pro}, \emph{Ridge} and \emph{MS-Split} methods (as outlined in
Sections~\ref{subsec.multisample-split}--\ref{subsec.desparslasso}),
after adjusting for
multiplicity.
In the second example, the methods cannot reject the single variables
individually any longer. The results for the \emph{Group-bound} estimator
are shown in the right column. The \emph{Group-bound} can reject a group
of 4 and a group of 31 variables in the first example, each containing a true
cluster of
3 variables. The method can also detect a group of 2 variables (a
subset of
the cluster of~4) which contains 2 out of the 3 highly correlated
variables. In the second example, a group of 34
variables is rejected with the \emph{Group-bound} estimator, containing 16
of the group of 21 important variables. The smallest group of variables
containing the cluster of 21 that the method can detect is of size 360. It
can thus be detected that the variables jointly have a substantial
effect even
though the null hypothesis cannot be rejected for any variable
individually. The hierarchical Multi sample-splitting method (outlined in
Section~\ref{subsec.mssplitgroup}) manages to detect the same clusters as
the \emph{Group-bound} method. It even goes one step further by
detecting a
smaller subcluster.
\begin{figure}
\includegraphics{527f09.eps}
\caption{The power for the rejection of the group-hypothesis of all
variables (top) and the power for the rejection of the group-hypothesis
of the variables
in blocks highly correlated with $S_0$ variables (bottom). The design
matrix used is of type \emph{Block Equi.corr} which is similar to the
Equi.corr setup in that $\Sigma$ is block diagonal with blocks (of size
$20 \times20$) being the~$\Sigma$ of Equi.corr. The power is plotted
as a function of the correlations in the blocks, quantified by~$\rho$.
The Ridge-based method loses power as the correlation between variables
increases, while the group bound, Hier. MS-Split and Lasso-Pro methods
can maintain
power close to 1 for both measures of power.}\vspace*{6pt}
\label{fig:testgrouppower}
\end{figure}
\begin{figure}
\includegraphics{527f10.eps}
\caption{The power for the rejection of the group-hypothesis of all $S_0$
variables (top) and type I error rate corresponding to the rejection of
the group-hypothesis of all $S_0^c$ variables (bottom) for the design
matrix of type \emph{Block Equi.corr} when changing the correlation
$\rho$ between variables. The design matrix type is described in detail
in the caption of Figure \protect\ref{fig:testgrouppower} and in the text. The
desparsified Lasso, Hier. MS-Split and the Ridge-based
method lose power as the correlation between variables increases,
while the \emph{Group-bound} cannot reject the small group of
variables $S_0$ (3~in
this case). The desparsified Lasso and MS-Split methods also exceed the
nominal type I error rate for high correlations (as the design
assumptions break down), whereas the Ridge-based method and the
\emph{Group-bound} are both within the nominal 5\% error rate for
every correlation strength.}\vspace*{6pt}
\label{fig:testgroup}
\end{figure}
We also consider the following simulation model.
The type
of design matrix was chosen to be such that the
population covariance matrix $\Sigma$ is a block-diagonal matrix with
blocks of dimension $20 \times20$ being of the same type as $\Sigma$
for Equi.corr (see
Section~\ref{subsubsec.data}) with off-diagonal $\rho$ instead of
$0.8$. The dimensions of the problem were chosen to be $p=500$ variables
and $n=100$ samples, with noise level $\sigma=1$. There
were only 3 nonzero coefficients chosen,
with three different signal levels $U[0,2]$, $U[0,4]$ and $U[0,8]$ being used
for the
simulations. Aside from varying signal
level, we studied the two cases where in one case all the nonzero
coefficients were contained in one single highly correlated block and in
the other case each of those variables was in a different block.\vadjust{\goodbreak} We
look at
3 different measures of power. One can define the power as the fraction
of the 100
repeated simulations in which the method managed to reject the group of all
variables $G =
\{1,\ldots,p\}$. This is shown at the top in Figure~\ref{fig:testgrouppower}. Alternatively, one can look at the rejection
rate of the hypothesis for the group $G$ that contains all variables in the
highly correlated blocks that contain a variable from $S_0$. This is the
plot at the bottom in Figure~\ref{fig:testgrouppower}.
Finally, one can look at the rejection rate of the hypothesis where the
group $G$ contains only the variables in $S_0$ (of size 3 in this
case). We define the type I error to be the fraction of the
simulations in which the method rejected the group hypothesis $H_{0,S_0^c}$,
where all corresponding regression coefficients are equal to zero. These last two measures are
presented in Figure~\ref{fig:testgroup}.
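For reference, the \emph{Block Equi.corr} covariance matrix used here can be constructed as follows (an illustrative sketch with our own variable names):
\begin{verbatim}
> p <- 500; bs <- 20
> rho <- 0.8  # varied in the simulations
> block <- matrix(rho, bs, bs)
> diag(block) <- 1
> Sigma <- kronecker(diag(p / bs), block)
\end{verbatim}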
The power of the Ridge-based method (\cite{pb13}) drops substantially
for high
correlations. The power of the \emph{Group-bound} stays close to 1
at the level of the highly correlated groups (Block-power) and above (Power
$G=\{1,\ldots,p\}$) throughout the entire range of correlation values. The
\emph{Lasso-Pro} and \emph{MS-Split} perform well here as well. The power
of the\vadjust{\goodbreak} \emph{Group-bound} is 0 when attempting to reject the small
groups $H_{0,S_0}$.
The type I error rate is supposed to be controlled at level $\alpha=0.05$
with all of the
methods. However, the \emph{Lasso-Pro} and the hierarchical \emph
{MS-Split} methods fail
to control the error rates, with the type I error
rate even approaching 1 for large values of the correlation. The
\emph{Group-bound} and Ridge-based estimator have, in contrast, a type I
error rate close to 0 for all values of the correlation.
For highly correlated groups of variables, trying to detect the effect
of individual variables has thus two inherent dangers. The power to
detect interesting groups of variables might be very low. And the
assumptions for the methods might be violated, which invalidates the
type I error control. The assumption-free \emph{Group-bound} method
provides a powerful test for the group effects even if variables are
perfectly correlated, but suffers in power, relatively speaking, when
variables are not highly correlated.
\subsection{\texttt{hdi} for Hierarchical Inference}
An implementation of the \emph{Group-bound} method is provided in the
\texttt{hdi} \textsf{R}-package (\cite{hdipackage}).
For specific groups, one can provide a vector or a list of vectors where
the elements of the vector specify the desired columns of $\bx$
to be tested for.
The following code tests the group hypothesis if the group contains all
variables:
\begin{verbatim}
> group
<- 1:ncol(x)
> outGroupBound
<- groupBound(x = x, y = y,
group = group, alpha = 0.05)
> rejection
<- outGroupBound > 0
\end{verbatim}
Note that one needs to specify the significance level~$\alpha$.
One can also let the method itself apply the hierarchical clustering scheme
as described at the beginning of Section~\ref{sect.hierinf}.
This works as follows:
\begin{verbatim}
> outClusterGroupBound
<- clusterGroupBound(x = x,
y = y, alpha = 0.05)
\end{verbatim}
The output contains all clusters that were tested for significance in
\texttt{members}. The corresponding lower bounds are contained in
\texttt{lowerBound}.
To extract the significant clusters, one can do
\begin{verbatim}
> significant.cluster.numbers
<- which
(outClusterGroupBound
$lowerBound > 0)
> significant.clusters
<- outClusterGroupBound$members
[[significant.cluster.numbers]]
\end{verbatim}
The figures in the style of Figure~\ref{fig:treeridge} can be achieved by
using the function \texttt{plot} on \texttt{outCluster-\break GroupBound}.
Note that one can specify the distance matrix used for the hierarchical
clustering, as done for \texttt{hclust}.
To test group hypotheses $H_{0,G}$ for the Ridge and desparsified Lasso
method as described in Section~\ref{subsec.simulcovridgelasso}, one uses
the output from the original single parameter fit, as illustrated for the
group of all variables:
\begin{verbatim}
> outRidge
<- ridge.proj(x = x, y = y)
> outLasso
<- lasso.proj(x = x, y = y)
> group
<- 1:ncol(x)
> outRidge$groupTest(group)
> outLasso$groupTest(group)
\end{verbatim}
To apply a hierarchical clustering scheme as done in
\texttt{clusterGroupBound}, one calls \texttt{cluster-\break GroupTest}:
\begin{verbatim}
> outRidge$clusterGroupTest
(alpha = 0.95)
\end{verbatim}
To summarize, the \textsf{R}-package provides functions to test individual
groups as well as to test according to a hierarchical clustering scheme for
the methods \emph{Group-bound}, Ridge and desparsified Lasso.
An implementation of the hierarchical Multi sample-splitting method is not
provided at this point in time.
\section{Stability Selection and Illustration with \texttt{hdi}}
Stability selection (\cite{mebu10}) is another methodology to guard against
false positive selections, by controlling the expected number of false
positives $\EE[V]$. The focus is on selection of a single or a group of
variables in a regression model, or on a selection of more general discrete
structures such as graphs or clusters. For example, for a linear model in
(\ref{mod.lin}) and
with a selection of single variables, stability selection provides a subset
of variables $\hat{S}_{\mathrm{stable}}$ such that for $V = |\hat
{S}_{\mathrm{stable}}
\cap S_0^c|$ we have that $\EE[V] \le M$, where $M$ is a prespecified
number.
For selection of single variables in a regression model, the method does
not need a beta-min assumption, but the theoretical analysis of stability
selection for controlling $\EE[V]$ relies on a restrictive exchangeability
condition (which, e.g., is ensured by a restrictive condition on the design
matrix). This exchangeability condition seems far from necessary though
(\cite{mebu10}). A refinement of stability selection is given in
\citet{shah13}.
An implementation of the stability selection procedure is available in the
\texttt{hdi} \textsf{R}-package. It is called in a very similar way as the
other methods. If we want to control, for example, $\EE[V] \le1$, we use
\begin{verbatim}
> outStability
<- stability
(x = x, y = y, EV = 1)
\end{verbatim}
The ``stable'' predictors are available in the element \texttt{select}.
The default model selection algorithm is the Lasso (the first $q$
variables entering the Lasso paths).\vadjust{\goodbreak} The option \texttt{model.selector}
allows one to apply a user-defined model selection function.
\section{R Workflow Example}
We go through a possible \texttt{R} workflow based on the Riboflavin
data set
(\cite{bumeka13}) and methods provided in the \texttt{hdi}
\texttt{R}-package:
\begin{verbatim}
> library(hdi)
> data(riboflavin)
\end{verbatim}
We assume a linear model and we would like to investigate which effects are
statistically significant on a significance level of
$\alpha=0.05$. Moreover, we want to construct the corresponding confidence
intervals.
We start by looking at the individual variables.
We want a conservative approach and, based on the
results from Section~\ref{subsec.comparlm}, we choose the Ridge projection
method for its good error control:
\begin{verbatim}
> outRidge
<- ridge.proj
(x = riboflavin$x,
y = riboflavin$y)
\end{verbatim}
We investigate if any of the multiple testing corrected $p$-values are smaller
than our chosen significance level:
\begin{verbatim}
> any(outRidge$pval.corr <= 0.05)
[1] FALSE
\end{verbatim}
We calculate the 95\% confidence intervals for the first 3 predictors:
\begin{verbatim}
> confint(outRidge,parm=1:3,
level=0.95)
lower upper
AADK_at -0.8848403 1.541988
AAPA_at -1.4107374 1.228205
ABFA_at -1.3942909 1.408472
\end{verbatim}
Disappointed with the lack of significance for testing individual
variables, we want to investigate if we can find a significant group
instead. From the procedure proposed for the Ridge method in Section~\ref{sect.hierinf}, we know that if the Ridge method cannot find any
significant individual variables, it will not find a significant group
either.
We apply the Group-bound method with its clustering option to try to find
a significant group:
\begin{verbatim}
> outClusterGroupBound
<- clusterGroupBound
(x = riboflavin$x,
y = riboflavin$y,
alpha = 0.05)
> significant.cluster.numbers
<- which(outClusterGroupBound
$lowerBound
> 0)
> significant.clusters
<- outClusterGroupBound
$members
[[significant.cluster.numbers]]
> str(significant.clusters)
num [1:4088] 1 2 3 4 5 6 7 8 9 10...
\end{verbatim}
Only a single group, being the root node of the clustering tree, is found
significant.
These results are in line with the results achievable in earlier
studies of
the same data set in \citet{bumeka13} and \citet{vdgetal13}.
\section{Concluding Remarks}
We present a (selective) overview of recent developments in frequentist
high-dimensional inference for constructing confidence intervals and
assigning \mbox{$p$-}values for the parameters in linear and generalized linear
models. We include some methods which are able to detect significant groups
of highly correlated variables which cannot
be individually detected as single
variables. We complement the methodology and theory viewpoints with
a broad empirical study. The latter indicates that more ``stable''
procedures based on Ridge estimation or sample splitting with subsequent
aggregation might be more reliable for type I error control, at the price
of losing power; asymptotically power-optimal methods perform nicely in
well-posed scenarios but are more prone to failures in error control in more
difficult settings where
the design or the degree of sparsity is more ill-posed. We introduce the
\texttt{R}-package \texttt{hdi} which allows the user to choose from a
collection of frequentist inference methods and eases reproducible
research.
\subsection{Post-Selection and Sample Splitting
Inference}\label{subsec.postsel}
Since the main assumptions outlined in Section~\ref{subsec.mainass} might
be unrealistic in practice, one can consider a different route.
The
``POSI'' (Post-Selection Inference) view and method of
\citet{berketal13} make inferential statements which
are protected against all possible submodels; therefore, the procedure
is not exposed to
the issue of having selected an ``inappropriate'' submodel. The way in
which \citet{berketal13} deal with misspecification of the (e.g., linear)
model is closely
related to addressing this issue with the Multi sample splitting or
desparsified Lasso method; see Section~\ref{subsec.mainass} and
\citet{pbvdg15}. The method by \citet{berketal13} is conservative, as it
protects against any possible submodel, and it is not feasible yet for
high-dimensional problems. \citet{wass14} briefly describes the ``HARNESS''
(High-dimensional Agnostic Regression Not Employing Structure or Sparsity)
procedure: it is based on single data splitting and making inference for
the selected submodel from the first half of the data. When giving up on
the goal to infer the true or best approximating parameter $\beta^0$ in
(\ref{betaproj}), one can drop many of the main assumptions which are needed
for high-dimensional inference.
The ``HARNESS'' is related to post-selection inference where the
inefficiency of sample splitting is avoided. Some recent work includes
exact post-selection inference, where the full data is used for
selection and inference: it aims to avoid the potential inefficiency of
single sample splitting and to be less conservative than ``POSI'', thereby
restricting the focus to a class of selection procedures which are
determined by
affine inequalities, including the Lasso and least angle regression
(\cite{lee13}; \cite{taylor14}; \cite{fithian14}).
Under some conditions, the issue of selective inference can be
addressed by using an adjustment factor (\cite{beye05}): this could be done
by adjusting the output of our high-dimensional inference procedures,
for example,
from the \texttt{hdi} \texttt{R}-package.
\begin{appendix}\label{app}
\section*{Appendix}
\setcounter{subsection}{0}
\subsection{Additional Definitions and Descriptions}\label{subsec.appadd}
\emph{Compatibility condition} (\cite{pbvdg11}, page~106).
Consider a fixed design matrix $\bx$. We define the following:
The compatibility condition holds if for some $\phi_0 >0$ and all $\beta$
satisfying $\|\beta_{S_0^c}\|_1 \le3 \|\beta_{S_0}\|_1$,
\begin{eqnarray}
\label{compat} \|\beta_{S_0}\|_1^2 \le
\beta^T \hat{\Sigma} \beta s_0/\phi_0^2,\quad
\hat{\Sigma} = n^{-1} \bx^T \bx.
\end{eqnarray}
Here $\beta_{A}$ denotes the components $\{\beta_j;j \in A\}$ where $A
\subseteq\{1,\ldots,p\}$. The number $\phi_0$ is called the compatibility
constant.
\emph{Aggregation of dependent $p$-values.}
Aggregation of dependent $p$-values can be generically done as follows.
\begin{lemm}[{[Implicitly contained in \citet{memepb09}]}]
Assume that we have\vadjust{\goodbreak} $B$ $p$-values $P^{(1)},\ldots
,P^{(B)}$ for testing a null-hypothesis $H_0$, that is, for every $b
\in\{1,\ldots
,B\}$ and any $0 < \alpha< 1$, $\PP_{H_0}[P^{(b)} \le\alpha] \le
\alpha$. Consider for any $0 < \gamma< 1$ the empirical $\gamma$-quantile
\begin{eqnarray*}
&&Q(\gamma) \\
&&\quad= \min \bigl(\mbox{empirical $\gamma$-quantile} \bigl
\{P^{(1)}/\gamma,\ldots,P^{(B)}/\gamma\bigr\},\\
&&\qquad 1 \bigr),
\end{eqnarray*}
and the minimum value of $Q(\gamma)$, suitably corrected with a factor,
over the range
$(\gamma_{\mathrm{min}},1)$ for some positive (small)
$0<\gamma_{\mathrm{min}} < 1$:
\begin{eqnarray*}
P = \min \Bigl(\bigl(1 - \log(\gamma_{\mathrm{min}})\bigr) \min
_{\gamma\in
(\gamma_{\mathrm{min}},1)} Q(\gamma), 1 \Bigr).
\end{eqnarray*}
Then, both $Q(\gamma)$ [for any fixed $\gamma\in(0,1)$] and $P$ are
conservative $p$-values satisfying for any $0 < \alpha< 1$,
$\PP_{H_0}[Q(\gamma) \le\alpha] \le
\alpha$ or $\PP_{H_0}[P \le\alpha] \le\alpha$, respectively.
\end{lemm}
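A direct transcription of this aggregation rule into \textsf{R} reads as follows (our own illustrative sketch; the minimum over $\gamma$ is approximated on a grid):
\begin{verbatim}
> agg.pval <- function(pvals,
  gamma.min = 0.05) {
  Q <- function(gamma)
    min(quantile(pvals / gamma, gamma,
                 type = 1), 1)
  gammas <- seq(gamma.min, 1, by = 0.001)
  min((1 - log(gamma.min)) *
      min(sapply(gammas, Q)), 1)
}
\end{verbatim}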
\emph{Bounding the error of the estimated bias correction in the
desparsified Lasso.} We will argue now why the error from the bias
correction
\[
\sum_{k \neq j} \sqrt{n} P_{jk}\bigl(\hat{
\beta}_k - \beta^0_k\bigr)
\]
is negligible. From the KKT conditions when using the Lasso of $\bx^{(j)}$
versus $\bx^{(-j)}$, we have (B{\"u}hlmann\break and van~de Geer, \citeyear{pbvdg11}, cf. Lemma~2.1)
\begin{equation}
\label{KKT} \max_{k \neq j} 2 \bigl|n^{-1}
\bigl(X^{(k)}\bigr)^T Z^{(j)}\bigr| \le
\lambda_j.
\end{equation}
Therefore,
\begin{eqnarray*}
&&\biggl|\sqrt{n} \sum_{k \neq j} P_{jk}\bigl(\hat{
\beta}_k - \beta^0_k\bigr)\biggr| \\
&&\quad\le\sqrt{n}
\max_{k\neq j} |P_{jk}| \bigl\|\hat{\beta} -
\beta^0\bigr\|_1
\\
&&\quad\le2 \sqrt{n} \lambda_j\bigl \|\hat{\beta} - \beta^0
\bigr\|_1 \bigl(n^{-1} \bigl(\bx^{(j)}
\bigr)^T Z^{(j)}\bigr)^{-1}.
\end{eqnarray*}
Assuming sparsity and the compatibility condition (\ref{compat}), and when
choosing
$\lambda_j \asymp\sqrt{\log(p)/n}$, one can show that
$(n^{-1} (\bx^{(j)})^T Z^{(j)})^{-1} = O_P(1)$ and $\|\hat{\beta} -
\beta^0\|_1 = O_P(s_0 \sqrt{\log(p)/n})$ [for the latter, see
(\ref{lasso-ell1})]. Therefore,
\begin{eqnarray*}
&&\biggl|\sqrt{n} \sum_{k \neq j} P_{jk}\bigl(\hat{
\beta}_k - \beta^0_k\bigr)\biggr| \\
&&\quad\le
O_P\bigl(\sqrt{n} s_0 \sqrt{\log(p)/n}
\lambda_j\bigr) \\
&&\quad= O_P\bigl(s_0 \log(p)
n^{-1/2}\bigr),
\end{eqnarray*}
where the last bound follows by assuming $\lambda_j \asymp
\sqrt{\log(p)/n}$. Thus, if $s_0 \ll n^{1/2} / \log(p)$, the error from
bias correction is asymptotically negligible.
\emph{Choice of $\lambda_j$ for desparsified Lasso.}
We see from (\ref{KKT}) that the numerator of the error in the bias
correction term (i.e., the $P_{jk}$'s) is decreasing as $\lambda_j
\searrow
0$; for controlling the denominator, $\lambda_j$ should not be too
small to ensure that the denominator [i.e., $n^{-1} (\bx^{(j)})^T
Z^{(j)}$] behaves
reasonably (staying away from zero) for a fairly large range of
$\lambda_j$.
Therefore, the strategy is as follows:
\begin{longlist}[1.]
\item[1.] Compute a Lasso regression of $\bx^{(j)}$ versus all
other variables $\bx^{(-j)}$ using CV, and the corresponding residual
vector is
denoted by $Z^{(j)}$.
\item[2.] Compute $\|Z^{(j)}\|_2^2/((\bx^{(j)})^T Z^{(j)})^2$ which is the
asymptotic variance of $\hat{b}_j/\sigma_{\eps}$, assuming that the error
in the bias correction is negligible.
\item[3.] Increase the variance by 25\%, that is,
$V_j = 1.25 \|Z^{(j)}\|_2^2/((\bx^{(j)})^T Z^{(j)})^2$.
\item[4.] Search for the smallest $\lambda_j$ such that the corresponding
residual vector $Z^{(j)}(\lambda_j)$ satisfies
\begin{eqnarray*}
\bigl\|Z^{(j)}(\lambda_j)\bigr\|_2^2/\bigl(
\bigl(\bx^{(j)}\bigr)^T Z^{(j)}(
\lambda_j)\bigr)^2 \le V_j.
\end{eqnarray*}
\end{longlist}
This procedure is similar to the choice of $\lambda_j$ advocated in
\citet{zhangzhang11}.
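An illustrative \textsf{R} sketch of this search, using \texttt{glmnet} for the Lasso of $\bx^{(j)}$ versus $\bx^{(-j)}$, could look as follows (our own sketch, not the implementation in the \texttt{hdi} package):
\begin{verbatim}
> library(glmnet)
> lambda.j <- function(x, j) {
  xj <- x[, j]; xmj <- x[, -j]
  cvfit <- cv.glmnet(xmj, xj)  # step 1
  Z <- xj - predict(cvfit, newx = xmj,
                    s = "lambda.min")
  # steps 2-3: inflate variance by 25%
  Vj <- 1.25 * sum(Z^2) / sum(xj * Z)^2
  # step 4: smallest feasible lambda_j
  for (lam in sort(cvfit$lambda)) {
    Zl <- xj - predict(cvfit$glmnet.fit,
                       newx = xmj, s = lam)
    if (sum(Zl^2) / sum(xj * Zl)^2 <= Vj)
      return(lam)
  }
  cvfit$lambda.min
}
\end{verbatim}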
\emph{Bounding the error of bias correction for the Ridge projection.}
The goal is to derive the formula (\ref{Ridge-repr}). Based on
(\ref{Ridge-distr}), we have
\begin{eqnarray*}
&&\sigma_{\eps}^{-1} \Omega_{R;jj}^{-1/2}
\bigl(\hat{b}_{R;j} - \beta^0_j\bigr)\\
&&\quad\approx
\Omega_{R;jj}^{-1/2} W_j / P_{R;jj} \\
&&\qquad{}+
\sigma_{\eps}^{-1} \Omega_{R;jj}^{-1/2}
\Delta_{R;j},\quad W \sim{\cal N}_p(0, \Omega_R),
\nonumber
\\
&&|\Delta_{R;j}|\le\max_{k \neq j} \biggl\llvert
\frac
{P_{R;jk}}{P_{R;jj}}\biggr\rrvert \bigl\|\hat{\beta} - \beta^0
\bigr\|_1.
\end{eqnarray*}
In relation to the result in Fact~\ref{th2} for the desparsified Lasso,
the problem here is that the behaviors of $\max_{k \neq j} |P_{R;jj}^{-1}
P_{R;jk}|$ and of the diagonal elements $\Omega_{R;jj}$ are hard to
control, but, fortunately, these quantities are fixed and observed for fixed
design $\bx$.
By invoking the compatibility constant for the design~$\bx$, we
obtain the bound $\|\hat{\beta} - \beta^0\|_1 \le 4 s_0 \lambda/\phi_0^2$
in (\ref{lasso-ell1}) and, therefore, we can upper-bound
\[
|\Delta_{R;j}| \le4 s_0 \lambda/\phi_0^2
\max_{k \neq j} \biggl\llvert \frac{P_{R;jk}}{P_{R;jj}}\biggr\rrvert .\vadjust{\goodbreak}
\]
Asymptotically, for Gaussian errors, we have with high probability
\begin{eqnarray}
\label{delta-bound} |\Delta_{R;j}| &=& O\biggl(s_0 \sqrt{
\log(p)/n} \max_{k \neq
j}\biggl\llvert \frac{P_{R;jk}}{P_{R;jj}}\biggr
\rrvert \biggr)
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&\le& O\biggl(\bigl(\log(p)/n\bigr)^{1/2
- \xi}\max_{k \neq j}
\biggl\llvert \frac{P_{R;jk}}{P_{R;jj}}\biggr\rrvert \biggr),
\end{eqnarray}
where the last inequality holds due to assuming $s_0 =O((n/\log(p))^{\xi})$
for some $0 < \xi< 1/2$.
In practice, we use the bound from (\ref{delta-bound}) in the form
\begin{eqnarray*}
\Delta_{R\mathrm{bound};j} := \max_{k \neq
j} \biggl\llvert
\frac{P_{R;jk}}{P_{R;jj}}\biggr\rrvert \bigl(\log(p)/n\bigr)^{1/2
- \xi},
\end{eqnarray*}
with the typical choice $\xi= 0.05$.
\subsection{Confidence Intervals for Multi Sample-Splitting}\label
{subsec.appmssplitci}
We construct confidence intervals that satisfy the duality with the $p$-values
from equation (\ref{aggreg}), and, thus, they are corrected already for
multiplicity:
\begin{eqnarray*}
&&\mbox{$(1-\alpha)$\% CI} \\
&&\quad= \mbox{Those values } c \mbox{ for which
the $p$-value }\geq\\
&&\qquad \alpha\mbox{ for testing the null hypothesis }
H_{0,j}:\beta_j=c,
\\
&&\quad=\mbox{Those } c \mbox{ for which the $p$-value resulting from}\\
&&\qquad\mbox{the $p$-value
aggregation procedure is} \geq\alpha,
\\
&&\quad= \{c | P_j \geq\alpha\},
\\
&&\quad= \Bigl\{c | (1-\log{\gamma_{\mathrm{min}}})\inf_{\gamma\in(\gamma_{\mathrm{min}},1)}
Q_j(\gamma) \geq\alpha\Bigr\},
\\
&&\quad= \bigl\{c | \forall\gamma\in(\gamma_{\mathrm{min}},1): (1-\log{
\gamma_{\mathrm{min}}}) Q_j(\gamma) \geq\alpha\bigr\},
\\
&&\quad= \bigl\{c | \forall\gamma\in(\gamma_{\mathrm{min}},1):\\
&&\qquad \min\bigl(1,\mathrm{emp.}\
\gamma\ \mathrm{quantile} \bigl(P_{\mathrm{corr};j}^{[b]}\bigr)/\gamma\bigr)\geq\\
&&\qquad
\alpha/(1-\log{\gamma_{\mathrm{min}}})\bigr\} ,
\\
&&\quad= \bigl\{c | \forall\gamma\in(\gamma_{\mathrm{min}},1):\\
&&\qquad \mathrm{emp.}\ \gamma\
\mathrm{quantile} \bigl(P_{\mathrm{corr};j}^{[b]}\bigr)/\gamma\geq\\
&&\qquad\alpha/(1-\log{
\gamma_{\mathrm{min}}})\bigr\} ,
\\
&&\quad = \biggl\{c | \forall\gamma\in(\gamma_{\mathrm{min}},1):\\
&&\qquad \mathrm{emp.}\ \gamma\ \mathrm{quantile} \bigl(P_{\mathrm{corr};j}^{[b]}\bigr) \geq\frac{\alpha
\gamma}{(1-\log{\gamma_{\mathrm{min}}})}\biggr
\}.
\end{eqnarray*}
We will use the notation $\gamma^{[b]}$ for the position of $P_{\mathrm{corr};j}^{[b]}$
in the ordering of the corrected $p$-values
$P_{\mathrm{corr};j}^{[i]}$ by increasing value,
divided by $B$.
We can now rewrite our former expression in a form explicitly using our
information from
every sample split
\begin{eqnarray*}
&& \mbox{$(1-\alpha)$\% CI}
\\
&&\quad= \biggl\{c |\forall b =1,\ldots,B: \bigl(\gamma^{[b]} \leq
\gamma_{\mathrm{min}}\bigr)\\
&&\qquad{}\lor\biggl(P_{\mathrm{corr};j}^{[b]} \geq
\frac{\alpha\gamma^{[b]}}{(1-\log{\gamma_{\mathrm{min}}})}\biggr) \biggr\}
\\
&&\quad= \biggl\{c | \forall b
=1,\ldots,B: \bigl(\gamma^{[b]} \leq\gamma_{\mathrm{min}}\bigr)\\
&&\qquad{}\lor
\biggl(c \in\mbox{ the } \biggl(1-\frac{\alpha\gamma^{[b]}}{(1-\log{\gamma_{\mathrm{min}}})|\hat
{S}^{[b]}|} \biggr)\\
&&\qquad{}\cdot 100\%
\mbox{ CI for split $b$}\biggr) \biggr\}.
\end{eqnarray*}
For single testing (not adjusted for multiplicity), the corresponding
confidence interval becomes
\begin{eqnarray*}
& &\mbox{$(1-\alpha)$\% CI}
\\
&&\quad = \biggl\{c | \forall b =1,\ldots,B: \bigl(\gamma^{[b]} \leq
\gamma_{\mathrm{min}}\bigr)\\
&&\qquad{}\lor\biggl(c \in\mbox{ the } \biggl(1-
\frac{\alpha\gamma^{[b]}}{(1-\log{\gamma_{\mathrm{min}}})} \biggr)\\
&&\qquad{}\cdot 100\% \mbox{ CI for split $b$}\biggr) \biggr\}.
\end{eqnarray*}
Given two starting points, one inside the confidence interval
and one outside of it, one can apply the bisection method to
find the boundary lying between these points.
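A schematic bisection step in \textsf{R} could look as follows, where \texttt{in.ci} is a hypothetical function that checks, via the condition displayed above, whether a candidate value $c$ lies in the aggregated confidence interval:
\begin{verbatim}
> find.bound <- function(in.ci, c.in,
  c.out, tol = 1e-6) {
  while (abs(c.out - c.in) > tol) {
    c.mid <- (c.in + c.out) / 2
    if (in.ci(c.mid)) {
      c.in <- c.mid   # still inside
    } else {
      c.out <- c.mid  # still outside
    }
  }
  (c.in + c.out) / 2
}
\end{verbatim}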
\subsection{Weighted Squared Error Approach for General
GLM}\label{subsec.app.general.wsqerr}
We describe the approach presented in Section~\ref{subsec.GLMweighted}
in a
more general way. One algorithm for fitting generalized linear models
is to calculate the
maximum likelihood estimates $\hat{\beta}$ by applying iterative weighted
least squares (\cite{mccullagh1989generalized}).
As in Section~\ref{subsec.GLMweighted}, the idea is now to apply a standard
$\ell_1$-penalized fit of the model, then build up the weighted least squares
problem at the $\ell_1$-solution and apply our linear model methods to this
problem.
From \citet{mccullagh1989generalized}, using the notation $\hat{z}_i =
g^{-1}((\mathbf X \hat{\beta})_i), i=1 , \ldots, n$,
the adjusted response variable becomes
\begin{eqnarray}
Y_{i,\mathrm{adj}} = (\mathbf X \hat{\beta})_i + (Y_i-
\hat{z}_i) \frac
{\partial
g(z)}{\partial z} \bigg|_{z=\hat{z}_i},\nonumber\\
\eqntext{i = 1 , \ldots, n .}
\end{eqnarray}
We then get a weighted least squares problem
\[
\hat{\beta}_{\mathrm{new}} = \argmin_{\beta} (Y_{\mathrm{adj}} - \mathbf
X \beta)^T \mathbf W (Y_{\mathrm{adj}} - \mathbf X \beta),
\]
with weights
\begin{eqnarray*}
&&\mathbf W^{-1} \\
&&\quad=
\left(\matrix{\displaystyle \biggl(\frac{\partial g(z)}{\partial z}
\biggr)^2 \bigg|_{z=\hat{z}_1} V(\hat{z}_1) & 0 \vspace*{2pt}\cr
0 & \displaystyle\biggl(\frac{\partial g(z)}{\partial z}\biggr)^2
\bigg|_{z=\hat{z}_2} V(\hat{z}_2) \vspace*{2pt}
\cr
\vdots& \ddots\vspace*{2pt}
\cr
0 & \ldots }
\right.
\\
&&\quad\hspace*{20pt}\left.\matrix{\ldots& 0
\vspace*{2pt}\cr
\ddots& \vdots
\vspace*{2pt}\cr
\ddots& 0
\vspace*{2pt}\cr
0 & \displaystyle\biggl(
\frac{\partial g(z)}{\partial z}\biggr)^2 \bigg|_{z=\hat{z}_n} V(\hat{z}_n)}
\right),
\end{eqnarray*}
with variance function $V(z)$.
The variance function $V(z)$ is related to the variance of the response
$Y$. To more clearly define this relation, we assume that the response $Y$
has a distribution of the form described in \citet{mccullagh1989generalized}:
\[
f_Y(y;\theta,\phi) = \exp{\bigl[\bigl(y \theta- b(\theta)\bigr)/a(
\phi) + c(y,\phi)\bigr]},
\]
with known functions $a(\cdot)$, $b(\cdot)$ and $c(\cdot)$. $\theta$ is
the canonical parameter and $\phi$ is the dispersion parameter.
As defined in \citet{mccullagh1989generalized}, the variance function
is then
related to the variance of the response in the following way:
\[
\Var(Y) = b^{\prime\prime}(\theta)a(\phi)=V\bigl(g^{-1}\bigl(\mathbf X
\beta^0\bigr)\bigr) a(\phi).
\]
We rewrite $Y_{w} = \sqrt{\mathbf W} Y_{\mathrm{adj}}$ and $X_w =
\sqrt{\mathbf W} \mathbf X$ to get
\[
\hat{\beta}_{\mathrm{new}} = \argmin_{\beta} (Y_w - \mathbf
X_w \beta)^T(Y_w - \mathbf X_w
\beta).
\]
The linear model methods can now be applied to $Y_{w}$ and
$\mathbf X_{w}$, where the estimate $\hat{\sigma}_{\eps}$ has to
be set to the value
1.
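For completeness, the general transformation can be sketched in \textsf{R} using a standard \texttt{family} object \texttt{fam} [e.g., \texttt{poisson()}], assuming an initial $\ell_1$-penalized fit \texttt{fit} from \texttt{cv.glmnet} with the matching family (our own illustration):
\begin{verbatim}
> eta <- as.numeric(predict(fit, newx = x,
  s = "lambda.min"))   # X betahat
> zhat <- fam$linkinv(eta)  # g^{-1}(eta)
> gprime <- 1 / fam$mu.eta(eta)  # g'(zhat)
> yadj <- eta + (y - zhat) * gprime
> w <- 1 / (gprime^2 * fam$variance(zhat))
> yw <- sqrt(w) * yadj
> xw <- sqrt(w) * x
\end{verbatim}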
\end{appendix}
\section*{Acknowledgments}
We would like to thank the reviewers for
insightful and constructive comments.
\begin{supplement}[id=suppA]
\stitle{Supplement to ``High-Dimensional Inference:\break \mbox{Confidence} Intervals,
$p$-Values and \textsf{R}-Software \texttt{hdi}''}
\slink[doi]{10.1214/15-STS527SUPP}
\sdatatype{.pdf}
\sfilename{sts527\_supp.pdf}
\sdescription{The supplemental article contains additional empirical
results.}
\end{supplement}
\section{Overview of the Large-$N_c$ Approach to Meson-Baryon Scattering}
\label{sec:introdn}
It is well known\cite{tHooft,Veneziano,WittenI,ColWit,Georgi,Luty}
that QCD simplifies greatly in the limit
$N_c\rightarrow\infty$, $N_c$ being the number of colors.
Not surprisingly, the large-$N_c$ limit has likewise proved to be very useful
in studying effective low-energy hadron Lagrangians for the Strong
Interactions.
Broadly speaking, such effective theories fall into two categories.
On the one hand, there is the straightforward Feynman diagrammatic approach
in which mesons and baryons are each treated as explicit dynamical
fields, while on the other hand, there is the more economical
skyrmion picture\onlinecite{Skyrme,ANW}
in which baryons are viewed as solitons constructed
from the meson degrees of freedom. Since both these approaches
purport to describe the low-energy Strong Interactions, it follows
that if they are sensible, they should be equivalent to one another.
Furthermore, this equivalence must hold order by order in $1/N_c.$
The first steps towards establishing such an equivalence are just
recently being taken\cite{AM,DHM,DiakPet,Japs}.
In either approach, a particularly fruitful physical
process to examine has been
meson-baryon scattering in the large-$N_c$ limit. The present paper
furthers this study, taking as a tractable example of a multi-channel
Lagrangian a variant of the linear $\sigma$-model. Before
we specify the model, and our particular treatment of it,
it is helpful to put the present work in
historical context.
A review of the
relevant theoretical literature over the past decade reveals an
interesting sociological phenomenon: there are two disjoint bodies of
large-$N_c$ papers
devoted to two topologically distinct sets of diagrams,
namely \it Compton-type \rm versus \it exchange-type \rm graphs, that
contribute to the meson-baryon $S$-matrix.\footnote{
\divide\baselineskip by 2
So far as we are aware, the only attempt to date to treat these
two classes of graphs in a unified manner can be found in Sec.~7 of
Ref.~\onlinecite{DHM}.}
Examples of Compton-type and
exchange-type graphs are displayed in Fig.~1 and Fig.~2, respectively.
Topologically, they differ in the following way: in the exchange-type
graphs of Fig.~2, it is possible to trace a continuous line from the incoming
to the outgoing meson without ever traversing a baryon line segment,
whereas in the Compton-type graphs of Fig.~1 this cannot be done.
Let us review, briefly, some of the salient points of physics that emerge
from the study of each of these two classes of diagrams.
\subsection{ Compton-type graphs }
While presently the Compton-type graphs (Refs.~
\onlinecite{DiakPet,Japs,GerSak,DashMan,DashManII,DashJenMan,Jenkins})
are much less well understood than the
exchange-type graphs discussed below, they nevertheless yield some
interesting physics, as follows.
Look at Figs.~1a and 1b. Since each vertex
scales like $\sqrt{N_c}$ (see Ref.~\onlinecite{WittenI}),
these graphs individually scale like\footnote{
\divide\baselineskip by 2
The baryon propagator is approximated by
$i(v\cdot k+i\epsilon)^{-1}$ in the large-$N_c$ limit, where $v$
is the baryon's 4-velocity, $k$ is the momentum imparted by
the incoming meson (assuming the incoming baryon to be on shell),
and it is also understood that one throws away the two small
components of the Dirac 4-spinor.
We focus on the kinematic regime $k\sim N_c^0$ so that the baryon propagators
do not affect the $N_c$ counting.}
$N_c.$
However, we know from Witten's analysis of quark-gluon
diagrams\onlinecite{WittenI} that the total amplitude for
meson-baryon scattering must scale like $N_c^0,$ not
$N_c.$ Therefore there must be leading-order cancellations between
Figs.~1a and 1b. Add to this observation another
important piece of large-$N_c$ physics: the fact that for the case
of two light flavors (which we focus on exclusively herein) the
spectrum of stable baryons is a tower of states of equal spin and
isospin\onlinecite{ANW}:
$I=J=1/2,3/2,5/2,\cdots,N_c/2$, which are
all degenerate in the large-$N_c$ limit
(more precisely, in the limit $J^2/N_c\rightarrow0$). We then demand
leading-order cancellation between Figs.~1a and 1b, for the reason
described above, with the three baryon legs drawn from all possible
baryon states in the $I=J$ tower, consistent with triangle inequalities
for isospin and angular momentum at each vertex. This exercise is
carried out in Refs.~\onlinecite{GerSak} and \onlinecite{DashMan}.
The upshot is a set of proportionality relations between the various
coupling constants $g_{\pi NN},$ $g_{\pi N\Delta},$ $g_{\pi \Delta\Delta},$
and so forth up the $I=J$ tower, relating each of these \it a priori \rm
independent couplings to a single underlying coupling constant,
up to multiplication by Clebsch-Gordan coefficients.
We call this set of relations for the pion-baryon couplings
the ``proportionality rule.'' Furthermore, Dashen and Manohar have
shown that corrections to the proportionality rule do not occur
at order $1/N_c,$ as naively expected, but rather at order
$1/N_c^2$\onlinecite{DashManII}. This suggests that the proportionality rule
should be relatively robust. Calculationally, it implies that,
once the order $N_c$ contributions to the amplitude have cancelled,
the surviving order $N_c^0$ pieces arise solely from the
$1/N_c$ corrections to the baryon propagator, and not from
$1/N_c$ corrections at the vertices, as one might have thought.
Numerically, the proportionality rule for the pion-baryon couplings
works well. Not only does the decay width of the $\Delta$ work out to
within a few MeV of its measured value when $g_{\pi N\Delta}$ is
related, using this rule,
to the experimental value of $g_{\pi NN}$\onlinecite{ANW,DHM};
but furthermore, with the same input parameters, the widths of the
``large-$N_c$ artifacts,'' \it i.e. \rm the baryons with $I=J\ge5/2,$
are so large that they cannot be considered ``particles'' at all, and
as such, pose no problem for
phenomenology\onlinecite{DHM}. This latter observation
removes what has been, till recently, one of the chief objections to the entire
large-$N_c$ program. Another success of large $N_c$ is that the
group-theoretic predictions of the old $SU(2N_F)$ symmetry are
recaptured\onlinecite{Georgi,Luty,DashMan,DashJenMan}, without
the need to appeal to the construct of the nonrelativistic,
weakly interacting constituent quark model.
A further refinement was made recently by Jenkins\onlinecite{Jenkins},
who examined the one-loop chiral corrections to the masses
$M_J$ of the $I=J$ baryons, and deduced the consistency relations
\begin{equation}
M_J\ =\ M_0\ +\ {J(J+1)\over2{\cal I}}\ +\ {\cal O}(N_c^{-2})
\label{Jenkinseqn}
\end{equation}
where $M_0$ and $\cal I$ are constants of order $N_c$ that can
be fixed, for example, by pegging $M_{1/2}$ and $M_{3/2}$, respectively,
to the experimental nucleon and $\Delta$ masses.
While the large-$N_c$ results of
Refs.~\onlinecite{DashMan,DashManII,DashJenMan,Jenkins} are derived
using effective Lagrangians of mesons and explicit baryons,
the physics of the Compton-type graphs can also be accessed using
the skyrmion approach\onlinecite{DiakPet,Japs}. The parallelism between
the two approaches is manifest in expressions such as Eq.~(\ref{Jenkinseqn}).
In the language of the two-flavor Skyrme model, $M_0$ and $\cal I$
are interpreted as the mass and moment of inertia of the soliton,
respectively\onlinecite{ANW,Schulman}. It is reassuring that
the expression (\ref{Jenkinseqn}) can also be gotten directly
from looking at quark diagrams in
large-$N_c$ QCD\onlinecite{Georgi,Luty}, closing the circle.
\subsection{Exchange-type graphs}
Next we turn to the physics of the exchange-type graphs
(Refs.~\onlinecite{AM,HEHW,Sig,MandK,MandP,Karliner,ninj,Donohue,Muk,Action}),
which is the primary focus of this paper. Examples are shown in Fig.~2.
These graphs likewise contribute to the
scattering amplitude starting at order $N_c^0.$ Although the
summation of \it all \rm such graphs would appear to be an impossible task,
it can actually be carried out in a straightforward manner---so
long as one contents oneself with the leading-order answer in the
$1/N_c$ expansion\onlinecite{AM}. As will be reviewed in detail
in the Sections to follow, the key idea is to rewrite these
multiloop graphs as \it trees\rm, exploiting the large-$N_c$
approximation. Tree graphs have the great
advantage over loops that they can all be summed by solving \it classical \rm
equations of motion.\footnote{
\divide\baselineskip by 2
To remind the reader\onlinecite{AM}
that he or she already knows
a situation where ``loops'' become ``trees,'' recall the ancient problem
of electron-proton scattering in the low-energy regime where the proton
mass is much greater than all other scales in the problem. On the one
hand, these are evidently
multiloop interactions, in which the proton and electron
lines exchange a large number of photons in all possible tanglings. On
the other hand, we know that
the physics is accurately described by \it classical \rm equations:
first the proton generates a classical Coulomb field, and then the
electron propagates linearly through this non-trivial
background (Rutherford scattering). These
two disparate pictures are reconciled by the fact that
the loop graphs are really trees, by
exactly the same manipulations described in Sec.~II below. The insight of
Ref.~\onlinecite{AM} is that this same mechanism (modulo nonlinearities
due to the fact that bosons, unlike photons, are self-interacting)
holds for the exchange of arbitrary bosons in the large-$N_c$ limit, thanks to
the proportionality rule as well as the $I_t=J_t$ rule reviewed below.}
It is this summability property which justifies
our earlier statement that the exchange-type graphs are much better
understood than the Compton-type graphs.
While the analysis of this paper will be carried
out using explicit baryon fields, the set of classical
equations that emerges is, once again, highly reminiscent of the
skyrmion approach, in which the corresponding
classical equations describe a pion propagating through
the background field generated by the skyrmion
itself\onlinecite{HEHW,Sig,MandK,MandP,Karliner,ninj}.
In particular, the group-theoretic relations familiar from the Skyrme
model carry over intact to models such as the present one
with explicit baryons.
These include non-trivial, and experimentally reasonably well satisfied,
relations in which isospin-$3/2$ $\pi N$ scattering amplitudes are
expressed as linear combinations of the isospin-$1/2$
amplitudes\onlinecite{HEHW,MandP}. Similar relations hold for
kaon-nucleon scattering \onlinecite{Karliner}, and for $\pi N\rightarrow
\rho N$\onlinecite{ninj}, and in fact for all
quasielastic meson-baryon scattering processes.
If, extending
Donohue's original suggestion\onlinecite{Donohue}, one crosses these
relations among scattering
amplitudes from the $s$-channel
to the $t$-channel (\it e.g.\rm, $N\bar N\rightarrow\,$mesons),
they can be re-expressed concisely
as two large-$N_c$ selection rules\onlinecite{Muk,Action}.
First, there is the very same
``proportionality rule'' discussed earlier, in the context of the
Compton-type graphs. However, the derivation given in
Ref.~\onlinecite{Muk} makes clear that
the proportionality rule is \it completely independent
of the chiral limit\rm, and furthermore, that it applies not only to the
pion-baryon couplings but equally to the baryon couplings of
\it all \rm bosons. Beyond the width calculations noted
above\onlinecite{ANW,DHM}, the
phenomenological validity of the proportionality rule is put to the
test in Fig.~7 of Ref.~\onlinecite{MandP}, in which the appropriate
linear combinations of the experimental
$\pi N\rightarrow\pi N$ and $\pi N\rightarrow\pi\Delta$ scattering
amplitudes are compared.
In addition, a second large-$N_c$ selection rule emerges, the ``$I_t=J_t$
rule''\onlinecite{Muk,Action}. This rule states that
the isospin of the emitted/absorbed meson must equal its \it total \rm
(spin + orbital) angular momentum, measured
in the rest frame of the large-$N_c$
baryon. Concrete examples of meson-nucleon couplings that
satisfy the $I_t=J_t$ rule include the pseudovector coupling of the
pion, the tensor coupling of the $\rho,$ and the vector coupling of
the $\omega$ \onlinecite{Action}:
\begin{equation}
\big(g_{\pi NN}/2M_N\big)\partial_\mu\vec\pi\cdot\bar N
\gamma^5\gamma^\mu\vec\tau N\ ,
\quad
g^{\rm tens}_\rho\partial_\mu\vec\rho_\nu\cdot\bar N
\sigma^{\mu\nu}\vec\tau N\ ,
\quad
g^{\rm vec}_\omega\omega_\mu\cdot\bar N\gamma^\mu N\ ,
\end{equation}
each of which must be
augmented by couplings to the entire tower of $I=J$ baryons as
required by the proportionality rule.
Since these couplings obey the $I_t=J_t$ rule, the three coupling constants
are nonvanishing at leading order in the large-$N_c$ expansion:
\begin{equation}
{g_{\pi NN}\over2M_N}\ \sim\ g^{\rm tens}_\rho \sim\
g^{\rm vec}_\omega\ \sim\ \sqrt{N_c}\ .
\label{nonvanishing}
\end{equation}
In contrast, the $I_t=J_t$ rule \it forbids \rm at leading order
the other two canonical vector-meson
interactions, the \it vector \rm coupling of the $\rho$ and
the \it tensor \rm coupling of the $\omega,$
\begin{equation}
g^{\rm vec}_\rho\vec\rho_\mu\cdot\bar N
\gamma^{\mu}\vec\tau N\quad \hbox{and}\quad
g^{\rm tens}_\omega\partial_\mu\omega_\nu\cdot\bar N\sigma^{\mu\nu} N\ ,
\end{equation}
meaning that these coupling constants must be down by (at least)
one power of $1/N_c$ compared to Eq.~(\ref{nonvanishing}):
\begin{equation}
g^{\rm vec}_\rho \sim\ g^{\rm tens}_\omega\ \sim\ {1\over\sqrt{N_c}}\ .
\end{equation}
The relative unimportance of the vector (tensor) coupling of the
$\rho$ ($\omega$) has long been known to nuclear physicists who construct
one-boson exchange models of the nucleon-nucleon
potential\onlinecite{LeeTab,BarEb,Bonn,Paris}. It
is pleasing to see these phenomenological
rules of thumb emerge as theorems in the large-$N_c$ limit.
\subsection{Two interesting unresolved questions}
In lieu of a Conclusions section, we close this expanded Introduction
with two questions that are food for further thought.
First, is the complete meson-baryon
$S$-matrix properly obtained by adding the Compton-type and exchange-type
graphs together, or, as an alternative prescription, might it be the case
that either set of graphs \it by itself \rm
(assuming an infinite spectrum of mesons) contains the complete answer?
This latter possibility is suggested by the observation that mesons and
baryons are composite particles made up from quarks and gluons. Since
at the quark-gluon level there is no longer a topological distinction between
the graphs of Fig.~1 and Fig.~2, one must be especially careful to
avoid double counting, and this might conceivably preclude adding the
graphs of Fig.~1 and Fig.~2 together in a naive way.\footnote{
\divide\baselineskip by 2
For the resolution of similar issues in atomic physics, namely
the avoidance of double-counting when bound states are involved, see
Ref.~\onlinecite{Lynn} and references therein.}
Second, the exchange-graph formalism of Ref.~\onlinecite{AM} applies
not only to the meson-baryon system which we focus on here,
but equally to the baryon-baryon, baryon-antibaryon,
baryon-baryon-baryon, and in general to all $n$-baryon, $m$-antibaryon
interactions (Fig.~3). Of course, there are no analogs of
Compton-type graphs for these multi-baryon systems. It follows that
the exchange-graph formalism of Ref.~\onlinecite{AM}
gives---in principle---{\it the complete answer}
for these cases, to leading order in $1/N_c.$ By this we specifically
mean the following: given an effective hadron Lagrangian
whose meson-baryon couplings properly
embody the $I_t=J_t$ and proportionality rules,
the complete set of Feynman diagrams of the sort exhibited in
Fig.~3 can be summed to leading order in $1/N_c.$
It would be an interesting exercise
to carry out this program, starting from a well-motivated effective Lagrangian,
and to compare the results to the popular Bonn\onlinecite{Bonn} and
Paris\onlinecite{Paris} potential
models (which are derived from just the ladder diagrams with
at most one crossing) as well as to the recent work of
Weinberg and others that relies exclusively on chiral perturbation
theory\onlinecite{Weinbergchi}.
\subsection{Outline of paper}
The remainder of this paper is organized as follows. In Sections II
and III we review the exchange-graph
formalism of Ref.~\onlinecite{AM} and apply it to two warm-up problems,
a ``$\sigma$-only'' and a ``$\pi$-only'' model, respectively.
Sections IV-VI explore in detail the meson-baryon $S$-matrix in
a richer model comprising both pions and $\sigma$ mesons, a variation on the
Gell-Mann-Levy $\sigma$-model\onlinecite{linsigmod}.
Obviously, we do not take this model seriously as a
realistic depiction of hadron physics. Rather,
we aim only to illustrate how the formalism of Ref.~\onlinecite{AM}
leads in a concrete way to a quantitative calculation of the
exchange-graph contribution to the multi-channel meson-baryon
$S$-matrix. With the present model solved,
the scene is set for more ambitious, realistic calculations,
necessarily incorporating vector mesons.
We are also interested in comparing the large-$N_c$
effective Lagrangian approach that uses explicit
baryons, with earlier large-$N_c$ results from the Skyrme model.
We come to the conclusion that much of the detailed structure
of the meson-baryon $S$-matrix which hitherto has been
uncovered only with skyrmion methods, can equally be described by
models with explicit baryon fields. At the same time, both
approaches share significant problems in the low partial waves,
the complete resolution of which remains a major technical hurdle.
\section{First warm-up problem: a $\sigma$-only Model}
\label{sigonlymodel}
As a first calculation, let us consider a model with only $\sigma$ mesons and
(non-strange) baryons \cite{AM,Boulder}.
Because the $\sigma$ has $I=J=0$, this toy model avoids
the spin and isospin complications due to
non-commutativity of Pauli matrices at the meson-nucleon vertices.
It also avoids the complications of inelastic 2-body channels ($e.g.,$
nucleons cannot turn into $\Delta$'s).
The Lagrangian to be solved in this Section is the large-$N_c$ version of:
\begin{equation}
{\cal L}_{\sigma N} =
{1 \over 2} (\partial_\mu \sigma)^2 - V(\sigma) +
\overline{N} (i \gamma\cdot\partial - M_N) N -
g \sigma \overline{N} N \ ,
\label{sigLagn}
\end{equation}
where, for definiteness, the $\sigma$ self-interactions are
described by the fourth-order potential
\begin{equation}
V(\sigma) = {1 \over 2} {m_\sigma}^2 \sigma^2 +
{1 \over 6} \kappa \sigma^3 +
{1 \over 24} \lambda \sigma^4 \ .
\label{sigPotl}
\end{equation}
By the words ``large-$N_c$ version of'' we mean that the
coupling of the $\sigma$ to the nucleon in
Eq.~(\ref{sigLagn}) must, in principle, be augmented by analogous
couplings to the entire $I=J$ tower of large-$N_c$ baryons, starting
with the $\Delta$ ($I=J=3/2$) and continuing through the state with
$I=J=N_c/2$. The relative strengths of these couplings are given by
the proportionality rule \cite{Muk}.
However, in this simple model, since the
$\sigma$ carries the quantum numbers of the vacuum, it couples
diagonally to this tower (as noted above).
Therefore, so long as we restrict our
attention to nucleon targets, we can safely drop these additional couplings
to the higher baryon states
and work with the simplified Lagrangian (\ref{sigLagn}).
In the large-$N_c$ limit the nucleon has mass of order $N_c$ and its
degrees of freedom freeze out. This means that the nucleon
kinetic energy term in Eq.~(\ref{sigLagn}) can be dropped, and the Yukawa
term has $\overline{N} N$ replaced by a static source $j({\bf x})$.
The formal derivation of this intuitive prescription was given in
Ref.~\cite{AM}. For completeness, we review it here. Looking at
Fig.~4, the product of the nucleon propagators (reading from bottom to
top) is
\begin{eqnarray}
{i\over {p\llap/}+{k\llap/}_1-M_N^{}+i\epsilon } & &\times
{i\over {p\llap/}+{k\llap/}_1+{k\llap/}_2-M_N^{}+i\epsilon }
\times\cdots\times
{i\over {p\llap/}+{k\llap/}_1+\cdots+{k\llap/}_{n-1}-M_N^{}+i\epsilon }
\nonumber \\
& &\approx\
{{\gamma_0+1} \over 2} {i \over k_{10}+i\epsilon} \times\cdots\times
{i \over k_{10}+\cdots+k_{n-1,0}+i\epsilon} \ .
\label{PropProd}
\end{eqnarray}
In the above we have taken the large-$N_c$ (\it i.e.\rm, nonrelativistic)
limit of the nucleon propagators
\begin{equation}
{i\over {p\llap/}+{k\llap/}-M_N^{}+i\epsilon }\quad
{\stackrel{\scriptstyle{N_c\rightarrow\infty}}{\longrightarrow}}\quad
{\gamma_0+1\over2}{i\over k_0+i\epsilon} \ ,
\label{eqa}
\end{equation}
assuming that the nucleon is in its rest frame.
The prefactor $(\gamma_0+1)/2$ is the projector onto the large components
of the Dirac 4-spinor. From now on we suppress it, with the understanding
that we always throw away the small components.
Our desired result is obtained by summing over the $n!$ crossed ladders
(Fig.~5), and using the interesting identity for distributions,
\begin{eqnarray}
2\pi\delta \big( \sum_{i=1}^nk_{i0} \big) \!\!\!
\sum_{{\rm permutations}\atop{(i_1,\cdots,i_n)}} \
& &\!\!\!\!\!\!\!\!{i\over k_{i_10}+i\epsilon} \times
{i\over k_{i_10}+k_{i_20}+i\epsilon}
\times\cdots\times
{i\over k_{i_10}+\cdots+k_{i_{n-1}0}+i\epsilon} \nonumber \\
&=&
2\pi\delta(k_{10}) \times 2\pi\delta(k_{20}) \times\cdots\times
2\pi\delta(k_{n0}) \ .
\label{eqb}
\end{eqnarray}
(To prove this,
Fourier transform both sides of this identity in all $n$ momenta.)
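As an illustrative check, consider the simplest case $n=2$: on the support
of the overall $\delta$-function one has $k_{20}=-k_{10}$, so that
\begin{eqnarray}
2\pi\delta(k_{10}+k_{20})\left[{i\over k_{10}+i\epsilon}
+{i\over k_{20}+i\epsilon}\right]
&=&2\pi\delta(k_{10}+k_{20})\left[{i\over k_{10}+i\epsilon}
-{i\over k_{10}-i\epsilon}\right]\nonumber\\
&=&2\pi\delta(k_{10})\times2\pi\delta(k_{20})\ ,\nonumber
\end{eqnarray}
where the last step uses $i/(x+i\epsilon)-i/(x-i\epsilon)=2\pi\delta(x)$
together with the overall $\delta$-function.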
Each of the $n!$ terms in this sum corresponds to a distinct crossing
or ordering of the $n$ rungs of the ladder.
The $\delta$-function on the left-hand side of this
equation reflects conservation of energy along the nucleon line in
the large-$N_c$ limit:
\begin{equation}
{2\pi\delta\big(-p_0'+p^{}_0+\sum_{i=1}^nk_{i0}\big)\quad
{\stackrel{\scriptstyle{N_c\rightarrow\infty}}{\longrightarrow}}\quad
2\pi\delta\big(\sum_{i=1}^nk_{i0}\big)\ .}
\label{eqc}
\end{equation}
Recognizing $2\pi\delta(k_0)$ as the 4-dimensional Fourier transform of
$\delta^3({\bf x}),$ we immediately understand the meaning of the
simple factorized right-hand side of Eq.~(\ref{eqb}) in terms of graphs.
Simply put, the sum of the $n!$ crossed ladders is equal to the
{\it single} graph of Fig.~6, generated by the effective Lagrangian
\begin{equation}
{{\cal L}_{\rm eff}\ =\ {1\over2}(\partial_\mu\sigma)^2
-V(\sigma)-\sigma j({\bf x})}
\label{eqd}
\end{equation}
where, as promised, the nucleon field has been frozen out in favor
of the external $c$-number source
\begin{equation}
{j({\bf x})\ =\ g\,\delta^3({\bf x})\ .}
\label{eqe}
\end{equation}
The complete exchange-graph contribution to
$\sigma N$ scattering in the large-$N_c$
limit now emerges from a two-stage numerical program,
which is most transparent in graphical terms. In the first stage,
one defines a ``classical'' $\sigma$ field $\sigma_{\rm cl}$ as the sum of
all one-point trees (Fig.~7). The reason one considers only trees is
that meson loops are suppressed by powers of
$1/N_c$\onlinecite{Veneziano,WittenI,Luty}. In the second stage,
one considers a propagating $\sigma$ field (which we call the ``quantum''
field $\sigma_{\rm qu}$ to distinguish it from $\sigma_{\rm cl}$) interacting
with an arbitrary number of $\sigma_{\rm cl}$ insertions (Fig.~8). By
inspection, this two-stage procedure is equivalent to summing
all the tree graphs of the form shown in Fig.~6 (the loop graphs being
subleading in $1/N_c$). As promised: the loops (Figs.~4-5) have turned into
trees, exactly as in the old electron-proton problem invoked in Sec.~I.
This two-stage graphical procedure is easily translated into the language of
differential equations. Solving for $\sigma_{\rm cl}$ as per Fig.~7 is
equivalent to solving the classical Euler-Lagrange equation for the
effective Lagrangian of Eq.~(\ref{eqd}), namely,
\begin{equation}
-\nabla^2 \sigma_{\rm cl}({\bf x}) +
V'\big(\sigma_{\rm cl}({\bf x})\big) + j({\bf x}) \ =\ 0\ .
\label{eqf}
\end{equation}
Note that $\sigma_{\rm cl}$ is time-independent because the source
$j({\bf x})$ has
this property. Next, solving for the propagating field $\sigma_{\rm qu}$,
as given by Fig.~8, is accomplished by noticing that at every vertex, there
are exactly two $\sigma_{\rm qu}$ legs, the rest being insertions
of $\sigma_{\rm cl}$, with the coupling constants read off from $V(\sigma)$.
Therefore, the relevant equation of motion comes from
the \it quadraticized \rm Lagrangian
\begin{equation}
{\cal L}_{\rm quad}\ =\ {1\over2}\partial_\mu\sigma_{\rm qu}
\partial^\mu \sigma_{\rm qu} -{1\over2}\sigma_{\rm qu}^2V''\big(
\sigma_{\rm cl}({\bf x})\big)\ ,
\label{eqg}
\end{equation}
which induces the linear time-dependent equation
\begin{equation}
\big[\partial_\mu \partial^\mu + V''(\sigma_{\rm cl}({\bf x}))\big]
\ \sigma_{\rm qu}(x)\ =\ 0\ .
\label{eqh}
\end{equation}
In short, we have outlined a two-stage numerical procedure,
the first stage involving
a non-linear time-independent equation for a ``classical'' meson field,
the second involving a linear time-dependent equation for a ``quantum''
meson field in the classical background.
This is reminiscent of the skyrmion approach to meson-baryon
scattering\onlinecite{HEHW,Sig,MandK,MandP}.
In the subsequent Sections, when pions are introduced,
this correspondence will be sharpened by the
emergence of a hedgehog structure to the classical pion field that is
familiar from the Skyrme model\onlinecite{Skyrme,ANW}. (The chief
\it difference \rm between the two approaches is, of course, that
baryon number is carried by topology in the Skyrme model, and by
smeared $\delta$-function sources when the baryon fields are explicit.)
The analog of the hedgehog Ansatz in
the present model with $I=0$ $\sigma$ mesons alone is just ordinary
spherical symmetry:
\begin{equation}
\sigma_{\rm cl}({\bf x}) \equiv G(r)\ .
\label{sigradial}
\end{equation}
The profile
function $G(r)$ is found by solving the non-linear radial
differential equation
\begin{equation}
G'' +{2\over r}G'- {m_\sigma}^2 G
- {\kappa \over 2}\,{G^2}
- {\lambda \over 6}\,{G^3} =
j({\bf x})
\label{sigGeqn}
\end{equation}
implied by Eqs.~(\ref{sigPotl}) and (\ref{eqf}).
Unfortunately, Eq.~(\ref{sigGeqn}) suffers from
ultraviolet problems when $j$ is literally
taken to be a $\delta$-function as per Eq.~(\ref{eqe}).
The source of these divergences (which are worse than in the original
loop graphs, Figs.~4-5) can be traced to the nonrelativistic
reduction of the propagator (\ref{eqa}), which is only valid so long
as the components of the exchanged meson momentum satisfy $|k_\mu|\ll M_N.$
(A similar breakdown of the large-$N_c$ approach is discussed in
Sec.~8 of Ref.~\onlinecite{WittenI}.)
A simple cure is to smear out the source, say, as a Gaussian:
\begin{equation}
j({\bf x})\ \longrightarrow\
{g \over {(a_N \sqrt\pi)^3}} \exp(-r^2/a_N^2) \ ,\quad
a_N^{}\sim N_c^0\ .
\label{gaussian}
\end{equation}
This approximation now renders Eq.~(\ref{sigGeqn}) tractable, at the expense of
introducing a ``nucleon size'' parameter $a_N$ into the problem.
This new parameter provides
an ultraviolet cutoff on the momentum allowed to flow
into or out of the nucleon.
We have checked that our numerical results are not
overly sensitive to $a_N$ over a reasonable range of values.
Equation (\ref{sigGeqn})
represents a two-boundary value problem which can be solved
in an iterative fashion using
a standard ``shoot and match'' Runge-Kutta integration procedure
\cite{NumRec}.\footnote{
\divide\baselineskip by 2
For details, see the Appendix.}
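As a schematic illustration of such a shooting step (the unit conversion,
step size, bracketing interval, and the simple one-sided shooting used
below are our own illustrative choices; the production calculation follows
the shoot-and-match method of the Appendix), Eq.~(\ref{sigGeqn}) can be
integrated outward with a fourth-order Runge-Kutta step while adjusting
$G(0)$ by bisection until the solution decays at large $r$:
\begin{verbatim}
import numpy as np

# parameters quoted for Fig. 9; masses converted to fm^-1
# (hbar c ~ 197.3 MeV fm); kappa, lambda taken at face value in these units
m_sig, kappa, lam, g, a_N = 600.0 / 197.3, 18.5, 214.0, 13.6, 0.5

def j_src(r):                        # Gaussian-smeared source, Eq. (gaussian)
    return g / (a_N * np.sqrt(np.pi)) ** 3 * np.exp(-(r / a_N) ** 2)

def deriv(r, y):                     # y = (G, G'); radial equation (sigGeqn)
    G, Gp = y
    Gpp = (j_src(r) + m_sig**2 * G + 0.5 * kappa * G**2
           + lam / 6.0 * G**3 - 2.0 * Gp / r)
    return np.array([Gp, Gpp])

def tail_sign(G0, r_max=8.0, h=1e-3):
    r, y = 1e-6, np.array([G0, 0.0])  # regular start: G(0) = G0, G'(0) = 0
    while r < r_max:
        k1 = deriv(r, y)
        k2 = deriv(r + h / 2, y + h / 2 * k1)
        k3 = deriv(r + h / 2, y + h / 2 * k2)
        k4 = deriv(r + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        r += h
        if abs(y[0]) > 1e6:          # wrong G(0): growing mode takes over
            break
    return np.sign(y[0])

# bisect on G(0); assumes the sign of the divergent tail is monotone in G(0)
lo, hi = -2.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tail_sign(mid) < 0 else (lo, mid)
print("G(0) approximately", 0.5 * (lo + hi))
\end{verbatim}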
Fig.~9 shows the profile function $G(r)$ for the specific choice of
parameters $m_\sigma=600\,$MeV, $\kappa=18.5$, $\lambda=214$, $g=13.6$
and $a_N=0.5$ fm.
Note that $G(r)$ looks very much like a Yukawa function,
$\exp(-m_\sigma r)/r$, except that it is finite at the origin
(because of the smearing of the nucleon source term) and
has small deviations in the 0.5 to 1.0 fm region due to the non-linear terms
involving $\kappa$ and $\lambda$.
Given $G(r)$, we then solve Eq.~(\ref{eqh}) for
$\sigma_{\rm qu}$ by means of a standard partial wave analysis.
For angular momentum $l$, the radial scattering wave
function $u_l(r) = r \sigma_l(r)$ having energy $\omega$ satisfies
\begin{equation}
\left[{{d^2}\over{dr^2}} + q^2 - \kappa G(r) -
{\lambda\over2} G^2(r) - {l(l+1) \over {r^2}}\right] \, u_l(r) = 0 \ ,
\quad
q^2 = \omega^2 - m_\sigma^2\ .
\label{sigscat}
\end{equation}
This is a Schr\"odinger-like linear differential equation that can
also be solved by Runge-Kutta integration from the
origin (where $u_l(r)$ must be regular, going like $r^{l+1}$).
The asymptotic form of $u_l(r)$ is then fit in the usual way
to a linear combination of
spherical Riccati-Bessel functions, $j_l(qr)$ and $n_l(qr)$, yielding
the phase shifts for $\sigma N$ elastic scattering.
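For the $S$-wave, for instance, the matching can be done without special
functions: once the potential terms are negligible beyond some radius $R$,
$u_0(r)\propto\sin(qr+\delta_0)$ there, and $\delta_0$ follows from the
logarithmic derivative at $R$. A minimal sketch (the grid spacing, matching
radius, and three-point integrator below are our own choices, and the phase
is only determined modulo $\pi$ this way) is:
\begin{verbatim}
import numpy as np

def s_wave_phase_shift(V, q, R=12.0, h=1e-3):
    """Integrate u'' + [q^2 - V(r)] u = 0 outward and read off delta_0.

    V : callable returning the potential kappa*G(r) + (lam/2)*G(r)**2
    q : asymptotic momentum, q^2 = omega^2 - m_sigma^2 > 0
    Assumes V(r) is negligible beyond the matching radius R.
    """
    r = np.arange(h, R + h, h)
    u = np.zeros_like(r)
    u[0], u[1] = r[0], r[1]              # regular start, u ~ r for l = 0
    for i in range(1, len(r) - 1):       # three-point discretization of u''
        u[i + 1] = (2.0 - h * h * (q * q - V(r[i]))) * u[i] - u[i - 1]
    up = (u[-1] - u[-3]) / (2.0 * h)     # central derivative at r[-2]
    delta = np.arctan2(q * u[-2], up) - q * r[-2]
    return np.mod(delta + np.pi / 2, np.pi) - np.pi / 2   # fold modulo pi
\end{verbatim}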
The potential in Eq.~(\ref{eqh}) [or Eq.~(\ref{sigscat})]
has a short-range repulsive core
(coming from the quartic term in the Lagrangian) and intermediate-range
attraction (coming from the cubic term and the fact that $G(r) < 0$).
Consequently, as shown in Fig.~10, the $S$-wave phase
shift at low energies is positive because of the medium-range
attraction, but it soon turns over and looks like the phase shift for
a hard-core repulsive potential.
At still higher energies (not shown), the phase shift returns to 0, since the
short-range repulsive core is finite. The higher partial waves exhibit
similar behavior, but offset to increasingly higher energies because of the
angular momentum barrier.
\section{Second warm-up problem: a $\pi$-only model}
\label{pionlymodel}
As a second simplified
example, we consider a model of gradient-coupled pions and
$I=J$ baryons. The Lagrangian we want to solve is the large-$N_c$ version
of
\begin{eqnarray}
{\cal L}_{\pi N} &=&
{1 \over 2} \partial_\mu \pi^a \; \partial^\mu \pi^a
- {1 \over 2} m_\pi^2 \; \pi^a \pi^a
- {\lambda \over 24} ( \pi^a \pi^a )^2 \nonumber \\
& & \quad + \overline{N} (i \gamma\cdot\partial - M_N) N
-g_\pi\partial_\mu\vec\pi\cdot
\overline{N}\gamma^5\gamma^\mu\vec\tau N\ .
\label{piLagn}
\end{eqnarray}
The reason for choosing pseudovector coupling rather
than the pseudoscalar coupling,
$-g'_\pi\vec\pi\cdot\overline{N}i\gamma^5\vec\tau N$, is that it
is more amenable to a large-$N_c$ treatment, for the following
reason.
The matrix $\gamma^5$ is purely off-diagonal, connecting the large to
the small components of the Dirac 4-spinor. This means that taking
the nonrelativistic limit of the baryons is a singular operation
when the pion is not soft.
In contrast, $\gamma^5\gamma^\mu$ does connect the large components
to themselves for $\mu=1,2,3$
so that, with a pseudovector coupling, we can follow the
simple leading-order large-$N_c$ prescription given earlier of just
throwing away the small components (including, \it inter alia\rm,
the $\gamma^5\gamma^0$ contribution; remember that the $1/N_c$
expansion breaks up Lorentz invariants).
Of course, in a different limit from large $N_c,$ namely
the soft-pion limit in which the pion is emitted from the on-shell
nucleon at approximately zero 4-momentum, the pseudovector and
pseudoscalar couplings are indistinguishable, provided one takes
$g_\pi^{}=g'_\pi/2M_N.$
The meaning of the words ``large-$N_c$ version of''
preceding the Lagrangian (\ref{piLagn}) is that,
as in the $\sigma N$ model of the previous section,
the coupling of the pion to the nucleon
must be supplemented by analogous couplings to all the other members of
the tower of $I=J$ baryons, and likewise for the nucleon kinetic term.
In the previous case this was an irrelevant complication:
because the zero-isospin $\sigma$ cannot induce transitions
between states in this tower, the problem diagonalizes.
In contrast, pions can and do change nucleons to $\Delta$'s, etc.
The most convenient
way to implement the gradient coupling of the pion to the $I=J$
tower of baryons is to change baryon basis to the so-called collective
coordinate basis $|A\rangle$ familiar from the Skyrme model,
with $A$ an $SU(2)$ element \onlinecite{ANW}.
These basis elements are defined by\onlinecite{ANW,Schulman}
\begin{equation}
\ket{A}\ =\!\!\!\!\sum_{R={1/2,\,3/2,\cdots}}(2R+1)^{1/2}
\!\!\!\!\sum_{i_z,s_z=-R,\cdots,R} (-)^{R-s_z}
D^{\scriptscriptstyle(R)}_{-s_z,i_z}(A^\dagger)
\ket{R\atop i_z\,s_z}\ ,
\label{eqi}
\end{equation}
normalizing the volume of $SU(2)$ to unity.
On the right-hand side of this equation,
the baryons are given in the usual spin-isospin
basis, e.g., a neutron of spin up and a $\Delta^+$ of spin projection $-3/2$
would be denoted as $\left|{1/2\atop-1/2,1/2}\right\rangle$ and
$\left|{3/2\atop1/2,-3/2}\right\rangle$, respectively.
In the collective coordinate language, the correct pion-baryon coupling reads
\begin{equation}
-3g_\pi\sum_{a,b=-1,0,1}\partial_b\pi^a\int_{SU(2)}dA\,
D^{\scriptscriptstyle(1)}_{ab}(A)\ket{A}\bra{A}\ .
\label{eqj}
\end{equation}
This coupling was first written down on general grounds (without
reference to soliton physics) by
Adkins, Nappi and Witten \onlinecite{ANW}, and is necessary for the
consistency of the Compton-type graphs with the overall $N_c^0$
scaling of the pion-baryon scattering amplitude in the large-$N_c$
limit, as reviewed earlier\onlinecite{GerSak,DashMan}. It has also
recently been established
using collective coordinate quantization of the skyrmion\onlinecite{DHM}.
Despite our convenient adoption of Skyrme-model notation, we emphasize
that the coupling~(\ref{eqj}) is, in the present context, understood
to be built from explicit baryon field operators, and not solitons.
Let us verify explicitly that Eq.~(\ref{eqj}) is indeed the correct large-$N_c$
pseudovector coupling of the pion to the baryon tower. In particular,
Eq.~(\ref{eqj}) has the following four desirable properties:
\begin{enumerate}
\item It is invariant under isospin and angular momentum;
\item It contains the pion-nucleon interaction shown in Eq.~(\ref{piLagn});
\item It correctly implements the ``proportionality rule'' governing
couplings to the higher states in the $I=J$ tower;
\item It accurately predicts the width of the $\Delta$, and furthermore,
gives widths so large for the large-$N_c$ artifacts of the model
(the baryons with $I=J\ge\textstyle{5\over2}$) that these pose no
phenomenological problems for the large-$N_c$ approach.
\end{enumerate}
\noindent
We deal with each of these assertions in turn:
1. The state $|A\rangle$ transforms as
\begin{equation}
\ket{A}\
{\stackrel{\rm\scriptscriptstyle isospin}{\longrightarrow}}\
\ket{U_I^{}A}\quad\hbox{and}\quad
\ket{A}\
{\stackrel{\rm\scriptscriptstyle ang. mom.}{\longrightarrow}}\
\ket{AU_J^\dagger}
\label{eqk}
\end{equation}
so that
\begin{eqnarray}
\int_{SU(2)} dA &\,
D^{\scriptscriptstyle(1)}_{ab}(A){\ket{A}}{\bra{A}}\ \longrightarrow\
\int_{SU(2)}dA\,
D^{\scriptscriptstyle(1)}_{ab}(A)\ket{U_I^{}AU_J^\dagger}
\bra{U_I^{}AU_J^\dagger} \nonumber\\
= &\
\int_{SU(2)}dA\,
D^{\scriptscriptstyle(1)}_{ab}(U_I^\dagger AU_J^{})
\ket{A}\bra{A} \nonumber\\
= &\
D^{\scriptscriptstyle(1)}_{aa'}(U_I^\dagger)
D^{\scriptscriptstyle(1)}_{bb'}(U_J^\dagger)
\int_{SU(2)}dA\, D^{\scriptscriptstyle(1)}_{a'b'}(A)\ket{A}\bra{A}\ .
\label{eql}
\end{eqnarray}
Here we have used the group invariance of the $SU(2)$ measure, $d(U_I^\dagger
AU_J^{})=dA,$ and the reality property of the
$D^{\scriptscriptstyle(1)}$ matrices. Similarly,
\begin{equation}
\partial_b\pi^a\ \longrightarrow\ \partial_{b''}\pi^{a''}
D^{\scriptscriptstyle(1)}_{a''a}(U_I^{})
D^{\scriptscriptstyle(1)}_{b''b}(U_J^{})\ .
\label{eqm}
\end{equation}
Combining these last two equations and using the composition property of the
Wigner $D$ matrices, we confirm that the coupling of Eq.~(\ref{eqj})
{\it is} invariant under isospin and angular momentum rotations.
2. Using Eq.~(\ref{eqi}), we rewrite the coupling of Eq.~(\ref{eqj}) as
\begin{eqnarray}
-3g_\pi\sum_{a,b} \partial_b\pi^a & & \sum_{R,i_z,s_z} \sum_{R',i'_z,s'_z}
(-)^{R-s_z} (-)^{R'-s_z'}
\big[(2R+1)(2R'+1)\big]^{1/2}
\ket{R'\atop i_z'\,s_z'}\bra{R\atop i_zs_z} \nonumber\\
& &\quad\quad\quad\quad\times\
\int_{SU(2)}dA\,
D^{\scriptscriptstyle(1)}_{ba}(A^\dagger)
D^{{\scriptscriptstyle(R')}*}_{-s_z',i_z'}(A^\dagger)
D^{\scriptscriptstyle(R)}_{-s_z,i_z}(A^\dagger) \nonumber\\
&=&\
-3g_\pi\sum_{a,b}\partial_b\pi^a\sum_{R,i_z,s_z}\sum_{R',i'_z,s'_z}
(-)^{R+R'}
\langle R\,1\,i_z\,a|R'\,i_z'\rangle
\langle R'\,1\,s_z'\,b|R\,s_z\rangle \nonumber\\
& &\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\times\
\ket{R'\atop i_z'\,s_z'}\bra{R\atop i_zs_z}\ ,
\label{eqo}
\end{eqnarray}
using standard $D$-matrix integration tricks.
We now pick out the terms with $R=R'=1/2$ in this expression
in order to study specifically the pion coupling
to $\overline{N}N$. Isospin and angular momentum invariance can be
made more manifest by rewriting this subset of terms as
\begin{equation}
g_\pi\sum_{a,b}\sum_{i_z,s_z}\sum_{i'_z,s'_z}
\tau^a_{i_z'\,i_z}\sigma^b_{s_z'\,s_z} \,
\partial_b\pi^a\ket{1/2\atop i_z'\,s_z'}\bra{1/2\atop i_zs_z}
\label{eqp}
\end{equation}
which we recognize as the nonrelativistic (or, equivalently,
in the present context, large-$N_c$) limit of the gradient coupling
$-g_\pi\partial_\mu\vec\pi\cdot\overline{N}\gamma^5\gamma^\mu\vec\tau N$.
3. A careful reading of Ref.~\onlinecite{Muk} reveals that this
criterion is automatically satisfied due to the diagonality of the
pion-baryon coupling, Eq.~(\ref{eqj}), in the collective coordinate $A$.
It is instructive nevertheless to see how this comes about explicitly.
The baryon-antibaryon Hilbert-space operator in Eq.~(\ref{eqo}) can
be written in terms of states with good $t$-channel (exchange-channel)
quantum numbers as follows:
\begin{eqnarray}
\ket{R'\atop i_z'\,s_z'} \bra{R\atop i_zs_z}\ &=& \
\sum_{I_t,I_{tz}} \sum_{J_t,J_{tz}}
(-)^{R+i_z} (-)^{R'+s_z'}
\langle I_t I_{tz} | R' R i_z',-i_z \rangle
\langle RR's_z,-s_z' | J_t J_{tz} \rangle
\nonumber\\
& &\quad\quad\quad\quad\quad\quad\times\
\ket{{I_t\,;RR'}\atop {I_{tz}}} \bra{{J_t\,;RR'}\atop {J_{tz}}}\ ,
\label{eqq}
\end{eqnarray}
where the phases in the above are the usual cost of turning bras into
kets in $SU(2)$ \cite{RebSlan}:
$|jm\rangle \leftrightarrow (-)^{j+m}\langle j,-m|$.
Plugging Eq.~(\ref{eqq}) into Eq.~(\ref{eqo})
and using Clebsch-Gordan orthogonality
gives for the pion-baryon coupling:
\begin{eqnarray}
-g_\pi\sum_{I_{tz},J_{tz}} \partial_{J_{tz}}\pi^{I_{tz}}
\sum_{R,R'}& &(-)^{R+R'}
\big[(2R+1)(2R'+1)\big]^{1/2}
\ket{I_t=1\,;RR'\atop I_{tz}}
\bra{J_t=1\,;RR'\atop J_{tz}}
\label{eqr}
\end{eqnarray}
This equation correctly embodies two large-$N_c$ selection rules:
the fact that the exchanged angular momentum $J_t$ is equated to the
isospin $I_t=1$ of the pion is a specific example of the more general
$I_t=J_t$ rule \cite{Muk,Action}, whereas the square-root
proportionality
factors relating the pion's couplings to the various baryon states in the $I=J$
tower illustrate the proportionality rule \cite{Muk}.
4. The coupling (\ref{eqj}) can be used to calculate the decay width of
a baryon with spin/isospin $J$ to the next-lower state $J-1$ via the
emission of a single pion. For the case $\Delta\rightarrow N\pi$ one
calculates $\Gamma_\Delta=114\,$MeV as against a measured width of
120$\pm5\,$MeV \onlinecite{ANW,DHM}. Pleasingly,
for the higher states, $I=J\ge
\textstyle{5\over2},$ the widths turn out to be so large that these
large-$N_c$ artifacts cannot be said to exist as particles, and
therefore, pose no phenomenological problem for the large-$N_c$
program. One finds $\Gamma_{5\over2}\approx800\,$MeV,
$\Gamma_{7\over2}\approx2600\,$MeV,
$\Gamma_{9\over2}\approx6400\,$MeV, and so forth \onlinecite{DHM}.
As before, we seek to
sum the set of exchange-type graphs of the form shown in Fig.~2.
However, \it a priori\rm, the situation is not so simple as in
the $\sigma$-only model of Sec.~II. Look again at the interesting
identity~(\ref{eqb}) for distributions, which is the key to turning
loops into trees. The $n!$ terms on the left-hand side correspond
to the $n!$ distinct ``tanglings'' in which the exchanged $\sigma$ lines are
attached in a different order to the baryon line. Because the $\sigma$
carries no spin or isospin, each tangling enters with the same
relative group-theoretic weight in Eq.~(\ref{eqb}), and the identity
goes through as written (so, too, for photon exchange).
In contrast, $\pi,$ $\rho$ and $\omega$ mesons, etc., carry non-trivial
isospin and/or spin, and the $n!$ tanglings would not be expected to
occur with the same group-theoretic factors.
(Pauli spin/isospin matrices do not
commute.) Specifically, one expects a different product of $n$ spin
and $n$ isospin Clebsch-Gordan factors weighting each term on the
left-hand side of Eq.~(\ref{eqb}), thereby destroying the identity.
Nevertheless, acting together, the $I_t=J_t$ and proportionality rules
assure that, to \it leading \rm order in $1/N_c,$ these $n!$ group-theoretic
factors are indeed equal, once the intermediate baryon legs are
summed over all allowed $I=J$ states. Therefore,
the identity~(\ref{eqb}), derived for $\sigma$ (or photon) exchange, applies
as well to the exchange of these non-trivial mesons.
This theorem is proved in Ref.~\onlinecite{AM}, using elementary
properties of 6$j$ symbols. However, there is an easier way to
see this, which is to work directly in the $|A\rangle$ basis.
So, look again at Fig.~5, and understand the
baryon line to mean, not a nucleon or
a $\Delta$ or any specific member of the $I=J$ tower (which can change
identity at each pion interaction vertex), but rather a baryon state
$|A\rangle$ sharp in the $SU(2)$ collective coordinate $A,$ which is
\it preserved \rm at each vertex,
due to the diagonality in $A$ of the coupling (\ref{eqj}).
Initial and final nucleon, $\Delta,$
etc., states can be projected out at the very end of the calculation
using standard group-theoretic techniques
borrowed from the Skyrme model [i.e., inverting Eq.~(\ref{eqi})].
At earlier stages, however, we can use the full machinery of Sec.~II to
turn loops into trees with impunity.
Therefore, once again, the graphs of Fig.~5 can be summed
following a two-stage program. In the first stage, one solves a static
non-linear equation for $\vec\pi_{\rm cl}(A)$ (noting that the
classical pion field depends on the $SU(2)$ collective coordinate
$A$). Isospin covariance trivially
relates this quantity to $\vec\pi_{\rm cl}(A=1)$, henceforth called
just $\vec\pi_{\rm cl}.$
Using $D^{\scriptscriptstyle(1)}_{ab}(A=1)=\delta_{ab},$
one obtains the Euler-Lagrange equation
\begin{equation}
-\nabla^2\pi_{\rm cl}^a+m_\pi^2\pi_{\rm cl}^a
+{1\over6}\lambda\pi_{\rm cl}^a\vec\pi^{}_{\rm cl}\cdot\vec\pi^{}_{\rm cl}
-3g_\pi\partial_a \delta^3({\bf x})\ =\ 0\ .
\label{eqs}
\end{equation}
This equation is solved by smearing the $\delta$-function source to
a Gaussian as in Eq.~(\ref{gaussian}), and by
assuming a hedgehog Ansatz for the classical pion field
(anticipating the resemblance to the Skyrme model):
\begin{equation}
\pi _{\rm cl}^a({\bf x})={\hat {\bf r}}^a F(r) \ .
\label{HHA}
\end{equation}
Equation (\ref{eqs}) then becomes an ODE for the classical pion profile $F(r)$:
\begin{eqnarray}
F'' + {2 \over r}F' \
& - &\ ({2 \over r^2}+m_\pi^2) \; F - {\lambda\over6} F^3
= \ -{6g_\pi r\over a_N^5\pi^{3/2}}\, \exp(-r^2 / a_N^2) \ ,
\label{Fonlyeqn}
\end{eqnarray}
subject to the boundary conditions that
$F(r)$ be regular near $r=0$ and bounded as $r \to \infty$,
\begin{eqnarray}
F(r) &=& \; Br + {\cal O}(r^3) \ {\rm near} \ r = 0; \nonumber \\
F(r) &\to& \; C\exp(-m_\pi r)/r \ {\rm as} \ r \to \infty \ .
\label{piBCs}
\end{eqnarray}
$B$ and $C$ are scale parameters that are initially unknown to us but are
fixed implicitly by the non-linearity of Eq.~(\ref{Fonlyeqn}).
This is another two-boundary-value problem, which can be
numerically solved as before (see Fig.~11, and Appendix A).
In the second stage, one solves the linearized time-dependent equation
for $\pi_{\rm qu}$ propagating in the background of $\pi_{\rm cl}(A).$
Again, isospin invariance trivially relates this process to the propagation
of $\pi_{\rm qu}$ in the background of $\pi_{\rm cl}(A=1),$ the latter
quantity being given by Eqs.~(\ref{HHA}) and (\ref{Fonlyeqn}).
Initial and final nucleons or $\Delta$'s are then
projected from the hedgehog by inverting Eq.~(\ref{eqi}),
using the orthogonality over $SU(2)$ of Wigner $D$-matrices.
Finally, the initial and final pion-baryon
systems are combined into states of good total isospin and angular
momentum in the usual fashion to give the partial-wave $S$-matrix
for $\pi N\rightarrow\pi N$, $\pi N\rightarrow\pi\Delta$, etc.
Fortunately, this cumbersome (if straightforward)
sequence of group-theoretic steps can
be circumvented, once one realizes that they are \it identical \rm to
the procedure followed in the Skyrme model \cite{HEHW,Sig,MandK,MandP}.
Rather than
``reinventing the wheel'' one can therefore carry over intact the
machinery of Refs.~\onlinecite{HEHW,Sig,MandK,MandP} of
$K$-spin decomposition and 6$j$ symbols.
We postpone the explicit review of this formalism
to Sec.~\ref{physS}, in which we complete the analysis of
the richer model containing both pions and $\sigma$ mesons.
Unfortunately, the pion-only model discussed in this Section
is inherently uninteresting phenomenologically. Because of $G$-parity,
the pion-pion interactions can only come from even powers of $\vec{\pi}(x)$,
which means that the potentials entering into the coupled Schr\"odinger-like
scattering equations are strictly repulsive.
[They are proportional to $\lambda F^2(r)$.]
As a result, all $\pi N$ phase shifts exhibit repulsive behavior (i.e.,
clockwise motion in the Argand plots with increasing energy).
Thus there is no possibility for $\pi N$ resonances in such a model.
We need {\it something} like the $\sigma$ meson to provide a range
of attraction between $\pi$'s and $N$'s.
\section{Defining the $\sigma$-$\pi$ model}
\label{sec:sigpimodel}
In view of the two models discussed in the two previous Sections,
one might have some hope that a model combining $\sigma$ and $\pi$
mesons would provide a more promising (if still crude)
description of pion-nucleon interactions.
In this model the $\sigma$ meson will be taken as an ``elementary''
field, along
with the three $\pi$ fields. Indeed, in the large-$N_c$ limit, the $\sigma$,
if such a state exists, is necessarily a stable particle, as
the decay amplitude to two pions is suppressed by $1 / \sqrt{N_c}$.
For guidance in constructing our large-$N_c$ model of pions and $\sigma$
mesons, and selecting reasonable values of the coupling constants,
we recall the linear $\sigma$-model of Gell-Mann and Levy:\cite{linsigmod}
\begin{eqnarray}
{\cal L} &=&
{1 \over 2} \partial_\mu \sigma' \partial^\mu \sigma'
+ {1 \over 2} \partial_\mu \vec\pi \cdot \partial^\mu \vec\pi
- {\lambda\over 4} ({\sigma'}^2
+ \vec\pi\cdot\vec\pi - a^2 )^2
+ \alpha \sigma' \nonumber \\
& & \quad
- g \sigma' \overline{N}N
- g \vec\pi \cdot \overline{N} i\gamma^5\vec\tau N
+\overline{N}i\gamma\cdot\partial N\ .
\label{sigpiLagn}
\end{eqnarray}
In this well-known model, the nucleon and $\sigma$ get their masses through
dynamical symmetry breaking, the $\sigma$ vacuum
expectation value $v$ being $g^{-1}M_N,$ and
chiral symmetry emerges in the limit $\alpha \rightarrow 0$.
It is convenient to redefine the $\sigma$ field by subtracting the VEV,
\begin{equation}
\sigma'(x)\ = \ v+\sigma(x) \ .
\label{eqt}
\end{equation}
By substituting for $\sigma'$ and expanding,
the four coupling constants $\{g,\lambda,a,\alpha\}$ can be traded for
the more physical set of parameters, $\{g, M_N, m_\pi, m_\sigma\}$, using
\begin{eqnarray}
\lambda = {g^2 \over 2M_N^2} (m_\sigma^2 - m_\pi ^2)\ , \quad\quad
\alpha = {{m_\pi}^2 M_N \over g} \ , \quad\quad
a^2 = {M_N^2 \over g^2}
{(m_\sigma^2 - 3m_\pi^2) \over (m_\sigma^2 - m_\pi^2)}
\label{params}
\end{eqnarray}
In this paper we will take
\begin{equation}
g=13.6\ ,\quad
M_N = 5.0 \ {\rm fm}^{-1} \ , \quad m_\pi = 0.7 \ {\rm fm}^{-1} \ ,
\quad {\rm and } \quad m_\sigma = 5.0 \ {\rm fm}^{-1} \ .
\label{params1}
\end{equation}
This choice for the nucleon mass roughly averages the actual $N$ and $\Delta$
masses, while the $\sigma$ meson here could be
identified with the $f_0(975)$ meson for concreteness.
The value of $g$ is the measured pion-nucleon pseudoscalar coupling constant.
With these values, the non-linear
self-interaction strength has a large value, $\lambda \approx 91$.
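This value of $\lambda$ follows directly from the first of the relations
(\ref{params}) with the inputs (\ref{params1}); a quick numerical check of
the translation (a small illustrative computation) reads:
\begin{verbatim}
g, M_N, m_pi, m_sigma = 13.6, 5.0, 0.7, 5.0        # masses in fm^-1
lam   = g**2 / (2 * M_N**2) * (m_sigma**2 - m_pi**2)   # Eq. (params)
alpha = m_pi**2 * M_N / g
a2    = M_N**2 / g**2 * (m_sigma**2 - 3 * m_pi**2) / (m_sigma**2 - m_pi**2)
print(round(lam, 1), alpha, a2)        # lam ~ 90.7, i.e. "about 91"
\end{verbatim}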
For a large-$N_c$ treatment, the Gell-Mann-Levy model needs to be
modified in the following two ways.
First, as discussed in Sec.~\ref{pionlymodel},
the pseudoscalar $\pi N$ coupling is inappropriate,
and should be replaced by pseudovector coupling as in Eq.~(\ref{piLagn}),
with $g_\pi=g/(2M_N) = 1.42$ fm.
Unfortunately, with this replacement chiral symmetry is lost, even
for $\alpha=0.$ However, as stated in the introduction,
our purpose in this paper is to explore the large-$N_c$ approach in
a multi-channel model, not to present a fully realistic effective
Lagrangian of the low-lying hadrons, which would require not only
approximate chiral symmetry but also the incorporation of vector mesons.
(To look at the bright side, the fact that we are sacrificing chiral
symmetry re-emphasizes the point that our large-$N_c$ techniques
have nothing to do with the chiral limit.)
Second, the meson couplings to the nucleon must be augmented by
suitable couplings to the entire $I=J$ baryon tower (and likewise
for the nucleon kinetic energy). The prescription
for doing so is Eq.~(\ref{eqj}) for the pion. It is easy to check that the
analogous prescription for the $\sigma$ is given simply by
\begin{equation}
-g\sigma\overline{N}N\ \longrightarrow\
-g\sigma\int_{SU(2)}dA\,\ket{A}\bra{A}\ .
\label{equ}
\end{equation}
As previously, we solve for the classical meson fields, for
the reference choice of collective coordinate $A=1$, by means of
a hedgehog ansatz:
\begin{equation}
\pi _{\rm cl}^a({{\bf x}})={\hat {\bf r}}^a F(r) \ , \quad
\sigma_{\rm cl}({\bf x}) = G(r) \ .
\label{HHAsigpi}
\end{equation}
Smearing out the $\delta$-function baryon source as in Eq.~(\ref{gaussian}), we
find coupled non-linear Euler-Lagrange ODE's for $F$ and $G$:
\begin{mathletters}
\begin{eqnarray}
{d^2 \over {dr}^2 }F + {2 \over r}{d \over dr}F
& - & \left({{2 \over r^2 }+m_\pi ^2}\right)\, F
- \lambda \left[{F^3 +FG^2 +2v FG}\right] \nonumber \\
& = & - {3g r\over M_N^{}
a_N^5\pi^{3/2}} \exp\left({-{r^2 \big/ a_N^2}}\right) \\
{d^2 \over {dr}^2 }G + {2 \over r}{d \over dr}G
& - & m_\sigma ^2\, G
- \lambda \left[{G^3 +F^2 G+3v G^2}\right] \nonumber \\
& = & {g\over(a_N\sqrt{\pi})^3} \exp\left({-{r^2 \big/ a_N^2}}\right)
+ \, \lambda v F^2\ .
\label{Geqn}
\end{eqnarray}
\label{FGeqns}
\end{mathletters}
We will generally set the nucleon size parameter
$a_N$ = 0.52 fm, but we will also consider the
dependence of our results on $a_N$ in Sec.~VI(C) below.
The boundary conditions are that
$F$ and $G$ must be regular at the origin and exponentially decaying (rather
than growing) at infinity.
The classical pion profile $F(r)$ falls off like $\exp(-m_\pi r)/r$
at large distances. On the other hand, $G(r)$ falls off not like
$\exp(-m_\sigma r)/r$ as one might naively expect,
but rather like
$\exp(-2m_\pi r)/r^2$ due to the $F^2$ source term on the
right-hand side of Eq.~(\ref{Geqn}), and the fact that $2m_\pi < m_\sigma$.
Details of our numerical ``shoot and match''
procedure for solving Eq.~(\ref{FGeqns}) can be found in Appendix A.
The solution for $F(r)$ and $G(r)$ is shown in Fig.~12.
Note that $G(r)$ is negative with respect to $F(r)$ and $v$.
It is this relative sign that leads to the attractive $\pi N$
interaction found in this model.
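As a schematic alternative to the shoot-and-match procedure of Appendix A
(the radial grid, the crude zero initial guess, and the use of a library
collocation solver below are our own illustrative choices and may need
tuning to converge), the coupled system (\ref{FGeqns}) can also be handed
to a standard two-point boundary-value solver:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

# parameters of Eq. (params1) and the text (fm-based units); v = M_N/g
g, M_N, m_pi, m_sig, a_N = 13.6, 5.0, 0.7, 5.0, 0.52
lam = g**2 / (2 * M_N**2) * (m_sig**2 - m_pi**2)
v = M_N / g

def odes(r, y):
    # y = (F, F', G, G'); right-hand sides read off from Eq. (FGeqns)
    F, Fp, G, Gp = y
    gauss = np.exp(-(r / a_N) ** 2)
    Fpp = (-2 * Fp / r + (2 / r**2 + m_pi**2) * F
           + lam * (F**3 + F * G**2 + 2 * v * F * G)
           - 3 * g * r / (M_N * a_N**5 * np.pi**1.5) * gauss)
    Gpp = (-2 * Gp / r + m_sig**2 * G
           + lam * (G**3 + F**2 * G + 3 * v * G**2)
           + g / (a_N * np.sqrt(np.pi)) ** 3 * gauss + lam * v * F**2)
    return np.vstack([Fp, Fpp, Gp, Gpp])

def bc(ya, yb):
    # regularity at small r (F ~ Br, G' ~ 0) and decay at the outer edge
    return np.array([ya[0], ya[3], yb[0], yb[2]])

r = np.linspace(1e-3, 10.0, 400)
y0 = np.zeros((4, r.size))         # crude initial guess; may need refining
sol = solve_bvp(odes, bc, r, y0)
print(sol.status, sol.message)     # sol.sol(r)[0], sol.sol(r)[2] give F, G
\end{verbatim}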
\section{pion-hedgehog scattering}
\label{QScat}
Having solved for the classical pion and $\sigma$ fields, we
turn to the small-fluctuations
problem of meson-baryon scattering. As in the
Skyrme model\onlinecite{HEHW,Sig,MandK,MandP}, one first
solves for meson-{\it hedgehog} scattering, and subsequently one
folds in some group theory (6$j$ symbols) to obtain meson-{\it nucleon}
scattering. The meson-hedgehog $S$-matrix is the topic of
this Section, while the meson-nucleon $S$-matrix is the subject of
Section VI to follow.
We return to the $\sigma$-$\pi$
Lagrangian, Eq.~(\ref{sigpiLagn}) as modified subsequently in the
text in the manner suggested by large-$N_c$. Consider fluctuations of the
meson fields about their classical solutions,
\begin{equation}
\pi^a(x) \to {\bf \hat{r}}^a F(r) + \pi_{\rm qu}^a(x) \quad ,\quad
\sigma(x) \to G(r) + \sigma_{\rm qu}(x) \ .
\label{qflucs}
\end{equation}
Since $F$ and $G$ satisfy the Euler-Lagrange equations,
terms linear in the fluctuating fields vanish.
The quadratic terms then lead to linear
equations of motion for $\pi_{\rm qu}^a(x)$ and $\sigma_{\rm qu}(x)$.
Higher-order nonlinearities in the meson fields are subleading
in $1/N_c$, as previously noted.
We will work out the partial-wave scattering
amplitudes by factoring out a uniform
time-dependence $\exp(- i \omega t)$ from all the fluctuating fields.
For the $\sigma$ this involves the usual expansion in
spherical harmonics,
\begin{equation}
\sigma_{\rm qu}(\omega,{\bf x}) = \sum_{K,K_z} \phi_{KK_z}
(\omega,r) \; Y_{KK_z}(\hat{\bf x})
\label{sigmaPW}
\end{equation}
For the pions the decomposition is slightly more complicated
\cite{HEHW,Sig,MandK,MandP}.
The conserved quantum numbers are not isospin and total angular
momentum but the so-called ``grand spin,'' $\vec{K} = \vec{I}+\vec{J}$.
Since pions are spinless, $\vec J$ is just $\vec L,$
the orbital angular momentum.
Thus the appropriate partial wave analysis for pions involves an expansion in
terms of {\em vector} spherical harmonics,
\begin{equation}
\vec{\pi}_{\rm qu}
(\omega,{\bf x}) = \sum_{K,K_z,L} \psi_{KK_zL}(\omega,r) \;
\vec{\cal Y}^{\ L}_{KK_z}(\hat{\bf x}) \ ,
\label{piPW}
\end{equation}
where $L$ runs over values $K-1$, $K$, and $K+1$.
For each value of $K$, the equations for the four radial
wavefunctions $\phi_K,$ $\psi_{K,K},$ and $\psi_{K,K\pm1}$
might be expected to form a $4 \times 4$ coupled system,
\footnote{\divide\baselineskip by 2
From now on we drop the $K_z$ label on $\phi$ and $\psi$ since the ensuing
equations are independent of $K_z$.}
but parity uncouples $\psi_{K,K}$ from the other three. It obeys
\begin{equation}
{d^2 \over {dr^2}}\psi_{K,K} +
{2 \over r}{d \over dr}\psi _{K,K} +
\left[\,q_\pi^2 - {K(K+1) \over r^2 } - V_\pi(r)\,\right]
\psi_{K,K} = 0 \ ,
\label{uncpld}
\end{equation}
where
\begin{equation}
q_\pi^2 = \omega^2 - m_\pi^2
\quad\hbox{and}\quad
V_\pi(r) = \lambda [F^2(r) + G(r)(2 v + G(r))] \ .
\label{Vpi}
\end{equation}
The remaining
$3 \times 3$ coupled system of equations\footnote{
\divide\baselineskip by 2
For the special
case $K=0$ this is a 2$\times$2 system, as $\psi_{0,-1}$ does not exist.} is
best expressed in matrix form. Assembling $\psi_{K,K\pm1}$ and
$\phi_K$ into the column vector
\begin{equation}
\Psi_K(r) = \left( \begin{array}{c}
\psi_{K,K-1}(r)\\
\psi_{K,K+1}(r)\\
\phi_K(r)
\end{array} \right) \ ,
\label{defPsifirst}
\end{equation}
we find\footnote{
\divide\baselineskip by 2
In so doing we are greatly assisted by the vector
spherical harmonic identities, Eq.~(10), in Ref.~\onlinecite{MandK}.
Note a typo there: $K$ in the numerator of the square-root
in the middle line of Eq.~(10) should instead be $K+1$.}
\begin{equation}
{d^2 \over {dr}^2}\Psi_K + {2 \over r}{d \over dr}\Psi_K +
\left[{\sf Q}_K-{\sf V}_K\right]\cdot\Psi_K=0\ .
\label{cpld}
\end{equation}
Here ${\sf Q}_K$ is the diagonal matrix
\begin{equation}
{\sf Q}_K={\rm diag}
\left(q_\pi^2-{(K-1)K\over r^2},\ \ q_\pi^2-{(K+1)(K+2)\over r^2},\ \
q_\sigma^2-{K(K+1)\over r^2}\right)\, ,
\label{QKdef}
\end{equation}
and ${\sf V}_K$ is the symmetric potential energy matrix
\begin{mathletters}
\begin{eqnarray}
{\sf V}_{11} \;&=&\;
V_\pi(r) + 2 \lambda F^2(r) \left({K \over 2K+1}\right) \\
{\sf V}_{12} \;&=&\; - 2 \lambda F^2(r) \left({
\sqrt{K(K+1)} \over 2K+1}\right) \\
{\sf V}_{13} \;&=&\;
2 \lambda F(r)( v + G(r)) \left({K \over 2K+1}\right)^{1/2} \\
{\sf V}_{22} \;&=&\;
V_\pi(r) + 2 \lambda F^2(r) \left({K+1 \over 2K+1}\right) \\
{\sf V}_{23} \;&=&\;
-2 \lambda F(r)( v + G(r)) \left({K+1 \over 2K+1}\right)^{1/2} \\
{\sf V}_{33} \;&=&\; V_\sigma(r) \ ,
\label{defV}
\end{eqnarray}
\end{mathletters}
where we have defined
\begin{equation}
q_\sigma^2 = \omega^2 - m_\sigma^2 \quad\hbox{and}\quad
V_\sigma(r) = \lambda [F^2(r) + 3 G(r)(2 v + G(r))] \ .
\label{Vsigma}
\end{equation}
Note that $q_\sigma^2$ can be positive or negative, depending on whether the
energy $\omega$ is above or below the $\sigma$ threshold.
The ``diagonal'' potentials $V_\pi$ and $V_\sigma$ are plotted in Fig.~13.
They are repulsive at short distances and attractive at intermediate range.
The factor of three in the definition of
$V_\sigma$ makes it roughly three times as repulsive and
as attractive as $V_\pi$.
Note that the vertical scale is in inverse fermis; these are potential wells of
depths about 6 and 2 GeV, respectively, which means there is substantial
attraction in both the $\sigma N$ and $\pi N$ systems.
Also shown in Fig.~13 are the off-diagonal transition potentials
${\sf V}_{12}$ and ${\sf V}_{13}$ (but without $K$-dependent factors)
which are comparable in size to the diagonal potentials.
Numerically, the uncoupled
equation (\ref{uncpld}) is readily solved using the Runge-Kutta
technique employed in Secs.~II and III. This method also works for
the coupled equations (\ref{cpld}), but only
\it above \rm the $\sigma$-threshold, $\omega > m_\sigma$. The
problem below threshold is to ensure that the
$\sigma$ wavefunction remains exponentially decaying,
\begin{equation}
\phi_K(r) \to \exp(- \kappa r)/r \ ,\quad
\kappa = (m_\sigma^2 - \omega^2)^{1/2}\ .
\label{sigdecay}
\end{equation}
In our experience, numerical noise in the Runge-Kutta integration
invariably induces
exponential blow-up: $\phi_K(r) \to \exp(+ \kappa r)/r$. We emphasize
that even below threshold the $\sigma$ cannot be neglected as
it causes substantial attraction in the $\pi N$ channel.
(Recall that the ``box diagram'' for $\pi N \to \sigma N \to \pi N$,
Fig.~2c, is attractive.)
A numerically more robust approach that works both above and
below the $\sigma$-threshold is to convert
Eq.~(\ref{cpld}) into a set of
coupled Fredholm integral equations of the second kind,
\begin{equation}
\Psi^{(i)}_K(r) = {\cal J}^{(i)}_K(r)
+ \int {\sf G}_K(r,r') {\sf V}_K(r') \Psi^{(i)}_K(r') \, dr' \ ,
\label{IntEqn}
\end{equation}
where the index $i$ labels the linearly independent choices of
inhomogeneous driving terms.
Above the $\sigma$ threshold, $i$ runs over 1,2,3 and the inhomogeneous
terms are
\begin{eqnarray}
{\cal J}^{(1)}(r) = \left( \begin{array}{c}
\hat{\jmath}_{K-1}(q_\pi r) \\ 0 \\ 0
\end{array} \right) \ , \quad
{\cal J}^{(2)}(r) = \left( \begin{array}{c}
0 \\ \hat{\jmath}_{K+1}(q_\pi r) \\ 0
\end{array} \right) \ , \quad
{\cal J}^{(3)}(r) = \left( \begin{array}{c}
0 \\ 0 \\ \hat{\jmath}_{K}(q_\sigma r)
\end{array} \right) \ .
\label{defJ}
\end{eqnarray}
Below threshold, only the first two of these should be kept.
The multi-channel Green's function ${\sf G}_K$ is the diagonal matrix
\begin{eqnarray}
{\sf G}_{11}(r,r') =
- &&{1\over q_\pi} \hat{\jmath}_{K-1}(q_\pi r_<) \hat{n}_{K-1}(q_\pi r_>)\ ,
\nonumber \\
{\sf G}_{22}(r,r') =
- &&{1\over q_\pi} \hat{\jmath}_{K+1}(q_\pi r_<) \hat{n}_{K+1}(q_\pi r_>)\ ,
\\
\label{defG}
{\sf G}_{33}(r,r') =
- &&{2\over{\pi\kappa}}\hat{\imath}_{K}(\kappa r_<)\hat{k}_{K}(\kappa r_>)\ ,
\ {\rm below\ threshold} \ , \nonumber \\
{\sf G}_{33}(r,r') =
- &&{1\over q_\sigma} \hat{\jmath}_{K}(q_\sigma r_<) \hat{n}_{K}(q_\sigma r_>)
\ , \ {\rm above\ threshold} \ . \nonumber
\end{eqnarray}
where $\hat{\jmath}_l$, $\hat{n}_l$ are spherical Riccati-Bessel functions
\cite{Taylor} and $\hat{\imath}_l$, $\hat{k}_l$ are modified spherical
Riccati-Bessel functions \cite{AandS}, regular at the origin and exponentially
decaying, respectively.
By design, the multi-channel Green's function assures regularity of
the wave functions at the origin {\em and} the asymptotic exponential
fall-off of the $\phi_K$ below the $\sigma$ threshold.
Note that $G_{33}$ is continuous through the threshold.
The $S$-matrix for the uncoupled pion scattering, Eq.~(\ref{uncpld}),
will be denoted here as the single-subscript quantity $S_K$,
where the orbital angular momentum quantum numbers $L=L'=K$ are suppressed.
It is derived from the asymptotic analysis of the wavefunction in the
usual way. The corresponding phase-shift $\delta_K$, defined as
\begin{equation}
S_K=e^{2i\delta_K}\ ,
\end{equation}
is plotted against pion momentum $k$ in Fig.~14 for $K\le5$. For each
$K$ the corresponding phase shift is attractive, though numerically small
apart from the case $K=1,$ and comparatively much less significant
than in the Skyrme model ($cf$. Fig.~1, Ref.~\onlinecite{MandK}).
As always in scattering problems,
the centrifugal barrier term in the scattering equations delays
the onset of the rise in the phase-shift for the higher-$L$ partial waves.
The coupled-channels $3 \times 3$ (above threshold)
or $2 \times 2$ (below threshold) part of the $S$-matrix
will be denoted ${\sf S}^K_{ij}$, $i,j = 1,2$ and/or 3,
according to $L= K-1,K+1,$ and/or $K$. It is obtained
as follows. First, the ${\sf K}^K$-matrix is formed according to
\begin{equation}
{\sf K}^K_{ij} = -(1/q_j) \int dr\, \hat{\jmath}_L(q_j r)
[{\sf V}(r) \Psi^{(i)}(r)]_{j} \ ,
\label{Kmat}
\end{equation}
where $L = K-1,K+1$ and/or $K$ for $j = 1,2$ and/or 3, respectively,
and also $q_1=q_2=q_\pi,$ $q_3=q_\sigma.$ From
the ${\sf K}^K$-matrix, the $S$-matrix is formed in the
usual way,
\begin{equation}
{\sf S}^K_{ij} = (q_j / q_i)^{1/2}
[(1 - i{\sf K}^K) (1 + i{\sf K}^K)^{-1}]_{ij} \ ,
\label{KtoS}
\end{equation}
where for an explanation of the square-root flux factors (needed only
for multichannel scattering) we refer the
reader to Ref.~\onlinecite{TaylorFactors}. Time-reversal
invariance implies ${\sf S}^K=\big({\sf S}^K\big)^T,$ which we have
found to be a stringent check on our numerics. We will parametrize
${\sf S}^K_{ij}$ as $\eta^K_{ij}\exp 2i\delta^K_{ij}$ subject to this
symmetry property as well as to unitarity.
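As an illustration of this last step only, a short Python sketch of Eq.~(\ref{KtoS}) together with the symmetry and unitarity checks is given below; the input numbers are placeholders, not the output of the integral in Eq.~(\ref{Kmat}), and we assume here, as time-reversal invariance suggests, that $q_j{\sf K}^K_{ij}$ is symmetric under the interchange of the channel indices.
\begin{verbatim}
import numpy as np

def S_from_K(K, q):
    """Eq. (KtoS): S_ij = sqrt(q_j/q_i) [(1 - iK)(1 + iK)^{-1}]_ij."""
    n = len(q)
    M = (np.eye(n) - 1j*K) @ np.linalg.inv(np.eye(n) + 1j*K)
    return np.sqrt(np.outer(1.0/q, q))*M

q = np.array([1.5, 1.5, 2.0])                 # q_pi, q_pi, q_sigma (placeholders)
X = np.array([[0.30, 0.10, 0.05],             # X_ij stands for q_j K_ij, i.e. minus the
              [0.10, 0.80, 0.20],              # integral in Eq. (Kmat); assumed symmetric
              [0.05, 0.20, 0.40]])
K = X/q[None, :]                              # K-matrix of Eq. (Kmat)

S = S_from_K(K, q)
print(np.allclose(S, S.T))                    # time-reversal check: S = S^T
print(np.allclose(S.conj().T @ S, np.eye(3))) # unitarity check
eta   = np.abs(np.diag(S))                    # eta^K_ii
delta = 0.5*np.angle(np.diag(S))              # from S_ii = eta_ii exp(2 i delta_ii)
\end{verbatim}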
The phase-shifts corresponding to the specific $S$-matrix elements
${\sf S}^K_{11}$ and ${\sf S}^K_{22}$
are plotted in Figs.~15-16. Recall that, with our notation,
these are the $S$-matrix elements that describe pion-baryon scattering
(no `in' or `out' $\sigma$'s, only intermediate $\sigma$'s)
in which the orbital angular momentum quantum number is preserved
$(L=L'),$ as opposed to changing up or down by two units (as it
can for $\pi N\rightarrow\pi\Delta$).
The bulk of the attraction in the present model, due primarily
to the intermediate $\sigma$-meson states, shows up in
the phase-shifts of Fig.~16, with $L=L'=K+1$.
Here one sees resonances (phase shifts rising rapidly
through 90 degrees) in each partial wave. In the $L=L'=K-1$
partial waves (Fig.~15), one also sees attraction, although not so strong as
to produce resonances. The surprise here
is in the channel $K=1,$ $L=L'=0$, which reveals the
presence of a \it bound state \rm (Levinson's theorem).
Once one folds in the appropriate group theory in the following Section
to project the hedgehog onto physical baryons, the existence of
such a bound state manifests itself as a parity conjugate to the nucleon.
This feature is, unfortunately, \it not \rm found
in Nature, nor in the Skyrme model, and is an
unwanted, unphysical artifact of the present strongly-coupled
$\sigma$-$\pi$ model.
On the other hand, an \it improvement \rm over the Skyrme model is the
fact that all these phase-shifts (as well as those not plotted) eventually
return to zero for sufficiently high energies. In contrast, in the
Skyrme model they apparently grow without bound,
eventually violating the unitarity constraints
of quantum field theory, although admittedly
at energies where several key approximations made
in Refs.~\onlinecite{HEHW,Sig,MandK,MandP}, such as the neglect
of skyrmion recoil, are clearly unwarranted.
Also noteworthy in Fig.~15 are the cusp effects due to the
opening of the $\sigma$ threshold at $\omega = 5 \ {\rm fm}^{-1}$.
This is most apparent in the $K=1, L=0$ phase shift, but the effect is
present in the higher partial waves as well.
\section{$\pi N$ ELASTIC SCATTERING}
\label{physS}
\subsection{Group-theoretics for meson-nucleon scattering}
In the previous Section we derived an $S$-matrix for the scattering
of pions and $\sigma$'s off hedgehogs. The scattering information
is encoded in partial-wave amplitudes we called $S_K$ and
${\sf S}^K_{ij}$ where ${\sf S}^K$ is a $2\times2$ matrix below the
$\sigma$ threshold and a $3\times3$ matrix above it (except when $K=0$
in which case ${\sf S}^K$ is $1\times1$ or $2\times2$,
respectively). The integer index $K$ labels the vectorial sum
of the incoming or outgoing meson's isospin and angular momentum. $K$
is conserved when the meson scatters off an object with hedgehog
symmetry, in the same way that orbital angular momentum $L$ is
conserved in scattering from a spherically symmetric potential.
Of course, what we are really interested in is scattering, not from
a hedgehog, but rather from a nucleon or $\Delta.$ The relationship
between the two problems, ``physical scattering'' versus ``hedgehog
scattering,'' is contained in the following group-theoretic
expression\onlinecite{HEHW,Sig,MandK,MandP,Karliner,ninj}:
\begin{eqnarray}
S_{LL'\, RR'\, I_{\rm tot}J_{\rm tot}}(\omega) \, = \,
\sum_K\, S_{KLL'}(\omega) \cdot
&&(-)^{R'-R} [(2R+1)(2R'+1)]^{1/2} (2K+1) \nonumber\\
&&\times \left\{\matrix{K&I_{\rm tot}&J_{\rm tot}\cr R&L&I\cr }\right\}
\left\{\matrix{K&I_{\rm tot}&J_{\rm tot}\cr R' &L' &I'\cr }\right\}\ .
\label{Sphysical}
\end{eqnarray}
Here $\omega$ is the meson energy in the baryon rest frame (baryon recoil
being subleading in $1/N_c$), $L$
($L'$) is the initial (final) orbital angular momentum, $I$ ($I'$)
is the isospin of the incoming (outgoing) meson, and $R$ ($R'$) is
the spin/isospin of the initial (final) $I=J$ baryon (e.g., $R=1/2$
for a nucleon, $R=3/2$ for a $\Delta$, etc.). For physical scattering,
$K$ is no longer conserved; it is just a dummy summation index,
constrained by the triangle inequalities implicit
in the 6$j$ symbols.\footnote{
\divide\baselineskip by 2
Note: if either the incoming or the
outgoing meson is a $\sigma,$ then the associated 6$j$ symbol has a zero
in it and collapses to a product of Kronecker $\delta$'s. Conversely, the
generalization of this expression to mesons that carry both isospin and
spin, such as $\rho$'s, involves 9$j$ symbols, and is given in
Ref.~\onlinecite{ninj}.}
Instead, the conserved quantities are, as they must be, the total meson+baryon
isospin and angular momentum, $I_{\rm tot}$ and $J_{\rm tot}$.
The $S$-matrix element on the left-hand side is a physical
partial-wave amplitude that can be compared directly with experiment.
The ``reduced $S$-matrix'' under the summation is a meson-hedgehog
amplitude, in slightly different notation than that of the previous
Section. The relation between the two notations is: when the incoming
and outgoing mesons are each pions, then $S_{KKK}=S_K$,
$S_{K,K-1,K-1}={\sf S}^K_{11}$, $S_{K,K+1,K+1}={\sf S}^K_{22}$,
$S_{K,K-1,K+1}=S_{K,K+1,K-1}={\sf S}^K_{12}$; when they are both $\sigma$'s
then $S_{KKK}={\sf S}^K_{33}$; and when the incoming meson is a pion and the
outgoing meson is a $\sigma$, then $S_{K,K-1,K}={\sf S}^K_{13}$
and $S_{K,K+1,K}={\sf S}^K_{23}$; with all other elements vanishing.
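For completeness, a small Python/SymPy sketch of Eq.~(\ref{Sphysical}) follows; the reduced amplitudes are entered as placeholder numbers in the $S_{KLL'}$ notation just described, rather than taken from the calculation of the previous Section.
\begin{verbatim}
from sympy import sqrt, Rational
from sympy.physics.wigner import wigner_6j

def six_j(*args):
    try:
        return wigner_6j(*args)
    except ValueError:            # non-triangular combinations do not contribute
        return 0

# placeholder reduced amplitudes S_{K L L'} (pion-pion entries only, I = I' = 1)
S_red = {(1, 0, 0): 0.90 + 0.10j, (1, 1, 1): 0.95 + 0.05j, (1, 2, 2): 0.98 + 0.02j,
         (2, 1, 1): 0.90 + 0.20j, (2, 2, 2): 0.97 + 0.03j, (2, 3, 3): 0.99 + 0.01j}

def S_physical(L, Lp, R, Rp, I_tot, J_tot, I=1, Ip=1):
    """Eq. (Sphysical): physical meson-baryon partial wave from reduced amplitudes."""
    total = 0
    for (Kg, Lr, Lrp), amp in S_red.items():
        if (Lr, Lrp) != (L, Lp):
            continue
        w1 = six_j(Kg, I_tot, J_tot, R,  L,  I)
        w2 = six_j(Kg, I_tot, J_tot, Rp, Lp, Ip)
        total += amp*(-1)**(Rp - R)*sqrt((2*R + 1)*(2*Rp + 1))*(2*Kg + 1)*w1*w2
    return complex(total)

half = Rational(1, 2)
print(S_physical(1, 1, half, half, half, half))   # e.g. the P_11 pi-N amplitude
\end{verbatim}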
\subsection{The Big-Small-Small-Big pattern}
For the remainder of this paper we specialize to the elastic case
$\pi N\rightarrow\pi N.$ For each value of $L=L',$ there are then four
\it a priori \rm independent partial wave amplitudes, traditionally denoted
$L^{}_{2I_{\rm tot},2J_{\rm tot}}$. For example, in the case of
$F$-wave scattering ($L=3$) the four physical amplitudes are $F_{15},$
$F_{17},$ $F_{35},$ and $F_{37}.$ But to leading-order in
large-$N_c,$ only two out of these four are independent. One can,
for instance, solve for the two isospin-$3/2$ amplitudes as energy-independent
linear combinations of the two isospin-$1/2$ amplitudes\onlinecite{HEHW,MandP};
this is an example of the $I_t=J_t$ rule\onlinecite{Muk,Action}. This holds
in the Skyrme model, and because the group-theoretic expression
(\ref{Sphysical})
is the same, necessarily in the present $\sigma$-$\pi$ model as well.
These relations are reasonably well obeyed by the experimental $\pi N$
partial-wave data\onlinecite{MandP}, and are model-independent
tests of large $N_c.$
Another interesting fact about the experimental data (see Fig.~4
\it ff.\rm,
Ref.~\onlinecite{MandP}): If for each $L$ one juxtaposes the
four amplitudes in the above order, namely $L_{1,2L-1}$, $L_{1,2L+1}$,
$L_{3,2L-1}$ and $L_{3,2L+1}$, then they reveal a striking pattern
termed the ``Big-Small-Small-Big'' pattern. Namely, the outer two
amplitudes, $L_{1,2L-1}$
and $L_{3,2L+1}$, are characterized by relatively large
excursions of the $S$-matrix element through the Argand circle, while
the inner two amplitudes, $L_{1,2L+1}$ and $L_{3,2L-1}$, show relatively
much less motion. The Big-Small-Small-Big pattern is the
single most consistent pattern characterizing the partial-wave $S$-matrix
as a whole (the only clear exception to it being the $D_{35}$).
Reproducing the Big-Small-Small-Big pattern is one of the noteworthy
successes of the Skyrme model\onlinecite{MandK}. It is equally
well reproduced by the present $\sigma$-$\pi$ model, as we illustrate
in Fig.~17. In fact, the pattern emerges
for the same dynamical reason\onlinecite{MandP}:
the fact that, for $K>1,$ in both the Skyrme model and in the $\sigma$-$\pi$
model, the phase-shifts associated with ${\sf S}^{L+1}_{11}$ are
much smaller than those of ${\sf S}^{L-1}_{22}$ (cf.~Figs.~15-16).
We therefore view
it as a model-independent success of the large-$N_c$ approach, whether
one chooses to use skyrmions or explicit baryon fields.
\subsection{The baryon spectrum of the large-$N_c$ $\sigma$-$\pi$ model}
{}From the partial-wave amplitudes it is easy to extract the baryon resonance
spectrum of the large-$N_c$ $\sigma$-$\pi$ model.
Rather than record when the phase-shifts cross 90 degrees (a crude
criterion sensitive to background potentials), a more robust definition
of a resonance, adopted by experimentalists, is to look for Lorentzian
peaks in the ``speed plots,'' i.e., the plots of
$|dT_{LI_{\rm tot}J_{\rm tot}}/d\omega|$ versus $\omega.$
The speed plots for a few selected
partial waves are depicted in Fig.~18. Some peaks are unambiguous,
whereas others are admittedly ``in the eye of the beholder,'' but the
same can be said about the corresponding experimental data.
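The criterion itself is easily illustrated with a synthetic amplitude (the short Python sketch below uses a Breit-Wigner form with arbitrary parameters, not one of our computed partial waves): a resonance at $\omega_R$ with width $\Gamma$ shows up as a peak of $|dT/d\omega|$ centred at $\omega_R$.
\begin{verbatim}
import numpy as np

omega = np.linspace(2.0, 8.0, 4000)               # meson energy (fm^-1)
omega_R, Gamma = 5.5, 0.4                         # toy resonance parameters
# smooth background plus Breit-Wigner resonance
T = 0.1*omega/8.0 + 0.5*(Gamma/2)/(omega_R - omega - 1j*Gamma/2)

speed = np.abs(np.gradient(T, omega))             # the "speed" |dT/d omega|
print("peak of the speed plot at omega = %.2f fm^-1" % omega[np.argmax(speed)])
\end{verbatim}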
Figure 19 displays the full resonance spectrum obtained in this
fashion, through the $H$-waves ($L=5$), limited to
what we subjectively consider to be ``two-star'' resonances or better.
The step-like structure, in blocks of alternating parity,
is much more pronounced than in
the Skyrme model, and certainly than in Nature. It can be partially
accounted for by noting that, for $L>1,$ the reduced amplitudes
of Fig.~16 dominate those of Figs.~14-15, so that for any
fixed value of $L,$ the resonance location in the four physical
partial-wave amplitudes can be approximated by the resonance location
in the single underlying reduced amplitude ${\sf S}^{L-1}_{22}.$
But this does not explain why the steps
arrange themselves by definite parity (as we have indicated by the
black bars below the horizontal axis), a feature for which we have no
good understanding.
In general, the resonances in the $\sigma$-$\pi$ model occur at substantially
lower energies than
in the Skyrme model, and in Nature. We have not explored the
parameter space of our model [see Eq.~(\ref{params1})]
in an attempt to rectify this disparity
(as we are confident could be done),
not just because of the computationally-intensive character of these
multi-stage calculations, but also due to the frankly ``toy'' intent
of this model, which we have constructed for illustrative purposes.
We are optimistic that a more realistic model, incorporating the
vector mesons, and properly implementing chiral symmetry, would be in
better agreement with the observed baryon spectrum, while posing no
significant additional conceptual or numerical difficulties beyond those we
have already confronted herein.
The one parameter that we \it have \rm experimented with is the nucleon
size parameter $a_N$, defined in Eq.~(\ref{gaussian}), which acts
as an ultraviolet cutoff. A variation from our nominal value $a_N=0.52\,$fm
to $a_N=0.60\,$fm shows no discernible effect on the resonance
positions, and only slight changes in the Argand plots themselves,
primarily in the $P$-waves, one of which is shown in Fig.~20.
\subsection{Some familiar problems}
We have seen that this large-$N_c$ $\sigma$-$\pi$
model (and, we presume, others like it with
explicit baryon fields) shares some notable successes with the Skyrme
model---the Big-Small-Small-Big pattern, the energy-independent
relations between the $I_{\rm tot}=1/2$ and $I_{\rm tot}=3/2$
$\pi N$ amplitudes, the overall richness of the baryon resonance
spectrum, etc. Moreover, the high-energy behavior of the partial
wave amplitudes is much better than in the Skyrme model (see Sec.~V).
Not surprisingly, the $\sigma$-$\pi$ model also shares some of the Skyrme
model's failings. Figure 21 illustrates a specific partial wave
amplitude in the $\sigma$-$\pi$ model, juxtaposed with the experimental
data. Obviously the real-world amplitude is much more inelastic than
the present model. This is because, in the higher partial waves
especially, multiple pion production soon dominates the experimental $\pi N$
amplitudes. Yet, \it formally\rm, processes such as $\pi N\rightarrow
\pi\pi\pi N$ are down by powers of $1/N_c$ compared with $\pi N\rightarrow
\pi N$, and are therefore entirely absent from leading-order theoretical
treatments such as the present paper---as well as from the leading-order
skyrmion treatments\onlinecite{HEHW,Sig,MandK,MandP,Karliner}, which
share the same problem. Below the $\sigma$ threshold, the only source
of inelasticity in the present model is the $\pi\Delta$ channel, exactly
as in the Skyrme model. A theoretical means of summing at least
\it some \rm of the $1/N_c$ corrections, namely those associated with
multiple pion production, would immeasurably improve either approach.
Just as serious is the failure of the $\sigma$-$\pi$ model to bear even
passing resemblance to experiment in the $S$ and $P$ waves. As is
well known, these waves have been the ``Achilles heel'' of the Skyrme
model too. Interestingly, whereas the Skyrme model shows
\it too few \rm resonances in these waves, the $\sigma$-$\pi$ model
errs in the opposite direction: \it too many \rm resonances,
particularly in the $P_{13}$ and $P_{31}$ waves, and including
spurious bound states in the $S_{31}$ and $S_{11}$ channels as
already noted in Sec.~V. The interested reader is referred to
Ref.~\onlinecite{MandP} for a lengthy discussion of the problems
in these lower waves in the Skyrme model,
which are related, in part, to the failure to
incorporate the translational and (iso)rotational recoil of the
hedgehog (formally $1/N_c$ corrections, but numerically important).
We expect that commentary to apply as well to models
with explicit baryon fields. For example, the
Weinberg-Tomozawa expression for the
$\pi N$ scattering lengths\onlinecite{WeinTom}, which are predicted by current
algebra, and which dominate the experimental
$S$-wave amplitudes near threshold, formally appear only at
next-leading order in $1/N_c$\onlinecite{MandP}.
This suggests that if one were to start from
an improved effective hadron Lagrangian that respects chiral
symmetry (we remind the reader that the present $\sigma$-$\pi$ model
does \it not\,\rm), and if one were to calculate to next-leading order
in $1/N_c,$ the most glaring
disagreement with experiment in the $S$-waves ought to be repaired.
Fixing the $P$-waves will require, at the least, ($i$) the splitting
of the $\Delta$ from the nucleon (again, a $1/N_c$ effect), and
($ii$) the incorporation of the Compton-type diagrams,
particularly Figs.~1a and 1b, the ameliorating effect of which has already
been examined in the Skyrme model\onlinecite{DiakPet,Japs}.
\acknowledgments
We acknowledge valuable input from many of our colleagues, most notably
Peter Arnold, Charles Benesh, Nick Dorey, Jim Friar, Terry Goldman,
Gerry Hale, Jim Hughes, Marek Karliner, Arthur Kerman, Wim Kloet,
Jim McNeil, Charles Price, Rob Timmermanns, and John Tjon. We also
thank Aneesh Manohar for commenting on the draft.
This work has been supported by
the Division of High Energy and Nuclear Physics,
Energy Research, Department of Energy. MPM has also benefitted from
an SSC Fellowship during part of the time we have been
working on this problem.
\section{Introduction}
It is well known that vortex motion under the action of an applied current in superconducting films in a perpendicular magnetic field $B$ at high dissipation levels becomes unstable at some critical vortex velocity $v^\ast$. For the flux flow regime at temperatures near the superconducting transition temperature $T\lesssim T_c$ this instability was theoretically treated by Larkin and Ovchinnikov~(LO)~\cite{Lar75etp}. Their theory predicts that $v^\ast$ is \emph{independent} of $B$, which was experimentally confirmed for low-$T_c$ \cite{Kle85ltp,Vol92fnt,Per05prb} and high-$T_c$ \cite{Doe94prl,Xia96prb,Xia99prb} superconducting films. In subsequent experiments, a crossover from the magnetic-field independent behavior at high fields to $v^\ast\propto B^{-1/2}$ at low fields was reported by Doettinger \emph{et al.} \cite{Doe95pcs}. This low-field behavior was explained \cite{Doe95pcs} by the requirement that $v^\ast$ multiplied by the inelastic quasiparticle scattering time must reach at least the intervortex distance to ensure the spatial homogeneity of the nonequilibrium quasiparticle distribution the LO theory relies upon.
However, experiments performed on YBCO films at low temperatures $T\ll T_c$~\cite{Kun02prl,Kni06prb} showed an instability with a universal dependence $v^\ast\propto B^{-1/2}$ whose underlying physical mechanism was essentially different from the LO instability picture. With an account for the Bardeen-Stephen nonlinear conductivity \cite{Bar65prv}, Kunchur has shown \cite{Kun02prl,Kni06prb} that this new behavior can be explained by a simple model in which the electron gas has a thermal-like Fermi distribution function characterized by a higher temperature than that of the phonons and the bath. In contradistinction to the standard LO picture, the main effects in the Kunchur instability~\cite{Kun02prl,Kni06prb} are a rise of the electronic temperature, creation of additional quasiparticles, and a reduction of the superconducting gap. The vortex expands rather than shrinks, and the viscous drag is reduced because of a softening of gradients of the vortex profile rather than a removal of quasiparticles from the vortex core, as supposed within the framework of the LO theory. As the electron temperature rises, the resulting resistivity increase leads to a decrease in current above a certain value of the electric field. That is, the current-voltage curve (CVC) becomes non-monotonic in $j$ and exhibits an electric field instability. All experimental observables for the hot-electron instability were calculated in Ref. \cite{Kun02prl}. The experimental results on YBCO were successfully fitted to the predicted $B$-dependences and $j(E)$ curves \emph{in absence of pinning} without any adjustable parameters~\cite{Kun02prl,Kni06prb}.
The objective of this paper is to theoretically consider the hot-electron vortex flow instability in low-$T_c$ superconducting thin films at $T \ll T_c$ in the \emph{presence of pinning}. This study is motivated by two aspects. Firstly, low-$T_c$ superconductors are characterized by a simple electronic structure, thus allowing one to use a simpler heat balance equation than that for YBCO. Secondly, vortex pinning in low-$T_c$ films is usually stronger than in high-$T_c$ epitaxial films so that it has to be properly taken into account. It should be emphasized that neither the LO \cite{Lar75etp} nor the Kunchur \cite{Kun02prl} approach captures vortex pinning in the physical picture of the flux-flow instability in the nonlinear CVC. In experimental samples, however, vortex pinning is omnipresent and there is growing interest in addressing pinning effects on the instability critical parameters in superconductors
\cite{Leo10pcs,Gri12apl,Gri15prb,Leo16prb}, in particular those with artificial pinning structures \cite{Sil12njp}. While a recent theoretical account for the LO instability at $T \simeq T_c$ can be found in Ref. \cite{Shk17snd}, the respective generalization of the Kunchur approach at temperatures $T\ll T_c$ has not been elaborated so far.
Both these aspects will be addressed in this paper. Namely, Sec. \ref{SecInst} presents a phenomenological approach to account for pinning effects on the two simplest CVCs in the flux-flow regime. These CVCs are exemplary, as they are calculated at $T=0$ for two pinning potentials of the washboard type, namely for a cosine washboard pinning potential (WPP) \cite{Shk08prb,Shk11prb} and for a saw-tooth WPP~\cite{Shk99etp,Shk06prb}. A cosine WPP is widely used in theoretical papers, see e.g. Refs. \cite{Mar76prl,Che91prb,Cof91prl,Maw97prb,Maw99prb,Shk14pcm}. At the same time, both model WPPs are realistic as they can be realized by various experimental techniques, see e.\,g. Ref. \cite{Dob17pcs} for a review. For instance, both WPPs can be used for modelling the resistive responses of nanopatterned superconductors with uniaxial anisotropic pinning induced either by ferromagnetic stripes deposited onto the film surface \cite{Dob10sst,Dob11pcs,Dob11snm} or nanogrooves milled in the film \cite{Dob11snm,Dob12njp,Dob16sst}. In addition, the understanding of pinning effects on the flux-flow instability is the key to expanding the current-operation range of microwave applications \cite{Lar15nsr,Dob15apl,Dob15met,Sil17inb} and it is crucial for the development of superconducting devices of the fluxonic type \cite{Dob17pcs}. Both WPPs allow one to reproduce the calculation of the hot-electron instability in the spirit of Refs. \cite{Kun02prl,Kni06prb} and to solve a simpler heat balance equation within the framework of the two-fluid model in Sec. \ref{SecPow}. While in the limiting case of no pinning the results of Refs. \cite{Kun02prl,Kni06prb} are recovered, the approach presented in what follows provides simpler and intuitively clearer physics.
\section{Instability parameters}
\label{SecInst}
\subsection{Problem definition}
The effect of a WPP on the flux-flow overheating instability is considered at substrate temperature $T_0\ll T_c$, as earlier studied by Kunchur in absence of pinning \cite{Kun02prl}. For simplicity, the problem is considered at $T_0 = 0$, when the transport current flows along the WPP channels, refer to the upper inset in Fig.\,\ref{fig1}. In this geometry the vortices experience the action of the Lorentz force in the direction transverse to the pinning channels. The respective nonlinear CVC of the sample can be presented as
\begin{equation}
\label{eCVC}
\sigma E = j\nu(j).
\end{equation}
Here $E$ is the longitudinal electric field, $j$ is the density of the transport current, and $0\leq \nu(j) \leq 1$ is a nonlinear function with the condition $\nu(j) = 0$ for $j < j_c$, where $j_c$ is the critical (depinning) current density, refer to Fig. \ref{fig1}. The nonlinear function $\nu(j)$ appears in Eq. (\ref{eCVC}) due to the effect of the WPP on the vortex dynamics. In Eq.~\eqref{eCVC} $\sigma = \sigma(T) = \sigma_nH_{c2}(T)/B$ is the temperature-dependent Bardeen-Stephen~\cite{Bar65prv} flux-flow conductivity, $\sigma_n$ is the normal metal film conductivity at $T\approx T_c$, $H_{c2}$ is the upper critical field, and $B$ is the flux density applied perpendicular to the film.
If $j_c\rightarrow 0$, then $\nu(j) \rightarrow 1$ and the linear CVC $\sigma E = j$ follows from Eq.~\eqref{eCVC}. The expression $\sigma E = j$ was used by Kunchur \cite{Kun02prl,Kni06prb} as the initial form of the CVC. Two different WPP forms resulting in two different $\nu(j)$ functions plotted in Fig. \ref{fig1} will be considered next.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{./fig1}
\caption{Left axis: The nonlinear current-voltage curve $E(j)$ (red online) calculated in the limit of low temperatures for a cosine (blue online) WPP of Refs. \cite{Shk08prb,Shk11prb} (\emph{curve 1}) and a saw-tooth (black online) WPP of Refs. \cite{Shk99etp,Shk06prb} (\emph{curve 2}). The dashed line corresponds to the free flux-flow regime $E/j = \rho_f$, where $\rho_f$ is the flux-flow resistivity. Right axis: The respective nonlinear functions $\nu(j)$ calculated by Eq. (27) of Ref. \cite{Shk08prb} (\emph{curve 1}) and Eq. (28) at $\epsilon=1$ of Ref. \cite{Shk99etp} (\emph{curve 2}). Inset: Atomic force microscope image of a Nb film surface with a nanogroove array milled by focused ion beam \cite{Dob16sst} and inducing a pinning potential of the washboard type.}
\label{fig1}
\end{figure}
\subsection{Cosine pinning potential}
\label{ssCosine}
For the cosine WPP, $\nu(j) = \sqrt{1 - (j_c/j)^2}$~\cite{Shk08prb} and
\begin{equation}
\label{eCosine}
\sigma E = \sqrt{j^2 - j_c^2} \qquad \mathrm {or} \qquad j = \sqrt{j_c^2 + \sigma ^2 E^2}.
\end{equation}
In the overheating approach of Kunchur \cite{Kun02prl,Kni06prb}, in the vortex state of a film with quasiparticle temperature $T = T(E)$ the CVC instability in Eq.~\eqref{eCosine} appears as a region of negative differential conductivity, where $j$ decreases as a function of $E$. The values $j^\ast$ and $E^\ast$ at the instability point can be determined from a set of equations which include the heat balance equation
\begin{equation}
\label{eHeatBalance}
P\tau_e = \int_0^T C(T^\prime)dT^\prime
\end{equation}
and the CVC extremum condition
\begin{equation}
\label{eCVCextremum}
\frac{dj}{dE}\Big|_{E=E^\ast} = 0.
\end{equation}
Here $P = jE$ is the dissipated power, $\tau_e(T)$ is the energy relaxation time, and $C(T)$ is the electronic specific heat per unit volume. As follows from Eq.~\eqref{eCVCextremum},
\begin{equation}
\label{eCondition}
\frac{dE}{dT}\Big|_{E=E^\ast} = -E^\ast[\sigma^\prime(T)/\sigma(T)]\Big|_{T=T^\ast},
\end{equation}
where the prime denotes differentiation with respect to temperature. Substitution of Eq.~\eqref{eCondition} into the relation $d(P\tau_e)/dT = C(T)$, which follows from Eq.~\eqref{eHeatBalance}, leads to the expression
\begin{equation}
\begin{array}{lll}
\label{eCsigma}
C(T^\ast)\sigma(T^\ast) = E^\ast [\tau^\prime(T^\ast)\sigma(T^\ast) - \tau(T^\ast)\sigma^\prime(T^\ast)]\times\\[2mm]
\qquad\qquad\qquad\qquad\qquad\qquad\times\sqrt{j_c^2 + \sigma^2(T^\ast) E^{\ast 2}},
\end{array}
\end{equation}
which in absence of pinning (i.\,e. when $j_c = 0$) reduces to the following expression for $E_0^\ast$ [see also Eq.~(5) in Ref.~\cite{Kni06prb}]:
\begin{equation}
\label{eE0ast}
E_0^{\ast 2} = C(T^\ast)\rho_n B/[H_{c2}(T)\tau^\prime_e(T) + H_{c2}^\prime(T)\tau_e(T)]\Big|_{T = T^\ast}.
\end{equation}
Taking the square of Eq.~\eqref{eCsigma} it is easy to show that $z \equiv (E^\ast/E_0^\ast)^2$ can be found from the equation
\begin{equation}
\label{eZ2}
z^2 + 2\mu z - 1 = 0,
\end{equation}
where the dimensionless parameter $\mu$ links the instability problem with and without pinning through the relation
\begin{equation}
\label{e2mu}
2\mu \equiv (j_c/j_0^\ast)^2.
\end{equation}
For $j_c = 0$ one has $\mu = 0$ and $z =1$, i.\,e. one returns to the problem discussed in Refs.~\cite{Kun02prl,Kni06prb}. In the general case $0 \leq \mu <\infty$, and the solution of Eq.~\eqref{eZ2} reads
\begin{equation}
\label{eZsolution}
z = \sqrt{1 + \mu^2} - \mu = 1/(\sqrt{1 + \mu^2} + \mu).
\end{equation}
From Eq.~\eqref{eZsolution} it follows that $z(\mu)$ monotonically decreases with increasing $\mu$, i.\,e. $E^\ast$ decreases with increasing $j_c$. Next, from Eq.~\eqref{eCosine} it follows $j^{\ast2} = j_c^2 + z j_0^{\ast2}$ and, if we define $y\equiv(j^\ast/j_0^\ast)^2$,
\begin{equation}
\label{eY}
y = 1/z = \sqrt{1 + \mu^2} + \mu.
\end{equation}
From Eq.~\eqref{eY} it follows that $y(\mu)$ monotonically increases, i.\,e. $j^\ast$ increases with increasing $j_c$.
Now, having analyzed the $\mu$-behavior of $E^\ast$ and $j^\ast$, it is possible to derive the $\mu$-dependences of several related responses at the instability point. These responses are the critical vortex resistivity $\rho^\ast = E^\ast/j^\ast$, the critical vortex velocity $v^\ast/c = E^\ast/B$, and the dissipated power $P^\ast = E^\ast j^\ast$. Accordingly, using Eqs.~\eqref{eZsolution} and~\eqref{eY} one concludes that the critical velocity $v^\ast(\mu) \sim E^\ast(\mu)$ is monotonically decreasing in $\mu$, while $P^\ast = P^\ast_0$ does not depend on $\mu$, and
\begin{equation}
\label{eRast}
\rho^\ast = \rho^\ast_0/(\sqrt{1 + \mu^2} + \mu).
\end{equation}
From Eq.~\eqref{eRast} it follows that $\rho^\ast(\mu)$ monotonically decreases in $\mu$, i.\,e. $\rho^\ast$ decreases as $j_c$ increases.
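For convenience, the $\mu$-dependences derived so far are collected in the following short numerical sketch (Python); the values of $\mu$ are arbitrary illustrations.
\begin{verbatim}
import numpy as np

def cosine_wpp_ratios(mu):
    """Eqs. (eZsolution), (eY), (eRast): critical parameters with pinning,
    normalized to their pin-free values, for the cosine WPP."""
    s = np.sqrt(1.0 + mu**2) + mu
    return {"E*/E0* = v*/v0*": 1.0/np.sqrt(s),   # since z = 1/s
            "j*/j0*":          np.sqrt(s),       # since y = s
            "rho*/rho0*":      1.0/s,            # Eq. (eRast)
            "P*/P0*":          1.0}              # P* does not depend on mu

for mu in (0.0, 0.5, 2.0):
    print(mu, cosine_wpp_ratios(mu))
\end{verbatim}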
After the analysis of the $j_c$-behavior of the critical parameters $E^\ast, j^\ast, \rho^\ast, P^\ast$, and $v^\ast$, it is instructive to analyze also their $B$-dependences at $j_c = const$. In other words, for a moment it is supposed that $j_c$ is independent of $B$. To proceed with the $B$-analysis, it is necessary to recall the $B$-dependences of the critical parameters $E^\ast_0, j^\ast_0, \rho^\ast_0, P^\ast_0$, and $v^\ast_0$ in absence of pinning (i.\,e. at $j_c =0$). Previously it was shown \cite{Kun02prl,Kni06prb} that $E_0^\ast = \kappa\sqrt{B}$, $j_0^\ast = \gamma/\sqrt{B}$, $\rho_0^\ast = \alpha B$, $v_0^\ast/c = \kappa/\sqrt{B}$, and $P_0^\ast$ is independent of $B$. Here the two constants $\alpha$ and $\gamma$ have been introduced such that $\kappa =\alpha\gamma$. Their values can be obtained from theory and compared with experiment, see Refs. \cite{Kun02prl,Kni06prb}. Then it follows that $\mu = j_c^2/2j_0^{\ast2} = \varepsilon_c B$, where $\varepsilon_c = j_c^2/2\gamma^2$. From the latter it follows that $\mu$ is an increasing function of $j_c$ and $B$.
Unfortunately, a direct inspection is not sufficient to check Eqs.~\eqref{eZsolution}--\eqref{eRast} for monotonicity in $B$. The corresponding critical parameters and their $B$-derivatives should be calculated for this. For $dE^\ast/dB$ one has
\begin{equation}
\label{edEdB}
dE^\ast/dB = \kappa/[2\sqrt{B}\sqrt{1+\mu^2}(\sqrt{1+\mu^2} +\mu)^{3/2}] >0.
\end{equation}
As it follows from Eq.~\eqref{edEdB}, $E^\ast(B)$ monotonically increases with growing $B$ while $dE^\ast/dB$ decreases. The behavior of $\rho^\ast(B)$ is similar, because
\begin{equation}
\label{edRdB}
d\rho^\ast/dB = \alpha/[\sqrt{1+\mu^2}(\sqrt{1+\mu^2} +\mu)^2] >0,
\end{equation}
i.\,e. it monotonically increases with growing $B$ while $d\rho^\ast/dB$ decreases. For $dj^\ast/dB$ one obtains
\begin{equation}
\label{edJdB}
dj^\ast/dB = -\gamma/[2B^{3/2}\sqrt{1+\mu^2}(\sqrt{1+\mu^2} +\mu)^{1/2}] <0.
\end{equation}
From Eq. \eqref{edJdB} it follows that $j^\ast(B)$ monotonically decreases with growing $B$ while $dj^\ast/dB$ decreases. Finally, it is interesting to derive the $B$-dependence of the critical vortex velocity $v^\ast/c = E^\ast/B$, which for the LO instability~\cite{Lar75etp} does not depend on $B$ (see also the Bezuglyj-Shklovskij (BS) generalization of the LO instability where the $B$-dependence appears for fields larger than the overheating field $B>B_T$ \cite{Bez92pcs}). It can be shown that
\begin{equation}
\label{edVdB}
dv^\ast/dB = -(c\kappa/2B^{3/2})\sqrt{\sqrt{1+\mu^2}+\mu}/\sqrt{1+\mu^2} <0.
\end{equation}
From Eq. \eqref{edVdB} it follows that $v^\ast(B)$ monotonically decreases with growing $B$ and $d v^\ast/dB$ does so.
Up to this point, the analysis of Eqs.~\eqref{edEdB}-\eqref{edVdB} was done for $j_c$ being $B$-independent, when $\mu = j_c^2/2j_0^{\ast2} = \varepsilon_c B$ was proportional to $B$. In reality, however, $j_c$ depends upon $B$ and $\mu(B) = j_c^2(B)B/2\gamma^2$ has a more complex $B$-dependence. In order to analyze the $B$-dependence of $\mu(B)$ following from the $j_c(B)$ behavior, the following scaling is assumed for simplicity
\begin{equation}
\label{eScaling}
j_c(B) = j_B(B_c/B)^m,
\end{equation}
where $j_B$ and $B_c$ are fitting parameters which provide correct values of $j_c(B)$, while the exponent $m>0$ is the main parameter which determines the $B$-behavior of $\mu(B)$. It is clear from Eq.~\eqref{eScaling} that for $m=0$ one returns to the $B$-independent case $j_c = const$, while for $m=1/2$ one has $\mu(B)=const$, i.\,e. it is independent of $B$. Hence, for the determination of the $B$-dependence of the critical parameters it is necessary to calculate the derivative $d\mu(B)/dB$. Whereas for $j_c=const$ the derivative $d\mu/dB = \varepsilon_c$ was $B$-independent, now it reads
\begin{equation}
\label{edMdB}
d\mu(B)/dB = (1-2m)\mu(B)/B.
\end{equation}
From Eq. \eqref{edMdB} it follows that $d\mu/dB$ equals zero and changes its sign at $m=1/2$. In other words, $\mu(B)$ \emph{decreases} with growing $B$ for $m>1/2$, whereas $\mu(B)$ \emph{increases} for $0<m<1/2$.
Now it is time to turn to an analysis of the influence of the dependences given by Eqs. \eqref{eScaling} and \eqref{edMdB} on the $B$-dependence of the critical parameters $E^\ast, j^\ast, \rho^\ast, v^\ast$ and their $B$-derivatives. Since
\begin{equation}
\label{eEvB}
E^\ast(B) = \kappa\sqrt{B}/\sqrt{\sqrt{1+\mu^2}+\mu},
\end{equation}
then it follows that the denominator in Eq.~\eqref{eEvB} is decreasing with growing $B$ for $m>1/2$, thereby resulting in $E^\ast(B)$ increasing more rapidly than $E^\ast_0 = \kappa\sqrt{B}$. For $m<1/2$ the denominator is increasing with growing $B$ and, hence, the derivative $dE^\ast/dB$ should be analyzed. The result is
\begin{equation}
\label{edEvB}
\displaystyle\frac{dE^\ast}{dB} = \displaystyle\frac{\kappa}{2\sqrt{B}}\frac{1+2m\mu(\sqrt{1+\mu^2}+\mu)}{\sqrt{1+\mu^2}(\sqrt{1+\mu^2}+\mu)^{3/2}}>0.
\end{equation}
As follows from Eq. \eqref{edEvB}, $E^\ast(B)$ increases with growing $B$ for any $m>0$, but the rate of this increase depends upon whether $m>1/2$ or $0<m<1/2$. It is instructive to point out that the $\mu(B)$ dependence reads
\begin{equation}
\label{eMuvB}
\mu(B) = j_c^2(B)B/2\gamma^2 = KB^{1-2m},
\end{equation}
where $K = j_B^2B_c^{2m}/2\gamma^2$. Equation \eqref{edMdB} follows at once from Eq. \eqref{eMuvB}.
As for $j^\ast(B)$, one has [see Eq. \eqref{eY}]
\begin{equation}
\label{ejAst}
j^\ast = j_0^\ast\sqrt{\sqrt{1+\mu^2}+\mu} = (\gamma/\sqrt{B})\sqrt{\sqrt{1+\mu^2}+\mu}.
\end{equation}
It follows from Eq.~\eqref{ejAst} that for $m>1/2$, $j^\ast(B)$ is decreasing faster with growing $B$ than $j_0^\ast(B) = \gamma/\sqrt{B}$, whereas for $m<1/2$ the situation is not clear because of the $\mu$-dependent multiplier in Eq. \eqref{ejAst} increasing with growing $B$. The calculation of $dj^\ast/dB$ yields
\begin{equation}
\label{edjAstdB}
\displaystyle\frac{dj^\ast}{dB} = -\displaystyle\frac{\gamma}{2B\sqrt{B}}\sqrt{\sqrt{1+\mu^2}+\mu}\left[1-\displaystyle\frac{(1-2m)\mu}{\sqrt{1+\mu^2}}\right] <0,
\end{equation}
because at any $m>0$ and $\mu$ the expression in the brackets is positive.
Now it is possible to write down an expression for the $B$-dependent resistivity at the instability point
\begin{equation}
\label{eRfvB}
\rho^\ast(B)=\alpha B/(\sqrt{1+\mu^2}+\mu).
\end{equation}
From Eq.~\eqref{eRfvB} it follows that $\rho^\ast(B)$ is increasing with growing $B$ more rapidly than $\rho^\ast_0(B) = \alpha B$ for $m > 1/2$ due to the denominator of Eq.~\eqref{eRfvB} decreasing with growing $B$. For $m < 1/2$, again, the derivative $d\rho^\ast/dB$ should be calculated. This yields
\begin{equation}
\label{edRAstdB}
d\rho^\ast/dB = \alpha \left[1 -\displaystyle\frac{(1-2m)\mu}{\sqrt{1+\mu^2}}\right]/(\sqrt{1+\mu^2} + \mu) >0
\end{equation}
because as in Eq.~\eqref{edjAstdB} the expression in the brackets is positive, i.\,e. $\rho^\ast(B)$ always increases with growing $B$. Finally, the $B$-dependence of $v^\ast$ should be considered
\begin{equation}
\label{eVastB}
v^\ast(B) = c\kappa /\sqrt{B}\sqrt{\sqrt{1+\mu^2} + \mu}.
\end{equation}
From Eq.~\eqref{eVastB} it follows that for $m< 1/2$ $v^\ast(B)$ is decreasing with growing $B$ faster than $v^\ast_0(B) = c\kappa/\sqrt{B}$ and for $m> 1/2$ the derivative $dv^\ast/dB$ should be calculated. The result is
\begin{equation}
\label{edVAstdB}
dv^\ast/dB = -c\kappa \left[1 +\displaystyle\frac{(1-2m)\mu}{\sqrt{1+\mu^2}}\right]/2B\sqrt{B}\sqrt{\sqrt{1+\mu^2} + \mu}.
\end{equation}
Equation~\eqref{edVAstdB} reduces to Eq.~\eqref{edVdB} in the limit $m=0$, when $dv^\ast/dB<0$. For $m>1/2$ it is easy to show that the bracket in Eq.~\eqref{edVAstdB} may be negative at $m>(1 + \sqrt{1+1/\mu^2})/2$; this threshold reduces to $m\simeq 1 + 1/(2\mu)^2$ for $\mu\gtrsim 2$. In this case $dv^\ast/dB > 0$, i.\,e. it \emph{changes its sign} when $B\rightarrow 0$.
The new results given by Eqs. \eqref{edEvB}-\eqref{edVAstdB}, derived using the $j_c(B)$ and $\mu(B)$ dependences given by Eqs.~\eqref{eScaling} and~\eqref{eMuvB}, respectively, can be briefly summarized as follows. The main result for the $B$-dependences of the critical parameters $E^\ast(B)$, $j^\ast(B)$, $\rho^\ast(B)$ and $P^\ast(B)$ consists in \emph{maintaining the monotonicity of their $B$-dependences} for the case $j_c = j_c(B)$ given by Eq.~\eqref{eScaling}. In other words, the $B$-derivatives of these parameters maintain the same sign as for $j_c = const$, see Eqs. \eqref{edEvB}, \eqref{edjAstdB}, and \eqref{edRAstdB}. At the same time, for the $B$-dependent critical current given by Eq.~\eqref{eScaling} a \emph{sign change} of $dv^\ast/dB$ is possible for $m\gtrsim1$, see Eq.~\eqref{edVAstdB} and the subsequent discussion. That is, the monotonicity of $v^\ast(B)$ at small $B$ may be \emph{violated}. Moreover, since usually $j_c(B)$ at small $B$ can be approximated again by Eq.~\eqref{eScaling} with $m<1/2$, there may be a \emph{second sign change} in $dv^\ast/dB$ at $B$ close to the first critical field $B_{c1}(T)$. This is sometimes observed in experiments \cite{Gri09pcm,Leo10pcs,Gri10prb,Gri12apl,Gri11snm,Sil12njp}. To conclude, the phenomenological approach presented here, using the experimentally measured $B$-dependent critical current, provides simple physics which can explain the nonmonotonic behavior of $v^\ast(B)$ at small $B$.
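This possible sign change is easy to visualize numerically; the following sketch evaluates $v^\ast(B)$ of Eq.~\eqref{eVastB} with $\mu(B)=KB^{1-2m}$ from Eq.~\eqref{eMuvB} (arbitrary units, $c=\kappa=K=1$) and confirms that for $m$ somewhat above unity the critical velocity develops a low-field maximum, whereas for small $m$ it decreases monotonically.
\begin{verbatim}
import numpy as np

def v_ast(B, m, K_amp=1.0, kappa=1.0, c=1.0):
    """Eq. (eVastB) with mu(B) = K_amp*B**(1 - 2m), cf. Eq. (eMuvB)."""
    mu = K_amp*B**(1.0 - 2.0*m)
    s = np.sqrt(1.0 + mu**2) + mu
    return c*kappa/(np.sqrt(B)*np.sqrt(s))

B = np.logspace(-2, 2, 400)
for m in (0.3, 1.2):
    v = v_ast(B, m)
    i = np.argmax(v)
    if 0 < i < len(B) - 1:
        print("m = %.1f: non-monotonic, maximum of v*(B) at B = %.2f" % (m, B[i]))
    else:
        print("m = %.1f: v*(B) decreases monotonically" % m)
\end{verbatim}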
\subsection{Saw-tooth pinning potential}
\label{SubsectsST}
For the saw-tooth WPP \cite{Shk99etp}, $\nu(j) = 1 - (j_c/j)^2$ for $j > j_c$ and $\nu(j) = 0$ for $0< j < j_c$. Then, $\sigma E = (j^2 - j_c^2)/j$ for $j>j_c$ or
\begin{equation}
\label{eSaw}
j = (\sigma E/2)[1 + \sqrt{1+(2j_c/\sigma E)^2}]
\end{equation}
Repeating the steps, which were detailed above for the cosine WPP in Sec.~\ref{ssCosine}, it can be shown that
\begin{equation}
\label{eEE0}
E^\ast/E_0^\ast = 1/\sqrt{1+x},
\end{equation}
where $x = 2\mu = (j_c/j_0^\ast)^2$ [see also Eq.~\eqref{e2mu}]. Then
\begin{equation}
\label{ejj0}
j^\ast/j_0^\ast = \sqrt{1+x},
\end{equation}
\begin{equation}
\label{eRR0}
\rho^\ast/\rho_0^\ast = 1/(1+x),
\end{equation}
and, finally, $P = P^\ast$.
Qualitatively, the $x$-behavior of the critical parameters given by Eqs.~\eqref{eEE0}-\eqref{eRR0} is similar to the $\mu$-behavior of the analogous quantities for the cosine WPP, see Eqs.~\eqref{eZsolution}-\eqref{eRast}. This similarity can also be extended to the $B$-behavior of the $B$-derivatives of $E^\ast$, $j^\ast$, $\rho^\ast$, and $v^\ast$ given previously by Eqs.~\eqref{edEdB}--\eqref{edVdB} for $j_c = const$. Moreover, using the $B$-dependent critical current $j_c(B)$ given by Eq.~\eqref{eScaling}, it is possible to repeat qualitatively all the conclusions about the monotonicity of $E^\ast(B)$, $j^\ast(B)$, $\rho^\ast(B)$, and the possible non-monotonicity of $v^\ast(B)$. For the latter the exact expression reads
\begin{equation}
\label{eVastdB}
\frac{dv^\ast}{dB}= -\displaystyle\frac{c\kappa[1 + 2x(1-m)]}{2\sqrt{B}B(1+x)^{3/2}}
\end{equation}
From Eq.~\eqref{eVastdB} it follows that the bracket in the numerator of Eq.~\eqref{eVastdB} may be negative at $m>1$ and $x> 1/[2(m-1)]$. In this case one has $dv^\ast/dB > 0$, i.\,e. it changes its sign at $B\rightarrow 0$. Summarizing these short comments on the $j_c(B)$-behavior for the saw-tooth WPP, one can state that the main results on the monotonicity of the behavior of the critical parameters for both CVC types are qualitatively similar, i.\,e. a \emph{particular form of the WPP does not affect the considered physics}.
\section{Dissipated power and quasiparticle temperature}
\label{SecPow}
\subsection{Two-fluid approach}
In Kunchur's approach~\cite{Kun02prl}, the quasiparticle temperature for a given substrate temperature $T_0 \ll T_c$ can be determined by \emph{numerical} integration of the heat balance equation~\eqref{eHeatBalance}, where the temperature-dependent functions $\tau_e(T)$ and $C(T)$ were specifically calculated for the considered YBCO sample. As follows from Eq.~\eqref{eHeatBalance} at given $\tau_e(T)$ and $C(T)$, the quasiparticle temperature $T$ depends on the dissipated power $P = Ej$ and $T_0$. In~\cite{Kun02prl,Kni06prb} it was shown that for YBCO the quasiparticle temperature at the instability point of the CVC, $T^\ast(P)$, \emph{weakly depends} on $T_0$ and equals approximately $76$\,K, i.\,e. $T^\ast$ is \emph{not close} to $T_c \simeq 90$\,K.
In what follows the $T^\ast(P,T_0)$ dependence will be estimated for the simpler case of a low-temperature superconducting film like Nb \cite{Dob12tsf}. In this case the same physics of quasiparticle overheating can be explained by a simpler heat balance equation than Eq.~\eqref{eHeatBalance}. The main features of this simpler approach were presented by BS in~\cite{Bez92pcs}, see sections 1 and 2 therein. In the BS approach it is supposed that the $P(T,T_0)$ dependence can be approximated by the same expression, see Eq.~(18) in~\cite{Bez92pcs}, as for normal electrons at temperature $T$ near $T_c$, as can be done within the framework of the two-fluid model of superconductivity~\cite{Tin04boo}. It will be shown that this approach yields $T^\ast$ near $T_c$ (but not too close to $T_c$ where the mechanism of the LO instability~\cite{Lar75etp} dominates).
For the heat flow $Q$ from the film to the substrate one has the equation
\begin{equation}
\label{eQ}
Q = Ad[(kT)^5 - (kT_0)^5],
\end{equation}
which is accurate to corrections of the order of $(\Delta/T)^2\ll 1$, where $\Delta(T)$ is the superconducting gap. In Eq.~\eqref{eQ} $d$ is the film thickness and $A$ is a coefficient which is not essential for the following reasoning and it is given by Eq.~(18) of Ref.~\cite{Bez92pcs}. Equation~\eqref{eQ} describes the case when nonequilibrium phonons escape from a thin film without reabsorption by quasiparticles. The heating regime of the film in this limit is known as electron overheating~\cite{Bez92pcs}, termed so as one describes the quasiparticles and phonons by different temperatures, $T$ and $T_0$, respectively. Taking into account that $Q = Pd$, where $P = Ej$, from Eq.~\eqref{eQ} follows
\begin{equation}
\label{eP}
P = A[(kT)^5 - (kT_0)^5].
\end{equation}
First, the critical parameters will be considered in this approach without pinning, i.\,e. the calculations of Kunchur~\cite{Kun02prl} will be repeated for the case when the heat balance equation~\eqref{eHeatBalance} has the form of Eq.~\eqref{eP}. Since $P = \sigma(T)E^2$, where $\sigma(T) = \sigma_nH_{c2}(T)/B$ and $T$ is supposed to be close to $T_c$, it is possible to write $H_{c2}(T) \simeq Rk(T_c-T)$, where $R = 4c/\pi e D$ is valid for superconductors with a short mean free path and diffusivity $D$ ~\cite{Lar75etp}. Then, from Eq.~\eqref{eP} it follows that
\begin{equation}
\label{eE2T}
E^2(T) = Z^2 B[(kT)^5 - (kT_0)^5]/k(T_c - T),
\end{equation}
where $Z^2 = A/R\sigma_n$. From Eq.~\eqref{eE2T} it follows that for $T_0 < T/2$ one may neglect $T_0$. If also $\theta \equiv T_c - T$ is rather small, i.\,e. $\theta\ll T_c$, then in Eq.~\eqref{eE2T} it is possible to change $T\rightarrow T_c$ in the bracket because $(kT)^5\simeq(kT_c)^5(1- 5\theta/T_c)$ and in this limit the main $T$-dependence of $E^2(T)$ on $\theta$ is
\begin{equation}
\label{eE2Tlimit}
E^2(T)\simeq Z^2 B(kT_c)^5/k\theta.
\end{equation}
From Eq.~\eqref{eE2Tlimit} it follows that for $T\rightarrow T_c$, $\theta \propto B/E^2$ or
\begin{equation}
\label{eTeTc}
T(E)\simeq T_c - Z^2 B(kT_c)^5/kE^2,
\end{equation}
that is, $T(E)$ \emph{monotonically increases with growing} $E$.
Returning to the main equation~\eqref{eP} of the present approach, it should be emphasized that it yields a \emph{single-valued and exact} simple relation between $T^\ast$ and $P^\ast$, while in the approach of Kunchur \cite{Kun02prl,Kni06prb} the calculation of the function $P^\ast(T^\ast)$ was possible only by \emph{numerical integration} of Eq.~\eqref{eHeatBalance}.
The next task is to derive an \emph{exact} formula for the critical temperature $T^\ast$ which does not depend on other critical parameters. To accomplish this, $E^\ast(T^\ast)$ can be calculated following two different ways. The first is obvious from considering Eq.~\eqref{eE2T}. It yields
\begin{equation}
\label{etildeE}
\tilde E_0^\ast (T^\ast)= Z\sqrt{B} \left\{[(kT^\ast)^5 - (kT_0)^5]/k(T_c - T^\ast)\right\}^{1/2}.
\end{equation}
The second way is exploiting, as previously, the condition $dj/dE = 0$, where $j = \sigma(T)E$. A simple calculation then yields $(dE/dT)_{E = E^\ast} = E^\ast/(T_c - T^\ast)$. Finally, taking into account that $d[\sigma(T)E^2]/dT = 5Ak(kT)^4$, one has
\begin{equation}
\label{etildeEshort}
\tilde E_0^\ast (T^\ast) = Z\sqrt{5B} (kT^\ast)^2.
\end{equation}
A comparison of Eqs.~\eqref{etildeE} and~\eqref{etildeEshort} yields the following equation for $T^\ast$
\begin{equation}
\label{ekT5}
(kT^\ast)^5 - (kT_0)^5 = 5(kT^\ast)^4 k (T_c - T^\ast),
\end{equation}
from which it follows that $T^\ast$ depends only on $T_0$ and $T_c$ and it does not depend on $A$, $B$, $R$, and $\sigma_n$. Equation~\eqref{ekT5} can be presented in another form
\begin{equation}
\label{e6T5}
6T^{\ast5} - 5T_cT^{\ast4} - T_0^5 = 0.
\end{equation}
Finally, one obtains that
\begin{equation}
\label{eTast}
T^\ast = (5/6)T_c + (T_0/6)(T_0/T^\ast)^4.
\end{equation}
From Eq.~\eqref{eTast} it follows that for $T_0 \leq T^\ast/2$ the dependence of $T^\ast$ on $T_0$ is very weak and $T^\ast\simeq(5/6)T_c$, i.\,e. $T^\ast$ depends only on $T_c$. It is interesting to note that the two-fluid approach also leads to a $B$-independent $T^\ast$ as in Fig.~3 of Ref. \cite{Kni06prb}. It is curious that if one applies Eq.~\eqref{eTast} for the estimation of $T^\ast$ in YBCO samples~\cite{Kun02prl,Kni06prb}, then one arrives at essentially the same $T^\ast(T_0,T_c)$ dependence as obtained in Refs. \cite{Kun02prl,Kni06prb}, see e.\,g., Fig.~4 in Ref. \cite{Kni06prb}.
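The weak dependence of $T^\ast$ on $T_0$ is also easy to check directly; a minimal Python sketch (temperatures measured in units of $T_c$) solving Eq.~\eqref{e6T5} by iterating Eq.~\eqref{eTast} reads:
\begin{verbatim}
def T_ast(T0, Tc=1.0, tol=1e-12):
    """Solve 6 T*^5 - 5 Tc T*^4 - T0^5 = 0 by fixed-point iteration of Eq. (eTast)."""
    T = 5.0*Tc/6.0                       # starting guess: the T0 -> 0 limit
    while True:
        T_new = 5.0*Tc/6.0 + (T0/6.0)*(T0/T)**4
        if abs(T_new - T) < tol:
            return T_new
        T = T_new

for T0 in (0.0, 0.2, 0.4, 0.5):
    print("T0/Tc = %.1f  ->  T*/Tc = %.4f" % (T0, T_ast(T0)))
\end{verbatim}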
Now it is worth returning to the determination of the $(B,T^\ast)$-dependences of the other critical parameters, namely $\tilde j_0^\ast$, $\tilde v_0^\ast$, $\tilde \rho_0^\ast$, and $\tilde P_0^\ast$ in the presented approach for the flux-flow regime, using Eqs.~\eqref{eP}, \eqref{etildeE}, \eqref{etildeEshort}, and \eqref{ekT5}. The result is
\begin{equation}
\begin{array}{lll}
\label{e4}
\tilde j_0^\ast = \sigma_n Z \sqrt{5}(kT^\ast)^2H_{c2}(T^\ast) /\sqrt{B},\\[1mm]
\tilde v_0^\ast = c Z \sqrt{5}(kT^\ast)^2 /\sqrt{B},\\[1mm]
\tilde \rho_0^\ast = \rho_n B/H_{c2}(T^\ast),\\[1mm]
\tilde P_0^\ast = 5Ak (T_c - T^\ast)(kT^\ast)^4.
\end{array}
\end{equation}
A comparison of the critical parameters obtained in the two-fluid approximation and given by Eqs.~\eqref{eTast} and \eqref{e4} with the similar parameters in Ref. \cite{Kun02prl} reveals that their $B$-dependences are identical. The merit of Eqs.~\eqref{eTast} and \eqref{e4} is that the $T^\ast$-dependent functions in these equations can be calculated at once using Eq.~\eqref{eTast} for $T^\ast$. In other words, the presented two-fluid approach, based on the simpler heat balance equation~\eqref{eP}, allows one to derive the same results for the hot-electron instability as obtained in~\cite{Kun02prl,Kni06prb} in a more direct and simple way \emph{without numerical integration} of Eq.~\eqref{eHeatBalance}. The introduction of pinning into the two-fluid approach proceeds in the same way as discussed in Sec. \ref{SecInst} for the cosine and saw-tooth WPPs, i.\,e. using the function $2\tilde \mu = \tilde x = (j_c/\tilde j_0^\ast)^2$ with $\tilde j_0^\ast$ given by Eq.~\eqref{e4}, as detailed next.
\subsection{Cosine potential}
Using Eqs.~\eqref{eCosine} and \eqref{eP}, in the presence of the cosine WPP the equation for $\tilde E^\ast$ reads
\begin{equation}
\label{eCosineLong}
\left\{j_c^2 + [\sigma(T^\ast)\tilde E^\ast]^2 \right\}\tilde E^{\ast2}= A^2\left[(kT^\ast)^5 - (kT_0)^5\right]^2.
\end{equation}
Using Eq.~\eqref{etildeE}, Eq.~\eqref{eCosineLong} can be transformed into the previously derived Eq.~\eqref{eZ2} with $z\equiv (\tilde E^\ast/\tilde E_0^\ast)^2$ and $2\mu\equiv(j_c/\tilde j_0^\ast)^2$, where $\tilde E_0^\ast$ and $\tilde j_0^\ast$ are given by Eqs.~\eqref{etildeEshort} and \eqref{e4}. Here and in what follows the tilde denotes, as previously, the critical parameters derived in the two-fluid approach. The solution of Eq.~\eqref{eCosineLong} is given, as previously, by Eq.~\eqref{eZsolution}, the derivation of $(\tilde j^\ast/\tilde j_0^\ast)^2 = y$ repeats Eq.~\eqref{eY} and so on.
\subsection{Saw-tooth potential}
Using Eq.~\eqref{eSaw} for the CVC and Eq. \eqref{eP} for $P = Ej$ it is possible at once to obtain the equation for $\tilde E^\ast$ in the form
\begin{equation}
\label{eST1}
\sigma(\tilde E^\ast)^2[1 + \sqrt{1 + (2j_c/\sigma\tilde E^\ast)^2}] =2 A [(kT^\ast)^5 -(kT_0)^5].
\end{equation}
A simple transformation of Eq. \eqref{eST1} (which removes the square root), taking into account that $\sigma(T^\ast) A[(kT^\ast)^5 -(kT_0)^5] =(\tilde j_0^\ast)^2$, leads to the previous result given by Eq. \eqref{eEE0}, namely
\begin{equation}
\label{eST2}
\tilde E^\ast/\tilde E_0^\ast = 1/\sqrt{1 + \tilde x},
\end{equation}
where $\tilde x = (j_c / \tilde j_0^\ast)^2$. The calculation of $\tilde j^\ast$ from Eqs. \eqref{eSaw} and \eqref{eST2} yields
\begin{equation}
\label{eST3}
\tilde j^\ast/\tilde j_0^\ast = \sqrt{1 + \tilde x}.
\end{equation}
Equations \eqref{eST2} and \eqref{eST3} allow one to derive the results for $\tilde \rho^\ast$, $\tilde v^\ast$, and $\tilde P^\ast$ in the form calculated previously in Sec. \ref{SecInst}. In this way, all the results of Sec. \ref{SubsectsST} can be repeated.
\section{Discussion}
Before comparing the results obtained in this work with those of Kunchur \cite{Kun02prl,Kni06prb}, it is suitable to briefly summarize the theoretical and experimental features of the hot-electron instability discussed in Refs. \cite{Kun02prl,Kni06prb} for epitaxial YBCO films at temperatures $T_0\leq T_c/2$. The heat balance equation~\eqref{eHeatBalance} is the basic equation of the considered electron overheating problem. It determines the nonlinear $T(E)$ behavior which is consistent with a nonlinear CVC, despite the fact that the Bardeen-Stephen formula for the linear flux-flow conductivity with $T$-dependent $\sigma$ [due to $H_{c2}(T)$] is used. Unfortunately, Eq.~\eqref{eHeatBalance} allows one to find the $T(E)$ dependence and the CVC by numerical integration only. Using the $T(E)$ dependence it was also shown that $T^\ast = 76$\,K at $T_0 \ll T_c\approx 90$\,K and $T^\ast$ weakly depends on $T_0$ up to $T_0\approx 40$\,K~\cite{Kni06prb}. Finally, the $B$-dependences of the critical parameters without pinning (subscript ``0'') obtained in Refs. \cite{Kun02prl,Kni06prb} read
\begin{equation}
\label{eCriticalParam}
\begin{array}{lll}
E_0^\ast \propto \sqrt{B},\qquad j_0^\ast \propto 1/\sqrt{B},\qquad v_0^\ast \propto 1/\sqrt{B},\\[2mm]
\rho_0^\ast \propto B, \quad\qquad P_0^\ast\neq f(B).
\end{array}
\end{equation}
It should be recalled that the experimental results obtained for YBCO films~\cite{Kun02prl} were fitted, in neglect of pinning, to the predicted $B$-dependences by Eq.~\eqref{eCriticalParam} and the respective $(B,T_0)$-dependent CVCs \emph{without any adjustable parameters}.
Proceeding now to a brief discussion of the new results obtained in this work, it is worth beginning with a description of the way pinning is accounted for in low-$T_c$ superconducting films. In fact, the introduction of pinning into the hot-electron instability problem here is phenomenological: Instead of the linear CVC $j = \sigma(T)E$ (at $T=const$) with $\sigma (T) = \sigma_n H_{c2}/B$ used by Kunchur \cite{Kun02prl,Kni06prb}, here the nonlinear CVC (at $T=const$) ``generated'' by the WPP and taken at $T=0$ has been used. Theoretically, it is possible to use the CVCs derived in Refs. \cite{Shk99etp,Shk08prb} at $T>0$ as well; however, in this work the consideration was limited to the two CVCs calculated at $T=0$ due to their simplicity. As the CVC curvature depends on the particular WPP used, two different simple WPPs have been used, namely a cosine WPP \cite{Shk08prb,Shk11prb} and a saw-tooth WPP \cite{Shk99etp,Shk06prb}, refer to Fig. \ref{fig1}. Both WPPs lead to the appearance of a new additional parameter in the CVC, $j_c$, the critical (depinning) current density which, in turn, depends on the WPP-specific parameters. The two CVC types addressed in this work can be realized in nanostructured superconducting films exhibiting a WPP \cite{Dob10sst,Dob11snm,Dob11pcs,Dob12njp,Dob16sst,Dob15apl,Dob15met}, see e.\,g. Ref.~\cite{Dob17pcs} for a review. The $\sigma(T)$ employed in Eqs.~\eqref{eCosine} and \eqref{eSaw} is the same as that used by Kunchur \cite{Kun02prl,Kni06prb}.
After the introduction of pinning, the task was to determine the $(j_c,B)$-dependences of the new critical parameters $E^\ast$, $j^\ast$, $v^\ast$, $\rho^\ast$ and $P^\ast$ of the CVCs for the WPPs of both types. In Sec. \ref{SecInst}, formulae \eqref{eZsolution}--\eqref{eRast} for these critical parameters were obtained in terms of the dimensionless parameter $2\mu = (j_c/j_0^\ast)^2$ for the cosine WPP [see Eq.~\eqref{e2mu}], and $x\equiv 2\mu$ for the saw-tooth WPP. Then the problem of analyzing the ($\mu,B$)-dependences of the aforementioned critical parameters was considered in two steps.
\begin{table*}[tbh!]
\centering
\begin{footnotesize}
\begin{tabular}{|c|c|c|c|c|}
\hline
Two-fluid approach: &\multicolumn{4}{|c|}{$T^\ast\simeq(5/6)T_c$ for $T_0 \leq T^\ast/2$, $\tilde \alpha=3\pi/4ecN(0)(kT_c)$ and $\tilde \gamma=\sqrt{ \tilde P_0^\ast/\tilde \alpha}$} \\
\hline
CVC: $j=\sigma E$, \newline
$\sigma = \sigma_nH_{c2}(T)/B$ & \multicolumn{2}{|c|}{cosine WPP: $ j = \sqrt{j_c^2 + \sigma ^2 E^2}$ for $j>j_c$} & \multicolumn{2}{|c|}{saw-tooth WPP: $ j = (\sigma E/2)[1 + \sqrt{1+(2j_c/\sigma E)^2}]$}\\
\hline
clean limit $j_c=0$ & $ j_c=const $ & $j_c=j_c(B)$ Eq.~(17) & $ j_c=const $ & $j_c=j_c(B)$ Eq.~(17) \\
\hline
\multirow{1}{*}{$\tilde E_0^\ast =\tilde \gamma\tilde \alpha \sqrt{B}$} & \multicolumn{2}{|c|}{$\tilde E^\ast = \tilde E_0^\ast/\sqrt{\sqrt{1+\mu^2}+\mu}$} & \multicolumn{2}{|c|}{$ \tilde E^\ast = \tilde E_0^\ast /\sqrt{1 + \tilde x}$}\\
\cline{2-5}
& $ d\tilde E^\ast/dB >0$ Eq.(13) & $ d\tilde E^\ast/dB >0$ Eq.~(20) & $ d{\tilde E^\ast}/dB >0$ & $ d\tilde E^\ast/dB >0$ \\
\hline
\multirow{1}{*}{$\tilde j_0^\ast=\tilde \gamma/\sqrt{B}$} & \multicolumn{2}{|c|}{$\tilde j^\ast =\tilde j_0^\ast\sqrt{\sqrt{1+\mu^2}+\mu}$} & \multicolumn{2}{|c|}{$ \tilde j^\ast=\tilde j_0^\ast \sqrt{1 + \tilde x}$}\\
\cline{2-5}
& $d\tilde j^\ast/dB<0$~Eq.~(15) & $d\tilde j^\ast/dB<0$ Eq.~(23) & $d\tilde j^\ast/dB<0$ & $d\tilde j^\ast/dB<0$ \\
\hline
\multirow{1}{*}{$\tilde \rho_0^\ast =\tilde \alpha B$} & \multicolumn{2}{|c|}{$\tilde \rho^\ast =\tilde \rho^\ast_0/(\sqrt{1 + \mu^2} + \mu)$} & \multicolumn{2}{|c|}{$\tilde \rho^\ast =\tilde \rho^\ast_0/(1+\tilde x)$}\\
\cline{2-5}
& $d\tilde \rho^\ast/dB>0$~Eq.~(14) & $d\tilde \rho^\ast/dB>0$ Eq.~(25) & $d\tilde \rho^\ast/dB>0$ & $d\tilde \rho^\ast/dB>0$ \\
\hline
\multirow{1}{*}{$ \tilde v_0^\ast = c\tilde \gamma\tilde \alpha/\sqrt{B}$, $d\tilde v^\ast/dB=- \tilde v_0^\ast/2B<0$} & \multicolumn{2}{|c|}{$\tilde v^\ast(B) = \tilde v_0^\ast /\sqrt{\sqrt{1+\mu^2} + \mu}$} & \multicolumn{2}{|c|}{$\tilde v^\ast = \tilde v_0^\ast /\sqrt{1 + \tilde x}$}\\
\cline{2-5}
& $ d\tilde v^\ast/dB<0$ Eq.~(16) & $ d\tilde v^\ast/dB>0$ or $<0$ Eq.~(27) & $ d\tilde v^\ast/dB<0$ & $ d\tilde v^\ast/dB>0$ or $<0$ Eq.~(32) \\
\hline
$ \tilde P_0^\ast =\tilde \gamma^2\tilde \alpha$ & $\tilde P^\ast=\tilde P_0^\ast$ & $\tilde P^\ast=\tilde P_0^\ast$ & $\tilde P^\ast=\tilde P_0^\ast$ & $\tilde P^\ast=\tilde P_0^\ast$ \\
\hline
\end{tabular}
\end{footnotesize}
\caption{Summary of the results obtained in Sec. \ref{SecPow} within the \emph{two-fluid approach}. In the left column, formulae for the critical parameters $\tilde E_0^\ast$, $\tilde j_0^\ast$, $\tilde \rho_0^\ast$, $ \tilde v_0^\ast$, $\tilde P_0^\ast$ are derived in the clean limit, i.e. in the absence of pinning ($j_c = 0$). They are presented in terms of the two parameters $\tilde \alpha$ and $\tilde \gamma$ calculated by Eqs.~\eqref{e4} taken at $T^\ast\simeq(5/6)T_c$ (the formulae for $\tilde \alpha$ and $\tilde \gamma$ are given in the first line of the table; $N(0)$ is the electron density of states of the metal film). The second line of the table contains, in the left column, the CVC for the clean limit ($j_c = 0$) and, in the following columns, the CVCs for the cosine and saw-tooth WPPs at $j > j_c$, respectively, which are both zero when $0< j < j_c$. The subsequent lines in the latter columns present the formulae for the pinning-dependent critical parameters $\tilde E^\ast$, $\tilde j^\ast$, $\tilde \rho^\ast$, $\tilde v^\ast$, $\tilde P^\ast$ and the behavior of their $B$-derivatives in terms of $2\mu=2\tilde \mu = \tilde x = (j_c/\tilde j_0^\ast)^2$, where $ j_c=const$ or $ j_c= j_c(B)$. }
\label{table}
\vspace{-5mm}
\end{table*}
First, it was supposed that $j_c$ is $B$-independent. Then, a direct inspection of Eqs.~\eqref{eZsolution}--\eqref{eRast} for the cosine WPP reveals that the critical parameters change monotonically with increasing $\mu$. Namely, at a fixed $j_0^\ast \propto 1/\sqrt{B}$, as $j_c$ increases, $E^\ast$, $\rho^\ast$, and $v^\ast \propto E^\ast$ decrease, $j^\ast$ increases, and $P^\ast$ does not depend on $j_c$. Analogous results have been derived for the saw-tooth WPP, see Eqs.~\eqref{eEE0}-\eqref{eRR0}. Then, taking into account Eq.~\eqref{eCriticalParam}, i.\,e. that $\mu\propto B$ at $j_c = const$, the $B$-dependences of the critical parameters and their $B$-derivatives for the cosine WPP [see Eqs.~\eqref{edEdB}-\eqref{edVdB}] have been analyzed. The main results of this analysis, which are similar for the cosine and the saw-tooth WPPs, can be summarized as follows: $E^\ast(B)$ monotonically increases with growing $B$ while its derivative $dE^\ast/dB > 0$ strongly decreases. The behavior of $\rho^\ast(B)$ is similar. $j^\ast(B)$ monotonically decreases with growing $B$ while its derivative $dj^\ast/dB <0$ strongly decreases in magnitude. The critical velocity $v^\ast(B)$ monotonically decreases with growing $B$, and the magnitude of its derivative $dv^\ast/dB$ decreases as well. The power $P^\ast$ at the instability point is independent of $B$.
The second important step detailed in Sec.~\ref{SecInst} was to introduce a simple power-law dependence for $j_c(B)$ by Eq.~\eqref{eScaling}, because the previous assumption of the $B$-independence of $j_c$ is not realistic. In consequence, $\mu(B)\propto B^{1-2m}$ acquires a more complex $B$-dependence, where $m\geq0$ ensures that $j_c(B)$ decreases with growing $B$ as observed in experiments. For $m=0$ one returns to the previous case with $j_c = const$, while at $m = 1/2$ a crossover occurs from $\mu$ increasing with growing $B$ (for $0<m<1/2$) to $\mu$ decreasing with $B$ (for $m>1/2$). Turning to the influence of the $\mu(B)$ dependence on the $B$-behavior of the critical parameters and their $B$-derivatives, it has been derived that the $B$-derivatives of $E^\ast(B)$, $j^\ast(B)$, and $\rho^\ast(B)$ at $m>0$ retain the same sign as in the case $j_c=const$. The $B$-behavior of $v^\ast(B)$ turns out to be quite different. Namely, for $m>1$ the derivative $dv^\ast/dB$ changes its sign, i.\,e. $dv^\ast/dB>0$ as $B\rightarrow 0$. It should be noted that the main results of this analysis, as previously, are similar for both WPP types. Moreover, since usually $j_c(B)$ at $B\rightarrow0$ can be approximated by Eq.~\eqref{eScaling} with $m<1/2$, \emph{$dv^\ast/dB$ may exhibit a second sign change} at $B\ll B_{c2}$. This behavior is sometimes observed in experiments~\cite{Gri09pcm,Leo10pcs,Gri10prb,Gri12apl,Gri11snm,Sil12njp}.
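The possibility of this sign change is easy to confirm numerically. The minimal sketch below works in reduced units; the pinning-strength prefactor $a$ and the chosen values of $m$ are illustrative assumptions rather than fitted parameters. It evaluates $\tilde v^\ast(B) \propto B^{-1/2}/\sqrt{1+\tilde x(B)}$ for the saw-tooth WPP with $\tilde x(B) \propto B^{1-2m}$ and inspects the sign of the numerical derivative.
\begin{verbatim}
import numpy as np

def v_star(B, m, a=1.0):
    # saw-tooth WPP in reduced units: v*(B) ~ B**(-1/2) / sqrt(1 + x(B)),
    # x(B) = a * B**(1 - 2*m)   [x = (jc/j0*)^2, jc ~ B**(-m), j0* ~ B**(-1/2)]
    return B**-0.5 / np.sqrt(1.0 + a * B**(1.0 - 2.0 * m))

B = np.linspace(0.01, 2.0, 2000)
for m in (0.3, 0.8, 1.5):
    dv = np.gradient(v_star(B, m), B)
    print(m, int((dv > 0).sum()))
# expected: no grid points with dv*/dB > 0 for m = 0.3 and m = 0.8,
# but a low-field interval with dv*/dB > 0 for m = 1.5
\end{verbatim}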
Finally, in Sec. \ref{SecPow} the simplest heat balance equation for electrons in low-$T_c$ superconducting films like Nb has been considered in the two-fluid approach. In this case the physics of quasiparticle overheating can be explained by a simpler heat balance equation than Eq.~\eqref{eHeatBalance}. The main features of this simpler approach were presented by BS in~\cite{Bez92pcs}, see Sections 1 and 2 therein. In using them, it was supposed that the $P(T,T_0)$ dependence can be approximated by the same expression, see Eq.~(18) in~\cite{Bez92pcs}, as for normal electrons at a temperature $T$ near $T_c$, and this can be done within the framework of the two-fluid model of superconductivity~\cite{Tin04boo}. It was shown that $T^\ast$ appears near $T_c$ (but not too close to $T_c$ where the mechanism of the LO instability~\cite{Lar75etp} dominates). For the dissipated heat power $P$ flowing from the film to the substrate, the heat balance equation has the form of Eq.~\eqref{eP}, which is accurate to corrections of the order of $(\Delta/T)^2\ll 1$, where $\Delta(T)$ is the superconducting gap. Equation~\eqref{eP} describes the case when nonequilibrium phonons escape from the thin film without reabsorption by quasiparticles. The heating regime of the film in this limit is known as electron overheating~\cite{Bez92pcs}, termed so as one describes quasiparticles and phonons by different temperatures, $T$ and $T_0$, respectively. The main result of this section is Eq.~\eqref{eTast}, from which it follows that for $T_0 \leq T^\ast/2$ the dependence of $T^\ast$ on $T_0$ is very weak and $T^\ast\simeq(5/6)T_c$, i.\,e. $T^\ast$ depends only on $T_c$.
A comparison of the critical parameters obtained in the clean limit within the framework of the two-fluid model [Eqs.~\eqref{eTast}--\eqref{e4}] with the respective parameters of Ref. \cite{Kun02prl}, see also Table \ref{table}, reveals that their $B$-dependences are identical. The merit of Eqs.~\eqref{eTast} and \eqref{e4} is that the $T^\ast$-dependent functions in these equations can be calculated at once using Eq.~\eqref{eTast} for $T^\ast$. In other words, the presented two-fluid approach based on the simpler heat balance equation~\eqref{eP} allows one to derive the same results for the hot-electron instability as obtained by Kunchur \cite{Kun02prl,Kni06prb} in a more direct and simple way, \emph{without numerical integration} of Eq.~\eqref{eHeatBalance}. The introduction of pinning into the two-fluid approach is done in the same way as discussed in Sec.\,\ref{SecInst} for the cosine and saw-tooth WPPs.
To draw parallels with the LO instability problem at $T \lesssim T_c$, it is worth emphasizing that a theoretical account of the pinning effect is possible in this case as well \cite{Shk17snd}. The introduction of pinning in Ref. \cite{Shk17snd} has been done in the same way as in this work, namely for a cosine WPP which can be realized in nanostructured low-$T_c$ superconducting films \cite{Dob10sst,Dob11pcs,Dob15apl,Dob15met,Dob12njp,Dob16sst}. The problem of pinning effects on the flux-flow instability in Ref. \cite{Shk17snd} was considered at once relying upon the BS approach \cite{Bez92pcs}, because the LO instability corresponds to the limiting case of the BS instability at $B\ll B_T$, where $B_T$ is the quasiparticle overheating field introduced by BS in Ref. \cite{Bez92pcs}. In Ref. \cite{Shk17snd}, the heat balance equation in conjunction with the CVC extremum condition at the instability point has been augmented by a pinning strength parameter. A theoretical analysis \cite{Shk17snd} revealed that with increasing pinning strength at a fixed magnetic field value, $E^\ast$ decreases and $j^\ast$ increases, while $P^\ast$ and $T^\ast$ remain practically constant.
Lastly, turning to a comparison with experiment, it is worth noting that the presented account of pinning effects on the hot-electron vortex flow instability has recently allowed us to fit experimental data for the measured dependences $v^\ast(B)$ of epitaxial Nb films with different pinning types and strengths at $T = 0.4T_c$ \cite{Dob17arx} to the analytical expression \eqref{eVastB} derived here. In particular, we observed that the exponent $m$ in $j_c \propto 1 / B^m$ is larger ($m \simeq 1$) in Nb films with stronger pinning (represented by ion-irradiated and nanopatterned films), while $m \simeq 0.5$ for as-grown films. In this way, we have been able to fit the observed crossover \cite{Dob17arx} from the monotonic decrease of $v^\ast(B)$ in the case of the as-grown films to the non-monotonic behavior of $v^\ast(B)$ for the films with stronger pinning.
\section{Conclusion}
To sum up, the proposed phenomenological approach for the introduction of pinning into the hot-electron instability problem has revealed the possibility of a non-monotonic $v^\ast(B)$, as sometimes observed in experiments \cite{Gri09pcm,Leo10pcs,Gri10prb,Gri12apl,Gri11snm,Sil12njp,Dob17arx}. Regarding the experimental examination of the elaborated phenomenological theory, it should be pointed out that \emph{only two curves, namely the current-voltage characteristic and the $j_c(B)$ dependence, have to be determined in experiment}, thus allowing one to map the predicted results onto the experimental data.
\section*{Acknowledgements}
The author thanks O. V. Dobrovolskiy for proofreading the manuscript. This work was supported through DFG project DO1511/3-1.
\vspace*{6mm}
\section{Introduction}
\label{s-intro}
A {\it blocker problem} asks whether, given a graph $G$, a graph parameter $\pi$, a set $\mathcal{O}$ of one or more graph operations and an integer $k \geq 1$, $G$ can be transformed into a graph $G'$ by using at most $k$ operations from $\mathcal{O}$ such that $\pi(G') \leq \pi(G) - d$ for some {\it threshold} $d \geq 0$. Such a designation follows from the fact that the set of vertices or edges involved can be viewed as ``blocking'' the parameter $\pi$. Identifying such sets may provide information on the structure of the input graph; for instance, if $\pi = \alpha$, $k=d=1$ and $\mathcal{O} = \{$vertex deletion$\}$, the problem is equivalent to testing whether the input graph contains a vertex that is in every maximum independent set (see \cite{paulusma2017blocking}). Blocker problems have received much attention in the recent literature (see for instance \cite{BTT11,bazgan2013critical,RBPDCZ10,Bentz,CWP11,DPPR15,contracdom,keller2018blockers,keller2013blockers,PBP,nasirian2019exact,pajouh2015minimum,PPR16,paulusma2017blocking,paulusma2018critical,diner2018contraction}) and have been related to other well-known graph problems such as \textsc{Hadwiger Number}, \textsc{Club Contraction} and several graph transversal problems (see for instance \cite{DPPR15,PPR16}). The graph parameters considered so far in the literature are the chromatic number, the independence number, the clique number, the matching number and the vertex cover number, while the set $\mathcal{O}$ is a singleton consisting of a vertex deletion, edge contraction, edge deletion or edge addition. In this paper, we focus on the domination number $\gamma$, let $\mathcal{O}$ consist of an edge contraction and set the threshold $d$ to one.
Formally, let $G=(V,E)$ be a graph. The {\it contraction} of an edge $uv\in E$ removes vertices $u$ and $v$ from $G$ and replaces them by a new vertex that is made adjacent to precisely those vertices that were adjacent to $u$ or $v$ in $G$ (without introducing self-loops nor multiple edges). We say that a graph $G$ can be \textit{$k$-contracted} into a graph~$G'$, if $G$ can be transformed into $G'$ by a sequence of at most~$k$ edge contractions, for an integer $k\geq 1$. The problem we consider is then the following (note that contracting an edge cannot increase the domination number).
\begin{center}
\begin{boxedminipage}{.99\textwidth}
$k$-\textsc{Edge Contraction($\gamma$)}\\
\begin{tabular}{ r p{0.8\textwidth}}
\textit{~~~~Instance:} &A connected graph $G=(V,E)$.\\
\textit{Question:} &Can $G$ be $k$-contracted into a graph $G'$ such that $\gamma(G')~\leq~\gamma(G) -1$?
\end{tabular}
\end{boxedminipage}
\end{center}
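On small instances, \kcontracd{} can of course be decided by exhaustive search, which is convenient for sanity-checking examples and gadgets. The following minimal sketch uses our own illustrative helper routines, with no attempt at efficiency: it computes the domination number by enumerating vertex subsets and then tries every single edge contraction.
\begin{verbatim}
from itertools import combinations

def gamma(adj):
    # brute-force domination number; adj maps each vertex to its set of neighbours
    V = list(adj)
    for k in range(1, len(V) + 1):
        for D in combinations(V, k):
            covered = set(D)
            for v in D:
                covered |= adj[v]
            if len(covered) == len(V):
                return k

def contract(adj, u, v):
    # contract the edge uv: u and v are replaced by a single new vertex
    # adjacent to every former neighbour of u or v
    w = ('contracted', u, v)
    new = {x: set(adj[x]) for x in adj if x not in (u, v)}
    new[w] = (adj[u] | adj[v]) - {u, v}
    for x in new:
        if x != w and (u in new[x] or v in new[x]):
            new[x] -= {u, v}
            new[x].add(w)
    return new

def one_contraction_suffices(adj):
    # decide 1-Edge Contraction(gamma) by trying every edge
    g = gamma(adj)
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    return any(gamma(contract(adj, *tuple(e))) <= g - 1 for e in edges)

# P4 has gamma = 2; contracting its middle edge gives P3 with gamma = 1.
P4 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(gamma(P4), one_contraction_suffices(P4))   # 2 True
# C5 has gamma = 2 and every contraction yields C4, which still has gamma = 2.
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(gamma(C5), one_contraction_suffices(C5))   # 2 False
\end{verbatim}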
Reducing the domination number using edge contractions was first considered in \cite{HX10}. The authors proved that for a connected graph $G$ such that $\gamma(G)\geq 2$, we have $ct_\gamma (G)\leq 3$, where $ct_{\gamma}(G)$ denotes the minimum number of edge contractions required to transform $G$ into a graph $G'$ such that $\gamma (G') \leq \gamma (G) -1$ (note that if $\gamma(G) = 1$ then $G$ is a \no-instance for \kcontracd{} independently of the value of $k$). Thus, if $G$ is a connected graph with $\gamma (G) \geq 2$, then $G$ is always a \yes-instance for \kcontracd{} when $k \geq 3$. It was later shown in \cite{contracdom} that \kcontracd{} is $\mathsf{coNP}$-hard for $k \leq 2$ and so, restrictions on the input graph to some special graph classes were considered. In particular, the authors in \cite{contracdom} proved that for $k=1,2$, the problem is polynomial-time solvable for $P_5$-free graphs while for $k=1$, it remains $\mathsf{NP}$-hard when restricted to $P_9$-free graphs and $\{C_3,\ldots,C_\ell\}$-free graphs, for any $\ell \geq 3$.
In this paper, we continue the systematic study of the computational complexity of \contracd{} initiated in \cite{contracdom}. Ultimately, the aim is to obtain a complete classification for \contracd{} restricted to $H$-free graphs, for any (not necessarily connected) graph $H$, as it has been done for other blocker problems (see for instance \cite{paulusma2017blocking,paulusma2018critical,diner2018contraction}). As a step towards this end, we prove the following two theorems.
\begin{theorem}
\label{thm:clawfree}
\contracd{} is $\mathsf{coNP}$-hard when restricted to subcubic claw-free graphs.
\end{theorem}
\begin{theorem}
\label{thm:p7free}
\contracd{} is $\mathsf{coNP}$-hard when restricted to $P_7$-free graphs.
\end{theorem}
Since \contracd{} is $\mathsf{NP}$-hard when restricted to $\{C_3,\ldots,C_\ell\}$-free graphs, for any $\ell \geq 3$, it follows that \contracd{} is $\mathsf{NP}$-hard for $H$-free graphs when $H$ contains a cycle. If $H$ is a forest with a vertex of degree at least three, we conclude by Theorem \ref{thm:clawfree} that \contracd{} is $\mathsf{coNP}$-hard for $H$-free graphs; and if $H$ is a linear forest containing a path on at least 7 vertices, then \contracd{} is $\mathsf{coNP}$-hard for $H$-free graphs by Theorem \ref{thm:p7free}. It remains to determine the complexity status of the problem restricted to $H$-free graphs when $H$ is a disjoint union of paths on at most 6 vertices, which we leave as an open problem.
\section{Preliminaries}
\label{s-pre}
Throughout the paper, we only consider finite, undirected, connected graphs that have no self-loops or multiple edges. We refer the reader to~\cite{Di05} for any terminology and notation not defined here.
For $n\geq 1$, the path and cycle on $n$ vertices are denoted by $P_n$ and $C_n$ respectively. The \textit{claw} is the complete bipartite graph with one partition of size one and the other of size three.
Let $G=(V,E)$ be a graph and let $u\in V$. We denote by $N_G(u)$, or simply $N(u)$ if it is clear from the context, the set of vertices that are adjacent to $u$ i.e., the {\it neighbors} of $u$, and let $N[u]=N(u)\cup \{u\}$. The \textit{degree} of a vertex $u$, denoted by $d_G(u)$ or simply $d(u)$ if it is clear from the context, is the size of its neighborhood i.e., $d(u) = \vert N(u) \vert$. The maximum degree in $G$ is denoted by $\Delta (G)$ and $G$ is \textit{subcubic} if $\Delta (G) \leq 3$.
For any graph $H$, $G$ is said to be {\it $H$-free} if $G$ contains no induced subgraph isomorphic to $H$. For a subset $V'\subseteq V$, we let $G[V']$ denote the subgraph of $G$ {\it induced} by $V'$, which has vertex set~$V'$ and edge set $\{uv\in E\; |\; u,v\in V'\}$.
A subset $S \subseteq V$ is called an {\it independent set} or is said to be \textit{independent}, if no two vertices in $S$ are adjacent. A subset $D\subseteq V$ is called a {\it dominating set}, if every vertex in $V\setminus D$ is adjacent to at least one vertex in $D$; the {\it domination number} $\gamma(G)$ is the number of vertices in a minimum dominating set. For any $v \in D$ and $u \in N[v]$, $v$ is said to \textit{dominate} $u$ (in particular, $v$ dominates itself). We say that \textit{$D$ contains an edge} (or more) if the graph $G[D]$ contains an edge (or more). A dominating set $D$ of $G$ is \textit{efficient} if for every vertex $v \in V$, $\vert N[v] \cap D \vert = 1$, that is, $v$ is dominated by exactly one vertex.
In the following, we consider those graphs for which one contraction suffices to decrease their domination number by one. A characterization of this class is given in \cite{HX10}.
\begin{theorem}[\cite{HX10}]
\label{theorem:contracdom}
For a connected graph $G$, $ct_\gamma (G)=1$ if and only if there exists a minimum dominating set in $G$ that is not independent.
\end{theorem}
In order to prove Theorems \ref{thm:clawfree} and \ref{thm:p7free}, we introduce the two following problems.
\begin{center}
\begin{boxedminipage}{.99\textwidth}
\textsc{\sc All Efficient MD}\\[2pt]
\begin{tabular}{ r p{0.8\textwidth}}
\textit{~~~~Instance:} &A connected graph $G=(V,E)$.\\
\textit{Question:} &Is every minimum dominating set of $G$ efficient?
\end{tabular}
\end{boxedminipage}
\end{center}
\begin{center}
\begin{boxedminipage}{.99\textwidth}
\textsc{\sc All Independent MD}\\[2pt]
\begin{tabular}{ r p{0.8\textwidth}}
\textit{~~~~Instance:} &A connected graph $G=(V,E)$.\\
\textit{Question:} &Is every minimum dominating set of $G$ independent?
\end{tabular}
\end{boxedminipage}
\end{center}
The following is then a straightforward consequence of Theorem \ref{theorem:contracdom}.
\begin{fact}
\label{obs:equi}
Given a graph $G$, $G$ is a \yes-instance for \contracd{} if and only if $G$ is a \no-instance for {\sc All Independent MD}.
\end{fact}
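Both problems (and the equivalence above) are again easy to check exhaustively on small graphs. The following minimal sketch (illustrative helper names, brute force only) lists all minimum dominating sets of a graph and tests whether they are all independent, respectively all efficient.
\begin{verbatim}
from itertools import combinations

def minimum_dominating_sets(adj):
    # all minimum dominating sets of the graph {vertex: set of neighbours}
    V = list(adj)
    for k in range(1, len(V) + 1):
        found = [set(D) for D in combinations(V, k)
                 if set().union(*({v} | adj[v] for v in D)) == set(V)]
        if found:
            return found
    return []

def independent(adj, D):
    return all(v not in adj[u] for u in D for v in D)

def efficient(adj, D):
    return all(len(({v} | adj[v]) & D) == 1 for v in adj)

# Every minimum dominating set of C5 consists of two vertices at distance two:
# all of them are independent (so C5 is a yes-instance for All Independent MD
# and, by the fact above, a no-instance for 1-Edge Contraction(gamma)), but
# none is efficient (the vertex between the two chosen ones is dominated twice).
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mds = minimum_dominating_sets(C5)
print(all(independent(C5, D) for D in mds),
      all(efficient(C5, D) for D in mds))        # True False
\end{verbatim}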
\section{The proof of Theorem \ref{thm:clawfree}}
\setcounter{observation}{0}
In this section, we show that \contracd{} is $\mathsf{coNP}$-hard when restricted to subcubic claw-free graphs. To this end, we first prove the following.
\begin{lemma}
\label{lemma:efficient}
{\sc All Efficient MD} is $\mathsf{NP}$-hard when restricted to subcubic graphs.
\end{lemma}
\begin{proof}
We reduce from {\sc Positive Exactly 3-Bounded 1-In-3 3-Sat}, where each variable appears in exactly three clauses and only positively, each clause contains three positive literals, and we want a truth assignment such that each clause contains exactly one true literal. This problem is shown to be $\mathsf{NP}$-complete in \cite{moore}. Given an instance $\Phi$ of this problem, with variable set $X$ and clause set $C$, we construct an equivalent instance of {\sc All Efficient MD} as follows. For any variable $x \in X$, we introduce a copy of $C_9$, which we denote by $G_x$, with three distinguished \textit{true vertices} $T^1_x$, $T^2_x$ and $T^3_x$, and three distinguished \textit{false vertices} $F^1_x$, $F^2_x$ and $F^3_x$ (see Fig. \ref{fig:vargad}). For any clause $c \in C$ containing variables $x_1$, $x_2$ and $x_3$, we introduce the gadget $G_c$ depicted in Fig. \ref{fig:clausegad} which has one distinguished \textit{clause vertex} $c$ and three distinguished \textit{variable vertices} $x_1$, $x_2$ and $x_3$ (note that $G_c$ is not connected). For every $j \in \{1,2,3\}$, we then add an edge between $x_j$ and $F_{x_j}^i$ and between $c$ and $T_{x_j}^i$ for some $i \in \{1,2,3\}$ so that $F_{x_j}^i$ (resp. $T_{x_j}^i$) is adjacent to exactly one variable vertex (resp. clause vertex). We denote by $G_{\Phi}$ the resulting graph. Note that $\Delta (G_{\Phi}) = 3$.
\begin{figure}[htb]
\centering
\begin{subfigure}[b]{.45\textwidth}
\centering
\begin{tikzpicture}[scale=.6]
\node[circ,label=below:{\small $F_x^2$}] (f2) at (1,0) {};
\node[circ,label=below:{\small $T_x^2$}] (t2) at (2,0) {};
\node[circ,label=below:{\small $u_x^2$}] (u2) at (3,0) {};
\node[circ,label=right:{\small $F_x^1$}] (f1) at (4,.5) {};
\node[circ,label=right:{\small $T_x^1$}] (t1) at (4,1.5) {};
\node[circ,label=above:{\small $u_x^1$}] (u1) at (3,2) {};
\node[circ,label=above:{\small $F_x^3$}] (f3) at (2,2) {};
\node[circ,label=above:{\small $T_x^3$}] (t3) at (1,2) {};
\node[circ,label=left:{\small $u_x^3$}] (u3) at (0,1) {};
\draw[-] (u3) -- (f2) -- (t2) -- (u2) -- (f1) -- (t1) -- (u1) -- (f3) -- (t3) -- (u3);
\end{tikzpicture}
\caption{The variable gadget $G_x$.}
\label{fig:vargad}
\end{subfigure}
\hspace*{.5cm}
\begin{subfigure}[b]{.45\textwidth}
\centering
\begin{tikzpicture}[scale=.85]
\node[circ,label=left:{\small $x_1$}] (x1) at (0,1) {};
\node[circ,label=left:{\small $x_2$}] (x2) at (0,2) {};
\node[circ,label=left:{\small $x_3$}] (x3) at (0,3) {};
\node[circ,label=below:{\small $l_{\{x_1\}}$}] (l1) at (1.5,1) {};
\node[circ,label=below:{\small $l_{\{x_2\}}$}] (l2) at (1.5,2) {};
\node[circ,label=below:{\small $l_{\{x_3\}}$}] (l3) at (1.5,3) {};
\node[circ,label=right:{\small $c$}] at (3,2) {};
\draw[-] (x1) -- (l1)
(x2) -- (l2)
(x3) -- (l3);
\draw (1.05,.25) rectangle (1.9,3.25);
\node[draw=none] at (1.5,3.5) {\small $K_c$};
\end{tikzpicture}
\caption{The clause gadget $G_c$.}
\label{fig:clausegad}
\end{subfigure}
\caption{Construction of the graph $G_{\Phi}$ (the rectangle indicates that the corresponding set of vertices induces a clique).}
\end{figure}
\begin{nestedobservation}
\label{obs:size}
For any dominating set $D$ of $G_{\Phi}$, $\vert D \cap V(G_x) \vert \geq 3$ for any $x \in X$ and $\vert D \cap V(G_c) \vert \geq 1$ for any $c \in C$. In particular, $\gamma(G_{\Phi}) \geq 3 \vert X \vert + \vert C \vert$.
\end{nestedobservation}
Indeed, for any $x \in X$, since $u_x^1$, $u_x^2$ and $u_x^3$ must be dominated and their closed neighborhoods are pairwise disjoint and contained in $G_x$, it follows that $\vert D \cap V(G_x) \vert \geq 3$. For any $c \in C$, since the vertices of $K_c$ must be dominated and their closed neighborhoods are contained in $G_c$, $\vert D \cap V(G_c) \vert \geq 1$. $\diamond$
\begin{nestedobservation}
\label{obs:mingx}
For any $x \in X$, if $D$ is a minimum dominating set of $G_x$ then either $D = \{u_x^1,u_x^2,u_x^3\}$, $D = \{T_x^1,T_x^2,T_x^3\}$ or $D = \{F_x^1,F_x^2,F_x^3\}$.
\end{nestedobservation}
\begin{nestedclaim}
\label{clm:phisat}
$\Phi$ is satisfiable if and only if $\gamma (G_{\Phi}) = 3 \vert X \vert + \vert C \vert$.
\end{nestedclaim}
\begin{claimproof}
Assume that $\Phi$ is satisfiable and consider a truth assignment satisfying $\Phi$. We construct a dominating set $D$ of $G_{\Phi}$ as follows. For any variable $x \in X$, if $x$ is true, add $T_x^1$, $T_x^2$ and $T_x^3$ to $D$; otherwise, add $F_x^1$, $F_x^2$ and $F_x^3$ to $D$. For any clause $c \in C$ containing variables $x_1$, $x_2$ and $x_3$, exactly one variable is true, say $x_1$ without loss of generality; we then add $l_{\{x_1\}}$ to $D$. Clearly, $D$ is dominating and we conclude by Observation~\ref{obs:size} that $\gamma (G_{\Phi}) = 3 \vert X \vert + \vert C \vert$.
Conversely, assume that $\gamma (G_{\Phi}) = 3 \vert X \vert + \vert C \vert$ and consider a minimum dominating set $D$ of $G_{\Phi}$. Then by Observation \ref{obs:size}, $\vert D \cap V(G_x) \vert = 3$ for any $x \in X$ and $\vert D \cap V(G_c) \vert = 1$ for any $c \in C$. Now, for a clause $c \in C$ containing variables $x_1$, $x_2$ and $x_3$, if $D \cap \{c,x_1,x_2,x_3\} \neq \emptyset$ then $D \cap V(K_c) = \emptyset$ and so, at least two vertices from $K_c$ are not dominated; thus, $D \cap \{c,x_1,x_2,x_3\} = \emptyset$. It follows that for any $x \in X$, $D \cap V(G_x)$ is a minimum dominating set of $G_x$ which by Observation \ref{obs:mingx} implies either $\{T_x^1,T_x^2,T_x^3\} \subset D$ or $D \cap \{T_x^1,T_x^2,T_x^3\} = \emptyset$; and we conclude similarly that either $\{F_x^1,F_x^2,F_x^3\} \subset D$ or $D \cap \{F_x^1,F_x^2,F_x^3\} = \emptyset$. Now given a clause $c \in C$ containing variables $x_1$, $x_2$ and $x_3$, since $D \cap \{c,x_1,x_2,x_3\} = \emptyset$, at least one true vertex adjacent to the clause vertex $c$ must belong to $D$, say $T_{x_1}^i$ for some $i \in\{1,2,3\}$ without loss of generality. It then follows that $\{T_{x_1}^1,T_{x_1}^2,T_{x_1}^3\} \subset D$ and $D \cap \{F_{x_1}^1,F_{x_1}^2,F_{x_1}^3\} = \emptyset$ which implies that $l_{\{x_1\}} \in D$ (either $x_1$ or a vertex from $K_c$ would otherwise not be dominated). But then, since $x_j$ for $j \neq 1$, must be dominated, it follows that $\{F_{x_j}^1,F_{x_j}^2,F_{x_j}^3\} \subset D$. We thus construct a truth assignment satisfying $\Phi$ as follows: for any variable $x \in X$, if $\{T_x^1,T_x^2,T_x^3\} \subset D$, set $x$ to true, otherwise set $x$ to false.
\end{claimproof}
\begin{nestedclaim}
\label{clm:eff}
$\gamma (G_{\Phi}) = 3 \vert X \vert + \vert C \vert$ if and only if every minimum dominating set of $G_{\Phi}$ is efficient.
\end{nestedclaim}
\begin{claimproof}
Assume that $\gamma (G_{\Phi}) = 3 \vert X \vert + \vert C \vert$ and consider a minimum dominating set $D$ of $G_{\Phi}$. Then by Observation~\ref{obs:size}, $\vert D \cap V(G_x) \vert = 3$ for any $x \in X$ and $\vert D \cap V(G_c) \vert = 1$ for any $c \in C$. As shown previously, it follows that for any clause $c \in C$ containing variables $x_1$, $x_2$ and $x_3$, $D \cap \{c,x_1,x_2,x_3\} = \emptyset$; and for any $x \in X$, either $\{T_x^1,T_x^2,T_x^3\} \subset D$ or $D \cap \{T_x^1,T_x^2,T_x^3\} = \emptyset$ (we conclude similarly with $\{F_x^1,F_x^2,F_x^3\}$ and $\{u_x^1,u_x^2,u_x^3\}$). Thus, for any $x \in X$, every vertex in $G_x$ is dominated by exactly one vertex. Now given a clause $c \in C$ containing variables $x_1$, $x_2$ and $x_3$, since the clause vertex $c$ does not belong to $D$, there exists at least one true vertex adjacent to $c$ which belongs to $D$. Suppose to the contrary that $c$ has strictly more than one neighbor in $D$, say $T_{x_1}^i$ and $T_{x_2}^j$ without loss of generality. Then, $\{T_{x_k}^1,T_{x_k}^2,T_{x_k}^3\} \subset D$ for $k=1,2$ which implies that $D \cap \{F_{x_1}^1,F_{x_1}^2,F_{x_1}^3,F_{x_2}^1,F_{x_2}^2,F_{x_2}^3\} = \emptyset$ as $\vert D \cap V(G_{x_k}) \vert = 3$ for $k=1,2$. It follows that the variable vertices $x_1$ and $x_2$ must be dominated by some vertices in $G_c$; but $\vert D \cap V(G_c) \vert = 1$ and $N[x_1] \cap N[x_2] = \emptyset$ and so, either $x_1$ or $x_2$ is not dominated. Thus, $c$ has exactly one neighbor in $D$, say $T_{x_1}^i$ without loss of generality. Then, necessarily $D \cap V(G_c) = \{l_{\{x_1\}}\}$ for otherwise either $x_1$ or some vertex in $K_c$ would not be dominated. But then, it is clear that every vertex in $G_c$ is dominated by exactly one vertex; thus, $D$ is efficient.
Conversely, assume that every minimum dominating set of $G_{\Phi}$ is efficient and consider a minimum dominating set $D$ of $G_{\Phi}$. If for some $x \in X$, $\vert D \cap V(G_x) \vert \geq 4$, then clearly at least one vertex in $G_x$ is dominated by two vertices in $D \cap V(G_x)$. Thus, $\vert D \cap V(G_x) \vert \leq 3$ for any $x \in X$ and we conclude by Observation \ref{obs:size} that in fact, equality holds. The next observation immediately follows from the fact that $D$ is efficient.
\begin{nestedobservation}
\label{obs:effgx}
For any $x \in X$, if $\vert D \cap V(G_x) \vert = 3$ then either $\{u_x^1,u_x^2,u_x^3\} \subset D$, $\{T_x^1,T_x^2,T_x^3\} \subset D$ or $\{F_x^1,F_x^2,F_x^3\} \subset D$.
\end{nestedobservation}
Now, consider a clause $c \in C$ containing variables $x_1$, $x_2$ and $x_3$ and suppose without loss of generality that $T_{x_1}^1$ is adjacent to $c$ (note that then the variable vertex $x_1$ is adjacent to $F_{x_1}^1$). If the clause vertex $c$ belongs to $D$ then, since $D$ is efficient, $T_{x_1}^1 \notin D$ and $u_{x_1}^1,F_{x_1}^1 \notin D$ ($T_{x_1}^1$ would otherwise be dominated by at least two vertices) which contradicts Observation \ref{obs:effgx}. Thus, no clause vertex belongs to $D$. Similarly, suppose that there exists $i \in \{1,2,3\}$ such that $x_i \in D$, say $x_1 \in D$ without loss of generality. Then, since $D$ is efficient, $F_{x_1}^1 \notin D$ and $T_{x_1}^1,u_{x_1}^2 \notin D$ ($F_{x_1}^1$ would otherwise be dominated by at least two vertices) which again contradicts Observation \ref{obs:effgx}. Thus, no variable vertex belongs to $D$. Finally, since $D$ is efficient, $\vert D \cap V(K_c) \vert \leq 1$ and so, $\vert D \cap V(G_c) \vert = 1$ by Observation \ref{obs:size}.
\end{claimproof}
Now by combining Claims \ref{clm:phisat} and \ref{clm:eff}, we obtain that $\Phi$ is satisfiable if and only if every minimum dominating set of $G_{\Phi}$ is efficient, that is, $G_{\Phi}$ is a \yes-instance for {\sc All Efficient MD}.
\end{proof}
\begin{theorem}
\label{thm:indepmd2}
{\sc All Independent MD} is $\mathsf{NP}$-hard when restricted to subcubic claw-free graphs.
\end{theorem}
\begin{proof}
We use a reduction from {\sc Positive Exactly 3-Bounded 1-In-3 3-Sat}, where each variable appears in exactly three clauses and only positively, each clause contains three positive literals, and we want a truth assignment such that each clause contains exactly one true literal. This problem is shown to be $\mathsf{NP}$-complete in \cite{moore}. Given an instance $\Phi$ of this problem, with variable set $X$ and clause set $C$, we construct an equivalent instance of {\sc All Independent MD} as follows. Consider the graph $G_{\Phi} = (V,E)$ constructed in the proof of Lemma \ref{lemma:efficient} and let $V_i = \{v \in V: d_{G_{\Phi}}(v) = i\}$ for $i =2,3$ (note that no vertex in $G_{\Phi}$ has degree one). Then, for any $v\ \in V_3$, we replace the vertex $v$ by the gadget $G_v$ depicted in Fig. \ref{fig:dv3}; and for any $v \in V_2$, we replace the vertex $v$ by the gadget $G_v$ depicted in Fig. \ref{fig:dv2}. We denote by $G'_{\Phi}$ the resulting graph. Note that $G'_{\Phi}$ is claw-free and $\Delta (G'_{\Phi}) = 3$ (also note that no vertex in $G'_{\Phi}$ has degree one). It is shown in the proof of Lemma \ref{lemma:efficient} that $\Phi$ is satisfiable if and only if $G_{\Phi}$ is a \yes-instance for {\sc All Efficient MD}; we here show that $G_{\Phi}$ is a \yes-instance for {\sc All Efficient MD} if and only if $G'_{\Phi}$ is a \yes-instance for {\sc All Independent MD}. To this end, we first prove the following.
\begin{figure}[htb]
\centering
\begin{subfigure}[b]{.45\textwidth}
\centering
\begin{tikzpicture}[scale=.5]
\node[circ,label=below:{\tiny $v_2$}] (v2) at (0,0) {};
\node[circ,label=below:{\tiny $u_2$}] (u2) at (1,0) {};
\node[circ,label=left:{\tiny $w_2$}] (w2) at (.5,1) {};
\node[circ,label=below:{\tiny $v_3$}] (v3) at (5,0) {};
\node[circ,label=below:{\tiny $w_3$}] (w3) at (4,0) {};
\node[circ,label=right:{\tiny $u_3$}] (u3) at (4.5,1) {};
\node[circ,label=right:{\tiny $v_1$}] (v1) at (2.5,5) {};
\node[circ,label=left:{\tiny $u_1$}] (u1) at (2,4) {};
\node[circ,label=right:{\tiny $w_1$}] (w1) at (3,4) {};
\draw[-] (v2) -- (v1) node[circ,pos=.5,label=left:{\tiny $b_1$}] {} node[circ,pos=.35,label=left:{\tiny $c_1$}] {} node[circ,pos=.65,label=left:{\tiny $a_1$}] {};
\draw[-] (v3) -- (v1) node[circ,pos=.5,label=right:{\tiny $b_3$}] {} node[circ,pos=.35,label=right:{\tiny $a_3$}] {} node[circ,pos=.65,label=right:{\tiny $c_3$}] {};
\draw[-] (v2) -- (v3) node[circ,pos=.5,label=below:{\tiny $b_2$}] {} node[circ,pos=.35,label=below:{\tiny $a_2$}] {} node[circ,pos=.65,label=below:{\tiny $c_2$}] {};
\draw[-] (u1) -- (w1)
(u2) -- (w2)
(u3) -- (w3);
\draw[-] (-.5,-.5) -- (v2) node[near start,left] {\tiny $e_2$};
\draw[-] (5.5,-.5) -- (v3) node[near start,right] {\tiny $e_3$};
\draw[-] (2.5, 5.5) -- (v1) node[near start,left] {\tiny $e_1$};
\draw[-Implies,line width=.6pt,double distance=2pt] (-1,2.5) -- (0,2.5);
\node[circ,label=below:{\tiny $v$}] (v) at (-3,2.5) {};
\node[circ] (e2) at (-4,1.5) {};
\node[circ] (e3) at (-2,1.5) {};
\node[circ] (e1) at (-3,3.5) {};
\draw[-] (v) -- (e1) node[midway,right] {\tiny $e_1$};
\draw[-] (v) -- (e2) node[midway,left] {\tiny $e_2$};
\draw[-] (v) -- (e3) node[midway,right] {\tiny $e_3$};
\end{tikzpicture}
\caption{$d_{G_{\Phi}}(v) = 3$.}
\label{fig:dv3}
\end{subfigure}
\hspace*{.5cm}
\begin{subfigure}[b]{.45\textwidth}
\centering
\begin{tikzpicture}[scale=.6]
\node[circ,label=above:{\tiny $v$}] (v) at (1,0) {};
\node[circ] (1) at (0,0) {};
\node[circ] (2) at (2,0) {};
\draw[-] (1) -- (v) node[midway,above] {\tiny $e_1$};
\draw[-] (v) -- (2) node[midway,above] {\tiny $e_2$};
\draw[-Implies,line width=.6pt,double distance=2pt] (2.5,0) -- (3.5,0);
\node[circ,label=above:{\tiny $v_1$}] (v1) at (4.5,0) {};
\node[circ,label=above:{\tiny $u_1$}] at (5,0) {};
\node[circ,label=above:{\tiny $a_1$}] at (5.5,0) {};
\node[circ,label=above:{\tiny $b_1$}] at (6,0) {};
\node[circ,label=above:{\tiny $c_1$}] at (6.5,0) {};
\node[circ,label=above:{\tiny $u_2$}] at (7,0) {};
\node[circ,label=above:{\tiny $v_2$}] (v2) at (7.5,0) {};
\draw[-] (4,0) -- (8,0) node[pos=0,above] {\tiny $e_1$} node[pos=1,above] {\tiny $e_2$};
\node[invisible] at (3,-3) {};
\end{tikzpicture}
\caption{$d_{G_{\Phi}}(v) = 2$.}
\label{fig:dv2}
\end{subfigure}
\caption{The gadget $G_v$.}
\end{figure}
\begin{nestedclaim}
\label{clm:gammas}
$\gamma (G'_{\Phi}) = \gamma (G_{\Phi}) + 5 \vert V_3 \vert + 2 \vert V_2 \vert$.
\end{nestedclaim}
\begin{claimproof}
Let $D$ be a minimum dominating set of $G_{\Phi}$. We construct a dominating set $D'$ of $G'_{\Phi}$ as follows. For any $v \in D$, if $v \in V_3$, add $v_1$, $v_2$, $v_3$, $b_1$, $b_2$, and $b_3$ to $D'$; otherwise, add $v_1$, $v_2$ and $b_1$ to $D'$. For any $v \in V \setminus D$, let $u \in D$ be a neighbor of $v$, say $e_1 =uv$ without loss of generality. Then, if $v \in V_3$, add $a_1$, $c_3$, $w_2$, $u_3$ and $b_2$ to $D'$; otherwise, add $a_1$ and $u_2$ to $D'$. Clearly, $D'$ is dominating and $\vert D' \vert = \gamma (G_{\Phi}) + 5 \vert V_3 \vert + 2 \vert V_2 \vert \geq \gamma (G'_{\Phi})$.
\begin{nestedobservation}
\label{obs:size2}
For any dominating set $D'$ of $G'_{\Phi}$, the following holds.
\begin{itemize}
\item[(i)] For any $v \in V_2$, $\vert D' \cap V(G_v) \vert \geq 2$. Moreover, if equality holds then $D' \cap \{v_1,v_2\} = \emptyset$ and there exists $j \in \{1,2\}$ such that $u_j \notin D'$.
\item[(ii)] For any $v \in V_3$, $\vert D' \cap V(G_v) \vert \geq 5$. Moreover, if equality holds then $D' \cap \{v_1,v_2,v_3\} = \emptyset$ and there exists $j \in \{1,2,3\}$ such that $D' \cap \{u_j,v_j,w_j\} = \emptyset$.
\end{itemize}
\end{nestedobservation}
(i) Clearly, $D' \cap \{v_1,u_1,a_1\} \neq \emptyset$ and $D' \cap \{c_1,u_2,v_2\} \neq \emptyset$ as $u_1$ and $u_2$ must be dominated. Thus, $\vert D' \cap V(G_v) \vert \geq 2$. Now, suppose that $D' \cap \{v_1,v_2\} \neq \emptyset$ say $v_1 \in D'$ without loss of generality. Then $D' \cap \{u_1,a_1,b_1\} \neq \emptyset$ as $a_1$ must be dominated which implies that $\vert D' \cap V(G_v) \vert \geq 3$ (recall that $D' \cap \{c_1,u_2,v_2\} \neq \emptyset$). Similarly, if both $u_1$ and $u_2$ belong to $D'$, then $\vert D' \cap V(G_v) \vert \geq 3$ as $D' \cap \{a_1,b_1,c_1\} \neq \emptyset$ ($b_1$ would otherwise not be dominated).\\
(ii) Clearly, for any $i \in \{1,2,3\}$, $D' \cap \{a_i,b_i,c_i\} \neq \emptyset$ as $b_i$ must be dominated. Now, if there exists $j \in \{1,2,3\}$ such that $D' \cap \{u_j,v_j,w_j\} = \emptyset$, say $j= 1$ without loss of generality, then $a_1, c_3 \in D'$ (one of $u_1$ and $w_1$ would otherwise not be dominated). But then, $D' \cap \{b_1,c_1,w_2\} \neq \emptyset$ as $c_1$ must be dominated, and $D' \cap \{a_3,b_3,u_3\} \neq \emptyset$ as $a_3$ must be dominated; and so, $\vert D' \cap V(G_v) \vert \geq 5$ (recall that $D' \cap \{a_2,b_2,c_2\} \neq \emptyset$). Otherwise, for any $j \in \{1,2,3\}$, $D' \cap \{u_j,v_j,w_j\} \neq \emptyset$ which implies that $\vert D' \cap V(G_v) \vert \geq 6$.
Now suppose that $D' \cap \{v_1,v_2,v_3\} \neq \emptyset$, say $v_1 \in D'$ without loss of generality. If there exists $j \neq 1$ such that $D' \cap \{u_j,v_j,w_j\} = \emptyset$, say $j = 2$ without loss of generality, then $c_1,a_2 \in D'$ (one of $u_2$ and $w_2$ would otherwise not be dominated). But then, $D' \cap \{a_1,b_1,u_1\} \neq \emptyset$ as $a_1$ should be dominated, and $D' \cap \{b_2,c_2,w_3\} \neq \emptyset$ as $c_2$ must be dominated. Since $D' \cap \{a_3,b_3,c_3\} \neq \emptyset$, it then follows that $\vert D' \cap V(G_v) \vert \geq 6$. Otherwise, $D' \cap \{u_j,v_j,w_j\} \neq \emptyset$ for any $j \in \{1,2,3\}$ and so, $\vert D' \cap V(G_v) \vert \geq 6$ (recall that $D' \cap \{a_i,b_i,c_i\} \neq \emptyset$ for any $i \in \{1,2,3\}$). $\diamond$
\begin{nestedobservation}
\label{obs:min}
If $D'$ is a minimum dominating set of $G'_{\Phi}$, then $\vert D' \cap V(G_v) \vert \leq 3$ for any $v \in V_2$ and $\vert D' \cap V(G_v) \vert \leq 6$ for any $v \in V_3$.
\end{nestedobservation}
Indeed, if $v \in V_2$ then $\{v_1,b_1,v_2\}$ is a dominating set of $V(G_v)$; and if $v \in V_3$, then $\{v_1,v_2,v_3,b_1,b_2,b_3\}$ is a dominating set of $V(G_v)$. $\diamond$ \\
Now, consider a minimum dominating set $D'$ of $G'_{\Phi}$ and let $D_3 = \{v \in V_3: \vert D' \cap V(G_v) \vert = 6\}$ and $D_2 = \{v \in V_2 : \vert D' \cap V(G_v) \vert = 3\}$. We claim that $D = D_3 \cup D_2$ is a dominating set of $G_{\Phi}$. Indeed, consider a vertex $v \in V \setminus D$. We distinguish two cases depending on whether $v \in V_2$ or $v \in V_3$.
\noindent
\textbf{Case 1.} $v \in V_2$. Then $\vert D' \cap V(G_v) \vert = 2$ by construction, which by Observation \ref{obs:size2}(i) implies that there exists $j \in \{1,2\}$ such that $D' \cap \{v_j,u_j\} = \emptyset$ , say $j = 1$ without loss of generality. Since $v_1$ must be dominated, $v_1$ must then have a neighbor $x_i$ belonging to $D'$, for some vertex $x$ adjacent to $v$ in $G_{\Phi}$. But then, it follows from Observation \ref{obs:size2} that $\vert D' \cap V(G_x) \vert > 2$ if $x \in V_2$, and $\vert D' \cap V(G_x) \vert > 5$ if $x \in V_3$ (indeed, $x_i \in D'$); thus, $x \in D$.
\noindent
\textbf{Case 2.} $v \in V_3$. Then $\vert D' \cap V(G_v) \vert = 5$ by construction, which by Observation \ref{obs:size2}(ii) implies that there exists $j \in \{1,2,3\}$ such that $D' \cap \{u_j,v_j,w_j\} = \emptyset$, say $j = 1$ without loss of generality. Since $v_1$ must be dominated, $v_1$ must then have a neighbor $x_i$ belonging to $D'$, for some vertex $x$ adjacent to $v$ in $G_{\Phi}$. But then, it follows from Observation \ref{obs:size2} that $\vert D' \cap V(G_x) \vert > 2$ if $x \in V_2$, and $\vert D' \cap V(G_x) \vert > 5$ if $x \in V_3$ (indeed, $x_i \in D'$); thus, $x \in D$.
Hence, $D$ is a dominating set of $G_{\Phi}$. Moreover, it follows from Observations \ref{obs:size2} and \ref{obs:min} that $\vert D' \vert = 6 \vert D_3 \vert + 5 \vert V_3 \setminus D_3 \vert + 3 \vert D_2 \vert + 2 \vert V_2 \setminus D_2 \vert = \vert D \vert+ 5 \vert V_3 \vert + 2 \vert V_2 \vert$. Thus, $\gamma (G'_{\Phi}) = \vert D' \vert \geq \gamma (G_{\Phi}) + 5 \vert V_3 \vert + 2 \vert V_2 \vert$ and so, $\gamma (G'_{\Phi}) = \gamma (G_{\Phi}) + 5 \vert V_3 \vert + 2 \vert V_2 \vert$. Finally note that this implies that the constructed dominated set $D$ is in fact minimum.
\end{claimproof}
We next show that $G_{\Phi}$ is a \yes-instance for {\sc All Efficient MD} if and only if $G'_{\Phi}$ is a \yes-instance for {\sc All Independent MD}. Since $\Phi$ is satisfiable if and only if $G_{\Phi}$ is a \yes-instance for {\sc All Efficient MD}, as shown in the proof of Lemma \ref{lemma:efficient}, this would conclude the proof.\\
Assume first that $G_{\Phi}$ is a \yes-instance for {\sc All Efficient MD} and suppose to the contrary that $G'_{\Phi}$ is a \no-instance for {\sc All Independent MD} that is, $G'_{\Phi}$ has a minimum dominating set $D'$ which is not independent. Denote by $D$ the minimum dominating set of $G_{\Phi}$ constructed from $D'$ according to the proof of Claim \ref{clm:gammas}. Let us show that $D$ is not efficient. Consider two adjacent vertices $a,b \in D'$. If $a$ and $b$ belong to gadgets $G_x$ and $G_v$ respectively, for two adjacent vertices $x$ and $v$ in $G_{\Phi}$, that is, $a$ is of the form $x_i$ and $b$ is of the form $v_j$, then by Observation \ref{obs:size2} $x,v \in D$ and so, $D$ is not efficient. Thus, it must be that $a$ and $b$ both belong to the same gadget $G_v$, for some $v \in V_2 \cup V_3$. We distinguish cases depending on whether $v \in V_2$ or $v \in V_3$.\\
\noindent
\textbf{Case 1.} $v \in V_2$. Suppose that $\vert D' \cap V(G_v) \vert = 2$. Then by Observation \ref{obs:size2}(i), $D' \cap \{v_1,v_2\} = \emptyset$ and there exists $j \in \{1,2\}$ such that $u_j \notin D'$, say $u_1 \notin D'$ without loss of generality. Then, necessarily $a_1 \in D'$ ($u_1$ would otherwise not be dominated) and so, $b_1 \in D'$ as $D' \cap V(G_v)$ contains an edge and $\vert D' \cap V(G_v) \vert = 2$ by assumption; but then, $u_2$ is not dominated. Thus, $\vert D' \cap V(G_v) \vert \geq 3$ and we conclude by Observation \ref{obs:min} that in fact, equality holds. Note that consequently, $v \in D$. We claim that then, $\vert D' \cap \{v_1,v_2\} \vert \leq 1$. Indeed, if both $v_1$ and $v_2$ belong to $D'$, then $b_1 \in D'$ (since $\vert D' \cap V(G_v) \vert = 3$, $D'$ would otherwise not be dominating) which contradicts that fact that $D' \cap V(G_v)$ contains an edge. Thus, $\vert D' \cap \{v_1,v_2\} \vert \leq 1$ and we may assume without loss of generality that $v_2 \notin D'$. Let $x_i \neq u_2$ be the other neighbor of $v_2$ in $G'_{\Phi}$, where $x$ is a neighbor of $v$ in $G_{\Phi}$.
Suppose first that $x \in V_2$. Then, $\vert D' \cap V(G_x) \vert = 2$ for otherwise $x$ would belong to $D$ and so, $D$ would contain the edge $vx$. It then follows from Observation \ref{obs:size2}(i) that there exists $j \in \{1,2\}$ such that $D' \cap \{x_j,y_j\} = \emptyset$, where $y_j$ is the neighbor of $x_j$ in $V(G_x)$. We claim that $j \neq i$; indeed, if $j = i$, since $v_2,x_i,y_i \notin D'$, $x_i$ would not be dominated. But then, $x_j$ must have a neighbor $t_k \neq y_j$, for some vertex $t$ adjacent to $x$ in $G_{\Phi}$, which belongs to $D'$; it then follows from Observation \ref{obs:size2} and the construction of $D$ that $t \in D$ and so, $x$ has two neighbors in $D$, namely $v$ and $t$, a contradiction.
Second, suppose that $x \in V_3$. Then, $\vert D' \cap V(G_x) \vert = 5$ for otherwise $x$ would belong to $D$ and so, $D$ would contain the edge $vx$. It then follows from Observation \ref{obs:size2}(ii) that there exists $j \in \{1,2,3\}$ such that $D' \cap \{x_j,y_j,z_j\} = \emptyset$, where $y_j$ and $z_j$ are the two neighbors of $x_j$ in $V(G_x)$. We claim that $j \neq i$; indeed, if $j = i$, since $v_2,x_i,y_i,z_i \notin D'$, $x_i$ would not be dominated. But then, $x_j$ must have a neighbor $t_k \neq y_j,z_j$, for some vertex $t$ adjacent to $x$ in $G_{\Phi}$, which belongs to $D'$; it then follows from Observation \ref{obs:size2} and the construction of $D$ that $t \in D$ and so, $x$ has two neighbors in $D$, namely $v$ and $t$, a contradiction.\\
\noindent
\textbf{Case 2.} $v \in V_3$. Suppose that $\vert D' \cap V(G_v) \vert = 5$. Then, by Observation \ref{obs:size2}(ii), $D' \cap \{v_1,v_2,v_3\} = \emptyset$ and there exists $j \in \{1,2,3\}$ such that $D' \cap \{u_j,v_j,w_j\} = \emptyset$, say $j= 1$ without loss of generality. Then, $a_1,c_3 \in D'$ (one of $u_1$ and $w_1$ would otherwise not be dominated), $D' \cap \{c_1,w_2,u_2\} \neq \emptyset$ ($w_2$ would otherwise not be dominated), $D' \cap \{a_3,u_3,w_3\} \neq \emptyset$ ($u_3$ would otherwise not be dominated) and $D' \cap \{a_2,b_2,c_2\} \neq \emptyset$ ($b_2$ would otherwise not be dominated); in particular, $b_1,b_3 \notin D'$ as $\vert D' \cap V(G_v) \vert = 5$ by assumption. Since $D' \cap V(G_v)$ contains an edge, it follows that either $u_2,a_2 \in D'$ or $c_2,w_3 \in D'$; but then, either $c_1$ or $a_3$ is not dominated, a contradiction. Thus, $\vert D' \cap V(G_v) \vert \geq 6$ and we conclude by Observation \ref{obs:min} that in fact, equality holds. Note that consequently, $v \in D$. It follows that $\{v_1,v_2,v_3\} \not\subset D'$ for otherwise $D' \cap V(G_v) = \{v_1,v_2,v_3,b_1,b_2,b_3\}$ and so, $D' \cap V(G_v)$ contains no edge. Thus, we may assume without loss of generality that $v_1 \notin D'$. Denoting by $x_i \neq u_1,w_1$ the third neighbor of $v_1$, where $x$ is a neighbor of $v$ in $G_{\Phi}$, we then proceed as in the previous case to conclude that $x$ has two neighbors in $D$.\\
Thus, $D$ is not efficient, which contradicts the fact that $G_{\Phi}$ is a \yes-instance for {\sc All Efficient MD}. Hence, every minimum dominating set of $G'_{\Phi}$ is independent i.e., $G'_{\Phi}$ is a \yes-instance for {\sc All Independent MD}.\\
Conversely, assume that $G'_{\Phi}$ is a \yes-instance for {\sc All Independent MD} and suppose to the contrary that $G_{\Phi}$ is a \no-instance for {\sc All Efficient MD} that is, $G_{\Phi}$ has a minimum dominating set $D$ which is not efficient. Let us show that $D$ either contains an edge or can be transformed into a minimum dominating set of $G_{\Phi}$ containing an edge. Since any minimum dominating set of $G'_{\Phi}$ constructed according to the proof of Claim~\ref{clm:gammas} from a minimum dominating set of $G_{\Phi}$ containing an edge also contains an edge, this would lead to a contradiction and thus conclude the proof.
Suppose that $D$ contains no edge. Since $D$ is not efficient, there must then exist a vertex $v \in V \setminus D$ such that $v$ has two neighbors in $D$. We distinguish cases depending on which type of vertex $v$ is.\\
\noindent
\textbf{Case 1.} \textit{$v$ is a variable vertex.} Suppose that $v = x_1$ in some clause gadget $G_c$, where $c \in C$ contains variables $x_1$, $x_2$ and $x_3$, and assume without loss of generality that $x_1$ is adjacent to $F_{x_1}^1$. By assumption, $F_{x_1}^1,l_{\{x_1\}} \in D$ which implies that $D \cap \{l_{\{x_2\}},l_{\{x_3\}},T_{x_1}^1,u_{x_1}^2\} = \emptyset$ ($D$ would otherwise contain an edge). We may then assume that $F_{x_2}^i$ and $F_{x_3}^j$, where $F_{x_2}^ix_2, F_{x_3}^jx_3 \in E(G_{\Phi})$, belong to $D$; indeed, since $x_2$ (resp. $x_3$) must be dominated, $D \cap \{F_{x_2}^i,x_2\} \neq \emptyset$ (resp. $D \cap \{F_{x_3}^j,x_3\} \neq \emptyset$) and since $l_{\{x_1\}} \in D$, $(D \setminus \{x_2\}) \cup \{F_{x_2}^i\}$ (resp. $(D \setminus \{x_3\}) \cup \{F_{x_3}^j\}$) remains dominating. We may then assume that $T_{x_2}^i,T_{x_3}^j \notin D$ for otherwise $D$ would contain an edge. It follows that $c \in D$ ($c$ would otherwise not be dominated); but then, it suffices to consider $(D \setminus \{c\}) \cup \{T_{x_1}^1\}$ to obtain a minimum dominating set of $G_{\Phi}$ containing an edge.\\
\noindent
\textbf{Case 2.} \textit{$v = u_x^i$ for some variable $x \in X$ and $i \in \{1,2,3\}$.} Assume without loss of generality that $i = 1$. Then $T_x^1,F_x^3 \in D$ by assumption, which implies that $F_x^1,T_x^3 \notin D$ ($D$ would otherwise contain an edge). But then, $\vert D \cap \{u_x^2,F_x^2,T_x^2,u_x^3\} \vert \geq 2$ as $u_x^2$ and $u_x^3$ must be dominated; and so, $(D \setminus \{u_x^3,F_x^2,T_x^2,u_x^2\}) \cup \{F_x^2,T_x^2\}$ is a dominating set of $G_{\Phi}$ of size at most that of $D$ which contains an edge.\\
\noindent
\textbf{Case 3.} \textit{$v$ is a clause vertex.} Suppose that $v = c$ for some clause $c \in C$ containing variables $x_1$, $x_2$ and $x_3$, and assume without loss of generality that $c$ is adjacent to $T_{x_i}^1$ for any $i \in \{1,2,3\}$. By assumption $c$ has two neighbors in $D$, say $T_{x_1}^1$ and $T_{x_2}^1$ without loss of generality. Since $D$ contains no edge, it follows that $F_{x_1}^1,F_{x_2}^1 \notin D$; but then, $\vert D \cap \{x_1,x_2,l_{\{x_1\}},l_{\{x_2\}}\} \vert \geq 2$ (one of $x_1$ and $x_2$ would otherwise not be dominated) and so, $(D \setminus \{x_1,x_2,l_{\{x_1\}},l_{\{x_2\}}\}) \cup \{l_{\{x_1\}},l_{\{x_2\}}\}$ is a dominating set of $G_{\Phi}$ of size at most that of $D$ which contains an edge.\\
\noindent
\textbf{Case 4.} \textit{$v \in V(K_c)$ for some clause $c \in C$.} Denote by $x_1$, $x_2$ and $x_3$ the variables contained in $c$ and assume without loss of generality that $v = l_{\{x_1\}}$. Since $l_{\{x_1\}}$ has two neighbors in $D$ and $D$ contains no edge, necessarily $x_1 \in D$. Now assume without loss of generality that $x_1$ is adjacent to $F_{x_1}^1$ (note that by construction, $c$ is then adjacent to $T_{x_1}^1$). Then, $F_{x_1}^1 \notin D$ ($D$ would otherwise contain an edge) and $T_{x_1}^1,u_{x_1}^2 \notin D$ for otherwise $(D \setminus \{x_1\}) \cup \{F_{x_1}^1\}$ would be a minimum dominating set of $G_{\Phi}$ containing an edge (recall that by assumption, $D \cap V(K_c) \neq \emptyset$). It follows that $T_{x_1}^2 \in D$ ($u_{x_1}^2$ would otherwise not be dominated) and so, $F_{x_1}^2 \notin D$ as $D$ contains no edge. It follows that $\vert D \cap \{u_{x_1}^1,F_{x_1}^3,T_{x_1}^3,u_{x_1}^3\} \vert \geq 2$ as $u_{x_1}^1$ and $u_{x_1}^3$ must be dominated. Now if $c$ belongs to $D$, then $(D \setminus \{u_{x_1}^1,F_{x_1}^3,T_{x_1}^3,u_{x_1}^3\}) \cup \{F_{x_1}^3,T_{x_1}^3\}$ is a dominating set of $G_{\Phi}$ of size at most that of $D$ which contains an edge. Thus, we may assume that $c \notin D$ which implies that $u_{x_1}^1 \in D$ ($T_{x_1}^1$ would otherwise not be dominated) and that there exists $j \in \{2,3\}$ such that $T_{x_j}^i \in D$ with $cT_{x_j}^i \in E(G_{\Phi})$ ($c$ would otherwise not be dominated). Now, since $u_{x_1}^3$ must be dominated and $F_{x_1}^2 \notin D$, it follows that $D \cap \{u_{x_1}^3,T_{x_1}^3\} \neq \emptyset$ and we may assume that in fact $T_{x_1}^3 \in D$ (recall that $T_{x_1}^2 \in D$ and so, $F_{x_1}^2$ is dominated). But then, by considering the minimum dominating set $(D \setminus \{u_{x_1}^1\}) \cup \{T_{x_1}^1\}$, we fall back into Case 3 as $c$ is then dominated by both $T_{x_1}^1$ and $T_{x_j}^i$.\\
\noindent
\textbf{Case 5.} \textit{$v$ is a true vertex.} Assume without loss of generality that $v = T_x^1$ for some variable $x \in X$. Suppose first that $u_x^1 \in D$. Then since $D$ contains no edge, $F_x^3 \notin D$; furthermore, denoting by $t \neq u_x^1,T_x^3$ the variable vertex adjacent to $F_x^3$, we also have $t \notin D$ for otherwise $(D \setminus \{u_x^1\}) \cup \{F_x^3\}$ would be a minimum dominating set containing an edge (recall that $T_x^1$ has two neighbors in $D$ by assumption). But then, since $t$ must be dominated, it follows that the second neighbor of $t$ must belong to $D$; and so, by considering the minimum dominating set $(D \setminus \{u_x^1\}) \cup \{F_x^3\}$, we fall back into Case 1 as the variable vertex $t$ is then dominated by two vertices. Thus, we may assume that $u_x^1 \notin D$ which implies that $F_x^1,c \in D$, where $c$ is the clause vertex adjacent to $T_x^1$. Now, denote by $x_1 = x$, $x_2$ and $x_3$ the variables contained in $c$ (note that by construction, $x_1$ is then adjacent to $F_{x_1}^1$). Then, $x_1 \notin D$ ($D$ would otherwise contain the edge $F_{x_1}^1x_1$) and we may assume that $l_{\{x_1\}} \notin D$ (we otherwise fall back into Case 1 as $x_1$ would then have two neighbors in $D$). It follows that $D \cap V(K_c) \neq \emptyset$ ($l_{\{x_1\}}$ would otherwise not be dominated) and since $D$ contains no edge, in fact $\vert D \cap V(K_c) \vert = 1$, say $l_{\{x_2\}} \in D$ without loss of generality. Then, $x_2 \notin D$ as $D$ contains no edge and we may assume that $F_{x_2}^j \notin D$, where $F_{x_2}^j$ is the false vertex adjacent to $x_2$, for otherwise we fall back into Case 1. In the following, we assume without loss of generality that $j = 1$, that is, $x_2$ is adjacent to $F_{x_2}^1$ (note that by construction, $c$ is then adjacent to $T_{x_2}^1$). Now, since the clause vertex $c$ belongs to $D$ by assumption, it follows that $T_{x_2}^1 \notin D$ ($D$ would otherwise contain the edge $cT_{x_2}^1$); and as shown previously, we may assume that $u_{x_2}^1 \notin D$ (indeed, $T_{x_2}^1$ would otherwise have two neighbors in $D$, namely $c$ and $u_{x_2}^1$, but this case has already been dealt with). Then, since $u_{x_2}^1$ and $F_{x_2}^1$ must be dominated, necessarily $F_{x_2}^3$ and $u_{x_2}^2$ belong to $D$ (recall that $D \cap \{x_2,F_{x_2}^1,T_{x_2}^1,u_{x_2}^1 \} = \emptyset$) which implies that $T_{x_2}^3,T_{x_2}^2 \notin D$ ($D$ would otherwise contain an edge). Now since $u_{x_2}^3$ must be dominated, $D \cap \{u_{x_2}^3,F_{x_2}^2\} \neq \emptyset$ and we may assume without loss of generality that in fact, $F_{x_2}^2 \in D$. But then, by considering the minimum dominating set $(D \setminus \{u_{x_2}^2\}) \cup \{F_{x_2}^1\}$, we fall back into Case~1 as $x_2$ is then dominated by two vertices.\\
\noindent
\textbf{Case 6.} \textit{$v$ is a false vertex.} Assume without loss of generality that $v = F_{x_1}^1$ for some variable $x_1 \in X$ and let $c \in C$ be the clause whose corresponding clause vertex is adjacent to $T_{x_1}^1$. Denote by $x_2$ and $x_3$ the two other variables contained in $c$. Suppose first that $x_1 \in D$. Then, we may assume that $D \cap V(K_c) = \emptyset$ for otherwise either $D$ contains an edge (if $l_{\{x_1\}} \in D$) or we fall back into Case 4 ($l_{\{x_1\}}$ would indeed have two neighbors in $D$). Since every vertex of $K_c$ must be dominated, it then follows that $x_2,x_3 \in D$; but then, by considering the minimum dominating set $(D \setminus \{x_1\}) \cup \{l_{\{x_1\}}\}$ (recall that $F_{x_1}^1$ has two neighbors in $D$ by assumption), we fall back into Case 4 as $l_{\{x_2\}}$ is then dominated by two vertices. Thus, we may assume that $x_1 \notin D$ which implies that $T_{x_1}^1,u_{x_1}^2 \in D$ and $T_{x_1}^2,u_{x_1}^1 \notin D$ as $D$ contains no edge. Now, denote by $c'$ the clause vertex adjacent to $T_{x_1}^2$. Then, we may assume that $c' \notin D$ for otherwise we fall back into Case 5 ($T_{x_1}^2$ would indeed have two neighbors in $D$); but then, there must exist a true vertex, different from $T_{x_1}^2$, adjacent to $c'$ and belonging to $D$ ($c'$ would otherwise not be dominated) and by considering the minimum dominating set $(D \setminus \{u_{x_1}^2\}) \cup \{T_{x_1}^2\}$, we then fall back into Case 3 ($c'$ would indeed be dominated by two vertices).\\
Consequently, $G_{\Phi}$ has a minimum dominating set which is not independent, which implies that $G'_{\Phi}$ also has a minimum dominating set which is not independent, a contradiction which concludes the proof.
\end{proof}
Theorem \ref{thm:clawfree} now easily follows from Theorem \ref{thm:indepmd2} and Fact \ref{obs:equi}.
\section{The proof of Theorem \ref{thm:p7free}}
In this section, we show that \contracd{} is $\mathsf{coNP}$-hard when restricted to $P_7$-free graphs. To this end, we prove the following.
\begin{theorem}
\label{thm:indp7free}
{\sc All Independent MD} is $\mathsf{NP}$-hard when restricted to $P_7$-free graphs.
\end{theorem}
\begin{proof}
We reduce from {\sc 3-Sat}: given an instance $\Phi$ of this problem, with variable set $X$ and clause set $C$, we construct an equivalent instance of {\sc All Independent MD} as follows. For any variable $x \in X$, we introduce a copy of $C_3$, which we denote by $G_x$, with one distinguished \textit{positive literal vertex} $x$ and one distinguished \textit{negative literal vertex} $\bar{x}$; in the following, we denote by $u_x$ the third vertex in $G_x$. For any clause $c \in C$, we introduce a \textit{clause vertex} $c$; we then add an edge between $c$ and the (positive or negative) literal vertices whose corresponding literal occurs in $c$. Finally, we add an edge between any two clause vertices so that the set of clause vertices induces a clique denoted by $K$ in the following. We denote by $G_{\Phi}$ the resulting graph.
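The construction of $G_{\Phi}$ is purely mechanical and is easily automated; the following short Python sketch (using the \texttt{networkx} library, with ad hoc function and vertex names) is only meant to illustrate the gadget structure and is not part of the reduction itself.
\begin{verbatim}
# Illustrative sketch: build G_Phi from a 3-SAT instance (ad hoc encoding).
# A literal is a pair (variable, True) for x and (variable, False) for not-x.
import itertools
import networkx as nx

def build_G_Phi(variables, clauses):
    G = nx.Graph()
    for x in variables:                      # variable gadget G_x: a triangle
        pos, neg, u = ("pos", x), ("neg", x), ("u", x)
        G.add_edges_from([(pos, neg), (neg, u), (u, pos)])
    for idx, clause in enumerate(clauses):   # one clause vertex per clause
        c = ("clause", idx)
        G.add_node(c)
        for (x, positive) in clause:         # join c to its literal vertices
            G.add_edge(c, ("pos", x) if positive else ("neg", x))
    for a, b in itertools.combinations(range(len(clauses)), 2):
        G.add_edge(("clause", a), ("clause", b))   # clause vertices form the clique K
    return G

# Example: (x1 or x2 or not x3) and (not x1 or x2 or x3)
phi = [[("x1", True), ("x2", True), ("x3", False)],
       [("x1", False), ("x2", True), ("x3", True)]]
G_Phi = build_G_Phi(["x1", "x2", "x3"], phi)
\end{verbatim}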
\begin{nestedobservation}
\label{obs:size4}
For any dominating set $D$ of $G_{\Phi}$ and any variable $x \in X$, $\vert D \cap V(G_x) \vert \geq 1$. In particular, $\gamma (G_{\Phi}) \geq \vert X \vert$.
\end{nestedobservation}
\begin{nestedclaim}
\label{clm:phisat4}
$\Phi$ is satisfiable if and only if $\gamma (G_{\Phi}) = \vert X \vert$.
\end{nestedclaim}
\begin{claimproof}
Assume that $\Phi$ is satisfiable and consider a truth assignment satisfying $\Phi$. We construct a dominating set $D$ of $G_{\Phi}$ as follows. For any variable $x \in X$, if $x$ is true, add the positive literal vertex $x$ to $D$; otherwise, add the negative literal vertex $\bar{x}$ to $D$. Clearly, $D$ is dominating and we conclude by Observation \ref{obs:size4} that $\gamma (G_{\Phi}) = \vert X \vert$.
Conversely, assume that $\gamma (G_{\Phi}) = \vert X \vert$ and consider a minimum dominating set $D$ of $G_{\Phi}$. Then by Observation \ref{obs:size4}, $\vert D \cap V(G_x) \vert = 1$ for any $x \in X$. It follows that $D \cap K = \emptyset$ and so every clause vertex must be adjacent to some (positive or negative) literal vertex belonging to $D$. We thus construct a truth assignment satisfying $\Phi$ as follows: for any variable $x \in X$, if the positive literal vertex $x$ belongs to $D$, set $x$ to true; otherwise, set $x$ to false. Since every clause vertex is adjacent to a literal vertex of $D$ whose corresponding literal it contains, this assignment indeed satisfies every clause of $\Phi$.
\end{claimproof}
\begin{nestedclaim}
\label{clm:indep2}
$\gamma (G_{\Phi}) = \vert X \vert$ if and only if every minimum dominating set of $G_{\Phi}$ is independent.
\end{nestedclaim}
\begin{claimproof}
Assume that $\gamma (G_{\Phi}) = \vert X \vert$ and consider a minimum dominating set $D$ of $G_{\Phi}$. Then by Observation \ref{obs:size4}, $\vert D \cap V(G_x) \vert = 1$ for any $x \in X$. It follows that $D \cap K = \emptyset$ and since $N[V(G_x)] \cap N[V(G_{x'})] \subset K$ for any two $x,x' \in X$, $D$ is independent.
Conversely, assume that every minimum dominating set of $G_{\Phi}$ is independent and consider such a set $D$. Since $D$ is independent, $\vert D \cap V(G_x) \vert \leq 1$ for any $x \in X$ and we conclude by Observation \ref{obs:size4} that in fact, equality holds. Now suppose that there exists $c \in C$, containing variables $x_1$, $x_2$ and $x_3$, such that the corresponding clause vertex $c$ belongs to $D$ (note that since $D$ is independent, $\vert D \cap K \vert \leq 1$). Assume without loss of generality that $x_1$ occurs positively in $c$, that is, $c$ is adjacent to the positive literal vertex $x_1$. Then, $x_1 \notin D$ since $D$ is independent and so, either $u_{x_1} \in D$ or $\bar{x_1} \in D$. In the first case, we immediately obtain that $(D \setminus \{u_{x_1}\}) \cup \{x_1\}$ is a minimum dominating set of $G_{\Phi}$ containing an edge, a contradiction. In the second case, every clause vertex dominated by $\bar{x_1}$ is also dominated by $c$ (which dominates all of $K$), while the vertices of $V(G_{x_1})$ are dominated by $x_1$; thus, $(D \setminus \{\bar{x_1}\}) \cup \{x_1\}$ is a minimum dominating set of $G_{\Phi}$ containing an edge, a contradiction. Consequently, $D \cap K = \emptyset$ and so, $\gamma (G_{\Phi}) = \vert D \vert = \vert X \vert$.
\end{claimproof}
Now by combining Claims \ref{clm:phisat4} and \ref{clm:indep2}, we obtain that $\Phi$ is satisfiable if and only if every minimum dominating set of $G_{\Phi}$ is independent, that is, $G_{\Phi}$ is a \yes-instance for {\sc All Independent MD}. It remains to show that $G_{\Phi}$ is $P_7$-free. Let $P$ be an induced path in $G_{\Phi}$. Then $P$ contains at most two vertices from $K$ and at most two vertices from $G_x$, for any $x \in X$, as those are cliques. Since $N(V(G_x)) \subset V(G_x) \cup K$ for any $x \in X$, it follows that $P$ contains vertices from at most two distinct variable gadgets (it would otherwise contain at least three vertices from $K$); thus, $P$ has at most six vertices, which concludes the proof.
\end{proof}
Theorem \ref{thm:p7free} now easily follows from Theorem \ref{thm:indp7free} and Fact \ref{obs:equi}.
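On very small instances, reductions of this kind can be sanity-checked by exhaustive search: one enumerates all minimum dominating sets and tests whether each of them is independent. The following brute-force (exponential-time) Python sketch, again based on \texttt{networkx} and with ad hoc names, indicates one way to do this.
\begin{verbatim}
# Brute-force check of "All Independent MD" on a small networkx graph G
# (exponential time; only for sanity checks on tiny instances).
from itertools import combinations

def minimum_dominating_sets(G):
    nodes = list(G.nodes())
    for size in range(1, len(nodes) + 1):
        found = [set(S) for S in combinations(nodes, size)
                 if all(v in S or any(w in S for w in G[v]) for v in nodes)]
        if found:
            return found          # all dominating sets of minimum cardinality
    return []

def all_minimum_dominating_sets_independent(G):
    return all(not any(G.has_edge(a, b) for a, b in combinations(D, 2))
               for D in minimum_dominating_sets(G))
\end{verbatim}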
\section{Conclusion}
It was shown in \cite{contracdom} that \contracd{} is polynomial-time solvable for $P_5$-free graphs. Thus, if we require $H$ to be connected, it only remains to settle the complexity status of \contracd{} restricted to $H$-free graphs for $H = P_6$.
\bibliographystyle{siam}
\section*{Supplemental Material}
\subsection{Generating functional for exponential correlation functions}
\label{sec:exp_phi}
Here we provide a detailed derivation of the phase correlation functions, Eq. (8) of the main text.
The imaginary time action for the Hamiltonian Eq. (7) reads
\begin{equation}
S[\phi_i(\tau)]=\int_0^{\beta}d\tau \left[\frac{1}{4E_1}\sum_{i=1}^N \dot{\phi}_i^2-\frac{g}{N}\sum_{i<j} \cos(\phi_i-\phi_j)\right].
\end{equation}
This is the action for $N$ particles on a ring interacting via a cosine potential. We employ a Hubbard-Stratonovich decoupling, introducing the order parameter field for the phase synchronization transition
\begin{equation}
\rho=\frac{1}{N}\sum_{i=1}^N\left\langle e^{i\phi_i}\right\rangle,
\label{Def_rho}
\end{equation}
which results in the following partition function
\begin{equation}
Z=\int[D \bar{\rho}^{\tau}, \rho^{\tau}]\exp\left[-N S[\bar{\rho}^{\tau}, \rho^{\tau}]\right],
\end{equation}
where
\begin{equation}
S[\bar{\rho}^{\tau}, \rho^{\tau}]=\frac{1}{g}\int d\tau \bar{\rho}^{\tau}\rho^{\tau}-\ln Z_{\phi}[\bar{\rho}^{\tau}, \rho^{\tau}],
\label{action_rho}
\end{equation}
\begin{equation}
Z_{\phi}[\bar{\rho}^{\tau}, \rho^{\tau}]=\int D[\phi^{\tau}]\exp\left[-\int d\tau \left\{\frac{\dot\phi^2}{4E_1}-\left(\bar{\rho}^{\tau}e^{i\phi^{\tau}}+\rho^{\tau}e^{-i\phi^{\tau}}\right)\right\}\right]
\label{Zphi}
\end{equation}
To obtain the correlation function of the phase exponents at different sites, we introduce site-local source fields
$\bar{\eta}_i^{\tau}, \eta_i^{\tau}$, thus obtaining the generating functional
\begin{equation}
Z_{\eta}=\int[D \bar{\rho}^{\tau}, \rho^{\tau}]\exp\left[-\frac{N}{g}\int d\tau \bar{\rho}^{\tau}\rho^{\tau} + \sum_{i=1}^N \ln Z_{\phi}[\rho, \eta_i]\right],
\end{equation}
where
\begin{equation}
Z_{\phi}[\rho, \eta_i]=\int[D\phi]\exp\left[-\int d\tau \left\{ \frac{\dot{\phi}^2}{4E_1}-(\bar{\rho}^{\tau}+\bar{\eta}_i^{\tau})e^{i\phi^{\tau}}
-(\rho^{\tau}+\eta_i^{\tau})e^{-i\phi^{\tau}} \right\} \right].
\end{equation}
Further we expand $\ln Z_{\phi}$ up to quadratic order in $\rho$ in the non-synchronized phase ($\langle e^{\pm i \phi}\rangle=0$) to get
\begin{equation}
\ln Z_{\phi}[\rho, \eta_i]\approx \ln Z_0+ \int d\tau d\tau' \left\langle e^{i\phi^{\tau}} e^{-i\phi^{\tau'}} \right\rangle (\bar{\rho}^{\tau}+\bar{\eta}_i^{\tau}) (\rho^{\tau'}+\eta_i^{\tau'}),
\end{equation}
where
\begin{equation}
Z_{0}=\int[D\phi]\exp\left[-\int d\tau \frac{\dot{\phi}^2}{4E_1} \right].
\label{Z0}
\end{equation}
Note that $Z_0$ is the partition function of a free quantum rotor, hence it can be calculated exactly using the energy spectrum $E_{\ell}=E_1\ell^2$, where $\ell$ denotes $z$-projection of the angular momentum.
Going over to Matsubara frequency space, the total generating function can be written as
\begin{eqnarray}
\nonumber &&
Z_{\eta}=\int[D \bar{\rho}, \rho]\exp\left[-\sum_{\omega_n} \left\{ \bar{\rho}(\omega_n) \left[\frac{N}{g}- N D_0(\omega_n)\right] \rho(\omega_n) -
\right. \right. \\
\nonumber && \left. \left.
D_0(\omega_n) \sum_{i=1}^N\left[\bar{\rho}(\omega_n)\eta_i(\omega_n)+ \bar{\eta}_i(\omega_n)\rho(\omega_n)\right]-
D_0(\omega_n) \sum_{i=1}^N\bar{\eta}_i(\omega_n) \eta_i(\omega_n)\right\} \right], \\
\end{eqnarray}
where $D_0(\omega_n)$ denotes the on-site correlation function of phase exponents in the absence of Kuramoto interaction,
\begin{equation}
D_0(\omega_n)=\left\langle T_{\tau}\left( e^{i\phi^{\tau}} e^{-i\phi^{\tau'}}\right) \right\rangle_{ \omega_n}= \frac{1}{\mathcal{N}}\sum_{\ell=0}^{\infty} D_{\ell}(\omega) \left[e^{-\frac{E_1}{T}\ell^2}-e^{-\frac{E_1}{T}(\ell+1)^2}\right],
\label{D0}
\end{equation}
where $T_{\tau}$ denotes the time-ordering in the imaginary time $\tau$,
\begin{equation}
D_{\ell}(\omega)=\frac{2E_1(2\ell +1)}{\omega_n^2+E_1^2(2\ell +1)^2}, \, \, \, \, \mathcal{N}=\sum_{\ell=-\infty}^{\infty} e^{-\frac{E_1}{T}\ell^2}.
\label{Dl_0}
\end{equation}
The normalization factor $\mathcal{N}$ ensures the condition $\left\langle e^{i\phi^{\tau}} e^{-i\phi^{\tau}} \right\rangle=1$.
At low temperatures, $T\ll E_1$, $D_0(\omega_n)$ can be approximated by the term with $\ell=0$,
\begin{equation}
D_0(\omega_n)\approx \frac{2E_1}{\omega_n^2+E_1^2}.
\label{D0_lowT}
\end{equation}
This approximation is used in the main text of the paper.
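The quality of the $\ell=0$ truncation is easily checked numerically by comparing the full sum of Eqs. (\ref{D0})--(\ref{Dl_0}) with Eq. (\ref{D0_lowT}); a minimal Python sketch of such a check (with ad hoc function names) reads
\begin{verbatim}
# Compare the full sum (Eqs. D0, Dl_0) with the l=0 approximation (Eq. D0_lowT).
import numpy as np

def D0_full(omega_n, E1, T, lmax=200):
    norm = sum(np.exp(-E1 * l**2 / T) for l in range(-lmax, lmax + 1))
    total = 0.0
    for l in range(0, lmax):
        Dl = 2 * E1 * (2 * l + 1) / (omega_n**2 + E1**2 * (2 * l + 1)**2)
        total += Dl * (np.exp(-E1 * l**2 / T) - np.exp(-E1 * (l + 1)**2 / T))
    return total / norm

def D0_lowT(omega_n, E1):
    return 2 * E1 / (omega_n**2 + E1**2)

E1, T = 1.0, 0.1
for omega_n in [0.0, 0.5, 1.0, 2.0]:
    # at T << E1 the two columns agree up to exponentially small corrections
    print(omega_n, D0_full(omega_n, E1, T), D0_lowT(omega_n, E1))
\end{verbatim}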
Integrating out the fields $\bar{\rho}$, $\rho$, we obtain
\begin{equation}
Z_{\eta}=\exp\left[\sum_{\omega_n}\sum_{i,j=1}^N \bar{\eta}_i(\omega_n) \mathcal{D}_{ij}(\omega_n)\eta_j(\omega_n)\right],
\end{equation}
where
\begin{equation}
\mathcal{D}_{ij}(\omega_n)=D_0(\omega_n)\delta_{ij} +\frac{1}{N}\frac{D_0^2(\omega_n)}{\frac{1}{g}-D_0(\omega_n)}
\label{Dij}
\end{equation}
Explicitly
\begin{equation}
\mathcal{D}_{ij}(\omega_n) = \frac{2E_1}{\omega_n^2+E_1^2}\delta_{ij} + \frac{4E_1^2 g/N}{(\omega_n^2+E_1^2)[\omega_n^2+E_1(E_1-2g)]}.
\label{Dij_explicit}
\end{equation}
Eq. (\ref{Dij_explicit}) gives the correlation functions of phase exponents, according to
\begin{equation}
\left\langle e^{i\phi_i^{\tau}} e^{-i\phi_j^{\tau'}} \right\rangle_{ \omega_n}=
\frac{\partial^2 \ln Z[\bar{\eta}, \eta]}{\partial\bar{\eta}_i(\omega_n) \partial\eta_j(\omega_n)}= \mathcal{D}_{ij}(\omega_n).
\end{equation}
The second term in Eqs. (\ref{Dij}), (\ref{Dij_explicit}) diverges for $\omega_n=0$ at the critical point $g=E_1/2$, although the critical fluctuations enter the correlation function on two different sites as a $1/N$ correction only. However, the following correlation function retains the critical behavior in the limit $N\rightarrow \infty$
\begin{equation}
\mathcal{D}(i\omega_n)=\frac{1}{N}\sum_{i,j}\left\langle e^{i\phi_i^{\tau}} e^{-i\phi_j^{\tau'}} \right\rangle_{ \omega_n}=
\frac{2E_1}{\omega_n^2+E_1(E_1-2g)}.
\label{D_iomega}
\end{equation}
This is exactly the correlation function of phase exponents in Eq. (9) of the main text of the paper with $\epsilon_1=\sqrt{E_1(E_1-2g)}$ given by Eq. (10) in the main text.
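The algebra leading from Eq. (\ref{Dij}) to Eq. (\ref{Dij_explicit}) and to Eq. (\ref{D_iomega}) can be verified symbolically, for instance with the following short \texttt{sympy} sketch (function and variable names are ad hoc):
\begin{verbatim}
# Symbolic check that Eq. (Dij) with D0 = 2 E1/(w^2 + E1^2) reproduces
# Eq. (Dij_explicit) and, after summing over sites, Eq. (D_iomega).
import sympy as sp

w, E1, g, N = sp.symbols('omega_n E_1 g N', positive=True)
D0 = 2 * E1 / (w**2 + E1**2)

Dij_offsite = (D0**2 / (1/g - D0)) / N               # second term of Eq. (Dij)
Dij_offsite_explicit = 4 * E1**2 * g / (N * (w**2 + E1**2)
                                        * (w**2 + E1 * (E1 - 2*g)))
print(sp.cancel(Dij_offsite - Dij_offsite_explicit))           # -> 0

# (1/N) * sum_{i,j} D_ij = D0 + N * Dij_offsite, since the off-site
# term is the same for all N^2 pairs of sites
D_total = D0 + N * Dij_offsite
print(sp.cancel(D_total - 2 * E1 / (w**2 + E1 * (E1 - 2*g))))   # -> 0
\end{verbatim}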
\subsection{One particle Green function}
The one particle Green function of the SYK+U model has been determined analytically in the limiting cases of low and high temperature in Ref. \cite{Wang2020}. At high temperature, the Green function retains the form of the pure SYK model without the additional Hubbard interaction. At low temperature, the presence of Cooper pairs leads to the gap in the one particle excitation spectrum, which has been evaluated in Ref. \cite{Wang2020} as
\begin{equation}
\Delta_1\approx 4\sqrt{6\pi}\frac{\Delta^2}{J},
\label{Delta1}
\end{equation}
where $\Delta$ denotes the absolute value of the local Cooper pair amplitude given by the solution of the mean field equation. In contrast to a conventional BCS superconductor, the one-particle energy gap is strongly reduced due to the non-Fermi-liquid ground state of the underlying SYK model. The presence of the gap leads to the exponential decay of the one-particle Green function with the decay time $\tau_{\Delta}=1/\Delta_1$
\begin{equation}
G(\tau)\approx
-\left(\frac{8}{\pi}\right)^{1/4}\frac{e^{-|\tau|/\tau_{\Delta} }}{\sqrt{J|\tau|}}\, \mathrm{sgn}(\tau).
\end{equation}
For further calculations we adopt a simplified form of the one-particle Green function, replacing the decay on the time-scale $\tau_{\Delta}$ by a hard cutoff. With that approximation, the imaginary time one-particle Green function at zero temperature reads
\begin{equation}
G(\tau)=-\left(\frac{8}{\pi}\right)^{1/4}\frac{\mathrm{sign}(\tau)}{\sqrt{J|\tau|}} \theta(\tau_{\Delta}-|\tau|),
\label{Gtau}
\end{equation}
where $\theta(\tau_{\Delta}-|\tau|)$ denotes the Heaviside step function.
\subsection{Inter-grain Josephson coupling}
In this subsection we present the approximate evaluation of the Josephson coupling between two SYK+U grains. The diagram for the interaction between the superconducting order parameters in the two grains is shown in Fig. \ref{fig:JCoupling}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.3\textwidth]{JCoupling.pdf}
\caption{Vertex diagram for the Josephson coupling between the grains $r$ and $r'$. Solid lines denote one-particle Green functions in the grain $r$, dashed lines denote the one-particle Green functions in the grain $r'$. The circles denote the superconducting amplitudes $\Delta$.}
\label{fig:JCoupling}
\end{figure}
The corresponding analytic expression in imaginary time action is given by
\begin{eqnarray}
\nonumber &&
S_{\Delta\Delta}=\int d\tau d\tau'
\frac{t_0^2}{N}\sum_{i, j}^N\bar{\Delta}_{ri}(\tau) \Delta_{r'j}(\tau')\int d\tau_1d\tau_2 G(\tau-\tau_1)G(\tau_1-\tau')G(\tau-\tau_2)G(\tau_2-\tau')+c.c.\\
&&
=\frac{1}{N}\sum_{i, j}^N\int d\tau d\tau' \bar{\Delta}_{ri}(\tau) \Delta_{r'j}(\tau') (\mathcal{P}(\tau, \tau'))^2 + c.c.,
\label{SDeltaDelta}
\end{eqnarray}
where we defined
\begin{equation}
\mathcal{P}(\tau, \tau')=t_0 \int d\tau_1 G(\tau-\tau_1)G(\tau_1-\tau').
\label{P}
\end{equation}
Evaluation of the integral in Eq. (\ref{P}) using the approximate Green functions Eq. (\ref{Gtau}) results in
\begin{eqnarray}
\nonumber &&
\mathcal{P}(\tau, \tau')= \left(\frac{8}{\pi}\right)^{1/2}
\frac{t_0}{J} \left\{\pi
-4 \mathrm{arsinh}\left[\sqrt{\frac{\tau_{\Delta}-|\tau-\tau'|}{|\tau-\tau'|}}\right] \theta(\tau_{\Delta}-|\tau-\tau'|)- \right. \\
&& \left.
2\left[\arcsin\left(\sqrt{\frac{\tau_{\Delta}}{|\tau-\tau'|}}\right)
-\arctan\left(\sqrt{\frac{|\tau-\tau'|-\tau_{\Delta}}{\tau_{\Delta}}}\right)
\right]\theta(|\tau-\tau'|-\tau_{\Delta})\theta(2\tau_{\Delta}-|\tau-\tau'|)
\right\}.
\label{Pintegrated}
\end{eqnarray}
Taking into account that the integration kernel depends only logarithmically on the difference $\tau-\tau'$ with maximum at small time-differences, we can write the action Eq. (\ref{SDeltaDelta}) for $\Delta(\tau)$ varying at the time-scale much larger than $\tau_{\Delta}$ in the approximately local form. Substituting Eq. (\ref{Pintegrated}) into Eq. (\ref{SDeltaDelta}), performing the integration over the time-difference $\tau-\tau'$, and using the definition $\tau_{\Delta}=1/\Delta_1=\frac{J}{4\sqrt{6\pi}\Delta^2}$, we obtain
\begin{equation}
S_{\Delta\Delta}\approx 3.7 \frac{t_0^2}{N J \Delta^2} \int d\tau \sum_{i, j}^N\bar{\Delta}_{ri}(\tau) \Delta_{r'j}(\tau) + c.c.
\label{SDeltaDeltafin}
\end{equation}
Finally, separating the superconducting order parameter at each site of each SYK+U grain into the constant mean field amplitude and fluctuating phase, $\Delta_{ri}=
\Delta e^{i\phi_{ri}(\tau)}$, we can represent Eq. (\ref{SDeltaDeltafin}) in the form of the action for the Josephson junction
\begin{equation}
S_{JJ}=\frac{E_J}{N}\int d\tau \sum_{i,j}^N \cos(\phi_{ri}^{\tau}-\phi_{r', j}^{\tau}),
\label{SJJ}
\end{equation}
where the Josephson energy is obtained as
\begin{equation}
E_J\approx 3.7 \frac{t_0^2}{J}.
\label{EJ}
\end{equation}
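The closed form Eq. (\ref{Pintegrated}) can be checked pointwise by direct quadrature of the convolution Eq. (\ref{P}) with the hard-cutoff Green function Eq. (\ref{Gtau}); a schematic Python sketch of such a check is given below (the square-root singularities of the integrand are integrable and are passed to the quadrature routine as known break points).
\begin{verbatim}
# Direct quadrature of Eq. (P):
#   P(tau, tau') = t0 * Int dtau1 G(tau - tau1) G(tau1 - tau'),
# with the hard-cutoff Green function of Eq. (Gtau).
import numpy as np
from scipy.integrate import quad

def G(tau, J, tau_D):
    if tau == 0.0 or abs(tau) >= tau_D:
        return 0.0
    return -(8.0 / np.pi) ** 0.25 * np.sign(tau) / np.sqrt(J * abs(tau))

def P(tau, tau_p, t0=1.0, J=1.0, tau_D=1.0):
    a, b = min(tau, tau_p) - tau_D, max(tau, tau_p) + tau_D
    val, err = quad(lambda t1: G(tau - t1, J, tau_D) * G(t1 - tau_p, J, tau_D),
                    a, b, points=[tau, tau_p], limit=200)
    return t0 * val

# e.g. P(0.3, 0.0) can be compared with the closed form of Eq. (Pintegrated)
print(P(0.3, 0.0))
\end{verbatim}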
\subsection{Resonant Cooper pair tunneling}
\label{LowTBroadening}
In this section we calculate the tunneling contribution to the broadening of the Cooper pair resonant state (the Cooperon) at the energy $\epsilon_1$. The broadening of the Cooper pair resonance is calculated using the Dyson equation for the propagator of the phase correlations, which is represented by the diagrams in Fig. \ref{fig:DysonD}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.5\textwidth]{DysonD2.pdf}
\caption{Diagrammatic Dyson equation for the Cooperon. Thin wavy lines denote the bare correlation functions of the phase exponents $\mathcal{D}_0(\omega)$, thick wavy lines denote the dressed correlation functions $\mathcal{D}(\omega)$.}
\label{fig:DysonD}
\end{figure}
\noindent
The boxes in Fig. \ref{fig:DysonD} correspond to the Josephson couplings given by Eq. (\ref{EJ}). The thin wavy lines denote the correlation functions of phase exponents in the isolated grain without broadening, which are obtained by the analytic continuation of Eq. (\ref{D_iomega}) to real frequencies $i\omega_n\rightarrow \omega+io$
\begin{equation}
\mathcal{D}^R_0(\omega) = -\frac{2E_1}{\omega^2-\epsilon_1^2+io},
\label{DR0}
\end{equation}
where $\epsilon_1=\sqrt{E_1(E_1-2g)}$.
The thick wavy lines in Fig.\ref{fig:DysonD} denote the correlator of the phase exponents with the self-consistently determined broadening due to the escape of the Cooper pair from the grain. The analytic expression corresponding to Fig. \ref{fig:DysonD} reads
\begin{equation}
\mathcal{D}^R(\omega)= \mathcal{D}^R_0(\omega)+Z E_J^2\mathcal{D}^R_0(\omega)\left( \mathcal{D}^R(\omega)\right)^2,
\label{DysonEq}
\end{equation}
where $Z$ denotes the coordination number of the array.
The formal solution of Eq. (\ref{DysonEq}) can be written as
\begin{equation}
\mathcal{D}^R(\omega)=\frac{1-\sqrt{1-4ZE_J^2\left(\mathcal{D}^R_0(\omega)\right)^2}}{2ZE_J^2\mathcal{D}^R_0(\omega)}.
\label{DR_formal}
\end{equation}
Substituting the explicit form Eq. (\ref{DR0}) in Eq. (\ref{DR_formal}), we obtain
\begin{equation}
\mathcal{D}^R(\omega)=-\frac{4E_1}{\omega^2-\epsilon_1^2+i\sqrt{16ZE_J^2E_1^2-(\omega^2-\epsilon_1^2)^2}}
\label{DR_omega}
\end{equation}
for $|\omega^2-\epsilon_1^2|<4\sqrt{Z}E_JE_1$. Therefore, due to the escape of Cooper pairs, the resonant state at the energy $\epsilon_1$ broadens into the semi-circular band. For $|\omega-\epsilon_1|\ll \epsilon_1$ we approximate
\begin{equation}
\mathcal{D}^R(\omega)\approx -\frac{2E_1/\epsilon_1}{\omega-\epsilon_1+i\left(2\sqrt{Z}E_JE_1/\epsilon_1\right)}
\label{DR_omegaPole}
\end{equation}
For the calculation of conductivity represented in the next Section, we use the simplified expression
\begin{equation}
\mathcal{D}^R(\omega)=-\frac{4E_1}{(\omega+i\gamma)^2-\epsilon_1^2}.
\label{DR_approx}
\end{equation}
Comparison of Eqs. (\ref{DR_omegaPole}) and (\ref{DR_approx}) for $\gamma \ll \epsilon_1$ and
$\omega\approx \epsilon_1$ allows identification
\begin{equation}
\gamma\approx \frac{2\sqrt{Z} E_J E_1}{\epsilon_1}.
\label{gamma}
\end{equation}
The solution Eq. (\ref{gamma}) is valid for $\epsilon_1\gg Z^{1/4} \sqrt{E_J E_1}$. This condition is realized at small $E_J$ in the insulating regime with thermally activated conductivity (see the main text of the paper).
To evaluate the broadening $\gamma$ close to the critical point for $\epsilon_1\ll Z^{1/4} \sqrt{E_J E_1}$, we consider the equality between Eqs. (\ref{DR_omega}) and (\ref{DR_approx}) at $\epsilon_1=0$, which results in
\begin{equation}
\omega^2+i\sqrt{16ZE_J^2E_1^2-\omega^4}=2(\omega^2+2i\omega \gamma-\gamma^2).
\label{gamma-omegaQC}
\end{equation}
To estimate a typical value of $\gamma$ at low frequencies $\omega<2Z^{1/4}\sqrt{E_1E_J}$, we choose $\omega$ such that the solution for $\gamma$ is real. Then we obtain from Eq. (\ref{gamma-omegaQC})
\begin{equation}
\gamma=Z^{1/4}\sqrt{2E_1E_J/3}
\label{gamma_QC}
\end{equation}
at $\omega=\sqrt{2}\gamma$. At different values of $\omega$, $\gamma$ acquires an imaginary part, which corresponds to the real part of the self-energy. However, due to the continuity of $\gamma$ as a function of frequency, we accept Eq. (\ref{gamma_QC}) as an estimate of the level broadening at all frequencies for low values of $\epsilon_1$, in particular, in the quantum critical regime.
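As a consistency check, Eq. (\ref{DR_omega}) solves the Dyson equation (\ref{DysonEq}) exactly for frequencies inside the band $|\omega^2-\epsilon_1^2|<4\sqrt{Z}E_JE_1$; this is easily confirmed numerically, for instance with the following sketch (parameter values are arbitrary illustrative choices):
\begin{verbatim}
# Check that Eq. (DR_omega) solves D = D0 + Z*EJ^2 * D0 * D^2 (Eq. DysonEq)
# inside the semicircular band |omega^2 - eps1^2| < 4*sqrt(Z)*EJ*E1.
import numpy as np

E1, g, EJ, Z = 1.0, 0.3, 0.05, 4
eps1 = np.sqrt(E1 * (E1 - 2 * g))
half_band = 4 * np.sqrt(Z) * EJ * E1

def D0R(w):                      # Eq. (DR0), evaluated away from the pole
    return -2 * E1 / (w**2 - eps1**2)

def DR(w):                       # Eq. (DR_omega)
    a = w**2 - eps1**2
    return -4 * E1 / (a + 1j * np.sqrt(half_band**2 - a**2))

for x in np.linspace(-0.9, 0.9, 8):      # stay inside the band, off the pole
    w = np.sqrt(eps1**2 + x * half_band)
    lhs = DR(w)
    rhs = D0R(w) + Z * EJ**2 * D0R(w) * DR(w)**2
    print(round(float(w), 3), abs(lhs - rhs))   # residuals ~ machine precision
\end{verbatim}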
\subsection{Pair conductivity in the Kuramoto-Josephson array model}
Here we calculate the conductance between two Kuramoto grains coupled by the Josephson coupling $E_J$. Assuming that the quantum coherence is lost after a single act of inter-grain tunneling, the conductivity of the array is obtained from the conductivity of the single junction using electric circuit theory.
The action for two Kuramoto grains reads
\begin{eqnarray}
\nonumber &&
S[A, \phi]=\int_0^{\beta}d\tau \sum_{r=1,2}\left[\frac{1}{4E_1} \sum_{i=1}^N \dot{\phi}_{ri}^2-\frac{g}{N}\sum_{i<j} \cos(\phi_{ri}-\phi_{rj})\right]-\frac{E_J}{N}\sum_{i,j}^N\int_0^{\beta}d\tau \cos\left[\phi_{1i}-\phi_{2j} -2eA(\tau)\right].
\end{eqnarray}
Here the source vector potential $A(\tau)$ is introduced in such a way that it creates a voltage difference between the two grains ($A=v \tau$ corresponds to a constant voltage $v$ between the grains). The partition function is given by
\begin{equation}
Z[A]=\int D[\phi_{ir}^{\tau}] e^{-S[A, \phi]}.
\label{Z[A]}
\end{equation}
The conductivity is calculated as the response to the source vector potential
\begin{equation}
\sigma=-\lim_{\omega\rightarrow 0} \mathrm{Im}Q(\omega)/\omega.
\label{sigmaQ}
\end{equation}
Here $Q(\omega)$ is the retarded response function, which is obtained by analytic continuation to real frequencies of the derivative with respect to the source vector potential
\begin{equation}
Q(i\omega_n)=\frac{\delta^2 \ln Z[A]}{\delta A(i\omega_n)\delta A(-i\omega_n)}\bigg|_{A=0} = \int d \tau \frac{\delta^2 \ln Z[A]}{\delta A(\tau)\delta A(0)}\bigg|_{A=0} e^{-i\omega_n \tau}
\end{equation}
The derivatives with respect to the source vector potential generate correlation functions of phase exponents, which are given by the following expressions
\begin{eqnarray}
&& \left\langle \sum_j e^{\pm i \phi_{rj}(\tau)}\right\rangle=0, \\
&& \left\langle \sum_{j,j'} e^{ i \phi_{rj}(\tau)} e^{- i \phi_{r'j'}(0)}\right\rangle=N \mathcal{D}(\tau)\delta_{r,r'}.
\end{eqnarray}
Here the Fourier transform of $\mathcal{D}(\tau)$ is given by Eq. (\ref{D_iomega}). After taking the derivatives and performing the Fourier transform to Matsubara frequencies, we obtain the analytic expression for the response function $Q(i\omega_n)$ in the form
\begin{equation}
Q(i\omega_n)=(2e)^2E_J^2 T\sum_{\Omega_n} \mathcal{D}(i\Omega_n+i\omega_n)\mathcal{D}(i\Omega_n)=
\frac{(2e)^2 E_J^2}{2\pi i} \int_{-\infty}^{\infty} \frac{dx}{e^{\frac{x}{T}}-1} \left[\mathcal{D}^R(x) -\mathcal{D}^A(x)\right]\left[
\mathcal{D}^R(x+i\omega_n)+ \mathcal{D}^A(x-i\omega_n)\right],
\label{Q_iomega}
\end{equation}
where, according to Eq. (\ref{DR_approx})
\begin{equation}
\mathcal{D}^R(x)=\frac{-4 E_1}{(x-\epsilon_1+i\gamma)(x+\epsilon_1+i\gamma)}.
\label{DRx}
\end{equation}
\begin{figure}[htb]
\vskip -0.5cm
\centering
\includegraphics[width=0.3\textwidth]{IntContourDD.pdf}
\caption{Integration contour in the complex plane $i\Omega_n\rightarrow z$ for calculation of the AL response $Q(i\omega_n)$ in Eq. (\ref{Q_iomega}). Dashed lines denote the branch cuts.}
\label{fig:IntContourB}
\end{figure}
The response function $Q(i\omega_n)$ is calculated by the analytic continuation of the Matsubara frequency $i\Omega_n$ to the complex plane as shown in Fig. \ref{fig:IntContourB}. Expanding up to the linear term in $i\omega_n$, we obtain
\begin{equation}
Q(i\omega_n)\approx (2e)^2 E_J^2\frac{ i\omega_n}{2\pi i} \int_{-\infty}^{\infty} \frac{dx}{e^{\frac{x}{T}}-1} \left(\mathcal{D}^R(x) -\mathcal{D}^A(x)\right) \partial_x\left(\mathcal{D}^R(x)- \mathcal{D}^A(x)\right)=
(2e)^2 E_J^2
\frac{ \omega_n}{2\pi } \int_{-\infty}^{\infty}
\frac{dx}{8T \sinh^2\left(\frac{x}{2T}\right) } \left(\mathcal{D}^R(x) -\mathcal{D}^A(x)\right)^2,
\label{Q_omega}
\end{equation}
Finally, performing the analytic continuation to real frequency $i\omega_n\rightarrow\omega+io$ and using Eq. (\ref{sigmaQ}), we obtain the expression for the conductivity (here we restore $\hbar=h/(2\pi)$)
\begin{equation}
\sigma\! =-(2e)^2
\frac{E_J^2}{h} \int_{-\infty}^{\infty}
\frac{dx}{8T \sinh^2\left(\frac{x}{2T}\right) } \left(\mathcal{D}^R(x) -\mathcal{D}^A(x)\right)^2 =
\! \frac{(2e)^2}{h} \frac{E_J^2 \gamma^2 E_1^2}{T^6} \!
\int_{-\infty}^{\infty}\frac{dy}{\sinh^2 y}\frac{y^2}{\left[\left(y^2-\frac{\epsilon_1^2}{4T^2}-\frac{\gamma^2}{4 T^2}\right)^2 +\frac{\gamma^2 y^2}{T^2}\right]^2}.
\label{sigmaAL_result}
\end{equation}
\begin{figure}[htb]
\centering
\vskip -0.2 cm
\includegraphics[width=0.5\textwidth]{ALCooperon_supplement10.pdf}
\vskip -0.2 cm
\caption{Aslamazov-Larkin (AL) contribution to the conductivity.}
\label{fig:Amplitude_supplement}
\end{figure}
In the insulating phase at low temperatures, $\gamma\ll \epsilon_1$, Eq. (\ref{sigmaAL_result}) can be approximated by taking the most singular part of the residue of the second order poles, which results in Eq. (11) in the main text of the paper (here we restore $\hbar=h/(2\pi)$)
\begin{equation}
\label{sigma_ALmaintext}
\sigma_{\mathrm{AL}} \! \approx \! \frac{(2e)^2}{h} \frac{8\pi E_1^2 E_J^2}{\gamma\left(\epsilon_1^2 \!+ \!\gamma^2 \right)T } e^{-\epsilon_1/T} \approx \frac{(2e)^2}{h} \frac{8\pi E_1^2 E_J^2}{\epsilon_1^2 \gamma T } e^{-\epsilon_1/T}.
\end{equation}
Close to the quantum phase transition, for $\epsilon_1\ll \gamma$ the dissipation-induced broadening cuts off the divergence in Eq. (\ref{sigma_ALmaintext}) for $\epsilon_1\rightarrow 0$.
Evaluation of the integral in Eq. (\ref{sigmaAL_result}) in the quantum critical regime $\epsilon_1/T\ll 1$ gives
\begin{eqnarray}
\nonumber &&
\int_{-\infty}^{\infty}\frac{dy}{\sinh^2 y}\frac{y^2}{\left[\left(y^2-\frac{\epsilon_1^2}{4T^2}-\frac{\gamma^2}{4 T^2}\right)^2 +\frac{\gamma^2 y^2}{T^2}\right]^2}\approx
2\int_{0}^{1}\frac{dy}{\left[\left(y^2-\frac{\epsilon_1^2}{4T^2}-\frac{\gamma^2}{4 T^2}\right)^2 +\frac{\gamma ^2 y^2}{T^2}\right]^2}\approx \left\{ \begin{array}{c}
2(2T/\gamma)^8, \, \, \, \mbox{for} \, \, \, T/\gamma \ll 1,
\\
\frac{5\pi}{16} (2 T/\gamma )^7, \, \, \, \mbox{for} \, \, \, T/\gamma\gg 1,
\end{array} \right.
\end{eqnarray}
which results in the approximate temperature dependence of the conductivity provided in the main text of the paper
\begin{equation}
\sigma_{\mathrm{AL}}\propto \frac{e^2}{h} \frac{E_J^2 E_1^2 T}{\gamma^6} \left\{
\begin{array}{c}
T \, \, \, \mbox{for} \, \, T/\gamma \ll 1, \\
\gamma \, \, \, \mbox{for} \, \, T/\gamma \gg 1.
\end{array}\right.
\label{sigmaAL_QCrit_sup}
\end{equation}
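The dimensionless frequency integral in Eq. (\ref{sigmaAL_result}), which interpolates between the activated and quantum critical regimes, is straightforward to evaluate numerically; a possible sketch (with arbitrary illustrative parameters) is
\begin{verbatim}
# Numerical evaluation of the dimensionless integral in Eq. (sigmaAL_result)
# as a function of eps1/T and gamma/T.
import numpy as np
from scipy.integrate import quad

def sigma_integral(eps1_over_T, gamma_over_T):
    e, gm = eps1_over_T / 2.0, gamma_over_T / 2.0   # eps1/(2T) and gamma/(2T)
    def integrand(y):
        denom = ((y**2 - e**2 - gm**2)**2 + 4.0 * gm**2 * y**2)**2
        return (y / np.sinh(y))**2 / denom
    val, err = quad(integrand, 1e-8, 50.0,
                    points=[np.sqrt(e**2 + gm**2)], limit=400)
    return 2.0 * val                                # integrand is even in y

# activated regime (gamma << eps1) versus quantum critical regime (eps1 = 0)
print(sigma_integral(5.0, 0.2))
print(sigma_integral(0.0, 0.5))
\end{verbatim}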
\subsection{Escape rate of a Cooper pair in the incoherent JJ-array}
The array of Josephson junctions is determined by the conventional Hamiltonian
\begin{equation}
H_{\mathrm{JJ}}=\frac{E_C}{2}\sum_r \partial^2_{\phi_r}-E_J \sum_{r,r'} \cos(\phi_r-\phi_{r'}),
\label{HJJ}
\end{equation}
where the sum runs over the nearest neighbor grains.
The regime of the incoherent Josephson array is characterized by the relation $T\gg E_J\gg E_C$, where $T$ denotes the temperature, $E_J$ is the Josephson energy, and $E_C$ is the charging energy of a single superconducting grain. In that regime, each grain has a well-defined local superconducting order, although the phases of different grains are uncorrelated due to large thermal fluctuations. It is then reasonable to adopt a set of isolated grains at finite temperature as a zeroth approximation and treat the Josephson coupling as a perturbation. The eigenstates of the isolated grain are quantized according to the angular momentum $\ell$ canonically conjugate to the superconducting phase $\phi$ (physically those states correspond to a fixed number of Cooper pairs in the grain). At high temperature, each grain is in a mixed state that is characterized by the diagonal density matrix determined by the thermodynamic Boltzmann distribution
\begin{equation}
P_{\ell}=\exp[-E_C\ell^2/(2T)]/\mathcal{N},
\label{P_ell}
\end{equation}
where the normalization
\begin{equation}
\mathcal{N}=\sum_{\ell=-\infty}^{\infty} e^{-E_C\ell^2/(2T)} \approx \sqrt{\frac{2\pi T}{E_C}}.
\label{Zresult}
\end{equation}
The Josephson coupling acts as a hopping amplitude for Cooper pairs between neighbor grains. Because the superconducting phases of different grains are uncorrelated, the escape of a Cooper pair from the grain results in the relaxation of the superconducting phase.
In this section we estimate the escape rate of a Cooper pair from a grain as the broadening of the angular momentum energy levels. The latter can be evaluated self-consistently, using the Dyson equation (see Fig. \ref{fig:DysonD}). For the retarded Green function of the angular momentum $\ell$, the Dyson equation reads
\begin{equation}
\mathcal{D}^R_{\ell}(\omega)=D^R_{\ell}(\omega)+D^R_{\ell}(\omega)E_J^2 Z \left(\sum_{\ell'} P_{\ell'} \mathcal{D}^R_{\ell'}(\omega)\right)
\mathcal{D}^R_{\ell}(\omega)
\label{DysonDl}
\end{equation}
Here $D^R_{\ell}(\omega)=\left(\omega-\delta_{\ell}+io\right)^{-1}$ denotes the retarded Green function of a single superconducting grain in the excited state with angular momentum $\ell$, $\delta_{\ell}=E_C(\ell+1/2)$, and $Z$ denotes the coordination number. The function $D^R_{\ell}(\omega)$ is easily obtained from Eq. (\ref{Dl_0}) by the replacement $E_1\rightarrow E_C/2$. $P_{\ell'}$ denotes the probability for a grain to be thermally excited into the state with angular momentum $\ell'$, as given by Eq. (\ref{P_ell}). Assuming the exact Green function takes the form
\begin{equation}
\mathcal{D}^R_{\ell}(\omega)=\frac{1}{\omega-\delta_{\ell}+i\gamma}
\label{Dlexact}
\end{equation}
one obtains from Eq. (\ref{DysonDl}) the self-consistency condition for $\gamma$, which reads
\begin{equation}
1=E_J^2 Z \sum_{\ell} \frac{P_{\ell}}{\delta_{\ell}^2+\gamma^2}.
\label{gamma_sc-highT}
\end{equation}
Substituting the explicit expressions $P_{\ell}=\sqrt{\frac{E_C}{2\pi T}}\exp[-E_C\ell^2/(2T)]$, $\delta_{\ell}=E_C(\ell+1/2)$, we estimate the sum in Eq. (\ref{gamma_sc-highT}) in the high-temperature regime $E_C/T\ll 1$ by replacing it with an integral, thus obtaining
\begin{equation}
1\approx \frac{Z E_J^2}{\sqrt{2\pi E_C T}} \int dx \frac{\exp[-x^2/(2E_C T)] }{x^2+\gamma^2} =\frac{\sqrt{\pi} Z E_J^2}{\sqrt{2 E_C T}\gamma} \exp\left(\frac{\gamma^2}{2E_CT}\right)\mathrm{erfc} \left(\frac{\gamma}{\sqrt{2 E_C T}}\right),
\label{gamma_selfcons}
\end{equation}
where $\mathrm{erfc}(x)$ denotes the complementary error function.
Finally, we represent the self-consistent equation for the level broadening in the form
\begin{equation}
\gamma=\frac{\sqrt{\pi} Z E_J^2}{\sqrt{2 E_C T}}\exp\left(\frac{\gamma^2}{2E_CT}\right) \mathrm{erfc} \left(\frac{\gamma}{\sqrt{2 E_C T}}\right).
\label{gamma_highT}
\end{equation}
Eq. (\ref{gamma_highT}) can be solved explicitly in the two limit cases using the asymptotic behavior
\begin{equation}
\exp \left(x^2\right) \mathrm{erfc} (x) \approx \left\{
\begin{array}{c}
\frac{1}{\sqrt{\pi}x}, \, \, \, \mbox{for} \, \, x\gg 1, \\
1, \, \, \, \mbox{for} \, \, x\ll 1.
\end{array}\right.
\end{equation}
We obtain
\begin{eqnarray}
&&
\gamma\approx \frac{\sqrt{\pi}Z E_J^2}{\sqrt{2E_C T}}, \, \, \mbox{for} \, \, E_J^2\ll E_C T,
\label{gamma_EJsmall}\\
&&
\gamma\approx \sqrt{Z} E_J, \, \, \mbox{for} \, \, E_J^2\gg E_C T.
\label{gamma_EJlarge}
\end{eqnarray}
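For intermediate values of $E_J^2/(E_C T)$, Eq. (\ref{gamma_highT}) can be solved numerically; the following sketch uses the scaled complementary error function $\mathrm{erfcx}(x)=e^{x^2}\mathrm{erfc}(x)$ from \texttt{scipy.special} to avoid overflow and reproduces the limiting forms (\ref{gamma_EJsmall}) and (\ref{gamma_EJlarge}) (parameter values are illustrative):
\begin{verbatim}
# Numerical solution of the self-consistency equation (gamma_highT):
#   gamma = sqrt(pi) Z EJ^2 / sqrt(2 Ec T) * exp(u^2) erfc(u), u = gamma/sqrt(2 Ec T)
import numpy as np
from scipy.optimize import brentq
from scipy.special import erfcx           # erfcx(x) = exp(x**2) * erfc(x)

def solve_gamma(EJ, Ec, T, Z=4):
    s = np.sqrt(2 * Ec * T)
    f = lambda gm: gm - np.sqrt(np.pi) * Z * EJ**2 / s * erfcx(gm / s)
    return brentq(f, 1e-12, 1e3 * (np.sqrt(Z) * EJ + Z * EJ**2 / s))

Ec, T, Z = 1.0, 10.0, 4
for EJ in [0.1, 1.0, 10.0]:               # from EJ^2 << Ec*T to EJ^2 >> Ec*T
    gm = solve_gamma(EJ, Ec, T, Z)
    # compare with the two limiting expressions
    print(EJ, gm, np.sqrt(np.pi) * Z * EJ**2 / np.sqrt(2 * Ec * T), np.sqrt(Z) * EJ)
\end{verbatim}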
\subsection{Conductivity in the incoherent Josephson array}
In this section we evaluate the temperature dependence of conductivity for the incoherent Josephson array at temperatures much exceeding the superconducting transition temperature $T\gg E_J$. We adopt the picture of the array as an electric circuit of Josephson junctions, for which the total conductivity can be calculated from the conductivity of a single junction using circuit theory rules. Therefore, the temperature dependence of the total conductivity is determined by that of the single Josephson junction. The dynamics of the relative superconducting phase of the Josephson junction is governed by the action
\begin{equation}
S_{JJ}=\int dt \left\{\frac{C\dot{\phi}^2}{2(2e)^2} + E_J \cos(\phi)\right\},
\end{equation}
where $C$ denotes the capacitance of the junction, related to the charging energy in Eq. (\ref{HJJ}) by $E_C=(2e)^2/C$. For a Josephson junction embedded in the incoherent Josephson array, the equation of motion for the superconducting phase should be supplemented by terms describing the dissipation and thermal noise, so that the phase dynamics is governed by the Langevin equation
\begin{equation}
C\ddot{\phi} + \sigma \dot{\phi}=(2e)^2 E_J \sin(\phi)+\xi(t).
\label{LangevinPhase_sup}
\end{equation}
Here $\sigma$ introduces the phase relaxation and $\xi(t)$ is the Langevin source related to the relaxation by the fluctuation-dissipation relation
\begin{equation}
\langle \xi(t) \xi(t')\rangle=2 \sigma T \delta(t-t').
\label{FDT_sup}
\end{equation}
Taking into account the relations between the phase $\phi$, the voltage $v$, and the total charge of the grain $Q$, $\dot{\phi}=2e v$, $Q=Cv$, Eq. (\ref{LangevinPhase_sup}) can be rewritten in the form of the well-known resistively and capacitively shunted junction (RCSJ) model
\begin{equation}
I_C+I_R+I_S=\xi(t)/(2e),
\label{RCSI0}
\end{equation}
where $I_C=C\dot{v}=\frac{C}{2e}\ddot{\phi}$, $I_R=v/R=\frac{\sigma}{2e}\dot{\phi}$, and $I_S=-2e E_J \sin(\phi)$. Here we assume that the phase coherence of a Cooper pair is lost along any indirect path connecting the two grains of the junction. Therefore, the escape of a Cooper pair into the rest of the array acts as the source of the resistive current $I_R$.
For the current-biased RCSJ, the right-hand side of Eq. (\ref{RCSI0}) should be supplemented by the external current
\begin{equation}
I_C+I_R+I_S=\xi(t)/(2e)+I,
\label{RCSI}
\end{equation}
In turn, the external dc current can be eliminated from the RCSJ equation Eq. (\ref{RCSI}) by stepping back to the representation in terms of phase variables and performing the gauge transformation $\phi\rightarrow\phi+2ev t$, where $v=I/\sigma$. This transformation reveals $\sigma$ as the conductivity of the RCSJ junction. After the gauge transformation the equation of the current-biased RCSJ junction becomes
\begin{equation}
I_C+I_R+\tilde{I}_S=\xi(t)/(2e),
\label{RCSI_gauged}
\end{equation}
where $\tilde{I}_S=-2e E_J \sin(\phi+2ev t)$. The relation of the parameter $\sigma$ to the escape rate of a Cooper pair out of the junction into the rest of the array can be clarified by considering the time derivative of Eq. (\ref{RCSI_gauged}). Using the relations $\dot{I}_R=\frac{\sigma}{2e}\ddot{\phi}=\frac{\sigma}{C}I_C$, one represents the equation for the time-derivative of the current in the form
\begin{equation}
\dot{I}_C+\dot{\tilde{I}}_S=-\frac{\sigma}{C} I_C+ \frac{1}{2e} \dot{\xi}(t).
\label{LangevinCurrent}
\end{equation}
After averaging over the thermal fluctuations for temperatures much larger than the superconducting transition temperature in the array, $T\gg E_J$, one can neglect the contribution of the superconducting current, and obtain the equation for the dynamics of the $RC$ junction
\begin{equation}
\dot{I}_C=-\frac{\sigma}{C} I_C.
\label{RC}
\end{equation}
Therefore, the parameter $C/\sigma$ constitutes the time constant of the $RC$ junction; its inverse should in our case be identified with the escape rate $\gamma$ of a Cooper pair out of the grain into the rest of the array, $\gamma=\sigma/C$.
The parameter $\gamma$ is determined by Eq. (\ref{gamma_highT}). Finally, using the relation between the capacitance and the charging energy, $E_C=(2e)^2/C$, we obtain
\begin{eqnarray}
&&
\sigma\sim \frac{Z E_J^2}{E_C^{3/2}\sqrt{ T}}, \, \, \mbox{for} \, \, E_J^2\ll E_C T,
\label{sigma_EJsmall}\\
&&
\sigma\sim \sqrt{Z} E_J/E_C, \, \, \mbox{for} \, \, E_J^2\gg E_C T.
\label{sigma_EJlarge}
\end{eqnarray}
\end{document}
\section{Introduction}\label{sec:intro}
In the last few years there has been a great deal of interest in
the `landscape' program as a mechanism for extracting phenomenological
predictions from string theory by doing statistics on sets of potential
vacua. One of the potential problems with this program is that the
potential vacua are classified by low-energy effective supergravity
theories, and it is not clear to what extent all possible supergravity
theories can be described within string theory \cite{bankstalk,vafaswamp}.
In this paper we will analyze examples potentially lacking
UV-completions, in heterotic strings.
Specifically, we begin by observing that not all principal $E_8$
bundles with connection that satisfy the conditions for a supergravity
vacuum can be described within traditional formulations of perturbative
heterotic string theory. The basic problem is that traditional heterotic
string constructions build each $E_8$ from a $\mathrm{Spin}(16)/\BZ_2$
subgroup, and so can only describe those $E_8$ bundles with connection
reducible to $\mathrm{Spin}(16)/\BZ_2$. However, not all
principal $E_8$ bundles with connection are reducible to
$\mathrm{Spin}(16)/\BZ_2$, and those which cannot be so reduced,
cannot be described within traditional heterotic string constructions.
That lack of reducibility suggests there may be a problem with
the existence of UV completions for such heterotic supergravity theories.
However, we point out that there exists evidence from string duality
that suggests UV completions for these exotic heterotic supergravities
should still exist, and in the rest of the paper we go on to build
new worldsheet theories which can be used to describe more general
$E_8$ bundles with connection than the traditional constructions.
This paper can be broken into three main sections:
\begin{enumerate}
\item After initially reviewing the construction of $E_8$ bundles
via $\mathrm{Spin}(16)/\BZ_2$ subbundles in section~\ref{wsobs},
in sections~\ref{ppale8} and \ref{e8conn} we analyze the extent to
which $E_8$ bundles with connection can be described by the usual
fermionic realization of the heterotic string. We find that there
is a topological obstruction to describing certain $E_8$ bundles
in dimension 10, but more alarmingly, in lower dimensions there is
an obstruction to describing all gauge fields.
In particular, we describe some examples of $E_8$ bundles with connection
in dimension less than 10 which satisfy the usual constraints for
a perturbative string vacuum but which cannot be described by
traditional
worldsheet realizations of the heterotic string.
This seems to suggest that not all $E_8$ bundles with connection
can be realized perturbatively.
However, in section~\ref{fthydual} we observe that other evidence such as
F theory calculations suggests that, in fact, the other
$E_8$ bundles with connection {\it can} be realized perturbatively,
just not with traditional constructions. In the rest of the paper
we describe alternative constructions of heterotic strings which
can be used to describe the `exceptional' gauge fields above.
\item The next part of this paper, section~\ref{alt10d},
is a discussion of alternative constructions of each $E_8$ in
a ten-dimensional theory. The usual fermionic construction builds
each $E_8$ using a $\mathrm{Spin}(16)/\BZ_2$ subgroup -- the left-moving
worldsheet fermions realize a $\mathrm{Spin}(16)$ and a left-moving
$\BZ_2$ orbifold realizes the $/\BZ_2$.
However, there are other subgroups of $E_8$ that can also be used
instead, such as $( SU(5) \times SU(5) )/\BZ_5$ and
$SU(9)/\BZ_3$.
At the level of characters of affine algebras,
such constructions have previously been described in {\it e.g.}
\cite{kacsan}.
We check that the ten-dimensional partition function
of current algebras realizing other\footnote{For example,
the $(SU(5) \times SU(5))/\BZ_5$ subgroup describes a
$\BZ_5$ orbifold of an $SU(5) \times SU(5)$ current algebra,
just as the traditional $\mathrm{Spin}(16)/\BZ_2$ subgroup describes
a $\BZ_2$ orbifold of a $\mathrm{Spin}(16)$ current algebra
(realized by free fermions).}
$E_8$ subgroups correctly
reproduces the usual self-dual modular invariant
partition function.
\item To make this useful we need to understand how more general
current algebras can be fibered nontrivially over a base,
and so in the third part of this paper, sections~\ref{symmfibwzw}
and \ref{chirfibwzw}, we develop\footnote{After the initial publication
of this paper it was pointed out to us that chiral fibered WZW models
with $(0,1)$ supersymmetry have been previously considered,
under the name ``lefton, righton Thirring models,'' see for example
\cite{gates1,gates2,gates3,gates4,gates5}. We develop the notion
further, by studying anomaly cancellation, spectra, elliptic genera,
and so forth in chiral fibered WZW models with $(0,2)$ supersymmetry.}
and analyze
``fibered WZW models,'' which allow us to work with heterotic $(0,2)$
supersymmetric SCFT's in which the left-movers couple to
some general $G$-current algebra at level $k$, for general $G$ and $k$,
fibered nontrivially over the target space.
Only for certain $G$ and $k$ can these CFT's be used in critical heterotic
string compactifications, but the general result is of interest to
the study of heterotic CFT's. The construction of these theories
is interesting: bosonizing the left-movers into a WZW model
turns quantum features of fermionic realizations into classical
features, and so to understand the resulting theory requires mixing
such classical features against quantum effects such as the chiral
anomaly of the right-moving fermions.
This construction also gives us a physical realization of some
elliptic genera constructed in the mathematics community previously.
The generalization of the anomaly cancellation condition that we
derive in our model, for example, was independently derived by
mathematicians thinking about generalizations of elliptic genera.
\end{enumerate}
To a large extent, the three parts of this paper can nearly be read
independently of one another. For example, readers who only wish to
learn about fibered WZW model constructions should be able to
read sections~\ref{symmfibwzw} and \ref{chirfibwzw} without having
mastered the earlier material.
Higher-level Kac-Moody algebras in heterotic
compactifications have been considered previously in the
context of free fermion models, see for example \cite{lewellen,dienes} which
discuss their phenomenological virtues. In \cite{dienes}, for example,
the higher-level Kac-Moody algebras are constructed by starting
with critical heterotic strings realized in the usual fashion
and then orbifolding in such a way as to realize higher-level
Kac-Moody algebras from within the original level one structure.
However, in each of those previous works the higher-level Kac-Moody
algebras were all essentially embedded in an ambient level one algebra,
the ordinary $E_8$ algebra.
We are not aware of any previous work discussing heterotic compactifications
with higher-level Kac-Moody algebras that realize those algebras
directly, without an embedding into some ambient algebra,
as we do in this paper with `fibered WZW' models.
\section{Worldsheet obstruction in standard constructions}\label{wsobs}
How does one describe an $E_8$ bundle on the worldsheet?
It is well-known how to construct the $E_8$ current algebra,
and bundles with structure groups of the form $SU(n) \times U(1)^m$ are
also understood in this language, but to understand more exotic cases,
let us carefully work through the details for general nontrivial bundles.
For each $E_8$, there are\footnote{There is, of course, also a representation
of the bosonic string in terms of chiral abelian bosons.
However, that abelian bosonic representation can describe even fewer
bundles with connection than the fermionic representation -- essentially,
only those in which the bundle with connection is reducible to a maximal
torus -- and so we shall focus on the fermionic presentation.}
16 left-moving fermions which couple to the
pullback of a real vector bundle on the target space associated
to a principal $\mathrm{Spin}(16)$ bundle.
The worldsheet left-moving fermion kinetic terms have the form
\begin{equation*}
h_{\alpha \beta} \lambda_-^{\alpha} D \lambda_-^{\beta}
\end{equation*}
where $h_{\alpha \beta}$ is a fiber metric on a real rank 16 vector bundle,
and $D$ is a covariant derivative which implicitly includes the pullback
of a connection on such a bundle, so we see that we can describe
only $\mathrm{Spin}(16)$ gauge fields.
The worldsheet GSO projection is equivalent to a $\BZ_2$ orbifold
in which each of those fermions is acted upon by a sign.
Performing the GSO projection is therefore equivalent to projecting
the $\mathrm{Spin}(16)$ bundle to a $\mathrm{Spin}(16)/\BZ_2$ bundle,
and the surviving adjoint and spinor representations of
$\mathrm{Spin}(16)/\BZ_2$ are built into an $E_8$ bundle,
into which the $\mathrm{Spin}(16)/\BZ_2$ bundle
injects.
(The $\mathrm{Spin}(32)/\BZ_2$ heterotic string is much simpler;
the 32 left-moving spinors couple to a vector bundle associated
to a principal $\mathrm{Spin}(32)$ bundle, and the GSO projection
projects to $\mathrm{Spin}(32)/\BZ_2$.)
Factors of $\BZ_2$ will play an important role in what follows,
so let us take a moment to carefully check the statement above.
Of the groups $O(16)$, $SO(16)$, $\mathrm{Spin}(16)$,
and $\mathrm{Spin}(16)/\BZ_2$, only $\mathrm{Spin}(16)/\BZ_2$
is a subgroup of $E_8$ \cite{bryantpriv,adams},
so after performing the GSO projection
we had better recover a $\mathrm{Spin}(16)/\BZ_2$ bundle.
Also, the fact that the adjoint representation of $E_8$ decomposes
into the adjoint representation of $so(16)$ plus one chiral spinor
gives us another clue -- if the subgroup were $SO(16)$, then no spinors
could appear in the decomposition. The $\BZ_2$ quotient in
$\mathrm{Spin}(16)/\BZ_2$ projects out one of the chiral spinors
but not the other, giving us precisely the matter that we see
perturbatively.
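At the level of characters, the statement that the GSO-projected free fermions assemble into $E_8$ is the classical identity $\Theta_{E_8}(q)=\tfrac{1}{2}\left(\theta_2(q)^8+\theta_3(q)^8+\theta_4(q)^8\right)$: the coefficient of $q$, namely $240=112+128$, counts the roots of $so(16)$ together with the weights of a single chiral spinor, and adding the 8 Cartan directions gives the 248 states of the adjoint. A quick numerical check of this identity (a small illustrative Python sketch using truncated theta sums) is
\begin{verbatim}
# Numerical check of Theta_E8(q) = (theta2^8 + theta3^8 + theta4^8)/2
# against the Eisenstein series 1 + 240 * sum_n sigma_3(n) q^n.
def theta2(q, nmax=40):
    return sum(q ** ((n + 0.5) ** 2 / 2) for n in range(-nmax, nmax))

def theta3(q, nmax=40):
    return sum(q ** (n ** 2 / 2) for n in range(-nmax, nmax + 1))

def theta4(q, nmax=40):
    return sum((-1) ** n * q ** (n ** 2 / 2) for n in range(-nmax, nmax + 1))

def sigma3(n):
    return sum(d ** 3 for d in range(1, n + 1) if n % d == 0)

def E4(q, nmax=40):
    return 1 + 240 * sum(sigma3(n) * q ** n for n in range(1, nmax + 1))

q = 0.03
lhs = 0.5 * (theta2(q) ** 8 + theta3(q) ** 8 + theta4(q) ** 8)
print(lhs, E4(q))     # the two agree to high precision for small |q|
\end{verbatim}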
Furthermore, $\mathrm{Spin}(16)/\BZ_2$ does not have a
16-dimensional representation, so the left-moving fermions cannot
be in a vector bundle associated to a principal $\mathrm{Spin}(16)/\BZ_2$
bundle. Instead, they couple to a $\mathrm{Spin}(16)$ bundle,
and the GSO projection plays a crucial role.
Any data about a bundle with connection on the target space must be
encoded in the fermion kinetic terms
\begin{equation*}
h_{\alpha \beta} \lambda_-^{\alpha} D \lambda_-^{\beta}
\end{equation*}
Since the only data encoded concerns $\mathrm{Spin}(16)$ bundles,
if we had an $E_8$ bundle with connection that could not be reduced
to $\mathrm{Spin}(16)/\BZ_2$ and then lifted to $\mathrm{Spin}(16)$,
we would not be able to describe it on the worldsheet using the
conventional fermionic realization of the heterotic string.
So far we have described what worldsheet structures define the
$E_8$ bundle on the target space.
Let us now think about the reverse operation.
Given an $E_8$ bundle, what does one do to construct the corresponding
heterotic string?
First, one reduces the structure group from $E_8$ to
$\mathrm{Spin}(16)/\BZ_2$, if possible, and then lifts from
$\mathrm{Spin}(16)/\BZ_2$ to $\mathrm{Spin}(16)$, if possible.
The resulting $\mathrm{Spin}(16)$ bundle defines the left-moving
worldsheet fermions.
The catch is that not all $E_8$ bundles are reducible to
$\mathrm{Spin}(16)/\BZ_2$, and not all
$\mathrm{Spin}(16)/\BZ_2$ bundles can be lifted to
$\mathrm{Spin}(16)$ bundles. The second obstruction is defined by
an analogue of a Stiefel-Whitney class, which is more or less reasonably
well understood. We will be primarily concerned in this paper with the first
obstruction, which to our knowledge has not been discussed in the physics
literature previously.
\section{Principal $E_8$ bundles}\label{ppale8}
\subsection{Reducibility of principal $E_8$ bundles}
In this section we shall briefly outline\footnote{We are indebted to
A.~Henriques for a lengthy discussion in which he explained the
points of this section, and for giving us
permission to repeat
his homotopy analysis here.}
the technical issues involved
in computing the obstruction to reducing an $E_8$ bundle to
a $\mathrm{Spin}(16)/\BZ_2$ bundle. We shall find that the only
obstruction is an element of $H^{10}(M, \BZ_2)$, where $M$ is the
spacetime ten-manifold on which the $E_8$ bundle lives.
An $E_8$ bundle is the same thing as a map $M \rightarrow BE_8$.
In order to reduce the structure group of the bundle to $\mathrm{Spin}(16)/\BZ_2$, we want to lift the map above to a map
$M \rightarrow B\mathrm{Spin}(16)/\BZ_2$. In
fact, for our purposes, we can equivalently consider $B SO(16)$,
which is technically somewhat simpler.
In general, if $M$ is simply-connected (which we shall assume
throughout this section), then the obstructions to reducing a principal
$G$-bundle on $M$ to a principal $H$-bundle for $H \subset G$ live in
$H^k(M, \pi_{k-1}(G/H))$, which can be proven with
Postnikov towers. Since this technology is not widely
used in the physics community, let us expound upon this method
for $H=1$, and study
the obstructions to trivializing a principal $G$ bundle which,
from the general statement above, live in
$H^k(M, \pi_{k-1}(G))$. It is well-known that a principal $G$ bundle can
be trivialized if its characteristic classes vanish, and so one would
be tempted to believe that the groups $H^k(M, \pi_{k-1}(G))$ correspond
to characteristic classes, but the correct relationship\footnote{We would
like to thank M.~Ando for a patient explanation of this point.}
is more complicated. In the case of $E_8$ bundles and
$U(n)$ bundles, it is straightforward to check that the groups in which the
obstructions live are the same as the ones the characteristic classes
live in, making the distinction obscure:
for $E_8$, since $\pi_3(E_8) = \BZ$ is
the only nonzero homotopy group in dimension ten or less,
the obstructions to trivializing a principal $E_8$ bundle on a manifold
of dimension ten or less live in $H^4(M, \BZ)$, same as the characteristic
class,
and, for $U(n)$ bundles,
$\pi_i(U(n))$ is $\BZ$ for $i$ odd and less than $2n$,
so the obstructions to trivializing $U(n)$ bundles live in $H^{even}(M,
\BZ)$, the same groups as the Chern classes.
Principal $O(n)$ bundles are more confusing, and better illustrate
the distinction between obstructions and characteristic classes. The homotopy
groups
\begin{equation*}
\pi_{3 + 8k}(O(n)) \: = \: \BZ \: = \: \pi_{7+8k}(O(n))
\end{equation*}
(for $n$ sufficiently large)
and the corresponding obstructions correspond to the Pontryagin classes
in degrees any multiple of four. However, there are additional
$\BZ_2$-valued characteristic classes of $O(n)$ bundles,
known as the Stiefel-Whitney classes, and
\begin{equation*}
\pi_{0+8k}(O(n)) \: = \: \BZ_2 \: = \: \pi_{1+8k}(O(n))
\end{equation*}
(for $n$ sufficiently large)
corresponding to the first two Stiefel-Whitney classes $w_1$, $w_2$.
However, other homotopy groups vanish
\begin{equation*}
\pi_{2+8k}(O(n)) \: = \: 0 \: = \: \pi_{4+8k}(O(n)) \: = \:
\pi_{5+8k}(O(n))
\end{equation*}
and so there are no obstructions living in $H^3(M,\BZ_2)$,
$H^5(M,\BZ_2)$, or $H^6(M, \BZ_2)$, for example,
despite the fact that
there are Stiefel-Whitney classes in those degrees.
An $O(n)$ bundle can be trivialized only if its characteristic classes
all vanish, and yet we have found no obstructions corresponding to
many Stiefel-Whitney classes, which appears to be a contradiction.
Part of the resolution is that the relationship between characteristic
classes and obstructions is complicated: for example, the degree four
obstruction is $p_1/2$, and is only defined if the lower-order
obstructions vanish (so that $p_1$ is even).
Higher-order obstructions have an even more
complicated relationship. At the same time, one can use Steenrod square
operations and the Wu formula to determine many higher-order Stiefel-Whitney
classes from lower ones -- for example, if $w_1=w_2=0$ then necessarily
$w_3=0$.
The upshot of all this is that if the obstructions all vanish,
then the characteristic classes will all vanish,
and so the bundle is trivializable, and there is no contradiction.
In any event, the obstructions to reducing a principal $E_8$ bundle to
a principal $\mathrm{Spin}(16)/\BZ_2$ bundle live
in $H^k(M,\pi_{k-1}(F))$, where
$F = E_8/(\mathrm{Spin}(16)/\BZ_2)$ denotes the
fiber of
\begin{equation*}
B \, \mathrm{Spin}(16)/\BZ_2 \: \longrightarrow \: BE_8.
\end{equation*}
We can compute the homotopy groups of that quotient using
the long exact
sequence in homotopy induced by the fiber
sequence
\begin{equation*}
E_8/(\mathrm{Spin}(16)/\BZ_2) \: \longrightarrow \:
B\mathrm{Spin}(16)/\BZ_2 \: \longrightarrow \: BE_8.
\end{equation*}
One can compute the following:
\begin{center}
\begin{tabular}{ccccccccccccc}
$\pi_i$ for $i=$ & 1& 2& 3& 4& 5& 6& 7& 8& 9& 10& 11& 12 \\ \hline
$E_8/(\mathrm{Spin}(16)/\BZ_2)$ & 0 & $\BZ_2$ & 0& 0& 0& 0& 0& $\BZ$ & $\BZ_2$ & $\BZ_2$ & 0 & $\BZ$ \\
$B\mathrm{Spin}(16)/\BZ_2$ & 0 & $\BZ_2$ & 0 & $\BZ$ & 0 & 0 & 0 &
$\BZ$ & $\BZ_2$ & $\BZ_2$ & 0 & $\BZ$ \\
$BE_8$ & 0 & 0 & 0 & $\BZ$ & 0 & 0& 0& 0 & 0 & 0 & 0 & 0 \\
\end{tabular}
\end{center}
We used the following facts to compute this table.
First, we know that $E_8$ looks like a $K(\BZ,3)$ up to dimension 14,
and we also know $\pi_*(BSO)$ by Bott periodicity (see for example
\cite{hatcher}[section 4.2]). So, to
determine the long exact sequence in the relevant range, we only need to
compute $\pi_4(B\mathrm{Spin}(16)/\BZ_2) \rightarrow \pi_4(BE_8)$.
It turns out that $\pi_4(B\mathrm{Spin}(16)/\BZ_2) \rightarrow \pi_4(BE_8)$
is an isomorphism. This is the
case since $\mathrm{Spin}(16)/\BZ_2 \rightarrow E_8$ comes from an
inclusion of simply laced root systems
and the $SU(2)$s coming from the roots are the generators of $\pi_3$.
The obstructions in $H^k(M,\pi_{k-1}(F))$ are pulled
back from universal obstructions
\begin{equation*}
H^k(BE_8,\pi_{k-1}(F)).
\end{equation*}
By the previous
observation, this is isomorphic to $H^k(K(\BZ,4),\pi_{k-1}(F))$ in the relevant
range.
From the table above, there are three possible obstructions, living in
the groups
\begin{equation*}
H^3(M, \BZ_2), \: \:
H^9(M, \BZ), \: \:
H^{10}(M, \BZ_2).
\end{equation*}
The first of these we can eliminate immediately, since it is a pullback
from $H^3(BE_8,\BZ_2)$ but that group vanishes.
Next we check $H^9(K(\BZ,4), \BZ)=\BZ_3$ and
$H^{10}(K(\BZ,4), \BZ_2)=\BZ_2+\BZ_2$.
These groups will yield two potential obstructions:
an element of $H^9(M, \BZ)$, pulled back from a class
in $H^9(K(\BZ, 4), \BZ)$,
and an element of $H^{10}(M, \BZ_2)$, pulled back from a class
in $H^{10}(K(\BZ,4), \BZ_2)$.
In principle, the universal obstruction in $H^9(K(\BZ,4), \BZ)$ can be
nonzero because it agrees
with the $k$-invariant of $KO$ at $p=3$. Its name is ``Milnor's $Q_1$.''
It is a
cohomology operation $Q_1:H^n(-,\BZ) \rightarrow H^{n+5}(-,\BZ)$.
So, let us concentrate at $p=3$ for a moment. The question is,
does there exist a 10 dimensional manifold $M$ with a 4-dimensional
cohomology class $x$ on which $Q_1$ is non zero?
It can be shown by a cobordism invariance argument
\cite{francispriv} that on any oriented 10-manifold
$M$, there is no such cohomology class.
Thus, so long as our 10-manifold $M$ is oriented,
the potential obstruction in $H^9(M, \BZ)$ always vanishes,
leaving us with only one potential obstruction to reducibility
of the structure group of the $E_8$ bundle,
living in $H^{10}(M, \BZ_2)$.
Unfortunately, this obstruction can sometimes be nonzero.
(Examples of oriented 10-manifolds with nonreducible $E_8$ bundles
are described in \cite{dmw}, albeit to different ends.)
Although we have been unable to find any prior references discussing
this obstruction, we have found some that came close to uncovering it.
For example, in \cite{edsymp}, Witten points out the necessity of
reducing $E_8$ to $\mathrm{Spin}(16)/\BZ_2$, and also looks
for obstructions, but only up to degree six: he observes
that for compactifications to four dimensions, such a reduction is always
possible.
\subsection{Target space interpretation}
So far we have discussed a technical issue that arises when
trying to understand certain `exotic' $E_8$ bundles on a heterotic
string worldsheet. Next, we shall discuss the interpretation of this
obstruction in the ten-dimensional supergravity.
For chiral fermions in dimension $8k+2$, it is known
\cite{edcmp}[p. 206] that the number of zero modes of the
chiral Dirac operator is a topological invariant mod 2.
(The number of zero modes of the nonchiral Dirac operator is a topological
invariant mod 4.)
In particular, since the ten-dimensional gaugino is a Majorana-Weyl spinor,
the number of positive chirality gaugino zero modes is a
topological invariant mod 2. For $E_8$ bundles, this topological invariant
was discussed in \cite{dmw}[section 3], where it was labelled
$f(a)$ (where $a$ is the analogue of the Pontryagin invariant for
$E_8$ bundles).
Curiously, the element of $H^{10}(X, \BZ_2)$ that defines the
obstruction to reducing an $E_8$ bundle to a $\mathrm{Spin}(16)/\BZ_2$
bundle, is that same invariant \cite{hopkinspriv}.
In other words, the number of chiral gaugino zero modes
of the ten-dimensional Dirac operator is odd
precisely when the $E_8$ bundle cannot be reduced to
$\mathrm{Spin}(16)/\BZ_2$, and hence cannot be described
perturbatively on a heterotic string worldsheet.
This makes the current phenomenon sound analogous to the
anomaly in four-dimensional $SU(2)$ gauge theories with an odd number
of left-handed fermion doublets, described in \cite{edsu2}.
There, the anomaly could be traced to the statement that the
five-dimensional Dirac operator had an odd number of zero modes,
which translated into the statement that the relevant operator
determinant in the four-dimensional theory was not well-behaved
under families of gauge transformations. There, however,
it was the Dirac operator in one higher dimension that had an odd
number of zero modes, whereas in the case being studied in this
paper it is the Dirac operator in ten dimensions, not eleven dimensions,
that has an odd number of zero modes. Also, in the anomaly
studied in \cite{edsu2}, the fact that
$\pi_4(SU(2))$ is nonzero was crucial,
whereas by contrast $\pi_{10}(E_8)$ vanishes.
In fact that last fact was used in \cite{edcmp}[p. 198] to argue that
there should not be any global gauge anomalies in heterotic
$E_8 \times E_8$ strings.
\section{Connections}\label{e8conn}
So far we have discussed reducibility of topological $E_8$ bundles
to $\mathrm{Spin}(16)/\BZ_2$ bundles, but to realize a given $E_8$
gauge field in standard heterotic string constructions, we must
also reduce the connection on the bundle, not just the bundle itself.
In particular, on a principal $G$-bundle, even a trivial principal
$G$-bundle, one can find connections with holonomy that fill out all
of $G$, and so cannot be understood as coming from connections on
any principal $H$-bundle for $H$ a subgroup of $G$.
It is easy to see this statement locally \cite{rthompriv}:
one can pick a connection
whose curvatures at points in a small open set generate the Lie algebra
of $G$, and then the local holonomy will generate (the identity component of)
$G$, and since our bundles are reducible (in fact, trivial) locally, one
gets the desired result.
However, for our purposes it does not suffice to consider reducibility
of generic connections.
After all, for a perturbative vacuum of heterotic string theory,
the connection must
satisfy some stronger conditions: it must satisfy the Donaldson-Uhlenbeck-Yau
equation, the curvature must be of type $(1,1)$,
and it must satisfy anomaly cancellation.
However, even when the Donaldson-Uhlenbeck-Yau condition is satisfied,
it is still possible to have bundles with connection such that the
bundle is reducible but not the connection.
Examples of this were implicit in \cite{kcs},
which discussed how stability of bundles depends upon the metric.
Briefly, the K\"ahler cone breaks up into subcones, with a different
moduli space of bundles on each subcone. Some stable irreducible bundles
will, on the subcone wall, become reducible.
This means that the holomorphic structure (and also the
holonomy of the connection) was generically irreducible,
but becomes reducible at one point. For this to be possible at the
level of holomorphic structures means that the bundle was always
topologically reducible. Thus, implicitly in \cite{kcs}
there were examples of topologically reducible bundles with
irreducible connections satisfying the Donaldson-Uhlenbeck-Yau condition.
We shall construct some examples on K3 surfaces of $E_8$ gauge fields
which satisfy all the conditions above for a perturbative heterotic
string vacuum, but which cannot be reduced to $\mathrm{Spin}(16)/\BZ_2$.
\subsection{Moduli spaces of flat connections}
As a quick warm-up, let us briefly study how the moduli space
of flat $E_8$ connections on $T^2$ arises in a heterotic compactification
on $T^2$.
The moduli space of flat $E_8$ connections on $T^2$ and one component
of the moduli space of flat $\mathrm{Spin}(16)/\BZ_2$ connections
both have the form $(T^2)^8/W$, where $W$ is the respective Weyl group.
However, $W(D_8) \subset W(E_8)$, and in fact $| W(E_8)/W(D_8)| = 135$,
so the component of the moduli space of flat $\mathrm{Spin}(16)/\BZ_2$
connections is a 135-fold cover of the moduli space of flat $E_8$ connections.
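As a quick check of this index, $|W(E_8)| = 696729600$ while
$|W(D_8)| = 2^7 \cdot 8! = 5160960$, and indeed $696729600/5160960 = 135$.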
The projection to the moduli space of flat $E_8$ connections is induced
by T-dualities. The discrete automorphism group (T-dualities) of the heterotic
moduli space includes an $O(\Gamma_8)$ factor, which acts as the
$E_8$ Weyl group action above. When forming the moduli space,
we mod out by this factor, and so we get the moduli space of
flat $E_8$ connections, rather than that of $\mathrm{Spin}(16)/\BZ_2$
connections.
\subsection{Analysis of connections}
In this section we will construct
an example\footnote{We would like to thank R.~Thomas for an
extensive discussion of this matter in late March and April, 2006.}
of an $E_8$ gauge field on a Calabi-Yau $X$
which cannot be reduced to $\mathrm{Spin}(16)/\BZ_2$, but which does
satisfy the conditions for a consistent perturbative vacuum,
namely
\begin{equation*}
F_{0,2} \: = \: F_{2,0} \: = \:
g^{i \overline{\jmath}} F_{i \overline{\jmath}} \: = \: 0
\end{equation*}
and that
\begin{equation*}
\mbox{Tr } F^2 \: - \: \mbox{Tr }R^2
\end{equation*}
is cohomologous to zero.
To build this example, we use the fact that
$E_8$ contains a subgroup $\left( SU(5) \times SU(5) \right)/\BZ_5$.
This subgroup is not a subgroup of $\mathrm{Spin}(16)/\BZ_2$,
and so an $SU(5) \times SU(5) / \BZ_5$ gauge field whose holonomy
is all of the group is an example of an $E_8$ gauge field that cannot
be reduced to $\mathrm{Spin}(16)/\BZ_2$.
To construct such an $(SU(5) \times SU(5))/\BZ_5$ gauge field,
it suffices to construct an $SU(5) \times SU(5)$ gauge field,
then take its image under the quotient by the $\BZ_5$ (a projection which
always exists).
The perturbative anomaly cancellation condition is stated simply as
a matching of $\mbox{Tr } F^2$ and $\mbox{Tr }R^2$ in cohomology,
but for general groups the precise interpretation of that statement
in terms of degree four characteristic classes requires some care.
For an $SU(5) \times SU(5)$ bundle, anomaly cancellation should be
interpreted as the statement
\begin{equation*}
c_2({\cal E}_1) \: + \: c_2({\cal E}_2) \: = \: c_2(TX)
\end{equation*}
where ${\cal E}_1$, ${\cal E}_2$ are principal $SU(5)$ bundles.
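For instance, on a K3 surface, identifying $H^4(K3,\BZ) \cong \BZ$, one has
$c_2(TX) = 24$, so one natural solution, and the one realized by the construction
below, is
\begin{equation*}
c_2({\cal E}_1) \: = \: c_2({\cal E}_2) \: = \: 12 .
\end{equation*}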
As a check of anomaly cancellation in this context,
suppose that $SU(n)$ is a subgroup of $SU(5)$.
We can either embed the $SU(n)$ in $\mathrm{Spin}(16)/\BZ_2$,
and then build up a standard perturbative worldsheet, or
we can embed it in $SU(5)\times SU(5) / \BZ_5$, which does not
admit a perturbative description.
This gives two paths to $E_8$, but these two paths commute\footnote{We would
like to thank A.~Knutson for a helpful discussion of this matter at the
end of March 2006. Also, note the automorphism exchanging the two $SU(5)$'s
does not extend to $E_8$, which can also be seen from the asymmetry of
the decomposition of the adjoint representation of $E_8$ under
the subgroup above. Another way to see this is from
the fact that the $\BZ_5$ one quotients by is
not symmetric under such a switch.}.
A careful reader might point out another subtlety in the
statement of anomaly cancellation.
For example, the degree four characteristic class of an $SU(n)/\BZ_n$
bundle obtained from an $SU(n)$ bundle ${\cal E}$ can be
naturally taken to be\footnote{Alternatively, we can get the same result from the fact that the trace in the adjoint rep of $SU(n)$
is $2n$ times the trace in the fundamental rep \cite{erler},
which is also twice the dual Coxeter number.
}
$c_2(\mbox{End}_0 {\cal E}) = 2n c_2({\cal E})$,
so in the case above there could plausibly be extra numerical factors.
In any event, our methods are sufficiently robust that
such modifications of the anomaly cancellation condition will not
change the fact that there exist families of examples\footnote{If the reader
objects that a wandering factor of $5$ or $10$, as might be expected
in some interpretations of $SU(5)^2/\BZ_5$, would make examples
on K3's difficult, the quintic threefold has $c_2$ divisible by
5, in fact $c_2 = 10 H^2$, and there exist further examples there.}.
Put another way, nonreducible
connections are common, not rare or unusual.
We need to find a bundle with connection that not only satisfies
anomaly cancellation, but also the Donaldson-Uhlenbeck-Yau condition.
By working with $SU(n)$ gauge fields, we can translate such questions about
connections
into algebraic geometry questions.
In particular, the requirement that
the gauge field satisfy the Donaldson-Uhlenbeck-Yau equation becomes
the requirement that the corresponding holomorphic rank 5 vector bundle
be stable.
Ordinarily, checking stability can be rather cumbersome, but there is
an easy way to build examples sufficient for our purposes.
We can build holomorphic vector bundles on elliptic fibrations with
section using the techniques of \cite{fmw,bjps}.
(See also {\it e.g.} \cite{bjorn1,bjorn2,bjorn3,bjorn4,ovrut1,ovrut2} for some
more modern applications of the same technology.)
Furthermore, these bundles are automatically stable (for metrics in the
right part of the K\"ahler cone).
One must specify a (spectral) cover of the base of the fibration, plus a line
bundle on that cover.
Following the conventions of \cite{bjps},
to describe an $SU(r)$ bundle on an elliptic K3 with section we use
a spectral cover describing an $r$-fold cover of the base of the fibration.
The spectral cover will be in the class $| r \sigma + k f|$
where $\sigma$ is the class of the section and $f$ is the class
of the fiber, and $k$ is the second Chern class of the bundle
\cite{bjps}[p. 5].
Furthermore, there is a line bundle that must be specified on that cover,
and it can be shown \cite{bjps} that that line bundle must
have degree $-(r+g-1)$, where $g = rk - r^2 + 1$ is the genus of
the spectral cover (as it is a cover of ${\bf P}^1$, it is some
Riemann surface). If the spectral curve is reduced and irreducible
then the corresponding bundle will be stable; Bertini's theorem
implies that such curves exist in the linear system.
In the present case, we want a holomorphic vector bundle of
rank $5$, $c_1=0$, $c_2=12$. The spectral cover that will produce
such a result is in the linear system
$|5 \sigma + 12 f|$. The genus of such a curve is $36$, and the
line bundle has degree $-40$.
The dimension of the moduli space of spectral data is then $2 \cdot 36
= 72$.
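To spell out the arithmetic behind these numbers (a sketch of the standard count):
\begin{equation*}
g \: = \: rk - r^2 + 1 \: = \: 5 \cdot 12 - 25 + 1 \: = \: 36,
\qquad
-(r+g-1) \: = \: -(5+36-1) \: = \: -40,
\end{equation*}
and the $72 = 36 + 36$ counts the choice of spectral curve in the linear system
$|5 \sigma + 12 f|$ (which has dimension $g = 36$ on a K3) together with the
choice of a degree $-40$ line bundle on that curve (a torsor over its
genus-$36$ Jacobian).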
So far we have established the existence of stable $SU(5) \times
SU(5)$ bundles satisfying all the conditions for a consistent
perturbative vacuum; we still need to demonstrate that the holonomy
of the connection cannot be reduced below $SU(5) \times SU(5)$.
To do this we can apply the recent work \cite{kollar}, which says
that it is sufficient for each factor to be irreducible and to have
irreducible second symmetric power. As this will be generically
true \cite{donagipriv}, we see that the holonomy cannot be reduced
below $SU(5) \times SU(5)$, and so by projecting along a $\BZ_5$
automorphism we have a family of $(SU(5) \times SU(5))/\BZ_5$
bundles with the desired properties.
Thus, using the embedding of $(SU(5) \times SU(5))/\BZ_5$ in
$E_8$, we now have a family of $E_8$ bundles with connection on
K3's which satisfy all the requirements for a consistent perturbative
vacuum, but which cannot be reduced to $\mathrm{Spin}(16)/\BZ_2$,
and so cannot be described with standard constructions of heterotic
strings.
\subsection{Low energy theory}
Compactification on a bundle with structure group $(SU(5) \times
SU(5)) / \BZ_5$ breaks the $E_8$ to a mere $\BZ_5$ --
the commutant in $E_8$ is $\BZ_5$.
Similarly, if one were to compactify on a bundle with structure
group $\mathrm{Spin}(16)/\BZ_2$, the commutant inside $E_8$
is $\BZ_2$.
If it were the case that the low-energy theory in any $E_8$ bundle
not describable on the worldsheet had gauge group only a finite
group, then this might not be considered very interesting.
However, there are other examples of subgroups of $E_8$ whose
commutant has rank at least one, and which cannot be
embedded in $\mathrm{Spin}(16)/\BZ_2$.
For example, the group $(E_7 \times U(1))/\BZ_2$
is a subgroup of $E_8$ (that sits inside the $(E_7 \times SU(2))/\BZ_2$
subgroup of $E_8$) which has commutant $U(1)$,
and is not a subgroup of $\mathrm{Spin}(16)/\BZ_2$.
For another example, $(E_6 \times SU(3))/\BZ_3$ is a subgroup
of $E_8$, and so its $E_6$ subgroup has commutant $SU(3)$,
but $E_6$ cannot be embedded in $\mathrm{Spin}(16)/\BZ_2$.
To see this, note that if $E_6$ could be embedded in $\mathrm{Spin}(16)/\BZ_2$,
then the Lie algebra $so(16)$ would have an $e_6$ subalgebra,
and since there is a 16-dimensional representation of $so(16)$,
that means $e_6$ would have a possibly reducible nontrivial
16-dimensional representation
as well, just from taking the subalgebra described by some of the
$16 \times 16$ matrices describing $so(16)$. However, the smallest
nontrivial representation of $e_6$ is 27-dimensional, a contradiction.
(Note this is closely related to but distinct from the standard
embedding for Calabi-Yau three-folds:
the $SU(3)$ subgroup of $( E_6 \times SU(3) )/\BZ_3$ {\it does}
sit inside $\mathrm{Spin}(16)/\BZ_2$, unlike the $E_6$.)
\section{F theory duals and the existence of perturbative realizations}\label{fthydual}
So far we have argued that there exist some bundles with connection that
cannot be realized using the standard description of heterotic $E_8 \times
E_8$ strings. Does that mean that they do not arise in string theory?
Such questions are important to the landscape program, for example,
where one of the current issues involves understanding which
backgrounds admit UV completions \cite{bankstalk,vafaswamp}.
Some insight into this question can be made with F theory duals.
For example, \cite{paulrec}[section 2.3] describes an F theory dual
to a heterotic compactification in which the bundle with connection
has structure group $(E_7 \times U(1))/\BZ_2$, and so cannot
be realized with the standard construction of heterotic strings.
Such examples tell us that at least some of these bundles with connection
can nevertheless be realized within string theory.
More abstract considerations lead one to the same conclusion.
Imagine starting with a bundle with connection reducible to
$\mathrm{Spin}(16)/\BZ_2$, and deforming to an $E_8$ bundle with
connection that is not reducible. Since the adjoint representation of
$E_8$ decomposes into the adjoint and a chiral spinor representation of
$\mathrm{Spin}(16)/\BZ_2$, the deformation described would involve
giving a vacuum expectation value to a spinor.
This sounds reminiscent of describing Ramond-Ramond fields in type II
strings with nonzero
vacuum expectation values. In the case of type II strings,
giving those fields vacuum expectation values involved formally adding
terms to the lagrangian coupled to the superconformal ghosts,
which is problematic, and is the reason that Ramond-Ramond field vevs
are problematic in basic formulations of type II strings.
In a heterotic string, however, giving a vev to a gauge spinor does
{\it not} involve coupling to superconformal ghosts, unlike the type II
case, so there is no obstruction in principle.
Thus, from this consideration, one is led to believe that
$E_8$ bundles with connection that cannot be reduced to
$\mathrm{Spin}(16)/\BZ_2$ should nevertheless define
well-behaved CFT's, even though they cannot
be described within traditional heterotic worldsheet constructions.
In the remainder of this paper we will describe alternative constructions
of perturbative heterotic strings which can explicitly
realize more general $E_8$ bundles with
connection. First, in the next section we will describe how
subgroups other than $\mathrm{Spin}(16)/\BZ_2$ can be used to build
$E_8$ in ten dimensions, and will check by comparing modular forms
that corresponding current algebra constructions realize all of the
degrees of freedom of the left-moving part of the standard constructions.
To make such constructions practical in less than ten dimensions,
however, one needs suitable technology for fibering current algebras
over a base, and so we introduce ``fibered WZW models,'' which will
enable us to fiber a current algebra for any group at any level
over a base, using a principal bundle with connection to define the
fibering.
\section{Alternative constructions of 10d heterotic strings}\label{alt10d}
The reader might ask whether the heterotic string could be formulated
in some alternative fashion that might be more amenable to some of the
constructions above. For example, might it be possible to formulate
a worldsheet string with, for each $E_8$, two sets of five complex fermions,
realizing the $E_8$ from $(SU(5) \times SU(5) ) / \BZ_5$?
Unfortunately, two sets of five complex fermions would have a
$U(5) \times U(5)$ global symmetry, and if we try to gauge
each $U(1)$ on the worldsheet, we would encounter a $U(1)^2$ anomaly
which would force $c_2$ of each bundle to vanish separately.
Instead, we are going to take an alternative approach to this issue.
We are going to develop a notion of fibered current algebras,
realized by fibered WZW models,
which will allow us to realize current algebras at any level
and associated to any group $G$, fibered nontrivially over
any compactification manifold.
The standard $E_8 \times E_8$ heterotic
string construction is, after all, one realization of a fibered
$E_8 \times E_8$ current algebra at level 1; our technology will
enable us to talk about fibering $G$-current algebras at level $k$.
Before doing that, however,
we will check to what extent subgroups of $E_8$ other than
$\mathrm{Spin}(16)/\BZ_2$ can be used to build up the left-moving
$E_8$ partition function in ten dimensions.
For example, one could take a pair of $SU(5)$ current algebras,
then perform a $\BZ_5$ orbifold (replacing the ``left-moving GSO''
used to build $\mathrm{Spin}(16)/\BZ_2$ from a $\mathrm{Spin}(16)$
current algebra in the usual construction)
so as to get
an $( SU(5) \times SU(5) )/\BZ_5$ global symmetry on the
worldsheet, or take an $SU(9)$ global symmetry and perform a
$\BZ_3$ orbifold to get an $SU(9)/\BZ_3$ global symmetry.
Both $(SU(5)\times SU(5))/\BZ_5$ and $SU(9)/\BZ_3$
are subgroups of $E_8$,
and we will find that such alternative subgroups correctly reproduce
the $E_8$ partition function, and so give alternative constructions
of the $E_8$ current algebra in ten dimensions.
At the level of characters and abstract affine algebras, the idea
that $E_8$ can be built from other subgroups has appeared previously
in \cite{kacsan}; we shall review some pertinent results and also
describe how those character decompositions are realized physically
in partition functions, via orbifold twisted sectors.
First, let us recall how $E_8$ is built from $\mathrm{Spin}(16)/\BZ_2$
in ten dimensions.
The adjoint representation of $E_8$ decomposes as
\begin{equation} \label{e8spin16}
{\bf 248} \: = \: {\bf 120} \: + \: {\bf 128}
\end{equation}
under $\mathrm{Spin}(16)/\BZ_2$.
At the level of ordinary Lie algebras, we get the elements of the $E_8$
Lie algebra from the adjoint plus a spinor representation of
$\mathrm{Spin}(16)/\BZ_2$, and assigning them suitable commutation relations.
At the level of WZW conformal families, we could write
\begin{equation*}
[{\bf 1}] \: = \: [{\bf 1}] \: + \: [{\bf 128}]
\end{equation*}
which implicitly includes equation~(\ref{e8spin16}) as a special case,
since the (adjoint-valued) currents are non-primary descendants of the
identity operator.
That statement about conformal families implies a statement about
characters of the corresponding affine Lie algebras, namely that
\begin{equation} \label{spin16base}
\chi_{E_8}({\bf 1},q) \: = \: \chi_{Spin(16)}({\bf 1}, q)
\: + \: \chi_{Spin(16)}({\bf 128}, q)
\end{equation}
where \cite{gswv1}[section 6.4.8]
\begin{equation*}
\chi_{E_8}({\bf 1}, q) \: = \: \frac{E_2(q)}{\eta(q)^8}
\end{equation*}
and where $E_2(q)$ is the weight-four Eisenstein series
\begin{equation*}
\begin{split}
E_2(q) = \: & 1 \: + \: 240 \sum_{m=1}^{\infty} \sigma_3(m) q^m \\
= \: & 1 \: + \: 240 \left[ q \: + \: (1^3 + 2^3) q^2 \: + \: (1^3 + 3^3) q^3
\: + \: \cdots \right] \\
= \: & 1 \: + \: 240 q \: + \: 2160 q^2 \: + \: 6720 q^3 \: + \:
17520 q^4 \: + \: 30240 q^5 \: + \: 60480 q^6 \: + \: \cdots
\end{split}
\end{equation*}
with
\begin{equation*}
\sigma_3(m) \: = \: \sum_{d | m} d^3
\end{equation*}
The identity~(\ref{spin16base}) is discussed in for example
\cite{gswv1}[section 6.4] and \cite{gannonlam1}[eqn (3.4a)].
The $\BZ_2$ orbifold plays a crucial role in the expression above.
Without the $\BZ_2$ orbifold, we would only consider the single
conformal family $[{\bf 1}]$ and the single character
$\chi_{Spin(16)}({\bf 1}, q)$. The $[{\bf 128}]$
arises from the $\BZ_2$ orbifold twisted sector.
(The fact that the twisted sector states still form representations of
the same affine Lie algebra as the untwisted sector states, despite
being in a twisted sector, is a consequence of the fact that the
orbifold group action preserves the currents -- it acts on the center
of the group, preserving the algebra structure.)
Next, we shall check to what extent other subgroups of $E_8$ can be
used to duplicate the same left-moving degrees of freedom.
\subsection{ Some maximal-rank subgroups}
In this subsection, we shall argue that the left-moving
$E_8$ degrees of freedom can be reproduced by using the
maximal-rank
$SU(5)^2/\BZ_5$ and $SU(9)/\BZ_3$ subgroups of $E_8$,
in place of $\mathrm{Spin}(16)/\BZ_2$.
Just as for $\mathrm{Spin}(16)/\BZ_2$, the finite group quotients
will be realized by orbifolds and will play a crucial role.
At the level of characters of affine algebras, the ideas have
appeared previously in {\it e.g.} \cite{kacsan}, but we shall also
explain how those character decompositions are realized physically
in partition functions.
For more information on determining such finite group quotients,
see appendix~\ref{gpthy}.
First, let us check central charges.
From \cite{diFranc}[section 15.2],
the central charge of a bosonic WZW model at level $k$ is
\begin{equation*}
\frac{ k \, \mbox{dim }G }{ k + C }
\end{equation*}
where $C$ is the dual Coxeter number.
For the case of $G = SU(N)$, $\mbox{dim }G = N^2 - 1$
and $C=N$ (see {\it e.g.} \cite{ps}[p. 502]), hence the central charge
of the bosonic $SU(N)$ WZW is
\begin{equation*}
\frac{ k(N^2 - 1) }{k + N}
\end{equation*}
For $k=1$, this reduces to $N-1$. Thus, the $SU(5)$ current algebra
at level 1 has central charge 4, and the $SU(9)$ current algebra has
central charge 8.
In particular, this means that the $SU(5)\times SU(5)$ current
algebra at level 1
has central charge $4+4=8$,
just right to be used in critical
heterotic strings to build an $E_8$. Similarly,
the $SU(9)$ current algebra at level 1 has central charge $8$,
also just right to be used in critical heterotic strings to build an $E_8$.
Similarly, for $E_6$, $E_7$, $E_8$, the dual Coxeter numbers are 12, 18, 30,
respectively, and it is easy to check that at level 1, each
current algebra has central charge equal to 6, 7, 8, respectively.
More generally, for ADE groups, the level 1 current algebras have central
charge equal to the rank of the group.
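As an illustration, the central charges quoted above are easy to tabulate with a
few lines of code (a minimal sketch; the dictionary of dimensions and dual Coxeter
numbers simply collects the standard values used in the text):
\begin{verbatim}
from fractions import Fraction

# (dimension, dual Coxeter number) for the groups discussed in the text
groups = {
    "SU(5)": (24, 5),   "SU(9)": (80, 9),
    "E6":    (78, 12),  "E7":    (133, 18),  "E8": (248, 30),
    "G2":    (14, 4),   "F4":    (52, 9),
}

k = 1   # level
for name, (dim, C) in groups.items():
    # central charge of the level-k WZW model: c = k dim(G) / (k + C)
    c = Fraction(k * dim, k + C)
    print(name, c)
# expected: SU(5) 4, SU(9) 8, E6 6, E7 7, E8 8, G2 14/5, F4 26/5
\end{verbatim}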
For $SU(5)$ at level one, the integrable representations (defining WZW primaries)
are, besides the identity, ${\bf 5}$, ${\bf 10} = \Lambda^2 {\bf 5}$,
${\bf \overline{10}} = \Lambda^3 {\bf 5}$, and
${\bf \overline{5}} = \Lambda^4 {\bf 5}$.
The fusion rules obeyed by the WZW conformal families have
the form
\begin{equation*}
\begin{split}
[{\bf 5}] \times [{\bf 5}] = \: & [{\bf 10}] \\
{[{\bf 5}]} \times [{\bf \overline{5}}] = \: & [{\bf 1}] \\
{[{\bf \overline{10}}]} \times [{\bf \overline{5}}] = \: & [{\bf 10}] \\
{[{\bf 10}]} \times [{\bf \overline{5}}] = \: & [{\bf 5}] \\
{[{\bf \overline{10}}]} \times [{\bf \overline{10}}] = \: & [{\bf 5}] \\
{[{\bf \overline{10}}]} \times [{\bf 10}] = \: & [{\bf 1}]
\end{split}
\end{equation*}
The adjoint representation of $E_8$ decomposes under $SU(5)^2/\BZ_5$ as
\cite{slansky}
\begin{equation*}
{\bf 248} \: = \: ({\bf 1}, {\bf 24}) \: + \: ({\bf 24},{\bf 1}) \: + \:
({\bf 5},{\bf \overline{10}}) \: + \: ({\bf \overline{5}},{\bf 10})
\: + \: ({\bf 10},{\bf 5}) \: + \: ({\bf \overline{10}},{\bf \overline{5}})
\end{equation*}
from which one would surmise that the corresponding statement about
conformal families is
\begin{equation} \label{su5conffam}
[{\bf 1}] \: = \: [{\bf 1},{\bf 1}] \: + \:
[{\bf 5},{\bf \overline{10}}] \: + \:
[{\bf \overline{5}},{\bf 10}] \: + \:
[{\bf 10},{\bf 5}] \: + \:
[{\bf \overline{10}},{\bf \overline{5}}]
\end{equation}
which can be checked by noting that the right-hand side above squares
into itself under the fusion rules.
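One quick way to see the closure: at level one every $SU(5)$ primary is a simple
current, so fusion is simply addition of $\BZ_5$ charges, with $[{\bf 5}] \to 1$,
$[{\bf 10}] \to 2$, $[{\bf \overline{10}}] \to 3$, $[{\bf \overline{5}}] \to 4$.
The five families on the right-hand side of~(\ref{su5conffam}) then carry the
$\BZ_5 \times \BZ_5$ charges
\begin{equation*}
(0,0), \: (1,3), \: (4,2), \: (2,1), \: (3,4),
\end{equation*}
which form the cyclic subgroup generated by $(1,3)$, and hence close under fusion.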
Next, we shall check partition functions, which will provide the
conclusive demonstration that
the $E_8$ of a ten-dimensional heterotic string
can be built from $( SU(5) \times SU(5) )/\BZ_5$
instead of $\mathrm{Spin}(16)/\BZ_2$.
The character of the identity representation of $SU(5)$ is
\begin{equation*}
\chi_{SU(5)}({\bf 1}, q) = \frac{1}{\eta(\tau)^{4}}\sum_{\vec{m}\in\BZ^{4}}
q^{(\sum m_i^2 +(\sum m_i)^2)/2}
\end{equation*}
Taking modular transformations, the characters of the other
needed integrable representations are
\begin{equation*}
\chi_{SU(5)}({\bf 5}, q) = \frac{1}{\eta(\tau)^4} \sum_{{\vec{m}\in\BZ^4, \:
\sum m_i=1 \bmod
5}}
q^{(\sum m_i^2 -\frac{1}{5}(\sum m_i)^2)/2}
\end{equation*}
and
\begin{equation*}
\chi_{SU(5)}({\bf 10}, q) = \frac{1}{\eta(\tau)^4} \sum_{{\vec{m}\in\BZ^4, \:
\sum m_i=2 \bmod 5}}
q^{(\sum m_i^2 -\frac{1}{5}(\sum m_i)^2)/2}
\end{equation*}
The remaining two characters (given by $\sum m_i = 3,4 \bmod 5$) are equal
to these, by taking $\vec{m}\to -\vec{m}$.
Now, we need to verify that
\begin{equation} \label{e8su5chars}
\chi_{E_8}({\bf 1}, q) = \chi_{SU(5)}({\bf 1}, q)^2 + 4 \chi_{SU(5)}(
{\bf 5}, q)\, \chi_{SU(5)}({\bf 10}, q)
\end{equation}
which corresponds to equation~(\ref{su5conffam}) for the conformal families.
This character decomposition, along with character decompositions for
other subgroups, has appeared previously in \cite{kacsan}, but since it
plays a crucial role in our arguments, we shall explain in detail why it is
true, and then explain how it is realized physically in partition functions.
The $E_8$ character is given by \cite{gswv1}[section 6.4.8]
\begin{equation*}
\chi_{E_8}({\bf 1},q) \: = \: \frac{E_2(q)}{\eta(\tau)^8}
\end{equation*}
where $E_2(q)$ denotes the relevant Eisenstein series.
The $\BZ_5$ orbifold is implicit here -- $\chi({\bf 1},q)^2$ arises
from the untwisted sector, and each of the four
$\chi({\bf 5},q)\chi({\bf 10},q)$'s arises from a twisted sector.
(As for $\mathrm{Spin}(16)/\BZ_2$, since the orbifold action preserves
the currents, the twisted sector states must form a well-defined module
over the (unorbifolded) affine Lie algebra.)
Ample numerical evidence for equation~(\ref{e8su5chars})
is straightforward to generate.
For example:
\begin{equation*}
\begin{split}
\eta(\tau)^4 \chi_{SU(5)}({\bf 1}, q) = \: & 1 \: + \: 20 q \: + \: 30 q^2 \: + \:
60 q^3 \: + \: 60 q^4 \: + \: 120 q^5 \: + \: 40 q^6 \: + \:
180 q^7 \\
& \: + \: 150 q^8 \: + \: 140 q^9 \: + \: 130 q^{10} \: + \:
240 q^{11} \: + \: 180 q^{12} \: + \: 360 q^{13} \: + \: \cdots \\
\eta(\tau)^4 \chi_{SU(5)}({\bf 5}, q) = \: & 5 q^{2/5} \: + \: 30 q^{7/5} \: + \:
30 q^{12/5} \: + \: 80 q^{17/5} \: + \: 60 q^{22/5} \: + \:
100 q^{27/5} \\
& \: + \: 104 q^{32/5} \: + \: 168 q^{37/5} \: + \:
54 q^{42/5} \: + \: 206 q^{47/5} \: + \: 168 q^{52/5} \\
& \: + \: 172 q^{57/5} \: + \: 140 q^{62/5} \: + \:
270 q^{67/5} \: + \:
153 q^{72/5} \: + \: \cdots \\
\eta(\tau)^4 \chi_{SU(5)}({\bf 10}, q) = \: & 10 q^{3/5} \: + \: 25 q^{8/5} \: + \:
60 q^{13/5} \: + \: 35 q^{18/5} \: + \: 110 q^{23/5} \: + \:
90 q^{28/5} \\
& \: + \: 120 q^{33/5} \: + \: 96 q^{38/5} \: + \:
198 q^{43/5} \: + \: 98 q^{48/5} \: + \: 244 q^{53/5} \\
& \: + \: 126 q^{58/5} \: + \: 192 q^{63/5} \: + \:
208 q^{68/5} \: + \:
300 q^{73/5} \: + \: \cdots
\end{split}
\end{equation*}
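As a quick consistency check on these expansions, note that the leading terms
$5 q^{2/5}$ and $10 q^{3/5}$ have exponents equal to the conformal weights of the
corresponding level-one primaries (for $SU(N)$ at level one the $k$-th antisymmetric
power of the fundamental has $h = k(N-k)/2N$, so $h_{{\bf 5}} = 2/5$ and
$h_{{\bf 10}} = 3/5$) and coefficients equal to the dimensions of those
representations; since $c = 4$, the factor of $\eta(\tau)^4$ precisely cancels the
$-c/24$ in the leading power.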
Putting this together, we find
\begin{multline*}
\lefteqn{ \eta(\tau)^8\left( \chi_{SU(5)}({\bf 1}, q)^2 \: + \:
4 \chi_{SU(5)}({\bf 5}, q) \, \chi_{SU(5)}({\bf 10}, q)
\right) \: = \: } \\
1 \: + \: 240 q \: + \: 2160 q^2 \: + \: 6720 q^3 \: + \:
17520 q^4 \: + \: 30240 q^5 \: + \: 60480 q^6 \: + \: \cdots
\end{multline*}
which are precisely the first few terms of the appropriate Eisenstein series
$E_2(q)$, numerically verifying the prediction~(\ref{e8su5chars}).
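For readers who wish to reproduce these expansions, the identity can also be checked
by brute force (a minimal sketch of our own; the cutoffs \verb|N|, \verb|B| and the
helper names are ours, and the lattice sums are exactly the formulas for
$\eta^4 \chi_{SU(5)}$ given above):
\begin{verbatim}
from fractions import Fraction
from itertools import product
from collections import defaultdict

N = 6   # verify the q-expansion through order q^N
B = 8   # lattice cutoff: exponents grow at least like (sum m_i^2)/10,
        # so |m_i| <= 8 captures every term with exponent <= 6

def theta(residue, shift):
    # eta^4 chi as {exponent: coefficient};
    # exponent = (sum m_i^2 + shift*(sum m_i)^2)/2, with sum m_i
    # restricted to the given residue mod 5 (None = no restriction)
    out = defaultdict(int)
    for m in product(range(-B, B + 1), repeat=4):
        s = sum(m)
        if residue is not None and s % 5 != residue:
            continue
        e = (sum(x * x for x in m) + shift * s * s) / 2
        if e <= N:
            out[e] += 1
    return out

th1  = theta(None, Fraction(1))       # eta^4 chi(1):  1 + 20q + 30q^2 + ...
th5  = theta(1, Fraction(-1, 5))      # eta^4 chi(5):  5q^(2/5) + 30q^(7/5) + ...
th10 = theta(2, Fraction(-1, 5))      # eta^4 chi(10): 10q^(3/5) + 25q^(8/5) + ...

def mult(a, b):
    c = defaultdict(int)
    for ea, ca in a.items():
        for eb, cb in b.items():
            if ea + eb <= N:
                c[ea + eb] += ca * cb
    return c

lhs = defaultdict(int)    # eta^8 ( chi(1)^2 + 4 chi(5) chi(10) )
for e, c in mult(th1, th1).items():
    lhs[e] += c
for e, c in mult(th5, th10).items():
    lhs[e] += 4 * c

sigma3 = lambda n: sum(d ** 3 for d in range(1, n + 1) if n % d == 0)
E2 = {Fraction(n): (1 if n == 0 else 240 * sigma3(n)) for n in range(N + 1)}

print(all(lhs[e] == E2.get(e, 0) for e in lhs))    # expect True
\end{verbatim}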
More abstractly, the equivalence can be proven as follows\footnote{This
argument is due to E.~Scheidegger, and we would like to thank him
for allowing us to print it here.}.
In the notation of \cite{gannonlam1}, we need to relate the theta
function of the $E_8$ lattice to a product of theta functions for
$SU(5)$ lattices. Briefly, first one argues that
\begin{equation*}
\Theta(E_8) \: = \: \Theta( \{ A_4, A_4 \}[1,2])
\end{equation*}
Using \cite{gannonlam1}[eqns (1.1), (1.5)], this can be written as
\begin{equation*}
\Theta\left( \bigcup_{i=1}^5 [ig]\{A_4, A_4 \} \right) \: = \:
\sum_{i=1}^5 \Theta\left( [ig]\{ A_4, A_4 \} \right)
\end{equation*}
where $g$ denotes the generator of the $\BZ_5$ action
(shift by 1 on first $A_4$, shift by 2 on second).
Using \cite{gannonlam1}[eqn (1.4)], this can be written as
\begin{equation*}
\sum_{i=1}^5 \Theta([ig]A_4) \Theta([ig]A_4)
\: = \: \sum_{i=1}^5 \Theta([i]A_4) \Theta([2i]A_4)
\end{equation*}
Using the symmetry
\begin{equation*}
\Theta([5-i]A_4) \: = \: \Theta([i]A_4)
\end{equation*}
the result then follows after making the identifications
\begin{equation*}
\eta(\tau)^4 \chi({\bf 1},q) \: = \: \Theta(A_4), \: \: \:
\eta(\tau)^4 \chi({\bf 5},q) \: = \: \Theta([1]A_4), \: \: \:
\eta(\tau)^4 \chi({\bf 10},q) \: = \: \Theta([2]A_4)
\end{equation*}
Merely verifying the existence of a character decomposition does not suffice
to explain how this can be used in alternative constructions of heterotic
strings -- one must also explain how that character decomposition is
realized physically. In the case of $\mathrm{Spin}(16)/\BZ_2$, the
two components of the character decomposition were realized physically
as the untwisted and twisted sectors of a $\BZ_2$ orbifold
of a $\mathrm{Spin}(16)$ current algebra. That orbifold structure
precisely correlates with the group-theoretic fact that the
subgroup of $E_8$ is $\mathrm{Spin}(16)/\BZ_2$ and not
$\mathrm{Spin}(16)$ or $SO(16)$ -- the finite group factor that one gets
from the group theory of $E_8$, appears physically as the orbifold of the
current algebra that one needs in order to reproduce the correct
character decomposition.
There is a closely analogous story here.
Group-theoretically, the subgroup of $E_8$ is not $SU(5) \times SU(5)$
but rather $( SU(5) \times SU(5) )/\BZ_5$, and so one should expect
that a $\BZ_5$ orbifold of the $SU(5)\times SU(5)$ current algebra
should appear. Indeed, that is precisely what happens.
If we only considered an $SU(5)\times SU(5)$ current algebra without
an orbifold, the only contribution to the heterotic partition function
would be from the characters $\chi_{SU(5)}({\bf 1},q)^2$,
which would not reproduce the $E_8$ character.
In order to realize the complete $E_8$ character decomposition, we need
more, and the extra components of the character decomposition are realized
in twisted sectors of a $\BZ_5$ orbifold, the same $\BZ_5$ arising
in group-theoretic considerations.
Each $\chi_{SU(5)}({\bf 5}, q) \chi_{SU(5)}({\bf 10}, q)$ arises
in a twisted sector. The individual ${\bf 5}$, ${\bf 10}$,
${\bf \overline{5}}$, and ${\bf \overline{10}}$ are not invariant under
the $\BZ_5$, but the products $({\bf 5}, {\bf \overline{10}})$,
$({\bf \overline{5}}, {\bf 10})$, $({\bf 10}, {\bf 5})$,
$({\bf \overline{10}}, {\bf \overline{5}})$ {\it are} invariant
under the $\BZ_5$ orbifold, as discussed in appendix~\ref{gpthy}.
For $SU(9)/\BZ_3$, there is an analogous\footnote{At the level of
character decompositions, this and other examples are discussed in {\it e.g.}
\cite{kacsan}.} story.
The adjoint representation of $E_8$ decomposes as \cite{slansky}
\begin{equation*}
{\bf 248} \: = \: {\bf 80} \: + \: {\bf 84} \: + \:
{\bf \overline{84}}
\end{equation*}
and so proceeding as before the conformal families of $E_8$, $SU(9)$ should
be related by
\begin{equation} \label{su9conffam}
[ {\bf 1} ] \: = \: [ {\bf 1} ] \: + \: [ {\bf 84} ] \: + \:
[ {\bf \overline{84}} ]
\end{equation}
(which includes the decomposition above as a special case as the
currents in the current algebra are descendants of the identity).
The relevant $SU(9)$, level 1, characters are given by
\begin{equation*}
\chi_{SU(9)}({\bf 1}, q) = \frac{1}{\eta(\tau)^8}
\sum_{\vec{m}\in \BZ^8} q^{(\sum m_i^2 +(\sum m_i)^2)/2}
\end{equation*}
and
\begin{equation*}
\chi_{SU(9)}({\bf 84}, q) = \frac{1}{\eta(\tau)^8}
\sum_{{\vec{m}\in \BZ^8, \:
\sum m_i =3 \bmod 9}} q^{(\sum m_i^2 -\frac{1}{9}(\sum m_i)^2)/2}
\end{equation*}
(The character for ${\bf \overline{84}}$ is identical.)
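(As a cross-check, the leading power of $\eta(\tau)^8 \chi_{SU(9)}({\bf 84},q)$
is $q^1$: the level-one formula $h = k(N-k)/2N$ with $N = 9$, $k = 3$ gives
$h_{{\bf 84}} = 3 \cdot 6/18 = 1$, exactly as required for the ${\bf 84}$ and
${\bf \overline{84}}$ to supply the remaining currents of $E_8$.)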
Then, from equation~(\ref{su9conffam}) it should be true that
\begin{equation*}
\chi_{E_8}({\bf 1}, q) = \chi_{SU(9)}({\bf 1}, q) \: + \:
2 \chi_{SU(9)}({\bf 84}, q)
\end{equation*}
This identity is proven in \cite{gannonlam1}[table 1].
The same statement is also made for lattices in
\cite{lerchelattice}[section A.3, p. 109] and
\cite{go}[eqn (8.12)], and of course also appeared in
\cite{kacsan}.
Again, it is important to check that this character decomposition
really is realized physically in a partition function,
and the story here closely mirrors the $(SU(5)\times SU(5))/\BZ_5$
and $\mathrm{Spin}(16)/\BZ_2$ cases discussed previously.
Group-theoretically, the subgroup of $E_8$ is $SU(9)/\BZ_3$
and not $SU(9)$ or $SU(9)/\BZ_9$, so one would expect that we need
to take a $\BZ_3$ orbifold of the $SU(9)$ current algebra.
Indeed, if we did not take any orbifold at all, and only coupled
the $SU(9)$ current algebra by itself, then the only contribution
to the heterotic partition function would be from the
character $\chi_{SU(9)}({\bf 1},q)$, which does not suffice to reproduce
the $E_8$ character. Instead, we take a $\BZ_3$ orbifold,
and each of the two characters $\chi_{SU(9)}({\bf 84}, q)$,
$\chi_{SU(9)}({\bf \overline{84}},q)$ appears in a $\BZ_3$ orbifold
twisted sector. Taking those orbifold twisted sectors into account
correctly reproduces the $E_8$ character decomposition within the
heterotic partition function.
\subsection{A non-maximal-rank subgroup }
So far we have discussed how $E_8$ can be built from maximal-rank subgroups.
Somewhat surprisingly, on the level of characters, it appears that
one can build it from non-maximal-rank subgroups also.
We will discuss the case of $G_2 \times F_4$.
Although it satisfies many highly nontrivial checks, unfortunately
we will eventually conclude that it cannot be used, unlike the
maximal-rank subgroups discussed so far.
First, we should mention that the construction of the ordinary
Lie group $E_8$ from $G_2 \times F_4$ is described in \cite{adams}[chapter 8].
Very roughly, the idea is that if one takes $\mathrm{Spin}(16)$
and splits it into $\mathrm{Spin}(7) \times \mathrm{Spin}(9)$,
then $G_2 \subset \mathrm{Spin}(7)$ and $F_4 \subset \mathrm{Spin}(9)$.
Under the $g_2 \times f_4$ subalgebra, the adjoint representation of $e_8$
decomposes as \cite{slansky}
\begin{equation} \label{e8g2f4}
{\bf 248} \: = \: ({\bf 14}, {\bf 1}) \: + \: ({\bf 1}, {\bf 52})
\: + \: ({\bf 7}, {\bf 26})
\end{equation}
The commutant of $G_2 \times F_4$ in $E_8$ has rank zero.
One way to see this is from the construction outlined above, but a simpler
way is from the decomposition of the adjoint representation of $E_8$:
if the commutant had rank greater than zero, then the adjoint of the
commutant would secretly appear in the decomposition of the adjoint of
$E_8$, as a set of singlets, but there are no singlets in the $E_8$
adjoint decomposition, and so the commutant must have rank zero.
Thus, even though $G_2 \times F_4$ is not of maximal rank, its commutant
in $E_8$ can be no more than a finite group.
This may sound a little surprising to some readers, but is in fact
a relatively common occurrence in representation theory.
For example, a dimension $n$ representation of $SU(2)$ embeds
$SU(2)$ in $SU(n)$, and has rank zero commutant inside $SU(n)$,
even though $SU(2)$ is not a maximal-rank subgroup.
This is a consequence of Schur's lemma.
We are going to discuss whether the $E_8$ degrees of freedom can be
described by this non-maximal-rank subgroup, namely $G_2 \times F_4$.
As one initial piece of evidence,
the fact stated above that the commutant of $G_2 \times F_4$ in $E_8$
has rank zero is consistent. After all,
if it is possible to describe all of the $E_8$ current algebra using
$G_2\times F_4$ on the internal space, then there will be no
left-moving worldsheet degrees of freedom left over to describe
any gauge symmetry in the low-energy compactified heterotic theory.
That can only be consistent if the commutant has rank zero,
{\it i.e.}, if there is no low-energy gauge symmetry left over to
describe.
Next, let us check that the central charges of the algebras work out
correctly. The dual Coxeter number of $G_2$ is $4$ and that of $F_4$
is $9$, so the central charge of the $G_2$ algebra at level 1 is
$14/5$ and that of the $F_4$ algebra at level 1 is $52/10$,
which sum to $8$, the same as the central charge of the $E_8$ algebra
at level 1.
Both $G_2$ and $F_4$ affine algebras at level one have only two\footnote{
This is a short exercise using \cite{slansky}; let us briefly outline
the details for $G_2$. The condition for a representation with highest
weight $\lambda$ to be integrable at level $k$ is $2 \psi \cdot \lambda / \psi^2
\leq k$, where $\psi$ is the highest weight of the adjoint representation.
Using \cite{slansky} tables 7 and 8, a representation of $G_2$ with
Dynkin labels $(a,b)$ has $2 \psi \cdot \lambda / \psi^2 = 2a+b$,
where $a$, $b$ are nonnegative integers, and so can only be $\leq 1$
when $a=0$ and $b$ is either $0$ or $1$, which gives the
${\bf 1}$ and ${\bf 7}$ (\cite{slansky}[table 13]) representations
respectively.
}
integrable representations:
\begin{equation*}
\begin{array}{cc}
G_2: & [{\bf 1}], [{\bf 7}] \\
F_4: & [{\bf 1}], [{\bf 26}]
\end{array}
\end{equation*}
The conformal weights of the primary fields are, respectively, $h_{7}= \tfrac{2}{5}$ and $h_{26}= \tfrac{3}{5}$.
So, our proposed decomposition of $E_8$ level 1 (which has only one integrable
representation)
\begin{equation*}
[{\bf 1}] \: = \: [{\bf 1},{\bf 1}] \: + \: [{\bf 7},{\bf 26}]
\end{equation*}
does, indeed, reproduce the correct central charge and the conformal weights
and multiplicity of currents.
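As a cross-check on these weights: in the normalization where the highest root has
length squared two, so that $C_2(\mathrm{adj}) = 2C$ with $C$ the dual Coxeter
number, the standard formula $h_{\lambda} = C_2(\lambda)/\left(2(k+C)\right)$ gives,
using the standard Casimir values $C_2({\bf 7}) = 4$ and $C_2({\bf 26}) = 12$,
\begin{equation*}
h_{{\bf 7}} \: = \: \frac{4}{2(1+4)} \: = \: \frac{2}{5}, \qquad
h_{{\bf 26}} \: = \: \frac{12}{2(1+9)} \: = \: \frac{3}{5},
\end{equation*}
so that $h_{{\bf 7}} + h_{{\bf 26}} = 1$, exactly what is needed for the
$({\bf 7},{\bf 26})$ states to supply the currents filling out the ${\bf 248}$.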
Under modular transformations, the two sides of
\begin{equation}\label{g2f4chardecomp}
\chi_{E_{8}}({\bf 1},q) = \chi_{G_{2}}({\bf 1},q)\chi_{F_{4}}({\bf 1},q)+ \chi_{G_{2}}({\bf 7},q)\chi_{F_{4}}({\bf 26},q)
\end{equation}
transform identically. To see this, note that the fusion rules of $G_{2}$
and $F_{4}$ at level 1 are, respectively,
\begin{equation}\label{g2f4fusion}
\begin{array}{cc}
G_2: & [{\bf 7}] \times [{\bf 7}] \: = \: [{\bf 1}] + [{\bf 7}]\\
F_4: & [{\bf 26}] \times [{\bf 26}] \: = \: [{\bf 1}] + [{\bf 26}]
\end{array}
\end{equation}
The modular S-matrix (for both $G_{2}$ and $F_{4}$) is
\begin{equation}
S = \frac{1}{\sqrt{2}}\begin{pmatrix}
\sqrt{1-1/\sqrt{5}} & \sqrt{1+1/\sqrt{5}} \\
\sqrt{1+1/\sqrt{5}} & - \sqrt{1-1/\sqrt{5}}
\end{pmatrix}
\end{equation}
which, in both cases, satisfies $S^{2}= (ST)^{3}=1$ and
$N_{ijk}= \sum_{m}\tfrac{S_{im}S_{jm}S_{km}}{S_{1m}}$.
Using this modular S-matrix, the particular combination of characters on the
RHS of \eqref{g2f4chardecomp} is invariant, as it should be.
This, along with the transformation under $T$ which we have already checked, proves \eqref{g2f4chardecomp}.
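These statements are easy to confirm numerically (a small sketch of our own; the
tolerance, names, and the use of \verb|numpy| are ours, while the $S$-matrix,
central charges $14/5$ and $26/5$, and weights $2/5$ and $3/5$ are the ones quoted
above):
\begin{verbatim}
import numpy as np

r5 = 5 ** 0.5
S = np.array([[ (1 - 1/r5) ** 0.5,  (1 + 1/r5) ** 0.5],
              [ (1 + 1/r5) ** 0.5, -(1 - 1/r5) ** 0.5]]) / 2 ** 0.5

# index 0 = identity, index 1 = the nontrivial primary ([7] or [26])
for name, c, hs in [("G2 level 1", 14/5, (0, 2/5)),
                    ("F4 level 1", 26/5, (0, 3/5))]:
    T = np.diag(np.exp(2j * np.pi * (np.array(hs) - c / 24)))
    print(name,
          "S^2 = 1:",    np.allclose(S @ S, np.eye(2)),
          "(ST)^3 = 1:", np.allclose(np.linalg.matrix_power(S @ T, 3), np.eye(2)))

# Verlinde formula N_{ijk} = sum_m S_im S_jm S_km / S_0m
Nijk = np.einsum('im,jm,km,m->ijk', S, S, S, 1 / S[0])
print(np.round(Nijk, 6))   # N[1,1,0] = N[1,1,1] = 1, i.e. [x] x [x] = [1] + [x]
\end{verbatim}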
However, it is clear from the fusion rules, \eqref{g2f4fusion},
that something is amiss. If we take the OPE of $[{\bf 7},{\bf 26}]$ with
itself, the fusion rules dictate that we should see, in addition to the
desired $[{\bf 1},{\bf 1}]+[{\bf 7},{\bf 26}]$, terms
involving $[{\bf 7},{\bf 1}]+[{\bf 1},{\bf 26}]$ as well.
While we have managed to reproduce the multiplicity of states correctly,
it appears that we have failed to reproduce their interactions correctly.
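Explicitly, the fusion rules~(\ref{g2f4fusion}) give
\begin{equation*}
[{\bf 7},{\bf 26}] \times [{\bf 7},{\bf 26}] \: = \:
\left( [{\bf 1}] + [{\bf 7}] \right) \otimes \left( [{\bf 1}] + [{\bf 26}] \right)
\: = \: [{\bf 1},{\bf 1}] \: + \: [{\bf 1},{\bf 26}] \: + \: [{\bf 7},{\bf 1}]
\: + \: [{\bf 7},{\bf 26}],
\end{equation*}
whereas consistency with the $E_8$ current algebra would require only the first
and last terms.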
Moreover Ka\v c and Sanielevici \cite{kacsan} have found several other
examples of
non-maximal rank embeddings of characters of affine algebras, of which this
is, perhaps, the simplest example. As far as we can tell, the same criticism
applies to their other examples: the multiplicity of states correctly
reproduces that of the $E_{8}$ current algebra, but the interactions do not.
It is worth remarking that our previous examples were obtained as (asymmetric)
orbifolds by some subgroup of the center.
In the case at hand, $G_{2}$ and $F_{4}$ are center-less\footnote{
This fact is discussed in appendix~\ref{gpthy}. In addition, they also
have no normal finite subgroup, as any discrete normal subgroup of a connected
group is necessarily central, and there is no center in this case.
The statement on discrete normal subgroups can be shown as follows.
Let $G$ be a connected group and $N$ a discrete normal subgroup.
Let $G$ act on $N$ by conjugation, which it does since $N$ is normal.
Then for any $n \in N$, every $g n g^{-1}$ is in $N$, and connected to
$n$ within $N$, since $G$ is connected. Since $N$ is discrete,
for $g n g^{-1}$ to be connected to $n$, they must be equal,
hence $N$ is central. We would like to thank A.~Knutson for pointing
this out to us.
}, so there is
no obvious orbifold construction that could give rise to \eqref{g2f4chardecomp}.
\section{Symmetric bosonic fibered WZW models} \label{symmfibwzw}
Now that we have seen alternative constructions of ten-dimensional
heterotic strings using more general current algebras than
$\mathrm{Spin}(16)/\BZ_2$, we will next discuss how to fiber
those current algebras over nontrivial spaces.
As a warm-up, let us first describe a fibered WZW model in the
symmetric case. This will not be useful for heterotic strings,
but it will provide a good `stepping-stone' to the asymmetric
fibered WZW models we will discuss in the next section.
Start with the total space of a principal $G$-bundle in which, across coordinate
patches, the fibers transform as $g \mapsto g_{\alpha \beta} \, g \,
g_{\alpha \beta}^{-1}$. Let $A_{\mu}$ be a connection on this bundle.
First, recall from \cite{cliffwzw1}[eqn (2.4)] that a WZW model
in which the adjoint action has been gauged has the form
\begin{equation*}
\begin{split}
S = \: &
- \: \frac{k}{4 \pi} \int_{\Sigma} d^2z \mbox{Tr } \left[
g^{-1} \partial g g^{-1} \overline{\partial} g \right] \\
& - \frac{i k}{12 \pi} \int_B d^3y \epsilon^{ijk}
\mbox{Tr }\left[ g^{-1} \partial_i g g^{-1} \partial_j g
g^{-1} \partial_k g \right] \\
& + \frac{k}{2 \pi} \int_{\Sigma} d^2z \mbox{Tr }\left[
A_{\overline{z}} g^{-1} \partial g \: - \:
A_{z} \overline{\partial} g g^{-1} \right] \\
& + \frac{k}{2 \pi} \int_{\Sigma} d^2z
\mbox{Tr } \left[
A_{\overline{z}} g^{-1} A_{z} g \: - \: A_{\overline{z}} A_{z} \right]
\end{split}
\end{equation*}
where $A_z$, $A_{\overline{z}}$ are the components of a worldsheet gauge field.
To define a fibered WZW model, we will want to replace the
worldsheet gauge fields with pullbacks of a gauge field on the
target space (the connection on the $G$ bundle).
That way, gauge invariance across coordinate patches will be
built in.
Thus,
consider a nonlinear sigma model on the total space of that bundle
with action
\begin{equation*}
\begin{split}
S = \: & \frac{1}{\alpha'} \int_{\Sigma} d^2z \partial_{\alpha} \phi^{\mu}
\partial^{\alpha} \phi^{\nu} g_{\mu \nu}
\: - \: \frac{k}{4 \pi} \int_{\Sigma} d^2z \mbox{Tr } \left[
g^{-1} \partial g g^{-1} \overline{\partial} g \right] \\
& - \frac{i k}{12 \pi} \int_B d^3y \epsilon^{ijk}
\mbox{Tr }\left[ g^{-1} \partial_i g g^{-1} \partial_j g
g^{-1} \partial_k g \right] \\
& + \frac{k}{2 \pi} \int_{\Sigma} d^2z \mbox{Tr }\left[
\overline{\partial} \phi^{\mu} A_{\mu} g^{-1} \partial g \: - \:
\partial \phi^{\mu} A_{\mu} \overline{\partial} g g^{-1} \right] \\
& + \frac{k}{2 \pi} \int_{\Sigma} d^2z \overline{\partial}
\phi^{\mu} \partial \phi^{\nu} \mbox{Tr } \left[
A_{\mu} g^{-1} A_{\nu} g \: - \: A_{\mu} A_{\nu} \right]
\end{split}
\end{equation*}
where the $\phi^{\mu}$ are coordinates on the base and $g$ is a coordinate
on the fibers.
On each coordinate patch on the base, the Wess-Zumino term is an
ordinary Wess-Zumino term -- the fields $g$ are fields on the worldsheet,
not functions of the $\phi$ -- and so can be handled in the ordinary
fashion.
Next, although we have deliberately engineered this action to be
well-defined across coordinate patches on the target space,
let us explicitly check that the action is indeed gauge invariant.
Under the following variation
\begin{equation*}
\begin{split}
g \mapsto \: & h g h^{-1} \\
A_{\mu} \mapsto \: & h \partial_{\mu} h^{-1} \: + \:
h A_{\mu} h^{-1}
\end{split}
\end{equation*}
(where $h = h(\phi)$), the variation of all terms except the WZ
term is given by
\begin{multline*}
\delta =
\frac{k}{4 \pi} \int_{\Sigma} d^2z\mbox{Tr }\left[
- h^{-1} \overline{\partial} h g^{-1} \partial g \: + \:
h^{-1} \partial h \overline{\partial} g g^{-1} \: - \:
h^{-1} \partial h g h^{-1} \overline{\partial} h g^{-1} \right. \\
\left. + h^{-1} \partial h g^{-1} \overline{\partial} g
\: - \: \partial g g^{-1} h^{-1} \overline{\partial} h \: + \:
h^{-1} \partial h g^{-1} h^{-1} \overline{\partial} h g \right]
\end{multline*}
and where it is understood that, for example,
$\partial h = \partial \phi^{\mu} \partial_{\mu} h$.
The variation of the WZ term is given by
\begin{equation*}
\begin{split}
- \frac{3 i k}{12 \pi} &\int_B d^3y \epsilon^{ijk} \mbox{Tr }\left[
g^{-1} h^{-1} \partial_i h h^{-1} \partial_j h \partial_k g \: - \:
g^{-1} h^{-1} \partial_i h h^{-1} \partial_j h g h^{-1} \partial_k h
\right. \\
& + \: h^{-1} \partial_i h \partial_j g g^{-1} \partial_k g g^{-1} \:
- \: g^{-1} h^{-1} \partial_i h \partial_j g h^{-1} \partial_k h \\
& - \: g^{-1} h^{-1} \partial_i h g h^{-1} \partial_j h g^{-1} \partial_k g
\: + \: g^{-1} h^{-1} \partial_i h g h^{-1} \partial_j h h^{-1} \partial_k h\\
& \left. - \: g^{-1} \partial_i g g^{-1} \partial_j g h^{-1} \partial_k h
\: + \: g^{-1} \partial_i g h^{-1} \partial_j h h^{-1} \partial_k h \right] \\
= \: & - \frac{3ik}{12 \pi} \int_B d \mbox{Tr }\left[
- h^{-1} dh \wedge dg g^{-1} \: - \:
h^{-1} dh \wedge g^{-1} dg \: + \: g^{-1} h^{-1} (dh) g \wedge h^{-1} dh
\right] \\
= \: & - \frac{3ik}{12\pi}\int_{\Sigma} \mbox{Tr }\left[
- h^{-1} dh \wedge dg g^{-1} \: - \:
h^{-1} dh \wedge g^{-1} dg \: + \: g^{-1} h^{-1} (dh) g \wedge h^{-1} dh
\right]
\end{split}
\end{equation*}
If we write $z = x + iy$, so that
\begin{equation*}
dz \wedge d\overline{z} \: - \: d \overline{z} \wedge dz \: = \:
2 i \left( dy \wedge dx \: - \: dx \wedge dy \right)
\end{equation*}
then we see that the terms generated by the variation of the WZ term
are exactly what is needed to cancel the terms generated by everything else.
Note that the computation above, the check that the model is
well-defined across target-space coordinate patches,
is identical to the computation needed to show that an ordinary
gauged WZW model is invariant under gauge transformations.
The model we have described so far is bosonic, but one could
imagine adding fermions along the base and demanding supersymmetry
under transformations that leave the fibers invariant.
A simpler version of this is obtained by taking a $(2,2)$ nonlinear
sigma model and adding right- and left-moving fermions $\lambda_{\pm}$
coupling to a vector bundle over the $(2,2)$ base.
Demanding that the resulting model be $(2,2)$
supersymmetric on-shell unfortunately forces the bundle to be flat: $F=0$.
Roughly, half of the constraints one obtains from supersymmetry
force the curvature
to be holomorphic, in the sense $F_{ij} =
F_{\overline{\imath} \overline{\jmath}} = 0$,
and the other half force the connection to be flat.
We shall find in the next section that imposing merely $(0,2)$
supersymmetry is easier: one merely needs the curvature to
be holomorphic, not necessarily flat.
\section{Fibered (0,2) WZW models} \label{chirfibwzw}
\subsection{Construction of the lagrangian}
Begin with some principal $G$ bundle with connection $A_{\mu}$
over some Calabi-Yau $X$.
Consider a nonlinear sigma model on the total space of that bundle.
We shall think of the fibers as defining, locally, WZW models,
so we use the connection $A_{\mu}$ to define a chiral multiplication on the
fibers of the bundle, and have a WZ term to describe $H$ flux in the fibers.
\subsubsection{Gauge invariance and global well-definedness}
We are going to write down a fibered WZW model in which each
fiber is a gauged WZW model, gauging the action $g \mapsto h g$
across coordinate patches on the target space, the principal
$G$ bundle.
First, recall from \cite{cliffwzw1}[eqn (2.9)]
and \cite{wittenholfac}, a gauged WZW model
gauging the chiral multiplication $g \mapsto h g$ is given by
\begin{equation*}
\begin{split}
S' = \: &
- \: \frac{k}{4 \pi} \int_{\Sigma} d^2 z \mbox{Tr }\left(
g^{-1} \partial_z g g^{-1} \partial_{\overline{z}} g \right)
\: - \: \frac{ik}{12 \pi} \int_B d^3 y \epsilon^{ijk}
\mbox{Tr }\left( g^{-1} \partial_i g g^{-1} \partial_j g
g^{-1} \partial_k g \right) \\
& - \: \frac{k}{2 \pi} \int_{\Sigma} d^2 z \mbox{Tr }
\left( A_{z}
\partial_{\overline{z}} g g^{-1}\: + \: \frac{1}{2}
A_{z} A_{\overline{z}} \right)
\end{split}
\end{equation*}
where $A_z$, $A_{\overline{z}}$ are worldsheet gauge fields.
With that in mind, to describe a fibered WZW model, one would
replace the worldsheet gauge fields with pullbacks of a connection
$A_{\mu}$ on the target space, the principal $G$ bundle.
In fact,
one would initially suppose that the action should have the form
\begin{equation*}
\begin{split}
S = \: & \frac{1}{\alpha'} \int_{\Sigma} d^2z\left( \frac{1}{4}
g_{i \overline{\jmath}}
\partial_{\alpha} \phi^{i} \partial^{\alpha} \phi^{\overline{\jmath}}
\: + \: i g_{i \overline{\jmath}} \psi_+^{\overline{\jmath}} D_{\overline{z}}
\psi_+^i \right) \\
& - \: \frac{k}{4 \pi} \int_{\Sigma} d^2 z \mbox{Tr }\left(
g^{-1} \partial_z g g^{-1} \partial_{\overline{z}} g \right)
\: - \: \frac{ik}{12 \pi} \int_B d^3 y \epsilon^{ijk}
\mbox{Tr }\left( g^{-1} \partial_i g g^{-1} \partial_j g
g^{-1} \partial_k g \right) \\
& - \: \frac{k}{2 \pi} \int_{\Sigma} d^2 z \mbox{Tr }
\left( (\partial_{z} \phi^{\mu}) A_{\mu}
\partial_{\overline{z}} g g^{-1}\: + \: \frac{1}{2} (\partial_z \phi^{\mu}
\partial_{\overline{z}} \phi^{\nu}) A_{\mu} A_{\nu} \right)
\end{split}
\end{equation*}
The field $g$ defines a coordinate on the fibers of the bundle,
and $\phi$ are coordinates on the base.
However, the full analysis is slightly more complicated.
As described in \cite{cliffwzw1,wittenholfac,ralph1} a WZW action is not
invariant under chiral group multiplications, so the action above is
not invariant across coordinate patches on the target space.
Specifically, under the target-space gauge transformation
\begin{equation*}
\begin{split}
g \mapsto \: & h g \\
A_{\mu} \mapsto \: & h A_{\mu} h^{-1} \: + \: h \partial_{\mu} h^{-1}
\end{split}
\end{equation*}
(where $h$ is a group-valued function on the target space)
the gauge transformation of the terms above excepting the Wess-Zumino term
is given by
\begin{equation*}
\frac{k}{4 \pi} \int_{\Sigma} d^2z \mbox{Tr }\left( h^{-1} \partial h
\overline{\partial} g g^{-1} \: - \: h^{-1} \overline{\partial} h \partial g
g^{-1} \: + \: \overline{\partial} \phi^{\mu} A_{\mu} h^{-1} \partial h
\: - \: \partial \phi^{\mu} A_{\mu} h^{-1} \overline{\partial} h \right)
\end{equation*}
where, for example, $\partial h = (\partial_z \phi^{\mu})( \partial_{\mu} h)$,
and the gauge transformation of the Wess-Zumino term is given by
\begin{equation*}
- \frac{ik}{12 \pi} \int_B d^3y \epsilon^{ijk} \mbox{Tr }\left(
h^{-1} \partial_i h h^{-1} \partial_j h h^{-1} \partial_k h \right)
\: + \: \frac{ik}{4 \pi} \int_{\Sigma} \mbox{Tr }\left(
h^{-1} dh \wedge dg g^{-1} \right)
\end{equation*}
This lack of gauge invariance is exactly what one would expect
of a bosonized description of the left-movers on a heterotic
string worldsheet. There is a chiral gauge anomaly in the
fermionic realization which after bosonization should be realized classically.
On the other hand, a lack of gauge-invariance across coordinate
patches means we have a problem with global well-definedness of the chiral fibered WZW
model.
We can resolve this problem with gauge invariance in the standard
way for heterotic strings: assign the $B$ field nontrivial
gauge transformation properties. So, we add a $B$ field,
coupling as
\begin{equation*}
\frac{1}{\alpha'} \int_{\Sigma} d^2z B_{\mu \nu} \left(
\partial \phi^{\mu} \overline{\partial} \phi^{\nu} \: - \:
\overline{\partial} \phi^{\mu} \partial \phi^{\nu} \right)
\end{equation*}
and demand that
under the gauge transformation above, the holonomy above pick up
the terms
\begin{equation} \label{CS-gauge-trans}
+ \frac{ik}{12 \pi} \int_B d^3y \epsilon^{ijk} \mbox{Tr }\left(
h^{-1} \partial_i h h^{-1} \partial_j h h^{-1} \partial_k h \right) \: + \:
\frac{ik}{4 \pi} \int_{\Sigma} \mbox{Tr }\left(
h^{-1} dh \wedge d \phi^{\mu} A_{\mu} \right)
\end{equation}
This transformation law manifestly restores gauge-invariance.
Let us check for a minute that this transformation law is consistent.
The second term is a two-form, and so it is completely consistent
for the $B$ field to pick up such a term. The first term, on the
other hand, is a three-form, which in general will not even
be closed on each overlap chart. As a result, the first term cannot
be expressed even locally in terms of a two-form.
However, there is a fix.
In addition to gauge invariance, we must also demand, as is standard
in heterotic strings, that the $B$ field transform under local
Lorentz transformations acting on the chiral right-moving fermions.
These transformations are anomalous, and by demanding that the $B$
field transform, we can restore the gauge-invariance broken by
the anomalies.
Under such transformations, the $B$ field
will necessarily pick up two closely analogous terms, one of which
will involve another problematic three-form.
Thus, we need for the combination
\begin{equation*}
k \, \mbox{Tr } \left( \left( g_{\alpha \beta}^F \right)^{-1} d g_{\alpha \beta}^F
\right)^3 \: - \:
\mbox{Tr }\left( \left( g_{\alpha \beta}^R \right)^{-1} d g_{\alpha \beta}^R
\right)^3
\end{equation*}
to be exact on each overlap, where the $g_{\alpha \beta}$'s are transition
functions for the gauge ($F$) and tangent ($R$) bundles.
This turns out \cite{tonypriv} to be implied by the statement that
$k \mbox{Tr }F^2$ and $\mbox{Tr } R^2$ match in cohomology;
writing Chern-Simons forms for both and interpreting in terms of
Deligne cohomology, the condition that the difference across overlaps
is exact is immediate.
This is the first appearance of the anomaly-cancellation constraint that
\begin{equation} \label{anom1}
k \, [ \mbox{Tr } F^2 ] \: = \: [ \mbox{Tr } R^2 ]
\end{equation}
where $k$ is the level of the fibered Kac-Moody algebra.
We shall see this same constraint emerge several more times in
different ways.
In any event, so long as the condition~(\ref{anom1}) is obeyed,
we see that the chiral fibered WZW model is well-defined globally.
Next we shall discuss the fermion kinetic terms in this model.
In order to formulate a supersymmetric theory, we shall need to
add a three-form flux $H_{\mu \nu \rho}$ to the connection appearing
in the $\psi$ kinetic terms. Ordinarily $H = d B$, but we need
$H$ to be gauge- and local-Lorentz-neutral, whereas $B$ transforms
under both gauge and local Lorentz transformations. To fix this,
we follow the standard procedure in heterotic strings of adding
Chern-Simons terms. For example, the gauge terms~(\ref{CS-gauge-trans})
are the same as those arising in a gauge transformation of the
Chern-Simons term
\begin{equation*}
+ \: \frac{ i k }{ 4 \pi} \int_B d^3y \epsilon^{ijk}
\partial_i \phi^{\mu} \partial_j \phi^{\nu} \partial_k \phi^{\rho}
\mbox{Tr }\left( A_{\mu} \partial_{\nu} A_{\rho} \: + \:
\frac{2}{3}A_{\mu}A_{\nu}A_{\rho} \right)
\end{equation*}
and similarly one can cancel the terms picked up under
local Lorentz transformation by adding a term involving the
Chern-Simons form coupling to the spin connection.
Schematically, we have
\begin{equation*}
H \: = \: d B \: + \: (\alpha')\mbox{Tr }\left( k \, CS(A) \: - \: CS(\omega) \right)
\end{equation*}
where $k$ is the level of the fibered current algebra.
$H$ is now an ordinary gauge- and local-Lorentz-invariant three-form.
This statement implies that $k \, \mbox{Tr } F^2$ and $\mbox{Tr }R^2$
must be in the same cohomology class. For a fibered current algebra
defined by a principal $SU(n)$ bundle ${\cal E}$ over a space $X$,
this is the statement that $k \, c_2({\cal E}) = c_2(TX)$,
which generalizes the ordinary anomaly cancellation condition of heterotic
strings. This is the second appearance of this constraint;
we shall see it again later.
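In slightly more detail, and at the same schematic level as the expression for $H$
above, taking an exterior derivative and using $d \, CS(A) = \mbox{Tr } F \wedge F$
(together with its analogue for the spin connection) gives
\begin{equation*}
d H \: = \: (\alpha') \left( k \, \mbox{Tr } F \wedge F \: - \: \mbox{Tr } R \wedge R
\right)
\end{equation*}
Since $H$ is a globally-defined three-form, the left-hand side is exact, and so
$k \, \mbox{Tr } F^2$ and $\mbox{Tr } R^2$ must lie in the same de Rham cohomology
class, as claimed.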
As an aside, note that since this model has
nonzero $H$ flux, the metric cannot be K\"ahler
\cite{strominger}.
More precisely, to zeroth order in $\alpha'$ a K\"ahler metric can
be consistent, but to next leading order in $\alpha'$ the metric
will be nonK\"ahler, with $H$ measuring how far the metric is from
being K\"ahler.
Also note that this analysis is analogous to, though slightly
different from, that of $(0,2)$ WZW models discussed in
\cite{cliffwzw1,ralph1}. There, WZW models with chiral group
multiplications and chiral fermions were also considered. However,
the fermions lived in the tangent bundle to the group manifold,
so the chiral group multiplication induced the right-moving fermion
anomaly, and so that chiral fermion anomaly and the classical
noninvariance of the action could be set to cancel each other out.
Here, on the other hand, the chiral fermions live on the base,
not the WZW fibers, and so do not see the chiral group multiplication
(which only happens on the fibers). Thus, here we proceed in a more
nearly traditional fashion, by adding a $B$ field with nontrivial
gauge- and local-Lorentz transformations, whose global well-definedness
places constraints on the bundles involved.
Thus, the gauge-invariant action has the form
\begin{equation*}
\begin{split}
S = \: & \frac{1}{\alpha'} \int_{\Sigma} d^2z\left( \frac{1}{4}
g_{i \overline{\jmath}}
\partial_{\alpha} \phi^{i} \partial^{\alpha} \phi^{\overline{\jmath}}
\: + \: i g_{i \overline{\jmath}} \psi_+^{\overline{\jmath}} D_{\overline{z}}
\psi_+^i \right) \\
& + \: \frac{1}{\alpha'} \int_{\Sigma} d^2z B_{\mu \nu}
\left(
\partial \phi^{\mu} \overline{\partial} \phi^{\nu} \: - \:
\overline{\partial} \phi^{\mu} \partial \phi^{\nu} \right) \\
& - \: \frac{k}{4 \pi} \int_{\Sigma} d^2 z \mbox{Tr }\left(
g^{-1} \partial_z g g^{-1} \partial_{\overline{z}} g \right)
\: - \: \frac{ik}{12 \pi} \int_B d^3 y \epsilon^{ijk}
\mbox{Tr }\left( g^{-1} \partial_i g g^{-1} \partial_j g
g^{-1} \partial_k g \right) \\
& - \: \frac{k}{2 \pi} \int_{\Sigma} d^2 z \mbox{Tr }
\left( (\partial_{z} \phi^{\mu}) A_{\mu}
\partial_{\overline{z}} g g^{-1}\: + \: \frac{1}{2} (\partial_z \phi^{\mu}
\partial_{\overline{z}} \phi^{\nu}) A_{\mu} A_{\nu} \right) \\
\end{split}
\end{equation*}
\subsubsection{Worldsheet supersymmetry}
Next, let us demand that the model possess $(0,2)$ supersymmetry,
under the transformations
\begin{equation*}
\begin{split}
\delta \phi^i = \: & i \alpha_- \psi_+^i \\
\delta \phi^{\overline{\imath}} = \: & i \tilde{\alpha}_- \psi_+^{
\overline{\imath}} \\
\delta \psi_+^i = \: & - \tilde{\alpha}_- \partial \phi^i \\
\delta \psi_+^{\overline{\imath}} = \: & - \alpha_- \partial
\phi^{\overline{\imath}} \\
\delta g = \: & 0
\end{split}
\end{equation*}
Supersymmetry will require us to add the gauge-invariant term
\begin{equation*}
\frac{i k}{4 \pi} \int_{\Sigma} d^2z \mbox{Tr }\left(
F_{\mu \nu} \overline{\partial}_A g g^{-1} \right)
\psi_+^{\mu} \psi_+^{\nu}
\end{equation*}
where
\begin{equation*}
\begin{split}
\overline{\partial}_A g g^{-1} = \: & \left( \overline{\partial} g
\: + \: \overline{\partial} \phi^{\mu} A_{\mu} g \right) g^{-1} \\
= \: & \overline{\partial} g g^{-1} \: + \: \overline{\partial} \phi^{\mu}
A_{\mu}
\end{split}
\end{equation*}
and $F_{\mu \nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}
+ [A_{\mu}, A_{\nu}]$.
The term above is an analogue of the four-fermi term appearing
in standard heterotic string constructions.
We shall also add an $H$ flux field to the base.
One finds that for the supersymmetry transformations to close,
one needs $F_{ij} = F_{\overline{\imath} \overline{\jmath}} = 0$.
Let us outline how the $\alpha_-$ supersymmetry transformations work.
The $\alpha_-$ terms in the supersymmetry transformation of the base terms
\begin{equation*}
\frac{1}{\alpha'} \int_{\Sigma} d^2z\left( \frac{1}{4}
g_{i \overline{\jmath}}
\partial_{\alpha} \phi^{i} \partial^{\alpha} \phi^{\overline{\jmath}}
\: + \: \frac{i}{2} g_{\mu \nu} \psi_+^{\mu} D_{\overline{z}}
\psi_+^{\nu}
\: + \: B_{\mu \nu} \left(
\partial \phi^{\mu} \overline{\partial} \phi^{\nu}
\: - \: \overline{\partial} \phi^{\mu} \partial \phi^{\nu}
\right) \right)
\end{equation*}
where
\begin{equation*}
D_{\overline{z}} \psi_+^{\nu} \: = \: \overline{\partial} \psi_+^{\nu} \: + \:
\overline{\partial} \phi^{\mu} \left( \Gamma^{\nu}_{\: \: \sigma \mu}
\: - \: H^{\nu}_{\: \: \sigma \mu} \right) \psi_+^{\sigma}
\end{equation*}
are given by
\begin{equation*}
\begin{split}
\frac{1}{\alpha'}\int d^2z & \left[
(i \alpha_- \psi_+^i) (\overline{\partial} \phi^{\mu})
(\partial \phi^{\nu}) (H_{i \mu \nu}) \right] \\
+& \frac{1}{\alpha'} \int_{\Sigma} d^2z
\left[ \frac{ i}{2} (i \alpha_- \psi_+^i )(\overline{\partial} \phi^{\mu})
\psi_+^j \psi_+^{\overline{k}}
\left( H_{\overline{k} i j, \mu} \: - \: H_{\overline{k} i \mu, j} \: - \:
H_{j \overline{k} \mu, i} \: + \: H_{j i \mu, \overline{k}} \right)
\right] \\
+& \frac{1}{\alpha'} \int_{\Sigma} d^2z (i \alpha_- \psi_+^i)
\left( B_{\mu \nu, i} \: - \: B_{i \nu, \mu} \: - \: B_{\mu i, \nu}
\right) \left( \partial \phi^{\mu} \overline{\partial} \phi^{\nu}
\: - \: \overline{\partial} \phi^{\mu} \partial \phi^{\nu} \right)
\end{split}
\end{equation*}
and where we needed to assume
\begin{equation*}
\begin{array}{c}
H_{ijk} \: = \:
H_{\overline{\imath} \overline{\jmath} \overline{k}} \: = \: 0 \\
H_{i j \overline{k}} \: = \: \frac{1}{2}\left( g_{i \overline{k}, j}
\: - \: g_{j \overline{k}, i} \right) \: = \: \Gamma_{i j \overline{k}}
\end{array}
\end{equation*}
(This was derived off-shell, without using any equations of motion.)
The $\alpha_-$ terms in the supersymmetry transformation of the fiber terms
\begin{equation*}
\begin{split}
- \: \frac{k}{4 \pi} \int_{\Sigma} d^2 z & \mbox{Tr }\left(
g^{-1} \partial_z g g^{-1} \partial_{\overline{z}} g \right)
\: - \: \frac{ik}{12 \pi} \int_B d^3 y \epsilon^{ijk}
\mbox{Tr }\left( g^{-1} \partial_i g g^{-1} \partial_j g
g^{-1} \partial_k g \right) \\
& - \: \frac{k}{2 \pi} \int_{\Sigma} d^2 z \mbox{Tr }
\left( (\partial_{z} \phi^{\mu}) A_{\mu}
\partial_{\overline{z}} g g^{-1}\: + \: \frac{1}{2} (\partial_z \phi^{\mu}
\partial_{\overline{z}} \phi^{\nu}) A_{\mu} A_{\nu} \right) \\
& + \: \frac{i k}{4 \pi} \int_{\Sigma} d^2z \mbox{Tr }\left(
F_{\mu \nu} \overline{\partial}_A g g^{-1} \right)
\psi_+^{\mu} \psi_+^{\nu}
\end{split}
\end{equation*}
are given by
\begin{multline*}
\frac{i k}{4 \pi} \int_{\Sigma} d^2z
\left( i \alpha_- \psi_+^i \right)
\mbox{Tr }\left( F_{\mu \nu} F_{i \lambda} \right)
\psi_+^{\mu} \psi_+^{\nu}
(\overline{\partial} \phi^{\lambda}) \\
\: - \: \frac{k}{4 \pi} \int_{\Sigma} d^2z
\left( i \alpha_- \psi_+^i \right)
\overline{\partial} \phi^{\mu} \partial \phi^{\nu} \mbox{Tr }\left(
\left( A_i \partial_{\mu} A_{\nu} \: + \: \frac{2}{3} A_i A_{\mu} A_{\nu}
\right) \: \pm \: \mbox{ permutations }
\right)
\end{multline*}
The supersymmetry transformations only close on-shell\footnote{Alternatively,
the supersymmetry transformations will close off-shell if instead of
$\delta g = 0$ we take
\begin{equation*}
\delta g \: = \: - (i \alpha_- \psi_+^i) A_i g \: - \: (i \tilde{\alpha}_-
\psi_+^{\overline{\imath}}) A_{\overline{\imath}} g
\end{equation*}
(This is true for both $\alpha_-$ transformations considered here
as well as $\tilde{\alpha}_-$ transformations.)
In this form supersymmetry transformations explicitly commute with
gauge transformations; on the other hand, the on-shell formulation
$\delta g = 0$ makes it explicit that supersymmetry is only
meaningfully acting on the base.};
to get the result above requires using the classical
equations of motion for $g$,
namely
\begin{equation} \label{2ndclassconstr}
\partial_A \left( \overline{\partial}_A g g^{-1} \right) \: = \:
\partial \phi^{\mu} \overline{\partial} \phi^{\nu} F_{\mu \nu}
\: + \: \frac{i}{2} [ F_{\mu \nu}, \overline{\partial}_A g g^{-1}]
\psi_+^{\mu} \psi_+^{\nu} \: + \:
\frac{i}{2} \overline{\partial}_A\left( F_{\mu \nu} \psi_+^{\mu}
\psi_+^{\nu} \right)
\end{equation}
where
\begin{equation*}
\partial_A\left( \overline{\partial}_A g g^{-1} \right) \: = \:
\partial \left( \overline{\partial}_A g g^{-1} \right) \: + \:
[ \partial \phi^{\lambda} A_{\lambda}, \overline{\partial}_A g g^{-1} ]
\end{equation*}
Note equation~(\ref{2ndclassconstr}) generalizes the chirality condition
$\partial( \overline{\partial} g g^{-1} ) = 0$ that appears in
ordinary (non-fibered) WZW models.
We will also use equation~(\ref{2ndclassconstr}) to define a second
class constraint -- we are describing chiral nonabelian bosons,
after all.
Also note equation~(\ref{2ndclassconstr}) is the supersymmetrization
of the anomaly in the chiral gauge current:
defining $j = \overline{\partial}_A g g^{-1}$, and omitting fermions,
this says $D j \propto F$. If the WZW current were realized by fermions,
this would be the chiral anomaly; here, we have bosonized, and so the
anomaly is realized classically.
In such a fermionic realization, the second term is a classical
contribution to the divergence of the current from the four-fermi term
in the action, and the third term is a non-universal contribution to the
anomaly from a one-loop diagram also involving the four-fermi interaction.
In a fermionic realization of the left-movers, the terms in the
supersymmetry transformations above would
not appear at zeroth order in $\alpha'$. Classically, supersymmetry
transformations of the action result in one-fermi terms proportional
to $H - dB$ and three-fermi terms proportional to $dH$, both of
which are proportional to $\alpha'$.
However, at next-to-leading-order in $\alpha'$ on the worldsheet,
one has more interesting effects. Specifically,
``supersymmetry anomalies'' arise \cite{sen1,sen2}.
These are phase factors picked up by the path integral measure.
Unlike true anomalies, these are cancelled by counterterms.
In particular, the Chern-Simons terms added to make $H$ gauge- and
local-Lorentz-invariant cancel out the effect of these `anomalies.'
In more detail, if we realize the left-moving gauge degrees of
freedom by chiral fermions $\lambda_-$,
we can realize worldsheet supersymmetry off-shell\footnote{If we take
$\delta \lambda_- = 0$, the worldsheet supersymmetry transformations close
only if one uses the $\lambda_-$ equations of motion.} with supersymmetry
transformations of the form
\begin{equation*}
\delta \lambda_- \: = \: - (i \alpha_- \psi_+^i) A_i \lambda_-
\: - \: (i \tilde{\alpha}_- \psi_+^{\overline{\imath}}) A_{\overline{\imath}}
\lambda_-
\end{equation*}
where $A_{\mu}$ is the target-space gauge field.
However, these supersymmetry transformations are equivalent to
(anomalous chiral) gauge transformations with parameter
\begin{equation} \label{susygauge}
- (i \alpha_- \psi_+^i) A_i \: - \:
(i \tilde{\alpha}_- \psi_+^{\overline{\imath}}
) A_{\overline{\imath}}
\end{equation}
Thus, the supersymmetry transformation implies an anomalous gauge
transformation, and so the path integral measure picks up a phase factor.
From the (universal) bosonic term in the divergence of the
gauge current proportional to the
curvature $F$, we will get a one-fermi term in the anomalous
transformation proportional to the Chern-Simons form.
In our case, as we have bosonized the left-movers, we get such
a one-fermi term in supersymmetry transformations classically.
In addition to the universal piece, there is a regularization-dependent
multifermi contribution as well.
If we calculate the anomalous divergence of the gauge current
in a fermionic realization, then because of the four-fermi term
$F \lambda \lambda \psi \psi$ there will be a two-fermi contribution
to the divergence of the gauge current proportional to
$\overline{\partial} (F \psi \psi)$. Plugging into the
gauge parameter~(\ref{susygauge}) yields a three-fermi term
in the supersymmetry transformations proportional to
$\mbox{Tr } F\wedge F$, exactly as we have discovered in the
classical supersymmetry transformations of our bosonized
formulation.
There is a closely analogous phenomenon of supersymmetry anomalies
in the right-moving fermions as well. Since we have not bosonized
them, the analysis here is identical to that for ordinary
heterotic string constructions discussed, for example, in
\cite{sen1,sen2}. In terms of supersymmetry transformations
of the right-moving fermions written with general-covariant
indices, {\it e.g.} $\delta \psi_+^i \: = \: - \tilde{\alpha}_-
\partial \phi^i$, the source of the anomaly is not obvious.
To make it more manifest, we must switch to local Lorentz indices,
and define
\begin{equation*}
\psi_+^a \: = \: e^a_{\mu} \psi_+^{\mu}
\end{equation*}
Then, the supersymmetry transformations have the form
\begin{equation*}
\delta \psi_+^a \: = \: \left( e^a_i (- \tilde{\alpha}_- \partial
\phi^i) \: + \: e^a_{\overline{\imath}}(- \alpha_- \partial \phi^{
\overline{\imath}}) \right) \: + \:
\left( e^a_{\mu,i} (i \alpha_- \psi_+^i) \psi_+^{\mu} \: + \:
e^a_{\mu,\overline{\imath}} (i \tilde{\alpha}_- \psi_+^{\overline{\imath}})
\psi_+^{\mu} \right)
\end{equation*}
The second set of terms above can be written as
(anomalous, chiral) local Lorentz
transformations, and so the supersymmetry transformations induce
anomalous local Lorentz transformations.
In particular, under a supersymmetry transformation the path integral
measure will pick up a phase factor including a one-fermi term
proportional to the Chern-Simons form for the target-space spin
connection, whose origin is the (universal, bosonic) curvature term
in the divergence of the local Lorentz current.
The path integral phase factor will also include a multifermi contribution.
Here, the same analysis of four-fermi terms as before would appear
to imply that the multifermi contribution will be proportional to
$FR$, where $F$ is the gauge curvature and $R$ is the metric curvature.
However, these multifermi terms are sensitive to the choice of regulator,
and to maintain (0,2) worldsheet supersymmetry we must be very careful
about the choice of regulator here. For the correct choice of regularization,
the multifermi contribution is a three-fermi term proportional to
$\mbox{Tr } R \wedge R$, where $R$ is the curvature of the
connection $\Gamma - H$, as discussed in {\it e.g.} \cite{sen1,sen2}.
As a check on this method, note that if we replace the right-moving
chiral fermions with nonabelian bosons, then following the same analysis
as for the gauge degrees of freedom the supersymmetry transformations
will automatically generate one-fermi and three-fermi terms of the
desired form.
For more information on supersymmetric anomalies in such two-dimensional
theories, see also \cite{wangwu,ssw}. See also \cite{yuwu1,yuwu2}
for an interesting approach to the interaction of second-class
constraints and worldsheet supersymmetry.
To summarize, under (anomalous) worldsheet supersymmetry transformations we have
found one-fermi terms proportional to
\begin{equation*}
H \: - \: d B \: - \: (\alpha')\left ( k \, \mbox{CS}(A) \: - \: \mbox{CS}(\omega - H)
\right)
\end{equation*}
and three-fermi terms proportional to
\begin{equation*}
dH \: - \: (\alpha')\left( k \, \mbox{Tr } F \wedge F \: - \:
\mbox{Tr } R \wedge R \right)
\end{equation*}
where the terms involving the spin connection $\omega$ arise
from quantum corrections, and the terms involving the gauge field
$A$ arise classically in our bosonic construction but from
quantum corrections in fermionic realizations of left-movers.
Closure of supersymmetry is guaranteed by our definition of $H$.
Put another way, we see that worldsheet supersymmetry is deeply
intertwined with the Green-Schwarz mechanism.
The $\tilde{\alpha}_-$ terms in the supersymmetry transformations are
almost identical.
The $\tilde{\alpha}_-$ terms in the supersymmetry transformation
of the base terms are given by
\begin{equation*}
\begin{split}
\frac{1}{\alpha'} \int_{\Sigma} d^2z &(i \tilde{\alpha}_- \psi_+^{\overline{k}})
(\overline{\partial} \phi^{\mu}) (\partial \phi^{\nu}) H_{
\overline{k} \mu \nu } \\
& + \: \frac{1}{\alpha'} \int_{\Sigma} d^2z \frac{i}{2}
(i \tilde{\alpha}_- \psi_+^{\overline{k}}) (\overline{\partial} \phi^{\mu})
\psi_+^i \psi_+^{\overline{\jmath}}\left(
H_{\overline{\jmath} \overline{k} i, \mu} \: - \:
H_{\overline{\jmath} \overline{k} \mu, i} \: - \:
H_{i \overline{\jmath} \mu, \overline{k}} \: + \:
H_{i \overline{k} \mu, \overline{\jmath}} \right) \\
& + \: \frac{1}{\alpha'} \int_{\Sigma} d^2z ( i \tilde{\alpha}_-
\psi_+^{\overline{\imath}} ) \left(
B_{\mu \nu, \overline{\imath}} \: - \: B_{\overline{\imath} \nu, \mu} \: - \:
B_{\mu \overline{\imath}, \nu} \right)\left(
\partial \phi^{\mu} \overline{\partial} \phi^{\nu} \: - \:
\overline{\partial} \phi^{\mu} \partial \phi^{\nu} \right)
\end{split}
\end{equation*}
which are virtually identical to the corresponding $\alpha_-$ terms.
The $\tilde{\alpha}_-$ terms in the supersymmetry transformation of the
fiber terms are given by
\begin{equation*}
\begin{split}
\frac{i k}{4 \pi} \int_{\Sigma} d^2z & \mbox{Tr }
\left( F_{\mu \nu} F_{\overline{\imath}
\rho} \right) (i \tilde{\alpha}_- \psi_+^{\overline{\imath}} )
\psi_+^{\mu} \psi_+^{\nu} \overline{\partial} \phi^{\rho}\\
& - \: \frac{k}{4 \pi} \int_{\Sigma} d^2 z
(i \tilde{\alpha}_- \psi_+^{\overline{\imath}}) (\overline{\partial} \phi^{\mu})
(\partial \phi^{\nu}) \mbox{Tr }\left( A_{\overline{\imath}} \partial_{\mu}
A_{\nu} \: + \: \frac{2}{3} A_{\overline{\imath}} A_{\mu} A_{\nu}
\: \pm \: \mbox{permutations} \right)
\end{split}
\end{equation*}
which are virtually identical to the corresponding $\alpha_-$ terms.
(As before, to get the result above requires using the equations of
motion for $g$.)
The supersymmetry anomaly story works here in the same way as for
the $\alpha_-$ terms, and just as for the $\alpha_-$ terms,
one can show that the worldsheet theory is supersymmetric through
first order in $\alpha'$.
\subsubsection{The full gauge-invariant supersymmetric lagrangian}
Let us summarize the results of the last two subsections.
The full lagrangian is given by
\begin{equation*}
\begin{split}
S = \: & \frac{1}{\alpha'} \int_{\Sigma} d^2z\left( \frac{1}{4}
g_{i \overline{\jmath}}
\partial_{\alpha} \phi^{i} \partial^{\alpha} \phi^{\overline{\jmath}}
\: + \:
\frac{i}{2} g_{\mu \nu} \psi_+^{\mu} D_{\overline{z}}
\psi_+^{\nu}
\right) \\
& + \: \frac{1}{\alpha'} \int_{\Sigma} d^2z B_{\mu \nu} \left(
\partial \phi^{\mu} \overline{\partial} \phi^{\nu} \: - \:
\overline{\partial} \phi^{\mu} \partial \phi^{\nu} \right) \\
& - \: \frac{k}{4 \pi} \int_{\Sigma} d^2 z \mbox{Tr }\left(
g^{-1} \partial_z g g^{-1} \partial_{\overline{z}} g \right)
\: - \: \frac{ik}{12 \pi} \int_B d^3 y \epsilon^{ijk}
\mbox{Tr }\left( g^{-1} \partial_i g g^{-1} \partial_j g
g^{-1} \partial_k g \right) \\
& - \: \frac{k}{2 \pi} \int_{\Sigma} d^2 z \mbox{Tr }
\left( (\partial_{z} \phi^{\mu}) A_{\mu}
\partial_{\overline{z}} g g^{-1}\: + \: \frac{1}{2} (\partial_z \phi^{\mu}
\partial_{\overline{z}} \phi^{\nu}) A_{\mu} A_{\nu} \right) \\
& + \: \frac{i k}{4 \pi} \int_{\Sigma} d^2z \mbox{Tr }\left(
F_{\mu \nu} \overline{\partial}_A g g^{-1} \right)
\psi_+^{\mu} \psi_+^{\nu}
\end{split}
\end{equation*}
where
\begin{equation*}
D_{\overline{z}} \psi_+^{\nu} \: = \: \overline{\partial} \psi_+^{\nu} \: + \:
\overline{\partial} \phi^{\mu} \left( \Gamma^{\nu}_{\: \: \sigma \mu}
\: - \: H^{\nu}_{\: \: \sigma \mu} \right) \psi_+^{\sigma}
\end{equation*}
and the metric $g_{\mu \nu}$ on the base will not be K\"ahler
(except optionally at zeroth order in $\alpha'$).
The action is well-defined under the gauge transformations
\begin{equation*}
\begin{split}
g \mapsto \: & h g \\
A_{\mu} \mapsto \: & h A_{\mu} h^{-1} \: + \: h \partial_{\mu} h^{-1}
\end{split}
\end{equation*}
across coordinate chart changes on the base,
where $h$ is a group-valued function on the overlap patch on the target space,
and the $B$ field transforms to absorb both the gauge anomaly above
and the local Lorentz anomaly on the right-moving chiral fermions.
The action is also invariant under the (0,2) worldsheet supersymmetry
transformations
\begin{equation*}
\begin{split}
\delta \phi^i = \: & i \alpha_- \psi_+^i \\
\delta \phi^{\overline{\imath}} = \: & i \tilde{\alpha}_- \psi_+^{
\overline{\imath}} \\
\delta \psi_+^i = \: & - \tilde{\alpha}_- \partial \phi^i \\
\delta \psi_+^{\overline{\imath}} = \: & - \alpha_- \partial
\phi^{\overline{\imath}} \\
\delta g = \: & 0
\end{split}
\end{equation*}
where we assume $F_{ij} = F_{\overline{\imath} \overline{\jmath}} = 0$,
and that $H$ has only (1,2) or (2,1) components, no (0,3) or (3,0),
related to the metric by
\begin{equation*}
H_{i \overline{\jmath} k} \: = \: - \frac{1}{2}\left(
g_{i \overline{\jmath}, k} \: - \: g_{k \overline{\jmath}, i} \right)
\end{equation*}
and where $H$ is also given by the difference of Chern-Simons forms,
in the form
\begin{equation*}
H \: = \: dB \: + \: (\alpha')\left(k \, \mbox{CS}(A) \: - \:
\mbox{CS}(\omega - H)\right)
\end{equation*}
The classical equations of motion for $g$ are
\begin{equation*}
\partial_A \left( \overline{\partial}_A g g^{-1} \right) \: = \:
\partial \phi^{\mu} \overline{\partial} \phi^{\nu} F_{\mu \nu}
\: + \: \frac{i}{2} [ F_{\mu \nu}, \overline{\partial}_A g g^{-1}]
\psi_+^{\mu} \psi_+^{\nu} \: + \:
\frac{i}{2} \overline{\partial}_A\left( F_{\mu \nu} \psi_+^{\mu}
\psi_+^{\nu} \right)
\end{equation*}
where
\begin{equation*}
\partial_A\left( \overline{\partial}_A g g^{-1} \right) \: = \:
\partial \left( \overline{\partial}_A g g^{-1} \right) \: + \:
[ \partial \phi^{\lambda} A_{\lambda}, \overline{\partial}_A g g^{-1} ]
\end{equation*}
Note this equation generalizes the chirality condition
$\partial( \overline{\partial} g g^{-1} ) = 0$ that appears in
ordinary (non-fibered) WZW models. Here, it also plays the role
of a second-class constraint.
Also note this is the supersymmetrization of the chiral anomaly in the current:
defining $j = \overline{\partial}_A g g^{-1}$, and omitting fermions,
this says $D j \propto F$. Since we have bosonized, the anomaly is
realized classically.
In a fermionic description of the left-movers,
the current $\overline{\partial}_A g g^{-1}$ would be given by $\lambda_-
\overline{\lambda}_-$, the $[F, \overline{\partial}_A g g^{-1}] \psi \psi$
term would be a classical contribution to the divergence of the
current, and the $F$ and $\overline{\partial}(F \psi \psi)$ terms would
arise as quantum corrections, from one-loop diagrams involving
the interactions $A \lambda \lambda$
and the four-fermi term $F \psi \psi \lambda \lambda$, respectively.
The former (bosonic) contribution to the divergence is universal,
the latter is in principle
regularization-dependent.
\subsection{Anomaly cancellation}
In order to make the action well-defined, recall we needed to demand
that $k \, \mbox{Tr } F^2$ and $\mbox{Tr }R^2$ be in the same
de Rham cohomology class. From that fact we can
immediately read off the form of the anomaly
cancellation condition for general levels of the fibered current algebra:
if the condition at level $1$ is that
\begin{equation*}
c_2({\cal E}) \: = \: c_2(TX)
\end{equation*}
then the condition at level $k$ is
\begin{equation*}
k c_2({\cal E}) \: = \: c_2(TX).
\end{equation*}
We have already seen several independent derivations of the anomaly
cancellation condition -- it plays several roles in making the
fibered WZW model self-consistent and supersymmetric, analogues of the
same roles in heterotic worldsheets.
Here is another quick test of this claim.
Take the heterotic $E_8 \times E_8$ string on $S^1$, and orbifold by
the action which translates halfway around the $S^1$ while simultaneously
exchanging the two $E_8$'s. The result is a theory, again on $S^1$,
but with a single $E_8$ current algebra at level two.
We can understand anomaly cancellation in this theory by working on the
covering space, before the orbifold action.
Embed bundles ${\cal E}_1$, ${\cal E}_2$ (${\cal E}_1 \cong {\cal E}_2
\cong {\cal E}$) in each of the $E_8$'s, then for anomaly cancellation to
hold we must have
\begin{equation*}
c_2({\cal E}_1) \: + \: c_2({\cal E}_2) \: = \: c_2(TX)
\end{equation*}
but this is just the statement
\begin{equation*}
2 c_2({\cal E}) \: = \: c_2(TX)
\end{equation*}
which is precisely the prediction above for anomaly cancellation in a
level two fibered current algebra.
(Attentive readers will note that the central charge of a single
$E_8$ at level two is 15.5, not 16, and so this does not suffice
for a critical heterotic string. However, the orbifold has
massive structure in the twisted sector that is not captured
purely by the description above, and so the central charge of
the level two $E_8$ current algebra does not suffice;
put another way, in the flat ten-dimensional space limit, the $S^1$
unravels, the orbifold is undone, and some of the massive twisted sector states
become massless, curing the naive problem with the central charge.)
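For completeness, the central charges quoted above follow from the standard Sugawara
formula for a level $k$ current algebra based on a group $G$,
\begin{equation*}
c \: = \: \frac{ k \, \mbox{dim }G }{ k \: + \: h^{\vee} }
\end{equation*}
where $h^{\vee}$ is the dual Coxeter number. For $E_8$, $\mbox{dim }G = 248$ and
$h^{\vee} = 30$, so a single $E_8$ at level two has $c = 2 \cdot 248/32 = 15.5$,
whereas each $E_8$ at level one has $c = 248/31 = 8$, giving the usual $16$ for
$E_8 \times E_8$.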
We can outline another derivation of the anomaly-cancellation constraint
in the language of chiral de Rham complexes \cite{edcdra,cdrc,cdrcgerb,tan}.
In those papers, the idea was to describe the perturbative physics
of a nonlinear sigma model on a space in terms of a set of free
field theories on patches on a good cover of the target space.
Conditions such as the anomaly cancellation condition arise as
consistency conditions on triple overlaps.
(Technically, the local free field descriptions need not patch
together nicely, so one need get nothing more than a stack over
the target, in fact a special stack known as a gerbe. The anomaly
cancellation condition arises as the condition for that stack/gerbe
to be trivial.)
Here, we can follow a similar program, except that instead of associating
free theories to patches, we associate solvable theories to patches,
which is the next best thing.
So, consider the left-moving degrees of freedom, described by a current
algebra at level $k$:
\begin{equation*}
J_F^a(z) \cdot J_F^b(z') \: \sim \:
\frac{ k \delta^{ab} }{(z - z')^2} \: + \: i \sum_c
f_{abc} \frac{J^c(z')}{z-z'} \: + \: \cdots
\end{equation*}
Let $T^a$ denote the generators of the Lie algebra, and suppose that they
are functions of the base space, $T^a = T^a(\gamma(z))$ in the
notation of \cite{edcdra,tan}.
Define
\begin{equation*}
J_F(\gamma) \: = \: \sum_a J_F^a(z) T^a(\gamma(z))
\end{equation*}
Using the expansion
\begin{equation*}
T^a(\gamma(z')) \: = \: T^a(\gamma(z)) \: + \: (z'-z) \left(
\partial_{z'} \gamma^j \right) \partial_j T^a \: + \: \cdots
\end{equation*}
it is trivial to derive that the following OPE includes the terms
\begin{equation*}
J_F(\gamma(z)) \cdot J_F(\gamma(z')) \: \sim \: \cdots \: + \:
i \sum_c f_{abc} \frac{ J^c(z') }{z-z'} T^a(\gamma) T^b(\gamma) \: + \:
k \frac{ \left( \partial_{z'} \gamma^j \right) T^a(\gamma) \partial_j
T^a(\gamma) }{z-z'} \: + \: \cdots
\end{equation*}
The equation above should be compared to \cite{tan}[eqn (5.30)], for
example. The essential difference between the two is that the
second term above (which corresponds to the fourth term on the
right-hand side of \cite{tan}[eqn (5.30)]) has an extra factor of $k$,
the level.
That $k$-dependence in the
second term on the right-hand side is ultimately responsible for
modifying the anomaly cancellation condition from
$[ \mbox{Tr }F^2 ] = [ \mbox{Tr }R^2 ]$ to
$k [ \mbox{Tr }F^2 ] = [ \mbox{Tr }R^2 ]$.
\subsection{Massless spectra}
Letting the currents of a Kac-Moody algebra be denoted $J^a(z)$,
for $a$ an index of the ordinary Lie algebra,
the WZW primaries $\varphi_{(r)}(w)$
are fields whose OPE's with the currents have only
simple poles \cite{gincft}[section 9.1]:
\begin{equation*}
J^a(z) \cdot \varphi_{(r)}(w) \: \sim \: \frac{ t^a_{(r)} }{z-w}
\varphi_{(r)}(w) \: + \: \cdots
\end{equation*}
where $(r)$ denotes some representation of the ordinary Lie algebra.
In other words, the WZW primaries transform under the currents just
like ordinary representations of the ordinary Lie algebra.
When we fiber WZW models, each WZW primary will define a
smooth vector bundle associated to the principal $G$ bundle defining
how the WZW models are fibered, since across coordinate patches the
primaries will map just as sections of such a bundle.
(In the language of chiral de Rham complexes and soluble field theories
on coordinate patches, the WZW primaries transform just like sections
of associated vector bundles when we cross from one coordinate patch
to another.)
If the theory has $(0,2)$ supersymmetry, then that $C^{\infty}$
vector bundle is a holomorphic vector bundle
(otherwise, the transition functions break the BRST symmetry in the
twisted theory).
More generally, a primary together with its descendants form
a `positive-energy representation' of a Kac-Moody algebra.
Since $[J^a_0, L_0] = 0$, the states at any given mass level will
break into irreducible representations of $G$ (as described
by the zero-mode components $J^a_0$ of the currents).
(In addition, their OPE's with the full currents will have
higher-order poles, but this is not important here.)
When fibering WZW models, each such representation will then define
a vector bundle associated to the underlying principal bundle,
and so for WZW models fibered over a base manifold $X$ the states
in the positive-energy representation can be thought of as
sections of $K(X)[[q]]$, a fact which will be important to the analysis
of elliptic genera.
Following the usual yoga,
a chiral primary in the $(0,2)$ fibered WZW model is then of the form
\begin{equation*}
f_{\overline{\imath}_1 \cdots \overline{\imath}_n} \psi^{\overline{\imath}_1}
\cdots \psi^{\overline{\imath}_n}
\end{equation*}
where the $\psi$'s are right-moving worldsheet fermions, coupling to
the tangent bundle of the base manifold $X$,
and $f$ is a section of $V \otimes \Lambda^n TX$,
where $V$ is a vector bundle defined by an irreducible representation of
$G$ corresponding to some component of a positive-energy representation
of the Kac-Moody algebra as above. In cases\footnote{Our fibered WZW model
construction also applies to cases in which the base space is
nonK\"ahler to zeroth order in $\alpha'$. However, that complicates
the BRST condition, and so for present purposes we restrict to Calabi-Yau
spaces. }
in which the base space is
a Calabi-Yau to zeroth order in $\alpha'$,
for the state to be BRST closed, $f$ will be a holomorphic section,
and in fact following the usual procedure this will realize a sheaf
cohomology group valued in $V$, {\it i.e.} $H^*(X, V)$.
Morally, the integrable (or `unitary')
representations (which define WZW primaries)
correspond to massless states, as they have the lowest-lying
$L_0$ eigenvalues (though of course that need not literally
be true in all cases).
Let us briefly consider an example.
For $SU(n)$ at level 1, the integrable representations (WZW primaries)
correspond to
antisymmetric powers of the fundamental ${\bf n}$.
The construction above predicts `massless states' counted by
$H^*(X, \Lambda^* {\cal E})$ where ${\cal E}$ is a rank $n$ vector
bundle associated to a principal $SU(n)$ bundle.
These are precisely the left-Ramond-sector states described in
\cite{dg}, for ordinary heterotic worldsheets built with
left-moving fermions, and this is a standard result.
(Because \cite{dg} are concerned with heterotic compactifications,
their $SU(n)$ is embedded in $\mathrm{Spin}(16)$ and then a
left $\BZ_2$ orbifold is performed, so there are additional
states, in $\BZ_2$ twisted sectors.)
At higher levels there are additional integrable representations.
(In fact, the integrable representations of $SU(n)$ at any level
are classified by Young diagrams of width bounded by the level.
Thus, at level 2, the adjoint representation becomes integrable,
and so in addition to the WZW current there is a WZW primary which
transforms as the adjoint.)
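As a quick consistency check, the unitarity bound $2 \psi \cdot \lambda / \psi^2 \leq k$
(quoted in a footnote below) applied to the adjoint representation, whose highest weight
is $\lambda = \psi$, gives
\begin{equation*}
2 \, \frac{ \psi \cdot \psi }{ \psi^2 } \: = \: 2 \: \leq \: k
\end{equation*}
so the adjoint is integrable precisely when the level is at least two, consistent with
the statement above.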
In ordinary heterotic compactifications, Serre duality has the effect
of exchanging particles and antiparticles. Let us check
that the same is true here.
For any complex reductive algebraic group $G$ and any representation
$\rho$, let ${\cal E}_{\rho}$ denote the holomorphic vector bundle
associated to $\rho$. Then on an $n$-dimensional complex manifold $X$,
Serre duality is the statement
\begin{equation*}
H^i(X, {\cal E}_{\rho}) \: \cong \: H^{n-i}(X, {\cal E}_{\rho^*} \otimes
K_X)^*
\end{equation*}
where $\rho^*$ denotes the representation dual to $\rho$.
We have implicitly used the fact that
${\cal E}_{\rho^*} \cong {\cal E}_{\rho}^{\vee}$, an immediate consequence of
the definition of dual representation (see {\it e.g.} \cite{fh}[section 8.1]).
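In particular, on a Calabi-Yau $n$-fold the canonical bundle $K_X$ is trivial, so the
duality above reduces to
\begin{equation*}
H^i(X, {\cal E}_{\rho}) \: \cong \: H^{n-i}(X, {\cal E}_{\rho^*})^*
\end{equation*}
which is the form relevant to the spectrum computations here.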
For example, for the group $SU(n)$, the dual of the representation
$\Lambda^i V$ is $\Lambda^{i} V^*\cong \Lambda^{n-i}V$,
exactly as needed to reproduce the
usual form. Thus, for Serre duality on Calabi-Yau's to respect the
spectrum, properties of fields associated to representations $\rho$
must be symmetric with respect to the dual representations $\rho^*$.
Suppose the original representation $\rho$ is integrable,
then it can be shown that\footnote{The unitarity bound is \cite{gincft}[eqn (9.30)]
\begin{equation*}
2 \frac{\psi \cdot \lambda}{\psi^2} \: \leq \: k
\end{equation*}
where $\lambda$ is the highest weight of the representation in question and
$\psi$ is the highest weight of the adjoint representation.
The highest weight of the dual representation is $- w_0 \lambda$,
where $w_0$ is the longest Weyl group element \cite{diFranc}[eqn (13.117)].
(The weight $- \lambda$ is the lowest weight of the dual
representation.) Since the Killing form
is invariant under $w_0$, {\it i.e.}, $A \cdot B = (w_0 A) \cdot (w_0 B)$,
and $w_0 \psi = - \psi$, we see that the left-hand side of the inequality
is invariant under $\lambda \mapsto - w_0 \lambda$, and so
a representation is unitary if and only if its dual is also unitary.
We would like to thank A.~Knutson for a discussion of this matter.
} the dual representation $\rho^*$ is also integrable.
Furthermore, the conformal weights of the states are also invariant\footnote{
For a given WZW primary (which are also Virasoro primaries),
the $L_0$ eigenvalue is \cite{diFranc}[eqn (15.87)]
\begin{equation*}
h \: = \: \frac{(\lambda, \lambda+2 \rho)}{2(k+g)}
\end{equation*}
where $k$ is the level, $g$ is the dual Coxeter number,
and $\rho$ is the Weyl vector (half-sum of positive roots).
Recall that for a highest weight $\lambda$, the highest weight of the
dual representation is $- w_0 \lambda$, where $w_0$ is the longest Weyl
group element. Now, $w_0 \rho = - \rho$, since $w_0$ takes all the positive roots
to negative roots. Thus, using the fact that the Killing metric is
Weyl invariant,
\begin{equation*}
(\lambda, \lambda + 2 \rho) \: = \: (- w_0 \lambda, - w_0 \lambda - 2 w_0
\rho) \: = \: (- w_0 \lambda, -w_0 \lambda + 2 \rho)
\end{equation*}
and so we see that a representation and its dual define primaries
with the same conformal
weight.
}
under this dualization.
Thus, Serre duality symmetrically closes
states into other states, just as one would expect.
\subsection{Physical applications}
Some interesting examples of six-dimensional gauged supergravities
exist in the literature \cite{sezgin1,sezgin2,sezgin3,sezgin4},
for which a string-theoretic interpretation does not seem to be
clear at present. The technology of this paper may give some insight
into this question. (The relevance of higher-level currents has
been observed previously, see {\it e.g.} \cite{dienes}, but is worth
repeating here.)
One of the six-dimensional theories in question \cite{sezgin1} has a
gauge group $E_6 \times E_7 \times U(1)$ with massless matter
in the ${\bf 912}$ representation of $E_7$.
One basic problem with realizing this in ordinary string worldsheet
constructions is that it is not clear how to build a massless
${\bf 912}$. If we apply a standard construction, then
the $e_7$ algebra is built from a $so(12) \times su(2)$ subalgebra.
Under that subalgebra the ${\bf 912}$ decomposes as
\begin{equation*}
{\bf 912} \: = \: ({\bf 12}, {\bf 2}) \oplus ({\bf 32}, {\bf 3}) \oplus
({\bf 352}, {\bf 1}) \oplus ({\bf 220}, {\bf 2})
\end{equation*}
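As a quick check, the dimensions on the two sides agree:
\begin{equation*}
12 \cdot 2 \: + \: 32 \cdot 3 \: + \: 352 \cdot 1 \: + \: 220 \cdot 2 \: = \:
24 \: + \: 96 \: + \: 352 \: + \: 440 \: = \: 912
\end{equation*}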
However, the standard construction can only recreate adjoints (${\bf 66}$) and
spinors (${\bf 32}$) of $\mathrm{Spin}(12)$ in massless
states from left-moving fermions,
not a ${\bf 352}$ or ${\bf 220}$, and so it is far from clear how
a ${\bf 912}$ could arise.
By working with current algebras at higher levels, however,
more representations become unitary. In particular, an $E_7$ current
algebra at level greater than one could have a massless state given
by a ${\bf 912}$, which is part of what one would need to reproduce
the six-dimensional supergravity in \cite{sezgin1}.
This by itself does not suffice to give a string-theoretic
interpretation of any of the six-dimensional theories described in
\cite{sezgin1,sezgin2,sezgin3,sezgin4}, but at least is a bit
of progress towards such a goal.
\subsection{Elliptic genera}
Elliptic genera are often described as one-loop partition functions
of half-twisted heterotic theories. Since we are describing
new heterotic worldsheet constructions, we are implicitly realizing
some elliptic genera not previously considered by physicists.
However, although the elliptic genera implied by our work have not
been realized previously by physics constructions,
they have
been studied formally in the mathematics community, in the recent\footnote{
We should briefly speak to a potential language confusion.
Many mathematics papers on elliptic genera speak of genera
``at level $k$.'' This does not usually refer to the level of the
current algebra to which left-moving degrees of freedom couple,
but rather refers to the modular properties of the genus.
Specifically, it means the form is modular with respect to
the ``level-$k$-principal congruence subgroup''
$\Gamma_0(k) \subset SL(2,\BZ)$ defined by
matrices congruent mod $k$ to the identity.
Thus, Witten's elliptic genera are often called level 1 elliptic genera,
not because the left-movers couple to a level 1 current algebra,
but rather because they have good modular properties with respect
to all of $SL(2,\BZ)$. The elliptic genera discussed in
\cite{kliu,ando1}, by contrast, have left-moving degrees of freedom
coupling to level $k$ current algebras, just as in our heterotic
fibered WZW model construction.
} works
\cite{kliu,ando1}. Those papers describe elliptic genera in which the
left-moving degrees of freedom couple to some $G$-current algebra at
some level $k$, fibered over the base in a fashion determined by
a fixed principal $G$ bundle, just as done in this paper.
In a little more detail, each positive energy representation,
call it $E$,
of the $G$ current algebra decomposes at each mass level into a
sum of irreducible representations of $G$, and so fibering them
over the base in a fashion determined by an underlying principal
$G$ bundle $P$ yields an element $\psi(E,P) \in K(X)[[q]]$,
where the coefficient of each power of $q$ is the sum of the vector bundles
associated to $P$ via the irreducible representations appearing in
$E$ at the corresponding mass level.
Each such positive energy representation consists of the descendants
of some WZW primary. The corresponding characters in an ordinary
WZW model can be interpreted as sections of line bundles over the moduli
space of flat $G$ connections on an elliptic curve \cite{looijenga}.
Replacing the coordinates on the moduli space with Chern roots of $P$
gives the Chern character of $\psi(E,P)$.
(For example, compare the $\chi_S$ in \cite{kliu}[p. 353] to
the $P_{++}$ in \cite{schwarner3}[eqn (4.15)].)
The elliptic genera described by Witten \cite{witeggen1,witeggen2}
are described and derived in this language in \cite{kliu}.
Ordinarily we think of the left-movers' contribution to Witten's elliptic
genera in terms of boundary conditions on fermions; the precise relationship
between those boundary conditions and positive energy representations
of the left-moving current algebra is spelled out in \cite{lt}[eqn (11.102)].
For the elliptic genera of \cite{witeggen1,witeggen2},
demanding that the genera have good modular properties implies the
standard anomaly cancellation constraint $c_2(P) = c_2(TX)$,
see for example \cite{schwarner3,schwarner1,schwarner2,lerche1}.
For fibered level $k$ current algebras,
it is shown in detail in \cite{kliu,ando1} that demanding the genera
have good modular properties implies $k c_2(P) = c_2(TX)$, the same
anomaly cancellation constraint we have already derived multiple times
from the physics of fibered WZW models.
\subsection{The relevance of principal $LG$ bundles}
We have described how to fiber WZW models,
but we (as well as \cite{kliu,ando1}) have only discussed
how to fiber in a fashion controlled
by a principal $G$ bundle with connection.
Since the WZW models describe Kac-Moody algebras,
so that we are in effect fibering current algebras,
one might expect that one could more generally fiber according to the dictates
of a principal $LG$ bundle.
Any principal $G$ bundle induces a principal $LG$ bundle,
as there is a map $BG \rightarrow BLG$.
Indeed, we have implicitly used that fact -- the Kac-Moody algebra
determined by a WZW model fits into a principal $LG$ bundle that is
such an image of a principal $G$ bundle.
If $G$ is simply-connected then a principal $LG$ bundle over $X$ can
be thought of as a principal $G$ bundle on $X \times S^1$
\cite{sm,murray,bv}. Given a principal $LG$ bundle so described,
we can get a principal $G$ bundle just by evaluating at a point on the
$S^1$, but these maps are far from being invertible.
Thus, principal $LG$ bundles are not the same as principal $G$ bundles.
In fact, there is a physical difficulty with fibering Kac-Moody algebras
using general principal $LG$ bundles that do not arise from principal
$G$ bundles. Put briefly, a physical state condition would not be
satisfied in that more general case, and so one cannot expect to
find physical theories in which left-moving current algebras have
been fibered with more general principal $LG$ bundles.
Let us work through this in more detail.
As discussed earlier, a positive energy representation of $LG$ decomposes
into irreducible representations of $G$ at each mass level,
essentially because $[ J_0^a, L_0 ] =0$. Thus, so long as we are fibering
with a principal $G$ bundle, instead of a principal $LG$ bundle, the $L_0$
eigenvalues of states should be well-defined across coordinate patches.
(This is also the reason why the descendants can all be understood
in terms of $K(X)[[q]]$, as used in the discussion of elliptic genera.)
If we had a principal $LG$ bundle that was not the image of a principal
$G$ bundle, then the transition functions would necessarily mix up states
of different conformal weights, more or less by definition of $LG$ bundle.
Now, the physical states need to satisfy a condition of the form
$m_L^2 = m_R^2$, which defines a matching between conformal weights of
left- and right-moving parts.
In a large-radius limit, we can choose a basis of right-moving states
with well-defined $L_0$ eigenvalues. For the left-movers, if the
WZW model is fibered with a principal $G$ bundle, then we can choose
a basis of left-moving states that also have well-defined $L_0$
eigenvalues, and so we can hope to satisfy the physical state condition
above. On the other hand, if the WZW model were to be fibered
with a principal $LG$ bundle, then we would not be able to choose
a basis of left-moving states with well-defined $L_0$ eigenvalues,
and would not be able to satisfy the physical state condition.
Thus, in a heterotic context, the only way to get states that satisfy
the physical state condition above is if the left-moving
current algebra couples to a principal $G$ bundle, and not a more
general principal $LG$ bundle.
Note, however, that in a symmetrically fibered WZW model, of the form
discussed in section~\ref{symmfibwzw}, this argument would not apply.
\subsection{T-duality}
One natural question to ask is how heterotic T-duality works when
one has fibered a current algebra of level greater than one.
We have seen how the fibering structure of a fibered current
algebra is determined by a principal $G$ bundle and a connection
on that bundle.
In the special case of tori, when the flat connection over the
torus can be rotated into a maximal torus of $G$, it is
easy to speculate that heterotic T-duality should act on the
connection in a fashion independent of $k$.
After all, once one rotates the connection into a maximal torus,
the connection only sees a product of $U(1)$'s, and for $U(1)$'s
the level of the Kac-Moody algebra is essentially irrelevant.
Thus, if this conjecture is correct, in such cases heterotic T-duality
would proceed as usual.
However, even if this conjecture is correct, we have no
conjectures regarding how heterotic T-duality at higher levels should
act when the connection cannot be diagonalized into a maximal torus
(as can happen for flat connections on tori), or if the base space is
not a torus so that one only has a fiberwise notion of heterotic T-duality.
\section{Conclusions}
In this paper we have done three things:
\begin{itemize}
\item We argued that conventional heterotic worldsheet theories
do not suffice to describe arbitrary $E_8$ gauge fields in compactifications.
The basic issue is that the conventional construction builds each
$E_8$ using a $\mathrm{Spin}(16)/\BZ_2$ subgroup, and only data
reducible to $\mathrm{Spin}(16)/\BZ_2$ can be described, but not
all $E_8$ gauge fields are so reducible.
\item We reviewed alternative constructions of
the ten-dimensional $E_8$ algebra,
using subgroups other than $\mathrm{Spin}(16)/\BZ_2$.
In examples we recalled the character decomposition of the
affine algebras (see {\it e.g.} \cite{kacsan} for earlier work),
and also described how that character decomposition is realized
physically in a heterotic partition function via orbifold twisted
sectors that correlate to $E_8$ group theory.
In addition to discussing maximal-rank subgroups, we also discussed
whether it may be possible to use non-maximal-rank subgroups such as
$G_2 \times F_4$.
\item We developed\footnote{After the original publication of this
paper it was pointed out to us that chiral fibered WZW models with
$(0,1)$ supersymmetry have been previously considered,
under the name ``lefton, righton Thirring models,''
see for example \cite{gates1,gates2,gates3,gates4,gates5}.
We believe we have pushed the notion somewhat further, by studying
anomaly cancellation, spectra, elliptic genera and so forth in
chiral fibered WZW models with $(0,2)$ supersymmetry.}
fibered WZW models to describe these more general
$E_8$ constructions on arbitrary manifolds. In fact, this allows us
to describe conformal field theories in which the left-movers couple to
general $G$-current algebras at arbitrary levels, a considerable generalization
of ordinary heterotic worldsheet constructions. This also enables us to give
a physical realization of some new elliptic genera recently
studied in the mathematics
literature \cite{kliu,ando1}.
\end{itemize}
It would be interesting if the elliptic genera discussed here appeared in
any black hole entropy computations.
It would also be interesting to understand heterotic worldsheet
instanton corrections in these theories, along the lines of
\cite{sharpe02a,sharpe02b,sharpe02c,ade,kg}. Unfortunately,
to produce the (0,2) analogues of the A and B models described in those
papers required a left-moving topological twist involving a global
$U(1)$ symmetry present because the left-moving fermions were
realizing a $U(n)$ current algebra at level 1. In more general cases
there will not be such a global $U(1)$ symmetry, unless one adds it
in by hand.
\section{Acknowledgements}
This paper began in conversations with B.~Andreas and developed
after discussions with numerous other people. Some discussion of
the initial issues regarding reducibility of $E_8$ bundles to
$\mathrm{Spin}(16)/\BZ_2$ bundles has appeared previously in
Oberwolfach report 53/2005, reporting on the ``Heterotic strings,
derived categories, and stacks'' miniworkshop held at
Oberwolfach on November 13-19, 2005.
We would like to thank M.~Ando, B.~Andreas, J.~Francis, S.~Hellerman,
M.~Hill, T.~Pantev, E. Witten and especially A.~Henriques,
A.~Knutson, E.~Scheidegger, and R.~Thomas
for useful conversations.
\section{Introduction}
Hypothesis testing on large covariance matrices has received considerable attention in the past decade.
Covariance matrices are not only of fundamental importance in multivariate statistics, in areas such as discriminant analysis, principal component analysis, and clustering \citep{anderson2003introduction}, but also play a vital role in various research topics in biological science, finance, and operations research, including portfolio allocation \citep{goldfarb2003robust}, gene-set testing \citep{chen2010two}, and gene-set clustering \citep{chang2017comparing}.
Let $\mathrm{\bf X}$ and $\mathrm{\bf Y}$ represent two independent $p$-dimensional random vectors with covariance matrices $\boldsymbol{\Sigma}_1$ and $\boldsymbol{\Sigma}_2$ respectively. We are interested in testing whether these two covariance matrices are equal, that is,
$
H_0:\boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_2.
$
This test is well studied in the classical setting where the dimension is fixed and the sample size diverges \citep{anderson2003introduction}. For instance, the likelihood ratio test was shown to enjoy the optimality under mild conditions \citep*{sugiura1968unbiasedness,perlman1980unbiasedness}.
However, the likelihood function is not well-defined due to the singular sample covariance matrix in the high-dimensional setting where the dimension is no longer fixed but diverges at a possibly faster rate than the sample size.
Over the past decade, statisticians have devoted considerable effort to tackling the challenges of the high-dimensional setting and have proposed three different types of statistics for testing large covariance matrices. Firstly, quadratic form statistics were studied to test against dense alternatives, which can be written in terms of the Frobenius norm of $\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2$ and correspond to many small differences between the two covariance matrices. When the dimension is of the same order as the sample size, \cite{schott2007test} proposed a test statistic based on the sum of squared differences between two sample covariance matrices, and \cite{srivastava2010testing} used a consistent estimator of $\mbox{tr}(\boldsymbol{\Sigma}_1^2)/\left[\mbox{tr}(\boldsymbol{\Sigma}_1)\right]^2 - \mbox{tr}(\boldsymbol{\Sigma}_2^2)/\left[\mbox{tr}(\boldsymbol{\Sigma}_2)\right]^2$ to construct a new test statistic.
\cite{li2012two} introduced an unbiased estimator of the Frobenius norm of $\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2$ to allow for ultra-high dimensionality, where the dimension grows much faster than the sample size.
Recently, \cite{he2018asymptotically} proposed an adaptive test combining finite-order U-statistics, which include variants of quadratic form statistics. Secondly, maximum form statistics were explored to account for sparse alternatives with only a few large differences between the two covariance matrices, which can be written in terms of the entry-wise maximum norm of $\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2$. \cite{cai2013two} studied the maximal standardized differences between two sample covariance matrices to test against the sparse alternative, and \cite{chang2017comparing} proposed a perturbation-based maximum test using a data-driven approach to determine the rejection region.
Thirdly, \cite{li2015joint}, \cite{yang2017weighted} and \cite{li2018applications} used a weighted combination of quadratic form statistics and maximum form statistics to test against the dense or sparse alternatives, which shares the similar philosophy with the power enhancement method \citep{fan2015power} for testing cross-sectional dependence.
Similar to these weighted combination tests, we are motivated by combining the strengths of quadratic form statistics and maximum form statistics to boost the power against the dense or sparse alternatives. It is also of great importance to combine the power of these two different statistics in real-world applications such as financial studies and genetic association studies. For instance, the anomalies in financial markets may come from the mispricing of a few assets or a systematic market mispricing \citep{fan2015power}, and the phenotype may be affected by a few causal variants or a large number of mutants \citep{liu2019acat}.
It is worth pointing out that these weighted combination tests critically depend on the proper choice of weights to combine two different types of test statistics. There may exist a non-negligible discrepancy on the different magnitudes between quadratic form statistics and maximum form statistics in practice, which makes the choice of weights a very challenging task. As a promising alternative to \cite{fan2015power}, \cite{li2015joint}, \cite{yang2017weighted} and \cite{li2018applications}, we provide a new perspective to exploit the full potential of quadratic form statistics and maximum form statistics for testing high-dimensional covariance matrices.
We propose a scale-invariant power enhancement test based on Fisher's method \citep{Fisher1925} to combine the $p$-values of quadratic form statistics and maximum form statistics.
To study the asymptotic property, we need to solve several non-trivial challenges in the theoretical analysis and then derive the asymptotic joint distribution of quadratic form statistics and maximum form statistics under the null hypothesis. We prove that the asymptotic null distribution of the proposed combination test statistic does not depend on the unknown parameters. More specifically, the proposed statistic follows a chi-squared distribution with $4$ degrees of freedom asymptotically under the null hypothesis.
We also show the consistent asymptotic power against the union of dense alternatives and sparse alternatives, which is more general than the designated alternative in the weighted combination test. It is worth pointing out that Fisher's method achieves the asymptotic optimality with respect to Bahadur relative efficiency. Moreover, we demonstrate the numerical properties in simulation studies and a real application to gene-set testing \citep{dudoit2008multiple, ritchie2015limma}.
In the real application, the proposed test detects important gene-sets more effectively, and our findings are supported by biological evidence.
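To make the combination explicit, write $p_{Q}$ and $p_{M}$ for the $p$-values of the quadratic form statistic and the maximum form statistic, respectively (this notation is introduced here only for illustration). Fisher's combination statistic is
$$
T_{F} = -2 \log p_{Q} - 2 \log p_{M},
$$
and since the two $p$-values are asymptotically independent and uniformly distributed under the null hypothesis, $T_{F}$ converges in distribution to the chi-squared distribution with $4$ degrees of freedom. A minimal computational sketch, assuming the two $p$-values have already been obtained from the respective tests (the function and variable names below are ours), is
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def fisher_combination(p_quad, p_max):
    # Fisher's statistic: -2 times the sum of the log p-values
    t = -2.0 * (np.log(p_quad) + np.log(p_max))
    # Under the null, t is asymptotically chi-squared with 4 degrees of freedom
    return chi2.sf(t, df=4)
\end{verbatim}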
In recent literature, \cite{liu2019cauchy} proposed the Cauchy combination of $p$-values for testing high-dimensional mean vectors, and \cite{he2018asymptotically} proved the joint normal limiting distribution of finite-order U-statistics with an identity covariance matrix and used the minimum combination of their $p$-values. The methods and theories of \cite{liu2019cauchy} and \cite{he2018asymptotically} do not apply to the more challenging setting for testing two-sample high-dimensional covariance matrices. Specifically, \cite{li2015joint} and \cite{he2018asymptotically} considered the one-sample test for large covariance matrices that $H_0 : \boldsymbol{\Sigma} = \mathrm{\bf I}$ under the restricted complete independence assumption among entries of $\mathrm{\bf X}$, and \cite{li2018applications} studied the one-sample test that $H_0 : \boldsymbol{\Sigma}$ is a banded matrix under the Gaussian assumption. \cite{li2015joint}, \cite{li2018applications}, and \cite{he2018asymptotically} studied the one-sample covariance test and did not prove the asymptotic independence result for testing two-sample covariance matrices. However, it is significantly more challenging to deal with the complicated dependence in the two-sample tests for large covariance matrices.
To the best of our knowledge, our work presents the first proof of the asymptotic independence of quadratic form statistics and maximum form statistics for testing two-sample covariance matrices, which provides the essential theoretical guarantee for Fisher's method to combine their $p$-values.
In the theoretical analysis, we use a non-trivial decorrelation technique to handle the complex nonlinear dependence in high-dimensional covariances. Recently, \cite{shi2019linear} used decorrelation to study linear hypothesis testing for high-dimensional generalized linear models, but the nonlinear dependence in two-sample covariance testing is much more challenging than in linear hypothesis testing. Moreover, we develop a new concentration inequality for two-sample degenerate U-statistics of high-dimensional data, which makes a separate contribution to the literature. This result extends the concentration inequality for one-sample degenerate U-statistics \citep{arcones1993limit}.
The rest of this paper is organized as follows. After presenting the preliminaries in Section 2, we introduce Fisher's method for testing two-sample large covariance matrices in Section 3. Section 4 studies the asymptotic size and asymptotic power, and Section 5 demonstrates the numerical properties in simulation studies. Section 6 evaluates the proposed test in an empirical study on gene-set testing. Section 7 includes the concluding remarks. The technical details are presented in the supplementary note.
\section{Preliminaries}
Let $\mathrm{\bf X}$ and $\mathrm{\bf Y}$ be $p$-dimensional random vectors with covariance matrices $\boldsymbol{\Sigma}_1=\left(\sigma_{ij1}\right)_{p\times p}$ and $\boldsymbol{\Sigma}_2=\left(\sigma_{ij2}\right)_{p\times p}$ respectively. Without loss of generality, we assume both $\mathrm{\bf X}$ and $\mathrm{\bf Y}$ have zero means. Let $\left\{ \mathrm{\bf X}_1,\cdots, \mathrm{\bf X}_{n_1} \right\}$ be independently and identically distributed (\emph{i.i.d.}) random samples of $\mathrm{\bf X}$, and $\left\{ \mathrm{\bf Y}_1,\cdots, \mathrm{\bf Y}_{n_2} \right\}$ be \emph{i.i.d.} samples of $\mathrm{\bf Y}$ that are independent of $\left\{ \mathrm{\bf X}_1,\cdots, \mathrm{\bf X}_{n_1} \right\}$. The problem of interest is to test whether two covariance matrices are equal,
\begin{equation}\label{eq: test}
H_0: \boldsymbol{\Sigma}_1=\boldsymbol{\Sigma}_2.
\end{equation}
We first revisit the quadratic form statistic \citep{li2012two} to test against the dense alternative and the maximum form statistic \citep{cai2013two} to test against the sparse alternative. The dense alternative can be written in terms of the Frobenius norm of $\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2$ and the sparse alternative can be written using the entry-wise maximum norm of $\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2$.
\cite{li2012two} proposed a quadratic-form test after reformulating the null hypothesis (\ref{eq: test}) into its equivalent form based on the squared Frobenius norm of $\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2$, that is, $$
H_0: \|\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2\|_F^2= 0. $$ To construct the test statistic, given the simple fact that
$$\|\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2\|_F^2=\mbox{tr}\{(\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2)^2\} = \mbox{tr}(\boldsymbol{\Sigma}_1^2) + \mbox{tr}(\boldsymbol{\Sigma}_2^2) -2\mbox{tr}(\boldsymbol{\Sigma}_1\boldsymbol{\Sigma}_2),$$
\cite{li2012two} proposed a test statistic $T_{n_1,n_2}$ as a linear combination of unbiased estimators of these three terms, namely,
\begin{equation}
T_{n_1,n_2}=A_{n_1}+B_{n_2}-2C_{n_1,n_2},
\end{equation}
where $A_{n_1}$, $B_{n_2}$ and $C_{n_1,n_2}$ are unbiased estimators of $\mbox{tr}(\boldsymbol{\Sigma}_1^2)$, $\mbox{tr}(\boldsymbol{\Sigma}_2^2)$ and $\mbox{tr}(\boldsymbol{\Sigma}_1\boldsymbol{\Sigma}_2)$, respectively. Hence, the expected value of $T_{n_1,n_2}$ equals $\|\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2\|_F^2$, which is zero under the null hypothesis.
For details about $A_{n_1}$, $B_{n_2}$ and $C_{n_1,n_2}$, please refer to Section 2 of \cite{li2012two}, where it is also proved that $T_{n_1,n_2}$ is asymptotically normally distributed. Let $z_\alpha$ be the upper $\alpha$ quantile of the standard normal distribution, and let $\widehat\sigma_{0,n_1,n_2}$ be a consistent estimator of the leading term $\sigma_{0,n_1,n_2}$ of the standard deviation of $T_{n_1,n_2}$ under $H_0$. Hence, \cite{li2012two} rejects the null hypothesis at the significance level $\alpha$ if
\begin{equation}\label{test: LC}
T_{n_1,n_2}\geq \widehat\sigma_{0,n_1,n_2}z_\alpha.
\end{equation}
As an alternative to the quadratic form statistic \citep{li2012two},
\cite{cai2013two}
studied the null hypothesis (\ref{eq: test}) in terms of the maximal absolute difference of two covariance matrices, i.e.,
$$
H_0:
\max_{1\leq i\leq j\leq p} |\sigma_{ij1}-\sigma_{ij2}| = 0. $$
\cite{cai2013two} proposed a maximum test statistic $M_{n_1,n_2}$ based on the maximum of standardized differences between the entries of the two sample covariance matrices
$\widehat{\boldsymbol{\Sigma}}_1 = \left(\widehat\sigma_{ij1}\right)_{p\times p} = \frac{1}{n_1}\sum_{u=1}^{n_1} (\mathrm{\bf X}_u - \overline{\mathrm{\bf X}}) (\mathrm{\bf X}_u - \overline{\mathrm{\bf X}})^T$ and
$\widehat{\boldsymbol{\Sigma}}_2 = \left(\widehat\sigma_{ij2}\right)_{p\times p} = \frac{1}{n_2}\sum_{v=1}^{n_2} (\mathrm{\bf Y}_v - \overline{\mathrm{\bf Y}}) (\mathrm{\bf Y}_v - \overline{\mathrm{\bf Y}})^T$,
where $\overline{\mathrm{\bf X}}$ and $\overline{\mathrm{\bf Y}}$ denote the two sample means. The maximum form statistic is written as
\begin{equation}
M_{n_1,n_2}=\max_{1\leq i \leq j\leq p} \frac{\left( \widehat{\sigma}_{ij1}-\widehat{\sigma}_{ij2} \right)^2}{\widehat{\theta}_{ij1}/n_1+\widehat{\theta}_{ij2}/n_2}_{\textstyle,}
\end{equation}
where the denominator $\widehat\theta_{ij1}/n_1+\widehat\theta_{ij2}/n_2$ estimates the variance of $\widehat\sigma_{ij1}-\widehat\sigma_{ij2}$ so as to account for the heteroscedasticity of the $\widehat\sigma_{ij1}$'s and $\widehat\sigma_{ij2}$'s among different entries, with
$\widehat\theta_{ij1} = \frac{1}{n_1}\sum_{u=1}^{n_1} \left[(X_{u,i}-\overline{X}_i)(X_{u,j}-\overline{X}_j)-\widehat\sigma_{ij1}\right]^2$ and
$\widehat\theta_{ij2} = \frac{1}{n_2}\sum_{v=1}^{n_2} \left[(Y_{v,i}-\overline{Y}_i)(Y_{v,j}-\overline{Y}_j)-\widehat\sigma_{ij2}\right]^2$.
\cite{cai2013two} proved that the asymptotic null distribution of $M_{n_1,n_2}$ is a Type \uppercase\expandafter{\romannumeral1} extreme value distribution (also known as the Gumbel distribution). Thus, \cite{cai2013two} rejects the null hypothesis at a significance level $\alpha$ if
\begin{equation}\label{test: CLX}
M_{n_1,n_2} \geq q_{\alpha}+4\log p- \log\log p_{\textstyle,}
\end{equation}
where $q_\alpha = -\log(8\pi)-2\log\log(1-\alpha)^{-1}$ is the upper $\alpha$ quantile of the Gumbel distribution.
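For concreteness, the maximum form test can be evaluated with a few lines of code. The following minimal Python sketch (illustrative only and not part of the formal development; the function name and interface are ours) computes $M_{n_1,n_2}$ and the associated $p$-value from the two samples stored row-wise, using the sample covariances and entry-wise variance estimates defined above.
\begin{verbatim}
import numpy as np

def max_form_test(X, Y):
    # X is n1 x p, Y is n2 x p; rows are observations
    n1, p = X.shape
    n2 = Y.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)        # center each sample
    S1, S2 = Xc.T @ Xc / n1, Yc.T @ Yc / n2      # sample covariance matrices
    # entry-wise variances: mean of squared centered products minus sigma_hat^2
    Th1 = (Xc**2).T @ (Xc**2) / n1 - S1**2
    Th2 = (Yc**2).T @ (Yc**2) / n2 - S2**2
    M = np.max((S1 - S2)**2 / (Th1 / n1 + Th2 / n2))
    x = M - 4 * np.log(p) + np.log(np.log(p))
    p_M = 1.0 - np.exp(-np.exp(-x / 2) / np.sqrt(8 * np.pi))   # 1 - Gumbel cdf
    return M, p_M
\end{verbatim}
The variance estimates $\widehat\theta_{ijk}$ are computed here through the algebraic identity ``mean of squared centered products minus $\widehat\sigma_{ijk}^2$'', which avoids forming an $n\times p\times p$ array.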
\section{Fisher's Combined Probability Test}
The tests of \cite{li2012two} and \cite{cai2013two} are powerful against different types of alternatives for high-dimensional covariance matrices. The quadratic form statistic $T_{n_1,n_2}$ is powerful against the dense alternative, in which the squared Frobenius norm of $\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2$ exceeds the order of $\mbox{tr}(\boldsymbol{\Sigma}_1^2)/n_1+\mbox{tr}(\boldsymbol{\Sigma}_2^2)/n_2$.
The maximum form statistic $M_{n_1,n_2}$ is powerful against the sparse alternative, in which at least one entry of $\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2$ has magnitude exceeding the order of $\sqrt{\log p/n}$. However, $T_{n_1,n_2}$ performs poorly against the sparse alternative and $M_{n_1,n_2}$ performs poorly against the dense alternative. More details will be presented in Subsection \ref{subsec: size-power} and Section \ref{sec:simulation}.
\cite{fan2015power}, \cite{li2015joint}, \cite{yang2017weighted} and \cite{li2018applications} studied the weighted combination $J = J_0+J_1$ to achieve the power enhancement, where $J_0$ is built on the extreme value form statistic and $J_1$ is constructed from the asymptotically pivotal statistic. It is worth pointing out that, with the proper weighted combination, $J$ enjoys the so-called \textsl{power enhancement principles} \citep{fan2015power}: (i) $J$ is at least as powerful as $J_1$, (ii) the size distortion due to the addition of $J_0$ is asymptotically negligible, and (iii) power is improved under the designated alternatives. For testing large covariance matrices, \cite{yang2017weighted} proposed $J_1 = (1-(s_p+\xi_1)^{-1})M_n$ and $J_0 = n^{\frac{1}{s_p+\xi_1} + \frac{1}{\xi_2}}\cdot \max_{1\leq i,j\leq p} (\widehat\sigma_{ij1}-\widehat\sigma_{ij2})^2 $, where $M_n$ is a macro-statistic which performs well against the dense alternative, and $s_p$ is the number of distinct entries in two covariance matrices. Note that the quantities $\xi_1$ and $\xi_2$ are carefully chosen such that $J_0 \rightarrow 0$ under $H_0$.
As a promising alternative, we propose a scale-invariant combination procedure based on Fisher's method \citep{Fisher1925} to combine both strengths of $T_{n_1,n_2}$ and $M_{n_1,n_2}$. Let $\Phi(\cdot)$ be the cumulative distribution function of $N(0,1)$ and $G(x)=\exp\left(-\frac{1}{\sqrt{8\pi}}\exp\left(-\frac{x}{2}\right)\right)$ be the cumulative distribution function of the Gumbel distribution. More specifically, we combine the $p$-values of $T_{n_1,n_2}$ and $M_{n_1,n_2}$ after the negative natural logarithm transformation, that is,
\begin{equation}
F_{n_1,n_2} = -2 \log p_T - 2 \log p_M,
\end{equation}
where $$p_T = 1-\Phi\left(T_{n_1,n_2}/\widehat\sigma_{0,n_1,n_2}\right)$$ and $$p_M = 1-G(M_{n_1,n_2}-4\log p+\log\log p)$$ are the $p$-values associated with the test statistics $T_{n_1,n_2}$ and $M_{n_1,n_2}$, respectively.
Let $c_\alpha$ denote the upper $\alpha$ quantile of a chi-squared distribution with 4 degrees of freedom (i.e., $\chi_4^2$). We reject the null hypothesis at the significance level $\alpha$ if
\begin{equation}
F_{n_1,n_2}\geq c_\alpha.
\end{equation}
Unlike the weighted statistic $J = J_0+J_1$, $F_{n_1,n_2}$ does not require estimating $s_p$ or choosing $\xi_1$ and $\xi_2$ to construct proper weights, which may be non-trivial in practice.
An inappropriate choice of $s_p$, $\xi_1$ and $\xi_2$ may lead to size distortion or loss of power. In contrast, $F_{n_1,n_2}$ is scale-invariant, since the $p$-values always take values between 0 and 1, and the asymptotic null distribution of $F_{n_1,n_2}$ (i.e., $\chi_4^2$) does not depend on any hyper-parameters. As we will show in Section \ref{subsec: size-power}, $F_{n_1,n_2}$ achieves the desired nominal significance level asymptotically while boosting the power against either sparse or dense alternatives. Moreover, Fisher's method achieves asymptotic optimality with respect to Bahadur relative efficiency \citep{littell1971asymptotic,littell1973asymptotic}.
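In implementation, $F_{n_1,n_2}$ only requires the two marginal $p$-values. The following minimal Python sketch (illustrative only; the function name and interface are ours, and the standardized quadratic statistic $T_{n_1,n_2}/\widehat\sigma_{0,n_1,n_2}$ and the maximum statistic $M_{n_1,n_2}$ are assumed to have been computed beforehand, e.g., by the formulas in Section 2) carries out the combined test.
\begin{verbatim}
import numpy as np
from scipy import stats

def fisher_combined_test(T_std, M, p, alpha=0.05):
    # T_std = T / sigma_hat_0 (standardized quadratic statistic), M = maximum statistic
    p_T = 1.0 - stats.norm.cdf(T_std)                           # p-value of T
    x = M - 4 * np.log(p) + np.log(np.log(p))
    p_M = 1.0 - np.exp(-np.exp(-x / 2) / np.sqrt(8 * np.pi))    # p-value of M (Gumbel limit)
    F = -2 * np.log(p_T) - 2 * np.log(p_M)                      # Fisher's combination
    c_alpha = stats.chi2.ppf(1 - alpha, df=4)                   # upper-alpha quantile of chi^2_4
    return F, bool(F >= c_alpha)                                # statistic and rejection decision
\end{verbatim}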
\begin{remark}
The idea of combining $p$-values has been widely used as an important technique for data fusion and meta-analysis \citep{hedges2014statistical}. Recently, the Cauchy combination of $p$-values was used by \cite{liu2019cauchy} for testing high-dimensional mean vectors, and the minimum combination of $p$-values from finite-order U-statistics was used by \cite{he2018asymptotically} for testing two-sample high-dimensional covariance matrices. However, neither \cite{liu2019cauchy} nor \cite{he2018asymptotically} studied the combination of the $p$-values of $T_{n_1,n_2}$ and $M_{n_1,n_2}$,
and it is fundamentally challenging to derive the asymptotic joint distribution of $T_{n_1,n_2}$ and $M_{n_1,n_2}$. We solve this open problem in Section \ref{subsec: asymp-independence}.
\end{remark}
\section{Asymptotic Properties}\label{sec: asymptotic}
This section presents the asymptotic properties of our proposed Fisher's combined probability test $F_{n_1,n_2}$. Section \ref{subsec: assumptions} presents the assumptions. Section \ref{subsec: asymp-independence} studies the joint limiting distribution of two test statistics $M_{n_1,n_2}$ and $T_{n_1,n_2}$ under the null hypothesis. Section
\ref{subsec: size-power} proves the correct asymptotic size and consistent asymptotic power of our proposed method.
\subsection{Assumptions}\label{subsec: assumptions}
We first define some useful notation. For any matrix $\mathrm{\bf A}$, let $\lambda_i(\mathrm{\bf A})$ be the $i$-th largest eigenvalue of $\mathrm{\bf A}$. For any set $\mathcal{A}$, $\mbox{card}(\mathcal{A})$ represents the cardinality of $\mathcal{A}$. For $0<r<1$, let
\begin{equation*}
\mathcal{V}_i(r) = \left\{1\leq j\leq p: \frac{|\sigma_{ij1}|}{\sqrt{\sigma_{ii1}\sigma_{jj1}}}\geq r \text{ or } \frac{|\sigma_{ij2}|}{\sqrt{\sigma_{ii2}\sigma_{jj2}}} \geq r \right\}
\end{equation*}
be the set of indices $j$ such that $X_j$ (or $Y_j$) is highly correlated with $X_i$ (or $Y_i$), i.e., has absolute correlation at least $r$, for a given $i\in\{1,\dots, p\}$. For any $\alpha>0$, let
\begin{equation*}
s_i(\alpha) = \mbox{card}(\mathcal{V}_i(\left(\log p\right)^{-1- \alpha})),\ i=1,\cdots, p
\end{equation*}
denote the number of indices $j$ in the set $\mathcal{V}_i(\left(\log p\right)^{-1- \alpha})$. Moreover, define
\begin{equation*}
\mathcal{W}(r) = \left\{1\leq i\leq p: \mathcal{V}_i(r) \neq \varnothing \right\}
\end{equation*}
such that, $\forall i\in\mathcal{W}(r)$, $X_i$ (or $Y_i$) is highly correlated with some other variable of $\mathrm{\bf X}$ (or $\mathrm{\bf Y}$).
Throughout the rest of this section, we assume that $\mathrm{\bf X}$ and $\mathrm{\bf Y}$ are both Gaussian random vectors. The Gaussian assumption facilitates the use of a new decorrelation technique to address the complex nonlinear dependence in high dimensional covariances in the theoretical analysis of the proposed scale-invariant combination test.
\begin{remark}
\cite{li2015joint}, \cite{li2018applications} and \cite{he2018asymptotically} studied the asymptotic joint distribution of the maximum test statistic and the quadratic test statistic for one-sample covariance test under the Gaussian assumption or restricted
complete independence assumption. Please see the first paragraph of Section 2 in \cite{li2015joint}, the first paragraph of Section 2 in \cite{li2018applications}, and Condition 2.3 in \cite{he2018asymptotically} for more details. However, the nonlinear dependence in two-sample covariance test is fundamentally more challenging than the dependence in the one-sample covariance test.
\end{remark}
\begin{assump}\label{assum: A1A2-in-Chen}
As $\min\{n_1,n_2\}\rightarrow\infty$ and $p \rightarrow\infty$,
\begin{itemize}
\item[(i)] $n_1/\left(n_1+n_2\right)\rightarrow \gamma$, for some constant $\gamma\in (0,1)$.
\item[(ii)] $\sum_{i=1}^{q}\lambda^2_i(\boldsymbol{\Sigma}_j)/\sum_{i=1}^{p}\lambda^2_i(\boldsymbol{\Sigma}_j)\to 0$ for any integer $q = O(\log p)$ and $j=1, 2$.
\end{itemize}
\end{assump}
\begin{remark}
Assumption \ref{assum: A1A2-in-Chen} is analogous to (A1) and (A2) in \cite{li2012two}: the first condition is standard in two-sample asymptotic analysis, and the second one describes the extent of high dimensionality and dependence that can be accommodated by the proposed tests. In the same spirit, Assumption \ref{assum: A1A2-in-Chen} does not impose explicit restrictions on the relationship between $p$ and $n_1, n_2$, but instead
requires the mild condition (ii) on the covariances, which is satisfied, for instance, when the eigenvalues of the two covariance matrices are bounded.
\end{remark}
\begin{assump}\label{assum: C1-in-Cai}
There exists a subset $\Upsilon\subset\left\{1,2, \cdots,p\right\}$ with $\mbox{card}\left(\Upsilon\right)=o(p)$ and some constant $\alpha_0>0$, such that for all $\kappa>0$, $\operatornamewithlimits{\max}\limits_{1\leq i\leq p, i\not\in \Upsilon} s_i(\alpha_0)=o(p^\kappa)$.
In addition, there exists a constant $0<r_0<1$, such that $\mbox{card}(\mathcal{W}(r_0)) = o(p)$.
\end{assump}
\begin{remark}
Assumption \ref{assum: C1-in-Cai} was introduced by \cite{cai2013two} to ensure that $\max_{1\leq i\leq p, i\not\in \Upsilon} s_i(\alpha_0)$ and $\mbox{card}(\mathcal{W}(r_0))$ are of moderate size for some $\alpha_0>0$ and $0<r_0<1$. It is satisfied if the eigenvalues of the covariance matrices are bounded from above and the correlations are bounded away from $\pm 1$.
\end{remark}
\subsection{Asymptotic Joint Distribution}\label{subsec: asymp-independence}
Now, we present the joint limiting law for $M_{n_1,n_2}$ and $T_{n_1,n_2}$ under the null hypothesis.
\begin{theorem}\label{thm: asymp-indep}
Suppose that Assumptions \ref{assum: A1A2-in-Chen} and \ref{assum: C1-in-Cai} hold and that $\log p = o(n^{\frac{1}{5}})$ with $n = n_1 + n_2$. Then, under the null hypothesis $H_0$, for any $x,t\in\mathbb{R}$, we have
\begin{equation}\label{eq: asympindep}
P\left(\frac{T_{n_1,n_2}}{\widehat\sigma_{0,n_1,n_2}}\leq t,\ M_{n_1,n_2}-4\log p+\log\log p \leq x\right) \rightarrow \Phi(t)\cdot G(x)
\end{equation}
as $n_1,n_2,p\rightarrow\infty$, where $G(x)=\exp\left(-\frac{1}{\sqrt{8\pi}}\exp\left(-\frac{x}{2}\right)\right)$ is the cdf of the Gumbel distribution and $\Phi(t)$ is the cdf of the standard normal distribution.
\end{theorem}
\begin{remark}
Together with Theorems 1 and 2 from \cite{li2012two} and Theorem 1 from \cite{cai2013two}, Theorem \ref{thm: asymp-indep} implies that $M_{n_1,n_2}$ and $T_{n_1,n_2}$ are asymptotically independent.
\end{remark}
In the sequel, we provide high-level intuition for the proof of the asymptotic independence result (\ref{eq: asympindep}). First of all, it is worth mentioning that under Assumption \ref{assum: A1A2-in-Chen}, all the third-moment and fourth-moment terms in $A_{n_1}$, $B_{n_2}$ and $C_{n_1,n_2}$ are of smaller order than the leading second-moment terms and may be neglected when deriving the asymptotic normality.
Hence, in the theoretical analysis, we may consider the simplified version of $T_{n_1,n_2}$ defined by
\begin{equation}\label{eq: Tp}
\widetilde{T}_{n_1,n_2}=\frac{1}{n_1(n_1-1)}\sum_{u\neq v}\left(\mathrm{\bf X}_u'\mathrm{\bf X}_v\right)^2+\frac{1}{n_2(n_2-1)}\sum_{u\neq v}\left(\mathrm{\bf Y}_u'\mathrm{\bf Y}_v\right)^2-\frac{2}{n_1n_2}\sum_u\sum_v\left(\mathrm{\bf X}_u'\mathrm{\bf Y}_v\right)^2_{\textstyle.}
\end{equation}
As pointed out by \cite{li2012two}, $\widetilde{T}_{n_1,n_2}$ and $T_{n_1,n_2}$ share the same asymptotic behavior.
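For concreteness, $\widetilde{T}_{n_1,n_2}$ can be evaluated directly from the three Gram matrices $\mathrm{\bf X}\mathrm{\bf X}'$, $\mathrm{\bf Y}\mathrm{\bf Y}'$ and $\mathrm{\bf X}\mathrm{\bf Y}'$; a minimal Python sketch is given below (illustrative only; the samples are stored row-wise and are assumed to have zero means, in line with the convention of Section 2).
\begin{verbatim}
import numpy as np

def simplified_quadratic_statistic(X, Y):
    # X is n1 x p, Y is n2 x p; rows are (zero-mean) observations
    n1, n2 = X.shape[0], Y.shape[0]
    Gxx, Gyy, Gxy = X @ X.T, Y @ Y.T, X @ Y.T           # Gram matrices
    sum_xx = (Gxx**2).sum() - (np.diag(Gxx)**2).sum()   # off-diagonal pairs u != v
    sum_yy = (Gyy**2).sum() - (np.diag(Gyy)**2).sum()
    return (sum_xx / (n1 * (n1 - 1)) + sum_yy / (n2 * (n2 - 1))
            - 2 * (Gxy**2).sum() / (n1 * n2))
\end{verbatim}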
Compared with the one-sample covariance tests in \cite{li2015joint}, \cite{li2018applications}, and \cite{he2018asymptotically}, it is significantly more difficult to analyze the asymptotic joint distribution because of the complicated dependence in the two-sample tests for large covariance matrices. To address this challenge, we employ a decorrelation technique to handle the complex nonlinear dependence in high-dimensional covariances. Specifically, we introduce a decorrelated statistic $T_{n_1,n_2}^{*}$. Under $H_0: \boldsymbol{\Sigma}_1 = \boldsymbol{\Sigma}_{2} = \boldsymbol{\Sigma}$, we may partition $\mathrm{\bf X}$ and $\mathrm{\bf Y}$ as follows:
\begin{equation*}
\mathrm{\bf X}_{p\times 1}=\begin{pmatrix} \mathrm{\bf X}^{(1)} \\ \mathrm{\bf X}^{(2)} \end{pmatrix} \text{ and } \mathrm{\bf Y}_{p\times 1}=\begin{pmatrix} \mathrm{\bf Y}^{(1)} \\ \mathrm{\bf Y}^{(2)} \end{pmatrix}
\sim N_{p}\left( \begin{pmatrix} \mathrm{\bf 0}_{p-q} \\ \mathrm{\bf 0}_{q} \end{pmatrix}_{\textstyle,} \boldsymbol{\Sigma}=\begin{pmatrix} \boldsymbol{\Sigma}_{11} & \boldsymbol{\Sigma}_{12} \\ \boldsymbol{\Sigma}_{21} & \boldsymbol{\Sigma}_{22} \end{pmatrix} \right),
\end{equation*}
where $\mathrm{\bf X}^{(1)}, \mathrm{\bf Y}^{(1)}\in\mathbb{R}^{p-q}$ and $\mathrm{\bf X}^{(2)}, \mathrm{\bf Y}^{(2)}\in\mathbb{R}^{q}$ for an integer $q$ satisfying $q=O(\log p)$.
Let $\mathrm{\bf Z}_1=\mathrm{\bf X}^{(1)}-\boldsymbol{\Sigma}_{12}\boldsymbol{\Sigma}_{22}^{-1}\mathrm{\bf X}^{(2)}$, $\mathrm{\bf Z}_2=\mathrm{\bf X}^{(2)}$, $\mathrm{\bf W}_1=\mathrm{\bf Y}^{(1)}-\boldsymbol{\Sigma}_{12}\boldsymbol{\Sigma}_{22}^{-1}\mathrm{\bf Y}^{(2)}$, and $\mathrm{\bf W}_2=\mathrm{\bf Y}^{(2)}$. It is easy to see that $\mathrm{\bf Z}_1$ is independent of $\mathrm{\bf Z}_2$, and the same holds for $\mathrm{\bf W}_1$ and $\mathrm{\bf W}_2$. At the sample level, $\{\mathrm{\bf Z}_{1u}\}_{u=1}^{n_1}$ and $\{\mathrm{\bf W}_{1v}\}_{v=1}^{n_2}$ are \emph{i.i.d.} samples from $N_{p-q}(\mathrm{\bf 0}, \boldsymbol{\Sigma}_{11}-\boldsymbol{\Sigma}_{12}\boldsymbol{\Sigma}_{22}^{-1}\boldsymbol{\Sigma}_{21})$. Following the pattern of $\widetilde{T}_{n_1,n_2}$ in (\ref{eq: Tp}), we define
\begin{equation}\label{eq: Tstar}
T_{n_1,n_2}^*=\frac{1}{n_1(n_1-1)}\sum_{u\neq v}\left(\mathrm{\bf Z}_{1u}'\mathrm{\bf Z}_{1v}\right)^2+\frac{1}{n_2(n_2-1)}\sum_{u\neq v}\left(\mathrm{\bf W}_{1u}'\mathrm{\bf W}_{1v}\right)^2-\frac{2}{n_1n_2}\sum_u\sum_v\left(\mathrm{\bf Z}_{1u}'\mathrm{\bf W}_{1v}\right)^2_{\textstyle.}
\end{equation}
$\{\mathrm{\bf Z}_{1u}\}_{u=1}^{n_1}$ and $\{\mathrm{\bf W}_{1v}\}_{v=1}^{n_2}$ are regarded as decorrelated versions of $\{\mathrm{\bf X}_{u}\}_{u=1}^{n_1}$ and $\{\mathrm{\bf Y}_{v}\}_{v=1}^{n_2}$, respectively, and $T_{n_1,n_2}^*$ is the statistic $\widetilde{T}_{n_1,n_2}$ computed from the decorrelated samples. This decorrelation shares a similar philosophy with \cite{shi2019linear}. We should point out that \cite{shi2019linear} used decorrelation to study linear hypothesis testing for high-dimensional generalized linear models, whereas the nonlinear dependence in two-sample covariance testing is much more challenging than in linear hypothesis testing.
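To make the construction transparent, the following minimal Python sketch (a theoretical device only, since it uses the common covariance $\boldsymbol{\Sigma}$ under $H_0$; the function name is ours) computes the decorrelated sample from a data matrix.
\begin{verbatim}
import numpy as np

def decorrelate(X, Sigma, q):
    # X is n x p with rows X_u'; Sigma is the common covariance under H0
    p = Sigma.shape[0]
    S12 = Sigma[:p - q, p - q:]             # Sigma_{12}, (p-q) x q
    S22 = Sigma[p - q:, p - q:]             # Sigma_{22}, q x q
    A = S12 @ np.linalg.inv(S22)            # Sigma_{12} Sigma_{22}^{-1}
    Z1 = X[:, :p - q] - X[:, p - q:] @ A.T  # rows are Z_{1u}'
    return Z1                               # Cov = Sigma_11 - Sigma_12 Sigma_22^{-1} Sigma_21
\end{verbatim}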
In what follows, we study the joint distribution of $M_{n_1,n_2}$ and $\widetilde{T}_{n_1, n_2}$. Let $A$ denote the event associated with the maximum statistic $M_{n_1, n_2}$, and let $B$ be the event corresponding to the quadratic statistic $\widetilde{T}_{n_1,n_2}$. We use the simple but very helpful fact that $A$ can be written as a union $A = \cup_i A_i$ of events indexed by the entry pairs. Then we may rewrite the joint probability $P\left(A\cap B\right)$ as the probability of a union of events, that is, $P\left(A\cap B\right) = P\left( (\cup_i A_i)\cap B\right)$. We first sketch how to derive the upper bound $P(A\cap B) - P(A)P(B)\le o(1)$. We begin with a union bound to obtain $P\left(\cup_i(A_i\cap B)\right)\leq \sum_{i} P(A_i\cap B)$. To deal with the joint probability of $A_{i} \cap B$, we further decompose the quadratic statistic into two parts: $T_{n_1,n_2}^*$ is independent of $A_i$, and the remaining term $ \widetilde{T}_{n_1,n_2}- T_{n_1,n_2}^*$ is associated with $A_i$. Consequently, $B$ can be written as $B = B_i^c \cup B_i$, in which $B_i^c$ denotes the event corresponding to $T_{n_1,n_2}^{*}$. Therefore, $\sum_i P(A_i\cap B) \leq \sum_{i} P(A_i\cap B_i^c) + \sum_i P(A_i\cap B_i)\leq \sum_{i} P(A_i)P(B_i^c) +\sum_{i} P(B_i)$.
Lemma \ref{lem: T-star-expdecay} shows that $T_{n_1,n_2}^*$ is sufficiently close to $\widetilde{T}_{n_1,n_2}$, so that $P(B_i^c)\approx P(B)$, $\sum_{i} P(A_i)\to P(A)$ and $\sum_{i} P(B_i) = o(1)$.
The matching lower bound, $P(A\cap B) - P(A)P(B) \geq -o(1)$, can be derived similarly from the Bonferroni inequality. Therefore, $|P(A\cap B) - P(A)P(B)| = o(1)$, which yields the asymptotic independence.
In the following, we present three useful lemmas that are used to prove (\ref{eq: asympindep}) in Theorem \ref{thm: asymp-indep}.
\begin{lemma}[Asymptotic Normality]\label{lem: T-star-asympnormality} Under Assumption \ref{assum: A1A2-in-Chen}, as $n_1,n_2,p\rightarrow\infty$,
\begin{equation}\label{eq: T-star}
\frac{T_{n_1,n_2}^*}{2\left(n_1^{-1}+n_2^{-1}\right)\mbox{tr}\left(\boldsymbol{\Sigma}^2\right)} \overset{d}{\rightarrow} N(0,1).
\end{equation}
\end{lemma}
\begin{lemma}[Exponential Decay]\label{lem: T-star-expdecay} Under Assumption \ref{assum: A1A2-in-Chen}, for any $\epsilon>0$, there exist positive constants $C$ and $c$ that do not depend on $p$, $n_1$, $n_2$, such that
\begin{equation}\label{eq: exp-decay}
P\left( \frac{\left|\widetilde{T}_{n_1,n_2}-T_{n_1,n_2}^*\right|}{2\left(n_1^{-1}+n_2^{-1}\right)\mbox{tr}\left(\boldsymbol{\Sigma}^2\right)} \geq \epsilon \right) \leq C \exp\{-c \epsilon n^{\beta}\},
\end{equation}
with $1/5<\beta<1/3.$
\end{lemma}
\begin{remark}
Lemma \ref{lem: T-star-expdecay} presents a new concentration inequality for two-sample degenerate U-statistics. It extends the well-known concentration inequality for one-sample degenerate U-statistics \citep{arcones1993limit} and makes a separate contribution to the literature.
\end{remark}
As a final step, Lemma \ref{lem: asymp-indep} derives the joint limiting distribution of the test statistic $M_{n_1, n_2}$ and the simplified statistic $\widetilde{T}_{n_1,n_2}$, which directly implies Theorem \ref{thm: asymp-indep}.
\begin{lemma}\label{lem: asymp-indep}
Under the same assumptions as in Theorem \ref{thm: asymp-indep},
\begin{equation}\label{eq: asympindep2}
P\left(\frac{\widetilde{T}_{n_1,n_2}}{\widehat\sigma_{0,n_1,n_2}}\leq t,\ M_{n_1,n_2}-4\log p+\log\log p \leq x\right) {\rightarrow} \Phi(t)\cdot G(x)
\end{equation}
for any $x,t\in\mathbb{R}$, as $n_1,n_2,p\rightarrow\infty$.
\end{lemma}
Lemma \ref{lem: T-star-asympnormality} shows that such a decorrelation procedure does not affect the asymptotic behavior of the quadratic test statistic. Lemma \ref{lem: T-star-expdecay} characterizes the tail behavior of the difference between $\widetilde{T}_{n_1,n_2}$ and ${T}_{n_1,n_2}^*$ with an explicit decay rate. Lemmas \ref{lem: T-star-asympnormality} and \ref{lem: T-star-expdecay} lay the foundation for replacing $\widetilde{T}_{n_1,n_2}$ with ${T}_{n_1,n_2}^*$ in the theoretical analysis.
\subsection{Asymptotic Size and Power}\label{subsec: size-power}
Given the explicit joint distribution of $M_{n_1, n_2}$ and $T_{n_1,n_2}$, we proceed to present the asymptotic properties of the proposed Fisher test $F_{n_1,n_2}$. Recall that $c_\alpha$ is the upper $\alpha$ quantile of the $\chi_4^2$ distribution, that $F_{n_1, n_2} = -2\log (p_M) - 2\log(p_T)$, and that we reject $H_0$ if $F_{n_1, n_2}\geq c_\alpha$. On top of the asymptotic independence established in Section \ref{subsec: asymp-independence} and a simple probability transformation, it is straightforward to obtain the null distribution of $F_{n_1, n_2}$ and, therefore, the asymptotic size of the test. The result is formally presented in Theorem \ref{thm: size}.
\begin{theorem}[Asymptotic Size]\label{thm: size}
Under the same assumptions as in Theorem \ref{thm: asymp-indep}, the proposed Fisher test achieves the correct asymptotic size; that is, under the null hypothesis,
$$P\left(F_{n_1,n_2}\geq c_\alpha\right) \rightarrow \alpha\quad \text{as } n_1,n_2,p\rightarrow \infty.$$
\end{theorem}
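The reduction behind Theorem \ref{thm: size} can be summarized as follows (a heuristic sketch only; the formal argument is given in the supplementary note). By Theorem \ref{thm: asymp-indep}, $p_T$ and $p_M$ are asymptotically independent and asymptotically $\mathrm{Unif}(0,1)$ distributed under $H_0$, so that
\begin{equation*}
-2\log p_T \overset{d}{\rightarrow} \chi^2_2 \quad\text{and}\quad -2\log p_M \overset{d}{\rightarrow} \chi^2_2 \quad\text{asymptotically independently,}
\end{equation*}
and hence $F_{n_1,n_2} = -2\log p_T - 2\log p_M \overset{d}{\rightarrow} \chi^2_4$, which implies $P(F_{n_1,n_2}\geq c_\alpha) \rightarrow \alpha$.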
\begin{remark}
Besides Fisher's method, the asymptotic independence result makes it feasible to combine $p$-values using other approaches such as Tippett's method \citep{tippett1931methods}, Stouffer's method \citep{stouffer1949american}, and Cauchy combination \citep{liu2019cauchy}.
\end{remark}
\cite{li2012two} and \cite{cai2013two} provided power analyses of the tests $T_{n_1,n_2}$ and $M_{n_1,n_2}$ against the dense alternative $\mathcal{G}_d$ and the sparse alternative $\mathcal{G}_s$, respectively, where
\begin{align}
\mathcal{G}_d & = \left\{ (\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2): \boldsymbol{\Sigma}_1 >0, \boldsymbol{\Sigma}_2>0, \frac{1}{n_1}\mbox{tr}(\boldsymbol{\Sigma}_1^2) + \frac{1}{n_2}\mbox{tr}(\boldsymbol{\Sigma}_2^2) = o\left(\mbox{tr}\{(\boldsymbol{\Sigma}_1-\boldsymbol{\Sigma}_2)^2\}\right)\right\}_{\textstyle,} \label{eq: G1} \\
\mathcal{G}_s & = \left\{(\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2): \boldsymbol{\Sigma}_1 >0, \boldsymbol{\Sigma}_2>0, \max_{1\leq i\leq j\leq p} \frac{|\sigma_{ij1}-\sigma_{ij2}|}{\sqrt{\theta_{ij1}/n_1+\theta_{ij2}/n_2}} \geq 4\sqrt{\log p} \right\}_{\textstyle.} \label{eq: G2}
\end{align}
Taking advantage of the combination, we show that the proposed combined test $F_{n_1,n_2}$ inherits the merits of both tests and successfully boosts the power against either dense or sparse alternatives.
\begin{theorem}[Asymptotic Power]\label{thm: power}
Under the same assumptions as in Theorem \ref{thm: asymp-indep}, the proposed Fisher test achieves consistent asymptotic power; that is, under the alternative hypothesis, $$\inf_{(\boldsymbol{\Sigma}_1,\boldsymbol{\Sigma}_2)\in \mathcal{G}_d \cup \mathcal{G}_s} P\left(F_{n_1,n_2}\geq c_\alpha\right) \rightarrow 1 \quad \text{as } n_1,n_2,p\rightarrow \infty.$$
\end{theorem}
\begin{remark}
\noindent {(Bahadur Efficiency)} As discussed in \cite{littell1971asymptotic,littell1973asymptotic}, among all approaches to combining independent tests, Fisher's method delivers the largest exact Bahadur slope, indicating the fastest decay rate of the combined $p$-value. Therefore,
Fisher's test is asymptotically optimal in terms of Bahadur relative efficiency.
\end{remark}
\section{Simulation Studies}\label{sec:simulation}
This section examines the finite-sample performance of our Fisher's combined probability test, compared with the tests proposed by \cite{cai2013two} (referred to as the CLX test hereafter) and \cite{li2012two} (referred to as the LC test). We generate $\{\mathrm{\bf X}_1,\cdots,\mathrm{\bf X}_{n_1}\}$ \emph{i.i.d.} from $N_p\left(\mathrm{\bf 0},\boldsymbol{\Sigma}_1\right)$ and $\{\mathrm{\bf Y}_1,\cdots,\mathrm{\bf Y}_{n_2}\}$ \emph{i.i.d.} from $N_{p}\left(\mathrm{\bf 0}, \boldsymbol{\Sigma}_2\right)$.
The sample sizes are taken to be $n_1=n_2=N$ with $N=100$ and $200$, while the dimension $p$ varies over 100, 200, 500, 800 and 1000. For each simulation setting, the proportion of rejections over 1000 replications is reported. The significance level is set to $0.05$ for all the tests.
Under the null hypothesis $H_0$, we set $\boldsymbol{\Sigma}_1=\boldsymbol{\Sigma}_2=\boldsymbol{\Sigma}^{*(i)}, i=1,\cdots,5$, and consider the following five models to evaluate the testing size.
\begin{itemize}
\item[(i)] $\boldsymbol{\Sigma}^{*(1)}=\mathrm{\bf I}_p$.
\item[(ii)] $\boldsymbol{\Sigma}^{*(2)}=(\mbox{\boldmath $\Omega$}^{*(2)})^{-1}$, where $\omega_{ij}^{*(2)}=0.5^{|i-j|}$.
\item[(iii)] $\boldsymbol{\Sigma}^{*(3)}$ is a block diagonal matrix with each diagonal block being $0.5\mathrm{\bf I}_5+0.5\mathds{1}_5\mathds{1}'_5$.
\item[(iv)] $\boldsymbol{\Sigma}^{*(4)}=\{\sigma_{ij}^{*(4)}\}_{p\times p}$, $\sigma_{ij}^{*(4)}=(-1)^{i+j}0.4^{|i-j|^{1/10}}$.
\item[(v)] $\boldsymbol{\Sigma}^{*(5)}=(\boldsymbol{\Sigma}^{(5)}+\delta\mathrm{\bf I})/(1+\delta)$, where $\sigma_{ii}^{(5)}=1$, $\sigma_{ij}^{(5)}=0.5\times\mathrm{Bernoulli}(0.05)$ for $i<j$ and $\sigma_{ij}^{(5)}=\sigma_{ji}^{(5)}$, and $\delta=|\lambda_{\min}(\boldsymbol{\Sigma}^{(5)})|+0.05$.
\end{itemize}
Model (i) corresponds to the multivariate standard normal distribution. Models (ii) and (iii) are cases in which the true covariance matrices have banded-type and block-type sparsity, respectively. Model (iv) was first proposed by \cite{srivastava2010testing} and further studied in \cite{cai2013two}. Model (v) is also a sparse matrix, but without any specific sparsity pattern.
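For reproducibility, a minimal Python sketch constructing Models (ii), (iii) and (v) is given below (illustrative only; \texttt{rng} denotes a NumPy random generator, and $p$ is assumed to be a multiple of 5 for Model (iii), as in all of our settings).
\begin{verbatim}
import numpy as np

def null_covariances(p, rng):
    idx = np.arange(p)
    # Model (ii): inverse of the precision matrix with entries 0.5^{|i-j|}
    Omega2 = 0.5 ** np.abs(idx[:, None] - idx[None, :])
    S2 = np.linalg.inv(Omega2)
    # Model (iii): block diagonal with 5x5 blocks 0.5*I_5 + 0.5*1_5 1_5'
    block = 0.5 * np.eye(5) + 0.5 * np.ones((5, 5))
    S3 = np.kron(np.eye(p // 5), block)
    # Model (v): sparse random correlations, made positive definite by diagonal loading
    raw = 0.5 * (rng.random((p, p)) < 0.05)
    U = np.triu(raw, 1)
    S5 = U + U.T + np.eye(p)
    delta = abs(np.linalg.eigvalsh(S5).min()) + 0.05
    S5 = (S5 + delta * np.eye(p)) / (1 + delta)
    return S2, S3, S5
\end{verbatim}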
To evaluate the power of the tests, we consider scenarios in which the difference between the two covariance matrices has a certain structure. We consider two types of alternatives: the sparse alternative $H_s$ and the dense alternative $H_d$.
The sparse alternative is constructed in the same way for all five models. Let $\mathrm{\bf U}=\boldsymbol{\Sigma}_2-\boldsymbol{\Sigma}_1$ denote the difference between the two covariance matrices. Inspired by \cite{cai2013two}, we take $\mathrm{\bf U}$ to be a symmetric sparse matrix with eight nonzero entries: the locations of four nonzero entries are randomly selected from the upper triangle of $\mathrm{\bf U}$, each with a magnitude drawn from $\mathrm{Unif}(0,4)$ multiplied by $\max_{1\leq j \leq p} \sigma_{jj}^{*}$, and the other four entries are determined by symmetry. We then generate samples from the covariance pairs $\left(\boldsymbol{\Sigma}_{1}^{(i)}, \boldsymbol{\Sigma}_{2}^{(i)}\right)$, $i=1,\cdots,5$, to evaluate the power of the tests against the sparse alternative, where $\boldsymbol{\Sigma}_{1}^{(i)}=\boldsymbol{\Sigma}^{*(i)}+\delta \mathrm{\bf I}$ and $\boldsymbol{\Sigma}_{2}^{(i)}=\boldsymbol{\Sigma}^{*(i)}+\delta \mathrm{\bf I}+ \mathrm{\bf U}$, with $\delta=|\min\{\lambda_{\min}(\boldsymbol{\Sigma}^{*(i)}+\mathrm{\bf U}),\lambda_{\min}(\boldsymbol{\Sigma}^{*(i)})\}|+0.05$.
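A matching Python sketch for the sparse-alternative pair $(\boldsymbol{\Sigma}_{1}^{(i)},\boldsymbol{\Sigma}_{2}^{(i)})$ is given below (again illustrative only; \texttt{rng} denotes a NumPy random generator).
\begin{verbatim}
import numpy as np

def sparse_alternative_pair(Sigma_star, rng):
    p = Sigma_star.shape[0]
    U = np.zeros((p, p))
    rows, cols = np.triu_indices(p, k=1)
    pick = rng.choice(rows.size, size=4, replace=False)     # four upper-triangular locations
    U[rows[pick], cols[pick]] = rng.uniform(0, 4, 4) * Sigma_star.diagonal().max()
    U = U + U.T                                             # the other four entries by symmetry
    lam = min(np.linalg.eigvalsh(Sigma_star + U).min(),
              np.linalg.eigvalsh(Sigma_star).min())
    delta = abs(lam) + 0.05                                 # diagonal loading
    I = np.eye(p)
    return Sigma_star + delta * I, Sigma_star + delta * I + U
\end{verbatim}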
For the dense alternative, the five models differ considerably from one another, so we specify the corresponding alternatives separately. For Model (i), we take the dense alternative to be the AR(1) covariance matrix with parameter $\rho=0.2$ or $0.3$, denoted by $\boldsymbol{\Sigma}_{\rho}^{AR}$; in other words, we generate copies of $\mathrm{\bf X}$ from the $p$-dimensional standard normal distribution and copies of $\mathrm{\bf Y}$ from $N_p\left(\mathrm{\bf 0},\boldsymbol{\Sigma}_{\rho}^{AR}\right)$. For Model (iv), we follow the same alternative as in \cite{srivastava2010testing}, namely $\sigma_{ij}^{(4)}=(-1)^{i+j}0.6^{|i-j|^{1/10}}$, whereas for Models (ii), (iii) and (v) we use the identity matrix $\mathrm{\bf I}_p$ as the alternative covariance.
\begin{table}[H]
\small
\centering
\caption{Comparison of Empirical Size and Power (\%) for Model (i)}\label{tab: standardnormal}
\vspace{1ex}
\begin{tabular}{ccrrrrr|rrrrr}
\hline
n&p& 100 & 200 & 500 & 800 & 1000 & 100 & 200 & 500 & 800 & 1000 \\
\hline
&&\multicolumn{5}{c|}{Size} &\multicolumn{5}{c}{Power under sparse alternative}\\
\multirow{3}{*}{100}
& Proposed & 5.6 & 5.0 & 5.0 & 5.2 & 5.6 & 98.0 & 96.6 & 87.3 & 83.9 & 80.2\\
& CLX & 4.3 & 5.2 & 4.5 & 4.4 & 4.5 & 98.5 & 98.3 & 91.1 & 89.8 & 85.8\\
& LC & 4.8 & 5.0 & 5.1 & 4.5 & 4.2 & 20.6 & 11.2 & 5.9 & 5.7 & 5.0\\
\hline
\multirow{3}{*}{200}
& Proposed & 4.6 & 4.7 & 4.8 & 4.9 & 4.3 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0\\
& CLX & 3.6 & 4.2 & 4.5 & 5.5 & 5.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
& LC & 5.4 & 3.2 & 4.6 & 4.8 & 5.3 & 50.5 & 22.2 & 8.0 & 7.6 & 7.3 \\
\hline
&&\multicolumn{10}{c}{Power under dense alternative}\\
&& \multicolumn{5}{c}{$\rho=0.2$} & \multicolumn{5}{c}{$\rho=0.3$}\\
\multirow{3}{*}{100}
& Proposed & 59.8 & 56.3 & 55.7 & 53.1 & 53.1 & 99.7 & 99.8 & 99.7 & 100.0 & 99.9 \\
& CLX & 13.9 & 8.9 & 8.1 & 6.9 & 6.6 & 51.5 & 45.7 & 38.3 & 31.9 & 27.2 \\
& LC & 60.7 & 63.2 & 64.8 & 62.4 & 63.3 & 99.7 & 99.8 & 100.0 & 99.9 & 99.8 \\
\hline
\multirow{3}{*}{200}
& Proposed & 98.6 & 99.3 & 99.3 & 98.8 & 98.9 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
& CLX & 46.5 & 40.1 & 30.9 & 28.0 & 25.3 & 99.8 & 99.9 & 100.0 & 99.8 & 99.9 \\
& LC & 98.6 & 99.3 & 99.0 & 99.1 & 98.9 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
\hline
\end{tabular}
\vspace{1.5ex}
{\small
Note: This table reports the frequencies of rejection by each method under the null and alternative hypotheses based on $1000$ independent replications at the significance level $5\%$.}
\end{table}
For each covariance model, we generate samples independently from $N_p (\mathrm{\bf 0}, \boldsymbol{\Sigma}^{*(i)})$ to evaluate the size, and use different covariance pairs described above to examine the power against dense and sparse alternatives. The empirical size and power are calculated based on 1,000 replications at significance level $5\%$ and the results are reported in Tables \ref{tab: standardnormal}, \ref{tab: Model23} and \ref{tab: Model45}.
\begin{table}[H]
\small
\centering
\caption{Comparison of Empirical Size and Power (\%) for Models (ii) and (iii) }\label{tab: Model23}
\vspace{1ex}
\begin{tabular}{ccrrrrr|rrrrr}
\hline
&& \multicolumn{5}{c}{Model (ii)} & \multicolumn{5}{c}{Model (iii)}\\
\hline
n&p& 100 & 200 & 500 & 800 & 1000 & 100 & 200 & 500 & 800 & 1000 \\
\hline
&&\multicolumn{10}{c}{Size}\\
\multirow{3}{*}{100}
& Proposed & 4.9 & 5.5 & 4.2 & 5.6 & 5.3 & 6.0 & 6.1 & 4.8 & 4.9 & 3.9 \\
& CLX & 4.6 & 5.4 & 4.9 & 5.5 & 4.5 & 4.5 & 4.4 & 5.1 & 4.6 & 4.0\\
& LC & 4.6 & 5.3 & 3.8 & 4.5 & 5.2 & 5.3 & 5.6 & 4.7 & 5.1 & 4.3\\
\hline
\multirow{3}{*}{200}
& Proposed & 6.5 & 5.4 & 4.1 & 3.8 & 4.3 & 6.3 & 6.5 & 4.8 & 4.1 & 4.9 \\
& CLX & 4.5 & 4.3 & 5.8 & 4.0 & 4.3 & 4.3 & 6.5 & 4.1 & 3.8 & 4.8 \\
& LC & 5.8 & 4.9 & 4.1 & 3.7 & 5.1 & 5.6 & 5.2 & 4.3 & 4.3 & 4.8 \\
\hline
&&\multicolumn{10}{c}{Power under sparse alternative}\\
\multirow{3}{*}{100}
& Proposed & 98.4 & 96.1 & 87.5 & 85.3 & 79.8 & 98.1 & 95.7 & 88.1 & 82.3 & 81.3 \\
& CLX & 98.8 & 97.7 & 92.3 & 90.2 & 85.9 & 98.7 & 97.5 & 91.3 & 88.0 & 86.6 \\
& LC & 19.7 & 11.4 & 6.8 & 5.8 & 5.7 & 20.0 & 11.6 & 6.6 & 5.4 & 5.3 \\
\hline
\multirow{3}{*}{200}
& Proposed & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0\\
& CLX & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
& LC & 50.1 & 22.5 & 8.7 & 7.2 & 6.1 & 53.7 & 23.0 & 10.1 & 6.9 & 6.0 \\
\hline
&&\multicolumn{10}{c}{Power under dense alternative}\\
\multirow{3}{*}{100}
& Proposed & 85.7 & 83.0 & 84.7 & 83.7 & 81.7 & 97.6 & 98.0 & 97.6 & 96.3 & 98.2 \\
& CLX & 15.9 & 11.7 & 7.0 & 7.7 & 6.2 & 36.0 & 27.5 & 21.5 & 17.0 & 14.8 \\
& LC & 88.5 & 87.7 & 89.6 & 89.2 & 89.8 & 97.9 & 98.5 & 98.5 & 97.4 & 99.1 \\
\hline
\multirow{3}{*}{200}
& Proposed & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0\\
& CLX & 59.6 & 50.4 & 37.5 & 33.7 & 31.1 & 90.7 & 91.8 & 87.7 & 86.1 & 83.6\\
& LC & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0\\
\hline
\end{tabular}
\vspace{1.5ex}
{\small
Note: This table reports the frequencies of rejection by each method under the null and alternative hypotheses based on $1000$ independent replications at the significance level $5\%$.}
\end{table}
\vspace{-2ex}
\begin{table}[H]
\small
\centering
\caption{Comparison of Empirical Size and Power (\%) for Models (iv) and (v) }\label{tab: Model45}
\vspace{1ex}
\begin{tabular}{ccrrrrr|rrrrr}
\hline
&& \multicolumn{5}{c}{Model (iv)} & \multicolumn{5}{c}{Model (v)}\\
\hline
n&p& 100 & 200 & 500 & 800 & 1000 & 100 & 200 & 500 & 800 & 1000 \\
\hline
&&\multicolumn{10}{c}{Size}\\
\multirow{3}{*}{100}
& Proposed & 9.8 & 9.5 & 10.4 & 9.6 & 9.3 & 5.7 & 5.2 & 4.0 & 4.8 & 4.3 \\
& CLX & 4.1 & 4.1 & 3.8 & 4.2 & 4.0 & 4.6 & 4.9 & 4.6 & 4.9 & 4.2 \\
& LC & 9.5 & 9.3 & 10.7 & 10.3 & 9.3 & 5.4 & 5.2 & 4.7 & 4.6 & 3.7 \\
\hline
\multirow{3}{*}{200}
& Proposed & 10.1 & 10.8 & 9.0 & 10.1 & 8.2 & 6.3 & 6.0 & 3.6 & 4.3 & 4.4 \\
& CLX & 3.2 & 4.5 & 3.0 & 3.4 & 4.8 & 5.1 & 4.0 & 3.7 & 4.6 & 4.3 \\
& LC & 8.8 & 10.6 & 9.0 & 10.7 & 8.2 & 5.7 & 5.2 & 4.1 & 3.8 & 5.0 \\
\hline
&&\multicolumn{10}{c}{Power under sparse alternative}\\
\multirow{3}{*}{100}
& Proposed & 97.6 & 96.4 & 88.1 & 84.8 & 81.5 & 99.9 & 85.2 & 78.9 & 72.5 & 86.7 \\
& CLX & 98.8 & 98.1 & 92.4 & 89.3 & 86.5 & 100.0 & 90.0 & 83.5 & 77.8 & 90.9 \\
& LC & 19.3 & 12.0 & 6.8 & 5.9 & 5.0 & 33.1 & 11.3 & 6.9 & 5.2 & 4.6 \\
\hline
\multirow{3}{*}{200}
& Proposed & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0\\
& CLX & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
& LC & 52.3 & 22.1 & 8.8 & 7.2 & 7.3 & 80.3 & 20.4 & 8.0 & 8.6 & 6.9\\
\hline
&&\multicolumn{10}{c}{Power under dense alternative}\\
\multirow{3}{*}{100}
& Proposed & 84.1 & 89.7 & 92.2 & 95.5 & 96.8 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0\\
& CLX & 57.4 & 62.8 & 67.3 & 76.3 & 76.4 & 34.9 & 14.0 & 6.9 & 5.3 & 5.1\\
& LC & 84.5 & 89.4 & 92.4 & 95.8 & 96.5 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
\hline
\multirow{3}{*}{200}
& Proposed & 98.9 & 98.7 & 99.8 & 99.9 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0\\
& CLX & 88.6 & 90.3 & 95.6 & 97.1 & 98.0 & 94.2 & 52.0 & 12.8 & 8.8 & 6.9\\
& LC & 99.1 & 98.9 & 99.9 & 99.8 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 & 100.0 \\
\hline
\end{tabular}
\vspace{1ex}
{\small
Note: This table reports the frequencies of rejection by each method under the null and alternative hypotheses based on $1000$ independent replications at the significance level $5\%$.}
\end{table}
The size and power comparisons in Tables \ref{tab: standardnormal}, \ref{tab: Model23} and \ref{tab: Model45} yield several noteworthy findings:
\begin{itemize}
\item[(1)] Under $H_0$, the sizes of all three tests stay close to the nominal level 0.05, except for Model (iv), in which both the LC test and our proposed test suffer from size distortion because the assumptions on the covariance matrices are violated.
\vspace{-0.5ex}
\item[(2)] As can be seen from Model (i), the CLX test is powerful under the sparse alternative $H_s$; however, its performance under the dense alternative is not satisfactory. Although the CLX test still has competitive power in Models (ii)-(iv), its power decays as the dimension grows in Model (v).
\vspace{-0.5ex}
\item[(3)] In the meantime, the LC test retains high power under the dense alternative $H_d$, whereas it performs poorly against the sparse alternative, with power decaying as the dimension $p$ grows.
\vspace{-0.5ex}
\item[(4)] In comparison, our proposed Fisher's combined test exhibits competitive performance in all cases: it performs as well as the CLX test under the sparse alternative and comparably to the LC test under the dense alternative.
\end{itemize}
In summary, the simulation results in this section show that the proposed Fisher test boosts the power substantially against more general alternatives while retaining the desired nominal significance level.
\section{Application to Gene-Set Testing}
We further demonstrate the power of the proposed test by applying it to identify gene-sets that potentially have significant differences in covariance matrices across different types of tumors. In biology, genes do not work individually; rather, they tend to function in groups to achieve complex biological tasks. Gene-sets are interpreted through Gene Ontology (GO) terms, which assign genes to a set of predefined bins according to their functional characteristics. The Gene Ontology covers three domains: biological process (BP), cellular component (CC) and molecular function (MF).
We consider the Acute Lymphoblastic Leukemia (ALL) data from the Ritz Laboratory at the Dana-Farber Cancer Institute (DFCI). The latest data are accessible through the ALL package (version 1.24.0) on the \href{https://www.bioconductor.org/}{\color{blue}{Bioconductor}} website, including the original version published by \cite{chiaretti2004gene}. The ALL dataset consists of microarray expression measures of 12,625 probes on the Affymetrix chip series HG-U95Av2 for 128 different individuals with acute lymphoblastic leukemia, a type of blood cancer in which the bone marrow produces an excess of immature white blood cells.
Based on the type of lymphocyte that the leukemia cells come from, the disease is classified into subgroups of T-cell ALL and B-cell ALL. In our study, we focus on a subset of the ALL data of 79 patients with the B-cell ALL. We are interested in two types of B-cell tumors: BCR/ABL and NEG, with sample sizes being 37 and 42 respectively.
Let us consider $K$ gene sets $S_1,\cdots, S_K$, and let $\boldsymbol{\Sigma}_{1S_k}$ and $\boldsymbol{\Sigma}_{2S_k}$ be the covariance matrices of the two types of tumors, respectively. The null hypotheses we are interested in are
$$H_{0,\text{category}}: \boldsymbol{\Sigma}_{1S_k} = \boldsymbol{\Sigma}_{2S_k},\quad k=1,\cdots,K,$$
where $\text{category} \in \{\text{BP}, \text{CC}, \text{MF}\}$, because we classify the gene sets into three different GO categories and shall test each GO category separately.
To control the computational costs, we first perform a pre-screening procedure following the same criteria as in \cite{dudoit2008multiple} by choosing those probes that satisfy (i) fluorescence intensities greater than 100 (absolute scale) for at least 25\% of the 79 cell samples; (ii) an interquartile range (IQR) of the fluorescence intensities across the 79 cell samples greater than 0.5 (log base 2 scale). This preliminary gene filtering retains 2,391 probes. We then identify those GO terms annotating at least 10 of the 2,391 filtered probes, which gives us 1,849 unique GO terms in the BP category, 306 in CC and 324 in MF for further analysis. Table \ref{tab: summary-gene-set} and Figure \ref{fig: plotgeneset} summarize the dimensions of the gene-sets contained in each category.
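As an illustration only, the pre-screening criteria above can be coded as follows; the expression matrix \texttt{expr} (probes by samples, absolute-scale intensities) is a hypothetical variable name and not part of the original analysis scripts.
\begin{verbatim}
# Sketch of the probe pre-screening step described above.
import numpy as np

def prescreen(expr, intensity_thresh=100.0, frac=0.25, iqr_thresh=0.5):
    expr = np.asarray(expr, dtype=float)
    # (i) intensity > 100 (absolute scale) for at least 25% of samples
    keep_intensity = (expr > intensity_thresh).mean(axis=1) >= frac
    # (ii) IQR of log2 intensities across samples > 0.5
    log_expr = np.log2(expr)
    q75, q25 = np.percentile(log_expr, [75, 25], axis=1)
    keep_iqr = (q75 - q25) > iqr_thresh
    return keep_intensity & keep_iqr  # boolean mask over probes
\end{verbatim}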
\vspace{-2ex}
\begin{table}[H]
\small
\centering
\caption{Summary of the Dimension of Gene-sets for Three GO Categories}\label{tab: summary-gene-set}
\vspace{1ex}
\begin{tabular}{c|c|ccccc}
\hline
GO Category & Total number & Min & 1st Quartile & Median & 3rd Quartile & Max \\
\hline
BP & 1849 & 10 & 15 & 27 & 62 & 2153 \\ \hline
CC & 306 & 10 & 17 & 32 & 85 & 2181 \\ \hline
MF & 324 & 10 & 14 & 26 & 68 & 2148 \\
\hline
\end{tabular}
\end{table}
\vspace{-3ex}
\begin{figure}[H]
\centering
\caption{Histograms of the Dimension of Gene-sets for Three GO Categories}\label{fig: plotgeneset}
\vspace{1ex}
\includegraphics[width=\textwidth, height = 2.3in]{plotGeneset.pdf}
\end{figure}
We first take a look at the performance of the CLX test and the LC test. Figure \ref{fig: plotstat} displays boxplots of both test statistics. It can be observed that the two test statistics have quite different magnitudes, which indicates the difficulty of combining them through a weighted sum.
\vspace{-2ex}
\begin{figure}[H]
\centering
\caption{Boxplots of the LC and CLX Test Statistics for Three GO Categories}\label{fig: plotstat}
\vspace{1ex}
\includegraphics[width=5in, height=3in]{plotstat.pdf}
\end{figure}
We then apply our proposed Fisher's method to test the hypothesis, together with comparisons to the CLX and LC tests. We also compare our test with the natural Bonferroni combination. The test outcomes are reported in Table \ref{tab: realdata}, with nominal level $\alpha=0.05$ for each test. Furthermore, in order to control the false discovery rate (FDR), we apply the Benjamini-Hochberg (BH) procedure \citep{benjamini1995controlling} to each GO category, and the results are listed in Table \ref{tab: realdataBH}, with nominal level $\alpha=0.05$ for every category.
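For reference, a minimal sketch of the Benjamini-Hochberg step applied to the per-gene-set $p$-values within one GO category is shown below; it is a generic implementation of the procedure, not code from the original study.
\begin{verbatim}
# Benjamini-Hochberg procedure for one GO category.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index meeting the bound
        rejected[order[: k + 1]] = True    # reject all smaller p-values
    return rejected
\end{verbatim}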
\begin{table}[H]
\small
\centering
\caption{Gene-Set Testing Results at the Nominal Level $\alpha=0.05$} \label{tab: realdata}
\vspace{1ex}
\begin{tabular}{c|c|cccc}
\hline
\multirow{2}{*}{GO Category} & Total number & \multicolumn{4}{c}{Number of Significant Gene-sets} \\
\cline{3-6}
& of Gene-sets & \hspace{0.3cm} CLX \hspace{0.3cm} & \hspace{0.3cm} LC \hspace{0.4cm} & Bonferroni & Proposed \\
\hline
BP & 1849 & 297 & 505& 451 & 615 \\ \hline
CC & 306 & 52 & 111 & 96 & 116 \\ \hline
MF & 324 & 38 & 78 & 61 & 96\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\small
\centering
\caption{Gene-Set Testing Results with the FDR Control at $\alpha=0.05$}\label{tab: realdataBH}
\vspace{1ex}
\begin{tabular}{c|c|cccc}
\hline
\multirow{2}{*}{GO Category} & Total number & \multicolumn{4}{c}{Number of Significant Gene-sets} \\
\cline{3-6}
& of Gene-sets & \hspace{0.3cm} CLX \hspace{0.3cm} & \hspace{0.3cm} LC \hspace{0.4cm} & Bonferroni & Proposed \\
\hline
BP &1849 & 0 & 126 & 81 & 254 \\ \hline
CC & 306 & 0 & 55 & 24 &68 \\ \hline
MF & 324 & 0 & 20 & 4 & 26\\
\hline
\end{tabular}
\end{table}
\begin{comment}
\begin{table}[H]
\centering
\caption{Test results with nominal level $\alpha=0.01$} \label{tab: realdata0.01}
\vspace{1ex}
\begin{tabular}{c|c|ccccc}
\hline
\multirow{2}{*}{GO Category} & Total number & \multicolumn{4}{c}{Number of Significant Gene-sets} \\
\cline{3-6}
& of Gene-sets & Maximum & Quadratic & Bonferroni & Fisher \\
\hline
BP & 1849 & 69 & 207 & 174 & 299 \\ \hline
CC & 306 & 10 & 56 & 37 & 65 \\ \hline
MF & 324 & 9 & 35 & 27 & 34\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Test results (BH) with nominal level $\alpha=0.01$}\label{tab: realdataBH0.01}
\vspace{1ex}
\begin{tabular}{c|c|ccccc}
\hline
\multirow{2}{*}{GO Category} & Total number & \multicolumn{4}{c}{Number of Significant Gene-sets} \\
\cline{3-6}
& of Gene-sets & Maximum & Quadratic & Bonferroni & Fisher \\
\hline
BP &1849 & 0 & 56 & 35 & 84 \\ \hline
CC & 306 & 0 & 4 & 4 & 13 \\ \hline
MF & 324 & 0 & 2 & 2 & 4 \\
\hline
\end{tabular}
\end{table}
\newpage
\end{comment}
As shown in Table \ref{tab: realdataBH}, our proposed test identifies many more significant gene-sets than the other methods. The LC test identifies a moderate number, while the Bonferroni combination identifies fewer significant gene-sets than the LC test does. This illustrates that the Bonferroni test is relatively conservative, which is consistent with what we expect. Unfortunately, the CLX test fails to declare any significance after we control the FDR using the BH procedure. This is possibly because the signals in the differences are not strong enough for the CLX test to detect.
Biological evidence supports that such an improvement is meaningful and helpful in cancer research. To clarify this, we further investigate those gene-sets that are not declared significant by the CLX and LC tests but are identified by our proposed Fisher test. Taking the GO term ``GO:0005905" as an example, it refers to the clathrin-coated pit, which functions in the cellular component (CC) gene ontology category. Protein evidence by \cite{ezkurdia2014multiple} confirms that the clathrin-coated pit works with several protein-coding genes, such as CLTCL1 and PICALM, that are closely related to human cancers. We also take a closer look at ``GO:0035259", the glucocorticoid receptor binding, in the molecular function (MF) gene ontology category. Many genes contribute to this gene-set; among them, we pay special attention to STAT3, a protein-coding gene which plays an important role in the immune system by transmitting signals for the maturation of immune system cells, especially T-cells and B-cells. Researchers have observed that STAT3 gene mutations are highly correlated with cancers, especially blood cancers \citep{hodge2005role,jerez2012stat3,haapaniemi2015autoimmunity,milner2015early}. In short, our proposed test incorporates the information from the CLX statistic, which successfully enhances the power over the LC test, even when the LC test itself does not declare significance.
\section{Conclusion}
This paper studies the fundamental problem of testing high-dimensional covariance matrices. Unlike the existing quadratic form statistics, maximum form statistics, and their weighted combination, we provide a new perspective to exploit the full potential of quadratic form statistics and maximum form statistics. We propose a scale-invariant and computationally efficient power enhancement test based on Fisher's method to combine their respective $p$-values. Theoretically, after deriving their joint limiting null distribution, we prove that the proposed combination method retains the correct asymptotic size and boosts the power against more general alternatives. Numerically, we demonstrate the finite-sample properties in simulation studies and the practical relevance through an empirical study on the gene-set testing problem.
It is still an open question to relax the Gaussian assumption when deriving the asymptotic joint distribution of quadratic form statistics and maximum form statistics in the two-sample covariance tests. There are several potential directions to relax the Gaussian assumption. For instance, we may use the semiparametric Gaussian copula distribution \citep{liu2012high,xue2012regularized} and study the nonparametric tests. Alternatively, we may use the Gaussian approximation theory to bridge this gap. We will leave this open question for future work.
{
\bibliographystyle{agsm}
There exist a number of works in the literature that have tackled continuous planning for IMU-aided systems. A common optimization-based approach generates minimum-snap trajectories for quadrotor systems~\cite{mellinger2011minimum}. The approach generates trajectories that minimize the square of the norm of the fourth derivative of position (snap) using a \emph{Quadratic Program}~(QP). The works in \cite{mellinger2012mixed,burke2021fast} improve the numerical stability of the underlying QP and make it possible to have long-range trajectories composed of many segments over a finite time. Our regression method based on GPs can also deal with long-range trajectories and, in addition, provides their associated uncertainty.
Another common method of generating continuous trajectories is through parametric polynomial curves called Bezier curves. In~\cite{yassine2022robust}, the authors compare the control performance of continuous trajectories generated by Bezier curves and B-splines. A major drawback of using Bezier curves is that they do not consider the dynamics of the system in the interpolation. In \cite{hitz2017adaptive}, the authors use B-splines for continuous trajectories within an \emph{Informative Path Planning}~(IPP) framework to find an efficient path that maximizes the information collected by the robot while respecting time and energy constraints~\cite{wakulicz2022informative}. Similarly, the authors in \cite{bahnemann2017sampling} and \cite{usayiwevu2020information} employ sampling-based motion planning to explore the space of possible beliefs and find a maximally informative trajectory within a user-defined budget to reduce model parameter uncertainty. Although this is similar to our work, in that we use IPP to maximize information and reduce localization uncertainty, we prioritize IMU bias convergence in order to improve the quality of the bias estimation and ultimately improve state estimation.
GPs are used in motion planning in \cite{mukadam2018continuous} and \cite{marchant2014bayesian}. In \cite{mukadam2018continuous}, the GP representation is tightly coupled to the gradient descent-based optimization algorithm for full state estimation. Our proposed framework uses GP regression in a loosely coupled manner that allows the planner to focus on reducing bias uncertainty. The authors in \cite{marchant2014bayesian} use GP regression for space-time modelling and then apply Bayesian Optimization to estimate the best path for collecting new observations in an exploratory manner. Unlike their work, we use GP regression to interpolate between sample points from our planner, which finds the best trajectory to improve localization accuracy. Additionally, applying linear operators to the kernel function of the underlying position GP allows our method to infer the first and second derivatives.
Inertial-based active localization, active SLAM, navigation and exploration can be found in \cite{qin2019autonomous}, \cite{elisha2017active} and \cite{papachristos2017autonomous}. These works provide approaches in which the actions the robot takes and the measurements it collects are the most efficient for reducing localization uncertainty. The authors of \cite{liu2005minima} improve localization accuracy for inertial-aided systems by reducing the noise level in the raw acceleration measurements. Despite the direct link between the localization estimates and the IMU bias estimates, most approaches take a passive approach to handling the bias errors, where the estimation is carried out to find the robot location and bias with the same importance on each task. To the best of our knowledge, no other work exploits IMU bias convergence to guide planning for improved localization estimates. The closest work to ours is presented in \cite{elisha2017active}, where the authors propose an active calibration framework for intrinsic parameters such as the IMU bias, which allows the robot to select the best path to improve its state estimation. The main distinction between our work and theirs is that their approach is based on belief space planning, while we employ an IPP algorithm that prioritizes convergence of the IMU biases to improve localization accuracy. Our novel approach employs an adaptive technique in which the bias uncertainty guides the planning before the bias uncertainties converge, and the localization uncertainty guides it afterwards.
\section{Problem Statement and Overview}
\subsection{Inertial Based Systems}
Consider an inertial-aided system whose state can be estimated by any probabilistic estimation framework, with the state modelled as a multivariate Gaussian distribution $\mathcal{N}(\mathbf{x},\,\mathbf{P})$. The mean state is defined as
$ \mathbf{x} = (\mathbf{r, v, R, }\mathbf{b}_f, \mathbf{b}_w, \mathbf{c, z}),$
where $\mathbf{r}$ is the position of the IMU in the world frame $W$, $\mathbf{v}$ is the velocity of the IMU in $W$, $\mathbf{R}\in SO(3)$ is the orientation (represented as a rotation matrix from $W$ to the IMU frame $I$), $\mathbf{b}_f$ is the additive bias on the accelerometer, $\mathbf{b}_w$ is the additive bias on the gyroscope, $\mathbf{c}$ and $\mathbf{z}$ are the linear and rotational parts of the extrinsic parameters between the IMU and the exteroceptive sensor used, and $\mathbf{P}$ is the state covariance matrix.
The process model and measurement model of the additional exteroceptive sensor are defined as
\begin{align}
\mathbf{\dot{x}}(t) &= f(\mathbf{x}(t),\mathbf{u}(t), \boldsymbol{\epsilon}(t) ) \\
\mathbf{y}(t) &= h(\mathbf{x}(t), \boldsymbol{\upsilon}(t)) ,
\end{align}
where $\mathbf{u}(t)$ is the control input, the process noise is $\boldsymbol{\epsilon}(t) \sim \mathcal{N}(0,\,\boldsymbol{\Sigma}_{\epsilon}(t))$ and the measurement noise is $\boldsymbol{\upsilon}(t) \sim \mathcal{N}(0,\,\boldsymbol{\Sigma}_{\upsilon}(t))$.
The IMU provides linear acceleration $\tilde{\mathbf{f}}(\mathbf{t}_{i})$ and angular velocity measurements $\tilde{\boldsymbol{\omega}}(\mathbf{t}_{i})$ at times $\mathbf{t}_{i}$, $i = 1,\dots,T$, in the inertial reference frame. The linear acceleration of the IMU in $W$ is denoted as $\mathbf{f}_{W}$, while $\boldsymbol{\omega}$ is the angular velocity of the IMU frame relative to $W$. The relationship between the IMU measurements and $\mathbf{f}_{W}(\mathbf{t}_{i})$ and $\boldsymbol{\omega}(\mathbf{t}_{i})$ is given by
\begin{align}
\tilde{\mathbf{f}}(t) &= \mathbf{R}_{W}^{t}(t)^{\top}(\mathbf{f}_{W}(t)- \mathbf{g}) + \mathbf{b}_{f}(t) + \boldsymbol{\eta_{f}}(t) \\
\tilde{\boldsymbol{\omega}}(t) &= \boldsymbol{\omega}(t) + \mathbf{b}_{\omega}(t) + \boldsymbol{\eta_{\omega}}(t)\,,
\end{align}
where $\mathbf{g}$ is the gravity vector in $W$, and $\boldsymbol{\eta}_{f}$ and $\boldsymbol{\eta}_{\omega}$ are zero-mean Gaussian sensor noises with
covariance matrices $\Sigma_{\eta_{f}}$ and $\Sigma_{\eta_{\omega}}$ for the linear accelerations and angular velocities, respectively.
At time $t$ given an IMU, the system kinematics $f(\mathbf{x}(t),\mathbf{u}(t), \boldsymbol{\epsilon}(t))$ is given by:
\begin{align}
\dot{\mathbf{R}}_{W}^{t}(t) &= \mathbf{R}_{W}^{t}(t) (\tilde{\boldsymbol{\omega}}(t) - \mathbf{b}_{\omega}(t) - \boldsymbol{\eta}_{\omega}(t))^{\wedge}\\
\dot{\mathbf{v}}_{W}^{t}(t) &= \mathbf{R}_{W}^{t}(t) (\tilde{\mathbf{f}}(t) - \mathbf{b}_{f}(t) - \boldsymbol{\eta}_{f}(t)) + \mathbf{g} \\
\dot{\mathbf{r}}_{W}^{t}(t) &= \mathbf{v}_{W}^{t}(t)\,,
\end{align}
and the IMU sensor biases modelled by a Brownian motion,
\begin{align}
\dot{\mathbf{b}}_{f}(t) &= \boldsymbol{\eta}_{\mathbf{b}_{f}}(t) \\
\dot{\mathbf{b}}_{\omega}(t) &= \boldsymbol{\eta}_{\mathbf{b}_{\omega}}(t)\,,
\end{align}
where $\boldsymbol{\eta}_{\mathbf{b}_{f}}$ and $\boldsymbol{\eta}_{\mathbf{b}_{\omega}}$ are zero-mean Gaussian noise of the accelerometer and gyroscope biases, with variances given by $\Sigma_{b_f}$ and $\Sigma_{b_{\omega}}$ respectively.
Thus, the control input is given by,
\begin{align}
\mathbf{u}(t) =
\begin{bmatrix}
\tilde{\mathbf{f}}(t) - \boldsymbol{\eta_{f}}(t) \\
\tilde{\boldsymbol{\omega}}(t) - \boldsymbol{\eta_{\omega}}(t)
\end{bmatrix}
\quad
\boldsymbol{\epsilon}(t) =
\begin{bmatrix}
\boldsymbol{\eta}_{\mathbf{b}_{f}}(t) \\
\boldsymbol{\eta}_{\mathbf{b}_{\omega}}(t) \\
\end{bmatrix}
.
\end{align}
Note the symbol $^\wedge$ is the skew-symmetric matrix operator that transforms a $3\times1$ vector to a $3\times3$ matrix as
\begin{align}
\boldsymbol{\omega}^\wedge =
\begin{bmatrix}
\omega_{1}\\
\omega_{2}\\
\omega_{3}\\
\end{bmatrix}
^\wedge
=
\begin{bmatrix}
0&-\omega_{3}&\omega_{2}\\
\omega_{3}&0&-\omega_{1}\\
-\omega_{2}&\omega_{1}&0\\
\end{bmatrix}\,.
\end{align}
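For concreteness, a minimal sketch of a single first-order integration step of the kinematics above, using bias-corrected IMU measurements, is given below; the function and variable names are illustrative and do not come from any particular estimation framework.
\begin{verbatim}
# One Euler step of the IMU kinematics; names are illustrative only.
import numpy as np

def skew(w):
    # The ^ operator: map a 3-vector to its skew-symmetric matrix.
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def imu_step(R, v, r, f_meas, w_meas, b_f, b_w, dt,
             g=np.array([0.0, 0.0, -9.81])):
    w = w_meas - b_w                          # bias-corrected angular rate
    f = f_meas - b_f                          # bias-corrected specific force
    a = R @ f + g                             # linear acceleration in W
    R_next = R @ (np.eye(3) + skew(w) * dt)   # first-order update of R' = R w^
    v_next = v + a * dt
    r_next = r + v * dt
    return R_next, v_next, r_next
\end{verbatim}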
\subsection{Problem formulation}
\label{sec:problem formulation}
Given an inertial-aided system and an associated estimation framework moving in an unknown environment, the aim is to find the continuous optimal trajectory $\boldsymbol{\pi}^{*}$ of the system, in the space of all trajectories $\psi$ for maximum gain in the information-theoretic measure,
\begin{align}\label{eq:ipp}
\boldsymbol{\pi^{*}} &= \underset{\pi \in \psi}{\text{argmax}} \;
\frac{\text{I}[\text{M}(\pi)]}{\text{T}(\pi)},\\
\text{s.t.}
&\ C(\pi) \leq B \nonumber,
\end{align}
where~$\text{I}[\cdot]$ is the utility function that evaluates the information gain in localization. The
function $\text{M}(\cdot)$ obtains discrete sensor measurements along the trajectory $\boldsymbol{\pi}$ with $\text{T}(\cdot)$ as corresponding travel time. The cost of the path $\text{C}(\cdot)$ given by the planner cannot exceed a predefined budget $\text{B}$. The utility function above is formulated to compute the expected reduction in IMU biases uncertainty and robot localization uncertainty.
\subsection{Overview}
We propose an Informative Path Planning framework, as described in Section~\ref{sec:problem formulation}, that directly takes into account the impact of the biases $\mathbf{b}_f$ and $\mathbf{b}_\omega$ embedded in the IMU measurements $\tilde{\mathbf{f}}$ and $\tilde{\boldsymbol{\omega}}$ in order to maximize the localization information gain, in other words, to minimize the localization uncertainty. Given a trajectory, a state estimation framework is used to generate a prior map of the environment~$\mathcal{E}$ and to produce initial estimates of the state and its associated covariance. The last state of the prior trajectory is set as the start node for the planning algorithm. An RRT$^{*}$ planner is used to build a decision tree by sampling in the linear position and orientation space. GP regression is then used to connect two nodes and propagate uncertainties to evaluate the proposed IPP metric. The planner favours poses that provide the most \emph{excitation} in the acceleration and angular velocity space, which helps the IMU biases converge more quickly and produces more accurate localization estimates. Note that our planner can work with any filter-based inertial-aided estimation framework, as we show in the experiments section using two existing frameworks.
\section{GPs for Continuous Trajectories} \label{sec:GP}
A \emph{Gaussian Process} (GP) is a collection of random variables, any finite number of which have a joint Gaussian distribution \cite{rasmussen2003gaussian}. It is completely specified by its mean function~$\mu(\boldsymbol{t})$ and covariance function~$k(\boldsymbol{t,t'})$ for a real function~$\xi: \mathbb{R}^d \mapsto \mathbb{R}^{s}$
\begin{align}
\mu(\mathbf{t}) &= \E[\xi(\mathbf{t})] \\
k(\mathbf{t,t'}) &= \E[(\xi(\mathbf{t}) - \mu(\mathbf{t}))(\xi(\mathbf{t'}) - \mu(\mathbf{t'}))]
.
\end{align}
In this work, we consider a zero-mean GP defined over time, which is used to generate continuous linear position, velocity and acceleration trajectories, as well as angular positions and velocities, used in a state estimation framework. Our GP is defined as
\begin{align}
\xi(\mathbf{t}) &\sim \mathcal{GP}(0 , k(\mathbf{t,t'})) \\
\boldsymbol{\gamma}_{i} &= \xi(\mathbf{t}_{i})+ \mathbf{e}_{i}
,
\end{align}
where~$\boldsymbol{\gamma}_{i}, \mathbf{e}_{i} \in \mathbb{R}^{s}$ and the joint covariance of errors $\mathbf{e} =\mathbf{(e_1,e_2,...,e_n)}$ is assumed to be given by the matrix $\Sigma_{\mathbf{e}}$.
Given a sequence of position waypoints $\boldsymbol{\gamma} =(\boldsymbol{\gamma}_{1}, \boldsymbol{\gamma}_{2},\dots,\boldsymbol{\gamma}_{n})$, the joint distribution of the observed position waypoints and the continuous position values at the test locations can be written as
\begin{align}
\begin{bmatrix}
\boldsymbol{\gamma}\\
\boldsymbol{\xi}_*
\end{bmatrix}
\sim
\mathcal{N}
\begin{pmatrix}
\mathbf{0},
\begin{bmatrix}
K(\mathbf{t,t}) + \Sigma_{\mathbf{e}} & K(\mathbf{t,t}_{*}) \\
K(\mathbf{t_{*},t}) & K(\mathbf{t_{*},t_{*}})
\end{bmatrix}
\end{pmatrix}
.
\end{align}
The position posterior mean and covariance are given by the predictive Gaussian process regression as,
\begin{align}
\bar{\boldsymbol{\xi}}_* &= \E[\boldsymbol{\xi_{*}}|\mathbf{t},\boldsymbol{\gamma},\mathbf{t}_*] = K(\mathbf{t_*,t})[K(\mathbf{t,t}) + \Sigma_{\mathbf{e}} ]^{-1}\boldsymbol{\gamma} \nonumber \\
\text{cov}(\boldsymbol{\xi}_{*}) &= K(\mathbf{t_*,t_*}) - K(\mathbf{t_*,t})[K(\mathbf{t,t})+ \Sigma_{\mathbf{e}} ]^{-1}K(\mathbf{t,t}_*)
.\nonumber
\end{align}
With $\mathbf{t} =
\begin{bmatrix}
t_{1} & t_{2} & \dots & t_{n}\\
\end{bmatrix}^{\top}$,
\begin{align}
K(\mathbf{t_{*},t}) &=
\begin{bmatrix}
k(t_{1}, t_{1}) & k(t_{1},t_{2}) & \dots & k(t_{1},t_{n})
\end{bmatrix}, \\
K(\mathbf{t,t_{*}}) &= K(\mathbf{t_{*},t})^{\top}
\end{align}
and
\begin{align}
K(\mathbf{t,t}) =
\begin{bmatrix}
k(t_{1}, t_{1}) & k(t_{1},t_{2}) & \dots & k(t_{1},t_{n}) \\
k(t_{2}, t_{1}) & k(t_{2},t_{2}) & \dots & k(t_{2},t_{n})\\
\vdots & \vdots & \ddots & \vdots\\
k(t_{n},t_{1}) & k(t_{n},t_{2}) & \dots & k(t_{n},t_{n})
\end{bmatrix}.
\end{align}
Suppose we want to use this model for inferring velocities and accelerations. GPs are adept at predicting not only the posterior mean and covariances of the function values but their derivatives as well \cite{sarkka2011linear}. This is because differentiation is a linear operator on the space of functions, and hence the derivative of a GP is another GP. Consequently, the velocity and acceleration functions obtained by applying linear operators to the position function are GPs as well. We choose the Squared Exponential (SE) kernel given that it is analytically infinitely differentiable.
Consider the linear operator $\mathcal{L}^t$ applied on the function $\boldsymbol{\xi(t)}$ as follows,
\begin{align}
\boldsymbol{\phi(t)} &= \mathcal{L}^t_{\phi}\boldsymbol{\xi(t)}\\
\boldsymbol{\zeta(t)} &= \mathcal{L}^t_{\zeta}\boldsymbol{\phi(t)} = \mathcal{L}^t_{\zeta}\mathcal{L}^t_{\phi}\boldsymbol{\xi(t)} \label{eq:operator}
,
\end{align}
where $\mathcal{L}^{t}$ = $\mathbf{d(t)}$ is the derivative operator.
Note that the linear operators~$\mathcal{L}^{t}$ are not matrix multiplications; they can be thought of as operators acting on a function and returning another function with the same input domain as the input function $\phi: \mathbb{R}^d \mapsto \mathbb{R}^{s}$. When the operator is applied twice to the kernel (e.g., $\mathcal{L}^t_{\zeta}\mathcal{L}^t_{\phi}\boldsymbol{\xi(t)}$ in \eqref{eq:operator}), it is analogous to taking the partial derivative of $\boldsymbol{\xi(t)}$ with respect to $\mathbf{t}$ twice. Consequently, $\boldsymbol{\phi(t)}$ and $\boldsymbol{\zeta(t)}$ are GPs of the first and second derivative functions, respectively. Linear operators can also be applied on the right-hand side of the kernel function, as in $\mathbf{\mathcal{L}}_{\phi}^{t_{*}} K(\mathbf{t_*,t})\mathbf{\mathcal{L}}_{\phi}^{t}$, which is not synonymous with right multiplication by a matrix in linear algebra. The right application indicates that the operator acts on the second argument of the kernel function.
Given a training data including waypoints~$\boldsymbol{\gamma}$, velocity~$\mathbf{\dot{\boldsymbol{\gamma}}}$, and acceleration $\mathbf{\ddot{\boldsymbol{\gamma}}}$, a linear functional can be applied to the kernel matrix to incorporate derivative observations as,
\begin{align}
\boldsymbol{\gamma}_{i} &= \xi(\mathbf{t})+ \mathbf{e}_{i} \\
\dot{\boldsymbol{\gamma}_{i}} &= \mathbf{\mathcal{H}}_{\gamma}^t \xi(\mathbf{t})+ \mathbf{e}_{i} \\
\ddot{\boldsymbol{\gamma}_{i}} &= \mathbf{\mathcal{H}}_{\gamma}^t \mathbf{\mathcal{H}}_{\gamma}^t \xi(\mathbf{t})+ \mathbf{e}_{i}
,
\end{align}
where $\mathcal{H}$ is a deterministic linear functional for estimating the linear operator transformation of the signal $\mathbf{d(t)}$. Linear functionals are similar to linear operators, but they output vectors or matrices instead of functions.
Through the application of a combination of linear operators and functionals to the kernel function of the underlying position GP function, we can conduct inference in the velocity and acceleration space (see Fig.~\ref{fig:pos_vel_acc}). Constraints in the velocity and acceleration space, which enforce continuity at the start and end of each segment in these spaces can also be included in the measurement (waypoints) vector and multiplied properly with the kernel function through linear functionals. The inference of $\boldsymbol{\bar{\xi}_{*},\bar{\phi}_{*}}$ and $\boldsymbol{\bar{\zeta}_{*}}$ with measurements in the position, velocity and acceleration spaces is given by,
\begin{align}\label{eq:gp_mat}
\begin{bmatrix}
\bar{\boldsymbol{\xi_{*}}}\\
\bar{\boldsymbol{\phi_{*}}}\\
\bar{\boldsymbol{\zeta_{*}}}\\
\end{bmatrix}
=
\begin{bmatrix}
m_{1,1}&m_{1,2}&m_{1,3}\\
m_{2,1}&m_{2,2}&m_{2,3}\\
m_{3,1}&m_{3,2}&m_{3,3}\\
\end{bmatrix}
*
\begin{bmatrix}
\boldsymbol{\gamma}\\
\dot{\boldsymbol{\gamma}}\\
\ddot{\boldsymbol{\gamma}}\\
\end{bmatrix}
.
\end{align}
Each of the terms $m_{i,j}$ in the matrix above is defined by applying the linear operators to the GP kernel in the position space, to infer both linear and angular positions, velocities and accelerations.
Position inference:
\begin{align}
m_{1,1} &= K(\mathbf{t_*,t})[K(\mathbf{t,t}) + \Sigma_{\mathbf{e}}]^{-1} \nonumber \\
m_{1,2} &= K(\mathbf{t_*,t}) \mathbf{\mathcal{H}}_{\gamma}^t [\mathbf{\mathcal{H}}_{\gamma}^t K(\mathbf{t,t})\mathbf{\mathcal{H}}_{\gamma}^t + \Sigma_{\mathbf{e}}]^{-1} \nonumber \\
m_{1,3} &= K(\mathbf{t_*,t}) \mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t [\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t K(\mathbf{t,t}) \mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t + \Sigma_{\mathbf{e}}] ^{-1}\nonumber
,
\end{align}
Velocity inference:
\begin{align}
m_{2,1} &= \mathbf{\mathcal{L}}_{\phi}^{t_{*}} K(\mathbf{t_*,t})[K(\mathbf{t,t}) + \Sigma_{\mathbf{e}}]^{-1} \nonumber \\
m_{2,2} &= \mathbf{\mathcal{L}}_{\phi}^{t_{*}} K(\mathbf{t_*,t})\mathbf{\mathcal{H}}_{\gamma}^t [\mathbf{\mathcal{H}}_{\gamma}^t K(\mathbf{t,t})\mathbf{\mathcal{H}}_{\gamma}^t + \Sigma_{\mathbf{e}}]^{-1} \nonumber \\
m_{2,3} &= \mathbf{\mathcal{L}}_{\phi}^{t_{*}} K\mathbf{(t_*,t)}\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t[\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^tK(\mathbf{t,t})\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t + \Sigma_{\mathbf{e}}]^{-1} \nonumber
,
\end{align}
Acceleration inference:
\begin{align}
m_{3,1} &= \mathbf{\mathcal{L}}_{\zeta}^{t_{*}}\mathbf{\mathcal{L}}_{\phi}^{t_{*}} K(\mathbf{t_*,t})[K(\mathbf{t,t}) + \Sigma_{\mathbf{e}}]^{-1} \nonumber \\
m_{3,2} &= \mathbf{\mathcal{L}}_{\zeta}^{t_{*}}\mathbf{\mathcal{L}}_{\phi}^{t_{*}} K(\mathbf{t_*,t})\mathbf{\mathcal{H}}_{\gamma}^t [\mathbf{\mathcal{H}}_{\gamma}^t K(\mathbf{t,t})\mathbf{\mathcal{H}}_{\gamma}^t + \Sigma_{\mathbf{e}}]^{-1} \nonumber \\
m_{3,3} &= \mathbf{\mathcal{L}}_{\zeta}^{t_{*}} \mathbf{\mathcal{L}}_{\phi}^{t{*}} K(\mathbf{t_*,t})\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t[\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^tK(\mathbf{t,t})\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t + \Sigma_{\mathbf{e}}]^{-1} \nonumber
.
\end{align}
The covariances are given by;
\begin{align}
\text{cov}(\boldsymbol{\xi_{*}}) &= K(\mathbf{t_*,t_*}) - K(\mathbf{t_*,t})[K(\mathbf{t,t}) + \Sigma_{\mathbf{e}}]^{-1} K(\mathbf{t,t_*}) \nonumber\\
\text{cov}(\boldsymbol{\phi_{*}}) &= \mathbf{\mathcal{L}}_{\phi}^{t_{*}} K(\mathbf{t_*,t_*})\mathbf{\mathcal{L}}_{\phi}^{t_{*}} - \mathbf{\mathcal{L}}_{\phi}^{t_{*}} K(\mathbf{t_*,t})\mathbf{\mathcal{H}}_{\gamma}^t [\mathbf{\mathcal{H}}_{\gamma}^t K(\mathbf{t,t})\mathbf{\mathcal{H}}_{\gamma}^t \nonumber\\
& + \Sigma_{\mathbf{e}}]^{-1} \mathbf{\mathcal{H}}_{\gamma}^t K(\mathbf{t,t_*})\mathbf{\mathcal{L}}_{\phi}^{t_{*}} \nonumber\\
\text{cov}(\boldsymbol{\zeta_{*}}) &= \mathbf{\mathcal{L}}_{\zeta}^{t_{*}} \mathbf{\mathcal{L}}_{\phi}^{t_{*}} K(\mathbf{t_*,t_*})\mathbf{\mathcal{L}}_{\zeta}^{t_{*}} \mathbf{\mathcal{L}}_{\phi}^{t_{*}} \nonumber\\
&- \mathbf{\mathcal{L}}_{\zeta}^{t_{*}} \mathbf{\mathcal{L}}_{\phi}^{t_{*}} K(\mathbf{t_*,t})\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t
[\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^tK(\mathbf{t,t})\mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t + \Sigma_{\mathbf{e}}]^{-1} \nonumber \\
& \mathbf{\mathcal{H}}_{\gamma}^t\mathbf{\mathcal{H}}_{\gamma}^t K(\mathbf{t,t}_*)\mathbf{\mathcal{L}}_{\zeta}^{t_{*}} \mathbf{\mathcal{L}}_{\phi}^{t_{*}}
.
\end{align}
Note that the substitution $\mathbf{t_{*} = t}$ is made for all instances of $\mathbf{t}_{*}$ after all the operations have been performed. In the equations above, $\mathbf{t}_{*}$ is used to remove ambiguity about which variable the operator is applied to.
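To illustrate the simplest case of the machinery above (the terms $m_{1,1}$, $m_{2,1}$ and $m_{3,1}$, i.e., inference from position waypoints alone), a minimal sketch using the SE kernel and its time derivatives is given below; the hyperparameter values and variable names are illustrative assumptions.
\begin{verbatim}
# Sketch: infer position, velocity and acceleration at query times from
# position waypoints only, via time derivatives of an SE kernel.
import numpy as np

def se_kernel(t1, t2, sigma=1.0, ell=1.0):
    d = t1[:, None] - t2[None, :]
    return sigma**2 * np.exp(-0.5 * d**2 / ell**2)

def se_kernel_d1(t1, t2, sigma=1.0, ell=1.0):
    # First derivative with respect to the first argument.
    d = t1[:, None] - t2[None, :]
    return -(d / ell**2) * se_kernel(t1, t2, sigma, ell)

def se_kernel_d1d1(t1, t2, sigma=1.0, ell=1.0):
    # Second derivative with respect to the first argument.
    d = t1[:, None] - t2[None, :]
    return (d**2 / ell**4 - 1.0 / ell**2) * se_kernel(t1, t2, sigma, ell)

def infer(t_train, gamma, t_query, noise=1e-4):
    K = se_kernel(t_train, t_train) + noise * np.eye(len(t_train))
    alpha = np.linalg.solve(K, gamma)        # [K + Sigma_e]^{-1} gamma
    pos = se_kernel(t_query, t_train) @ alpha
    vel = se_kernel_d1(t_query, t_train) @ alpha
    acc = se_kernel_d1d1(t_query, t_train) @ alpha
    return pos, vel, acc
\end{verbatim}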
\begin{figure*}
\centering
\subfigure[Positions over time]{
\includegraphics[width=0.65\columnwidth]{fig/3_pos_new.pdf}}
\subfigure[Velocities over time]{
\includegraphics[width=0.65\columnwidth]{fig/3_vel_new.pdf}}
\subfigure[Accelerations over time]{
\includegraphics[width=0.65\columnwidth]{fig/3_acc_new.pdf}}
\caption{Example of continuous position, velocity and acceleration trajectories in the~$(x, y, z)$ axes from the GP interpolation.}
\label{fig:pos_vel_acc}
\vspace{-1em}%
\end{figure*}
\section{Adaptive Trace Method}
Following~\eqref{eq:ipp}, the utility function $\text{I}[\cdot]$ evaluates the information content in the new sensor measurements with respect to the localization uncertainty. Numerous criteria exist for determining optimality in experimental design. The most common criteria in robotics are A-optimality and D-optimality, and the choice between them is made based on the application. D-optimality results in a confidence region for the parameters with minimum volume, while A-optimality minimizes the average variance, or the expected mean square error \cite{srivastava1974comparison}. For our purpose of minimizing localization uncertainty, A-optimality is the most suitable; formally, the A-optimality criterion for any $n\times n$ matrix $\mathbf{P}$ is given by the trace of the matrix, $\text{tr}(\cdot)$, as
\begin{align}
\text{A-optimality} = \text{tr}(\mathbf{P}).
\end{align}
The utility function $\text{I}[\cdot]$, which is based on our proposed Adaptive trace method, is then given by:
\begin{align}\label{eq:utility_fxn}
\text{I}(\boldsymbol{\pi}_{k:k+1}) =
\begin{cases}
\text{tr}(\mathbf{P}_{\mathbf{b}_{k+1}})-\text{tr}(\mathbf{P}_{\mathbf{b}_{k}}) & \text{if} \quad \delta \geq \lambda, \\
\text{tr}(\mathbf{P}_{\mathbf{r}_{k+1}}) -\text{tr}(\mathbf{P}_{\mathbf{r}_{k}}) & \text{otherwise}
\end{cases}
\end{align}
where~$\mathbf{P}_{\mathbf{b}} = [\mathbf{P}_{\mathbf{b}_f}, \mathbf{P}_{\mathbf{b}_{\omega}}]$ is the bias covariance matrix, $\mathbf{P_r}$ is the IMU position covariance matrix, $\delta$ is the bias uncertainty, $\lambda$ is a preset threshold used to determine bias uncertainty convergence, and $\boldsymbol{\pi}_{k:k+1}$ is the trajectory from which evaluation measurements are obtained.
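A minimal sketch of this adaptive utility is shown below; the threshold value and the way $\delta$ is computed (here, as the trace of the current bias covariance) are illustrative assumptions.
\begin{verbatim}
# Adaptive utility: use the bias covariance trace until the bias
# uncertainty has converged, then switch to the position covariance.
import numpy as np

def adaptive_utility(P_b_next, P_b_curr, P_r_next, P_r_curr, lam=1e-4):
    delta = np.trace(P_b_curr)           # current bias uncertainty
    if delta >= lam:                     # biases not yet converged
        return np.trace(P_b_next) - np.trace(P_b_curr)
    return np.trace(P_r_next) - np.trace(P_r_curr)
\end{verbatim}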
\section{Path Planning}\label{sub_sec:planning}
To find the trajectory that minimizes the localization uncertainty in the estimated state, we define the cost function between two points as the trace of the covariance of either the biases $\mathbf{b}_f$ and $\mathbf{b}_\omega$ or the IMU position $\mathbf{r}$. The planner aims to excite the system in such a way that it prioritizes convergence of the bias errors first and then focuses on the trace of the IMU position covariance. In order to achieve this, the planner alternates between two cases, minimizing either the IMU bias uncertainty or the IMU position uncertainty, based on whether or not the bias uncertainties have converged, as depicted in~\eqref{eq:utility_fxn}. For the planner, the cost can formally be defined as:
\begin{align}\label{eq:planning_cost}
c(\boldsymbol{\pi}_{k:k+1}) = \text{I}(\boldsymbol{\pi}_{k:k+1})\,,
\end{align}
where $c(\boldsymbol{\pi}_{k:k+1})$ is the $\mathtt{Cost}$ function associated with connecting two points with the trajectory $\boldsymbol{\pi}_{k:k+1}$.
Rapidly-exploring random tree (RRT$^{*}$)~\cite{karaman2010incremental} is the sampling-based motion planning algorithm used to generate the set of nodes evaluated in our framework. The algorithm incrementally builds a tree of feasible trajectories from an initial node $x_{init}$. At each iteration, a new point $x_{sample}$ is sampled from the obstacle-free space $\mathcal{X}_{free}$, and connection attempts are made to vertices in $\mathcal{X}_{near}$, defined as the vertices within a radius of $x_{sample}$. An edge is created from $x_{sample}$ to the vertex in $\mathcal{X}_{near}$ that can be connected at minimal cost.
The additive cost function used for evaluating nodes is defined as:
\begin{align}
\mathtt{Cost}(x_{\text{sample}}) &= \mathtt{Cost(\text{Parent}}(x_{\text{sample}})) \nonumber\\
&+ c(\mathtt{Connect} (x_{\text{sample}},x_{\text{near}}))
.
\end{align}
After the addition of the new node $x_{sample}$ to the graph, the planner removes redundant edges from $E$, i.e., edges that are not part of a shortest path from $x_{init}$. This technique is called rewiring, and it ensures that all vertices in the tree lie on a minimal-cost branch. Because of the constraints the IMU modelling imposes on the system, the $\mathtt{Connect}$ function between two nodes is not a straight line as in the original RRT$^{*}$ algorithm. As explained above, we require smooth and continuous trajectories that are differentiable at least twice, to ensure smooth and continuous linear position, velocity and acceleration trajectories, as well as angular position and velocity trajectories. We use the tailor-made interpolation method based on GP regression described in Section~\ref{sec:GP}, which guarantees that all position, velocity and acceleration trajectories from the planner meet the continuity and smoothness constraints.
Note that 6D sampling is carried out in position and orientation, as this allows us to plan in both the Cartesian and orientation spaces, and all higher-order derivatives are constrained to zero. We enforce continuity by matching the linear position, velocity and acceleration, and the angular position and velocity, at the end of a trajectory segment with those at the start of the subsequent trajectory segment. Additionally, the trace of the covariance matrix is used as the cost function to determine the optimal trajectory that the planner returns. This trajectory is not the shortest path but rather a trajectory that leads to quicker bias convergence, better bias estimates and, ultimately, better accuracy in the robot localization.
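The edge evaluation used when growing the tree can be sketched as follows; \texttt{gp\_connect}, \texttt{propagate\_covariance} and \texttt{utility} are placeholder names standing for the GP interpolation of Section~\ref{sec:GP}, the covariance propagation of the following subsection and the adaptive utility above, respectively.
\begin{verbatim}
# Additive node cost for a candidate sample: parent cost plus the
# adaptive-trace change along the GP-interpolated connecting segment.
def node_cost(parent, sample, gp_connect, propagate_covariance, utility):
    segment = gp_connect(parent.pose, sample.pose)    # continuous trajectory
    P_next = propagate_covariance(parent.P, segment)  # forecasted covariance
    edge_cost = utility(P_next, parent.P)             # change in trace
    return parent.cost + edge_cost, P_next
\end{verbatim}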
\subsection{Covariance Propagation}
For each new node sampled by our RRT-based planner, the posterior covariance matrix $\mathbf{P}_{k}^{+}$ is initially obtained by the chosen estimation framework and propagated into the future by the linearized model using the equations of the Extended Kalman filter and forecasted measurements. The trace of $\mathbf{P}_{k}^{+}$ is then used for decision making by our planner. The simulated inertial measurements are used for propagation in the prediction step, while the simulated measurements from the exteroceptive sensor are taken into account during the update step. The prediction step of the filter estimates the a-priori covariance $\mathbf{P}_{k}^{-}$, from the a-posteriori covariance estimate from the previous time step $\mathbf{P}_{k-1}^{+}$:
\begin{align}\label{eq:pred1}
\mathbf{P}_{k}^{-} &= \mathbf{F}_{k-1}\mathbf{P}_{k-1}^{+}\mathbf{F}_{k-1}^{\top} + \mathbf{G}_{k-1}\boldsymbol{\Sigma}_{\boldsymbol{\epsilon}_{k-1}}\mathbf{G}_{k-1}^{\top}
.
\end{align}
The covariance matrix is updated according to:
\begin{align} \label{eq:upd2}
\mathbf{P}_{k}^{+} = (\mathbf{I}-\mathbf{K}_{k}\mathbf{H}_{k}) \mathbf{P}_{k}^{-},
\end{align}
where the jacobians are given by
\begin{align}
\mathbf{F}_{k-1} &= \frac{\partial f}{\partial \mathbf{x}_{k-1}}(\mathbf{x}_{k-1}^{+}, \mathbf{u}_{k-1}, \boldsymbol{\epsilon}_{k-1}), \nonumber \\
\mathbf{G}_{k-1} &= \frac{\partial f}{\partial \boldsymbol{\epsilon}_{k-1}}(\mathbf{x}_{k-1}^{+}, \mathbf{u}_{k-1}, \boldsymbol{\epsilon}_{k-1}), \nonumber \\
\mathbf{H}_{k} &= \frac{\partial h}{\partial \mathbf{x}_{k}} (\mathbf{x}_{k})
\end{align}
and $\mathbf{K}$ is the Kalman filter gain.
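As a sketch only, the prediction and update of \eqref{eq:pred1} and \eqref{eq:upd2}, used to forecast the covariance along a candidate segment, can be written as follows; the Jacobians and noise covariances are assumed to be supplied by the linearization of the chosen estimation framework.
\begin{verbatim}
# Forecast the state covariance through one prediction/update cycle.
import numpy as np

def propagate_covariance_step(P, F, G, Sigma_eps, H, Sigma_ups):
    # Prediction with the linearized process model and simulated IMU input.
    P_pred = F @ P @ F.T + G @ Sigma_eps @ G.T
    # Update with a forecasted exteroceptive measurement.
    S = H @ P_pred @ H.T + Sigma_ups
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    return (np.eye(P.shape[0]) - K @ H) @ P_pred
\end{verbatim}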
\section {Results}
We validate our approach using both simulated and hardware demonstrations.
The state estimation framework used in the simulation experiments is based on an Error State Kalman Filter (ESKF) \cite{sola2017quaternion}, while ROVIO~\cite{bloesch2015robust}, which is an Iterated Extended Kalman Filter framework, is used for the real-world experiments. The simulated acceleration, angular velocity and range measurements have realistic sensor noises added to them: ${0.0196}\,\mathrm{m\,s^{-2}}$ and ${0.0017}\,\mathrm{rad\,s^{-1}}$ for the accelerometer and gyroscope, and ${0.02}\,\mathrm{m}$ for the range measurements. We evaluate the performance of the proposed Adaptive trace method with a greedy planner and with the RRT$^{*}$ variant we propose. We also compare trajectories generated with our GP regression against the minimum snap interpolation algorithm.
\subsection{ Evaluation of proposed adaptive trace method}
We evaluate the results obtained from the adaptive approach, in which traces are used for choosing the waypoints added to the trajectory. At each time step, the planner picks the waypoint with the smallest trace out of the five sampled points. The adaptive approach uses the trace of the bias estimate covariance until the bias uncertainties have converged. Beyond this point, the trace of the robot position estimate covariance is used for planning.
This approach is compared against a method which uses only the trace of the robot position estimate covariance to guide the planner. A sampling rate of 20\,Hz is used for sampling the GP regression trajectory. We conduct a 50-run Monte-Carlo simulation. Note that, for all the experiments, an identical prior trajectory is used to explore the environment first, to generate a map of the environment and obtain a prior estimate of the state covariance. All the experiments are run for the same number of time steps, 12000, which results in 600\,s trajectories. The mean and standard deviation of the localization and bias RMSE for each of the two approaches are shown in Fig.~\ref{fig:robot_pose_error_bounds} and Fig.~\ref{fig:bias_error_bounds}, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig/localization_error_vs_time_new.pdf}
\caption{Localization error averaged over 50 Monte-Carlo simulation for each of the 2 approaches.}
\label{fig:robot_pose_error_bounds}
\vspace{-1em}%
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig/bias_errors_with_error_bounds_new.pdf}
\caption{ The plots above show bias error over a 600s trajectory, for 50 Monte-Carlo simulation for the adaptive trace and position trace method.}
\label{fig:bias_error_bounds}
\vspace{-1em}%
\end{figure}
Initially, the localization errors of the two approaches are comparable, as can be seen in the first 2000 time steps of Fig.~\ref{fig:robot_pose_error_bounds}. However, with more time steps, the localization error of the adaptive trace method is lower than that of the method using robot position traces alone. This is because the adaptive method prioritizes convergence of the bias estimates, and better quality bias estimates ultimately lead to improved estimation of the entire state. After the 12000 steps, the average localization error is ${9.846}\,\mathrm{m}$ using the adaptive trace method and ${28.04}\,\mathrm{m}$ with the robot position trace. We also note that bias convergence occurs more quickly in the approach where we inform the planner with waypoints that prioritize bias estimate convergence.
\subsection {RRT$^{*}$ optimal path vs greedy planning }
We compare the performance of our RRT$^{*}$ variant, which uses the adaptive trace as the cost function while growing the tree, against the greedy algorithm, which only picks the best waypoint locally. The RRT$^{*}$ algorithm is limited to 3000 nodes and is grown without a set goal node, as this allows it to be more exploratory. The bias errors from the optimal RRT$^{*}$ path are then compared with the average bias errors from the greedy algorithm.
The results from these simulations are shown in Fig.~\ref{fig:rrt vs_greedy_bias_error}. The plot shows that the bias errors from the RRT$^{*}$ planner are lower than those of the greedy planner, although both planners use the adaptive trace cost function. At the end of the 390\,s trajectory, the bias error for the greedy planner is $0.062$ while that for the RRT$^{*}$ planner is $0.026$. This result is consistent with what is expected, because the RRT$^{*}$ algorithm has a rewiring technique that ensures the newest sample is connected to the start node along a minimal-cost branch, unlike the greedy approach, which only considers the cost of connecting the new sample to the current node.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth,scale=0.8]{fig/rrt_vs_greedy_new.pdf}
\caption{Comparison of how bias errors vary over time for the greedy planner and RRT$^{*}$. Both planners use the adaptive trace technique and are run for 390\,s.}
\label{fig:rrt vs_greedy_bias_error}
\end{figure}
\subsection{ GP regression trajectories vs minimal snap trajectory}
We compare the bias and localization error from the GP regression and minimum snap trajectories. In both simulations, 50 Monte-Carlo runs are considered over 300s trajectories and the average errors are compared.
The results in Table~\ref{table:1} show that the GP interpolation performs better than the minimum snap trajectories. We believe this is because the acceleration trajectories from GP regression have larger magnitudes than those from minimum snap for the same maximum acceleration setting, as can be seen in Fig.~\ref{fig:acc_3_axes_comparison}; this generates more excitation for the GP trajectories, which leads to quicker convergence of the IMU biases and ultimately to smaller localization errors for the GP regression trajectories.
\begin{table}
\begin{center}
\caption{Comparison of the bias and robot position errors accumulated after a 300s trajectory. The first row shows results averaged over 50 Monte-Carlo runs for the Gaussian Process regression interpolation. The second row shows results averaged over 50 Monte-Carlo runs using the minimum snap interpolation method.}
\begin{tabular}{c|c|c}
\hline \hline
Interpolation & Average bias & Average localization\\method& RMSE $[m/s^2]$ & RMSE $[m]$ \\
\hline
\textbf{GP regression} & \textbf{0.033} & \textbf{2.623} \\
minimum snap & 0.0468 & 5.3913 \\
\hline
\end{tabular}
\label{table:1}
\end{center}
\vspace{-2em}
\end{table}
\begin{figure*}
\centering
\subfigure[Acceleration on $x$ axis]{
\includegraphics[width=0.65\columnwidth]{fig/x_acc_new.pdf}}
\subfigure[Acceleration on $y$ axis]{
\includegraphics[width=0.65\columnwidth]{fig/y_acc_new.pdf}}
\subfigure[Acceleration on $z$ axis]{
\includegraphics[width=0.65\columnwidth]{fig/z_acc_new.pdf}}
\caption{Accelerations on $x$, $y$ and $z$. The blue trajectory is the GP interpolation acceleration trajectory and the orange trajectory is the minimum snap acceleration trajectory.}
\label{fig:acc_3_axes_comparison}
\end{figure*}
\subsection{Hardware experiment}
In the hardware experiment, we evaluate the performance of our proposed adaptive trace algorithm using GP regression on a UR5 arm with a stereo camera and an IMU attached. The aim of this experiment is to show the localization performance of the proposed method with respect to a non-adaptive method using minimum snap trajectories.
The camera used in this experiment is the Realsense D455 with its internal Bosch BMI055 IMU. The camera provides global shutter RGB images at 20Hz and IMU measurements at 200Hz. The camera and the IMU are calibrated using Kalibr \cite{furgale2013unified}. The state, feature map and the associated covariance matrix are estimated by a formulation of the Iterated Extended Kalman Filter implemented in ROVIO \cite{bloesch2015robust} after execution of our trajectories on the arm.
Evaluation of the information content in the trajectories generated by our planner is carried out in simulation where we simulate measurements for each of the candidate trajectories, which are evaluated by using the map of the environment and the robot state. The simulated measurements are used to propagate the filter state and its covariance. The planner then decides on the best path to execute by evaluating and comparing the information content in each of the candidate trajectories (see Fig.~\ref{fig:fig_roadmap_robot_features_trajectory}).
\subsubsection{Robot arm planner}
\begin{figure}[t]
\centering
\includegraphics[ width=\linewidth]{fig/Localization_trace_new.pdf}
\caption{Comparison of how the localization trace varies with the number of segments added to the trajectory.}
\label{fig:localization_trace}
\vspace{-1em}%
\end{figure}
The planning method proposed in Sec.~\ref{sub_sec:planning} allows for unconstrained sampling of trajectories in $SE(3)$. For the hardware experiments we constrain the trajectories to be executable by the robot arm. Specifically, for a sampled trajectory we require that a valid inverse kinematics (IK) solution exists for each pose, the joint limits of the robot are not violated and the robot does not collide with itself or the environment. Furthermore, we want to avoid any large changes in the arm's configuration between two consecutive poses in order to ensure smooth trajectories which improves tracking and avoids damaging the camera or its cable routed along the robot.
While this could be achieved by sampling directly in the configuration space of the robot, it is computationally expensive and it is not obvious how to bias the sampling in order to achieve diverse excitation for the sensor system. Hence, to enable fast and direct sampling of trajectories in $SE(3)$ we leverage the Hausdorff approximation planner (HAP)~\cite{sukkar2022motion}
which, given a robot, task-space and environment model, computes a subspace in $SE(3)$ to sample from such that the resulting executed robot trajectory satisfies our desired constraints. This subspace is represented using a discrete roadmap of poses, shown in Fig.~\ref{fig:fig_roadmap_robot_features_trajectory}(a), such that moving along a path between any two poses in $SE(3)$ within the subspace results in a similar length path in configuration space.
This roadmap is provided to the RRT$^{*}$ planner to bias its sampling. The sampled trajectories are post-processed and verified to be within the provided subspace by checking for time-continuous safety and for any large changes in arm configuration between two consecutive poses. If either of these conditions is violated, the trajectory is discarded. In practice, it was found that the majority of trajectories were within the subspace and not discarded, owing to the robustness of the planner.
\subsubsection{Results}
The results of the experiment are shown in Fig. \ref{fig:localization_trace}.
Between segments 0 and 43, the localization traces of the two methods are comparable. However, after convergence of the bias uncertainty, the growth of the localization uncertainty is significantly smaller in the experiment using the adaptive trace and GP interpolation method than in the non-adaptive method shown by the dashed red plot. After 58 segments, the localization trace is $0.704$ in the adaptive experiment and $2.540$ in the non-adaptive experiment. This shows that planning for IMU bias convergence helps minimize the localization error in state estimation.
\section{Conclusion}
This paper proposed a new algorithm for informative path planning over continuous trajectories to minimize localization error. The key contribution is the use of Gaussian Process regression to interpolate the waypoints coming from our sampling-based planner. Linear operators are applied to the kernel function of the underlying position GP in order to infer the first and second derivatives, which are the velocity and acceleration, respectively. The use of linear functionals enables velocity and acceleration constraints to be added to the GP model as part of the measurement vector. Furthermore, we proposed an adaptive cost function that uses either the robot position trace or the bias trace within the planner in order to prioritize convergence of the IMU biases. This adaptive trace technique is used as the cost function in the RRT$^{*}$ variant that generates the set of discrete waypoints.
Our method is evaluated in three simulation experiments and one real-world experiment. Overall, our work has shown that planning for IMU bias convergence helps minimize localization error in state estimation.
\bibliographystyle{IEEEtran}
\section{Introduction}
ACM's consolidated article template, introduced in 2017, provides a
consistent \LaTeX\ style for use across ACM publications, and
incorporates accessibility and metadata-extraction functionality
necessary for future Digital Library endeavors. Numerous ACM and
SIG-specific \LaTeX\ templates have been examined, and their unique
features incorporated into this single new template.
If you are new to publishing with ACM, this document is a valuable
guide to the process of preparing your work for publication. If you
have published with ACM before, this document provides insight and
instruction into more recent changes to the article template.
The ``\verb|acmart|'' document class can be used to prepare articles
for any ACM publication --- conference or journal, and for any stage
of publication, from review to final ``camera-ready'' copy, to the
author's own version, with {\itshape very} few changes to the source.
\section{Template Overview}
As noted in the introduction, the ``\verb|acmart|'' document class can
be used to prepare many different kinds of documentation --- a
double-blind initial submission of a full-length technical paper, a
two-page SIGGRAPH Emerging Technologies abstract, a ``camera-ready''
journal article, a SIGCHI Extended Abstract, and more --- all by
selecting the appropriate {\itshape template style} and {\itshape
template parameters}.
This document will explain the major features of the document
class. For further information, the {\itshape \LaTeX\ User's Guide} is
available from
\url{https://www.acm.org/publications/proceedings-template}.
\subsection{Template Styles}
The primary parameter given to the ``\verb|acmart|'' document class is
the {\itshape template style} which corresponds to the kind of publication
or SIG publishing the work. This parameter is enclosed in square
brackets and is a part of the {\verb|documentclass|} command:
\begin{verbatim}
\documentclass[STYLE]{acmart}
\end{verbatim}
Journals use one of three template styles. All but three ACM journals
use the {\verb|acmsmall|} template style:
\begin{itemize}
\item {\verb|acmsmall|}: The default journal template style.
\item {\verb|acmlarge|}: Used by JOCCH and TAP.
\item {\verb|acmtog|}: Used by TOG.
\end{itemize}
The majority of conference proceedings documentation will use the {\verb|acmconf|} template style.
\begin{itemize}
\item {\verb|acmconf|}: The default proceedings template style.
\item{\verb|sigchi|}: Used for SIGCHI conference articles.
\item{\verb|sigchi-a|}: Used for SIGCHI ``Extended Abstract'' articles.
\item{\verb|sigplan|}: Used for SIGPLAN conference articles.
\end{itemize}
\subsection{Template Parameters}
In addition to specifying the {\itshape template style} to be used in
formatting your work, there are a number of {\itshape template parameters}
which modify some part of the applied template style. A complete list
of these parameters can be found in the {\itshape \LaTeX\ User's Guide.}
Frequently-used parameters, or combinations of parameters, include:
\begin{itemize}
\item {\verb|anonymous,review|}: Suitable for a ``double-blind''
conference submission. Anonymizes the work and includes line
numbers. Use with the \verb|\acmSubmissionID| command to print the
submission's unique ID on each page of the work.
\item{\verb|authorversion|}: Produces a version of the work suitable
for posting by the author.
\item{\verb|screen|}: Produces colored hyperlinks.
\end{itemize}
This document uses the following string as the first command in the
source file:
\begin{verbatim}
\documentclass[sigconf]{acmart}
\end{verbatim}
\section{Modifications}
Modifying the template --- including but not limited to: adjusting
margins, typeface sizes, line spacing, paragraph and list definitions,
and the use of the \verb|\vspace| command to manually adjust the
vertical spacing between elements of your work --- is not allowed.
{\bfseries Your document will be returned to you for revision if
modifications are discovered.}
\section{Typefaces}
The ``\verb|acmart|'' document class requires the use of the
``Libertine'' typeface family. Your \TeX\ installation should include
this set of packages. Please do not substitute other typefaces. The
``\verb|lmodern|'' and ``\verb|ltimes|'' packages should not be used,
as they will override the built-in typeface families.
\section{Title Information}
The title of your work should use capital letters appropriately -
\url{https://capitalizemytitle.com/} has useful rules for
capitalization. Use the {\verb|title|} command to define the title of
your work. If your work has a subtitle, define it with the
{\verb|subtitle|} command. Do not insert line breaks in your title.
If your title is lengthy, you must define a short version to be used
in the page headers, to prevent overlapping text. The \verb|title|
command has a ``short title'' parameter:
\begin{verbatim}
\title[short title]{full title}
\end{verbatim}
\section{Authors and Affiliations}
Each author must be defined separately for accurate metadata
identification. Multiple authors may share one affiliation. Authors'
names should not be abbreviated; use full first names wherever
possible. Include authors' e-mail addresses whenever possible.
Grouping authors' names or e-mail addresses, or providing an ``e-mail
alias,'' as shown below, is not acceptable:
\begin{verbatim}
\author{Brooke Aster, David Mehldau}
\email{dave,judy,[email protected]}
\email{[email protected]}
\end{verbatim}
The \verb|authornote| and \verb|authornotemark| commands allow a note
to apply to multiple authors --- for example, if the first two authors
of an article contributed equally to the work.
If your author list is lengthy, you must define a shortened version of
the list of authors to be used in the page headers, to prevent
overlapping text. The following command should be placed just after
the last \verb|\author{}| definition:
\begin{verbatim}
\renewcommand{\shortauthors}{McCartney, et al.}
\end{verbatim}
Omitting this command will force the use of a concatenated list of all
of the authors' names, which may result in overlapping text in the
page headers.
The article template's documentation, available at
\url{https://www.acm.org/publications/proceedings-template}, has a
complete explanation of these commands and tips for their effective
use.
Note that authors' addresses are mandatory for journal articles.
\section{Rights Information}
Authors of any work published by ACM will need to complete a rights
form. Depending on the kind of work, and the rights management choice
made by the author, this may be copyright transfer, permission,
license, or an OA (open access) agreement.
Regardless of the rights management choice, the author will receive a
copy of the completed rights form once it has been submitted. This
form contains \LaTeX\ commands that must be copied into the source
document. When the document source is compiled, these commands and
their parameters add formatted text to several areas of the final
document:
\begin{itemize}
\item the ``ACM Reference Format'' text on the first page.
\item the ``rights management'' text on the first page.
\item the conference information in the page header(s).
\end{itemize}
Rights information is unique to the work; if you are preparing several
works for an event, make sure to use the correct set of commands with
each of the works.
The ACM Reference Format text is required for all articles over one
page in length, and is optional for one-page articles (abstracts).
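The block of commands received from the rights system typically looks
similar to the following; every value here is illustrative and must be
replaced by the commands generated for your own work:
\begin{verbatim}
\setcopyright{acmlicensed}
\acmYear{2019}
\acmDOI{10.1145/0000000.0000000}
\acmConference[WOODSTOCK '19]{Woodstock '19: Symposium on
  Neural Gaze Detection}{June 03--05, 2019}{Woodstock, NY}
\acmISBN{978-1-4503-0000-0/19/06}
\end{verbatim}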
\section{CCS Concepts and User-Defined Keywords}
Two elements of the ``acmart'' document class provide powerful
taxonomic tools for you to help readers find your work in an online
search.
The ACM Computing Classification System ---
\url{https://www.acm.org/publications/class-2012} --- is a set of
classifiers and concepts that describe the computing
discipline. Authors can select entries from this classification
system, via \url{https://dl.acm.org/ccs/ccs.cfm}, and generate the
commands to be included in the \LaTeX\ source.
User-defined keywords are a comma-separated list of words and phrases
of the authors' choosing, providing a more flexible way of describing
the research being presented.
CCS concepts and user-defined keywords are required for all
articles over two pages in length, and are optional for one- and
two-page articles (or abstracts).
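The CCS tool generates a \verb|CCSXML| block together with one or more
\verb|\ccsdesc| commands, both of which are pasted into the preamble.
A typical (purely illustrative) pair of declarations is:
\begin{verbatim}
\ccsdesc[500]{Computer systems organization~Embedded systems}
\keywords{datasets, neural networks, gaze detection, text tagging}
\end{verbatim}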
\section{Sectioning Commands}
Your work should use standard \LaTeX\ sectioning commands:
\verb|section|, \verb|subsection|, \verb|subsubsection|, and
\verb|paragraph|. They should be numbered; do not remove the numbering
from the commands.
Simulating a sectioning command by setting the first word or words of
a paragraph in boldface or italicized text is {\bfseries not allowed.}
\section{Tables}
The ``\verb|acmart|'' document class includes the ``\verb|booktabs|''
package --- \url{https://ctan.org/pkg/booktabs} --- for preparing
high-quality tables.
Table captions are placed {\itshape above} the table.
Because tables cannot be split across pages, the best placement for
them is typically the top of the page nearest their initial cite. To
ensure this proper ``floating'' placement of tables, use the
environment \textbf{table} to enclose the table's contents and the
table caption. The contents of the table itself must go in the
\textbf{tabular} environment, to be aligned properly in rows and
columns, with the desired horizontal and vertical rules. Again,
detailed instructions on \textbf{tabular} material are found in the
\textit{\LaTeX\ User's Guide}.
Immediately following this sentence is the point at which
Table~\ref{tab:freq} is included in the input file; compare the
placement of the table here with the table in the printed output of
this document.
\begin{table}
\caption{Frequency of Special Characters}
\label{tab:freq}
\begin{tabular}{ccl}
\toprule
Non-English or Math&Frequency&Comments\\
\midrule
\O & 1 in 1,000& For Swedish names\\
$\pi$ & 1 in 5& Common in math\\
\$ & 4 in 5 & Used in business\\
$\Psi^2_1$ & 1 in 40,000& Unexplained usage\\
\bottomrule
\end{tabular}
\end{table}
To set a wider table, which takes up the whole width of the page's
live area, use the environment \textbf{table*} to enclose the table's
contents and the table caption. As with a single-column table, this
wide table will ``float'' to a location deemed more
desirable. Immediately following this sentence is the point at which
Table~\ref{tab:commands} is included in the input file; again, it is
instructive to compare the placement of the table here with the table
in the printed output of this document.
\begin{table*}
\caption{Some Typical Commands}
\label{tab:commands}
\begin{tabular}{ccl}
\toprule
Command &A Number & Comments\\
\midrule
\texttt{{\char'134}author} & 100& Author \\
\texttt{{\char'134}table}& 300 & For tables\\
\texttt{{\char'134}table*}& 400& For wider tables\\
\bottomrule
\end{tabular}
\end{table*}
\section{Math Equations}
You may want to display math equations in three distinct styles:
inline, numbered or non-numbered display. Each of the three are
discussed in the next sections.
\subsection{Inline (In-text) Equations}
A formula that appears in the running text is called an inline or
in-text formula. It is produced by the \textbf{math} environment,
which can be invoked with the usual
\texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with
the short form \texttt{\$\,\ldots\$}. You can use any of the symbols
and structures, from $\alpha$ to $\omega$, available in
\LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few
examples of in-text equations in context. Notice how this equation:
\begin{math}
\lim_{n\rightarrow \infty}x=0
\end{math},
set here in in-line math style, looks slightly different when
set in display style. (See next section).
\subsection{Display Equations}
A numbered display equation---one set off by vertical space from the
text and centered horizontally---is produced by the \textbf{equation}
environment. An unnumbered display equation is produced by the
\textbf{displaymath} environment.
Again, in either environment, you can use any of the symbols and
structures available in \LaTeX\@; this section will just give a couple
of examples of display equations in context. First, consider the
equation, shown as an inline equation above:
\begin{equation}
\lim_{n\rightarrow \infty}x=0
\end{equation}
Notice how it is formatted somewhat differently in
the \textbf{equation}
environment. Now, we'll enter an unnumbered equation:
\begin{displaymath}
\sum_{i=0}^{\infty} x + 1
\end{displaymath}
and follow it with another numbered equation:
\begin{equation}
\sum_{i=0}^{\infty}x_i=\int_{0}^{\pi+2} f
\end{equation}
just to demonstrate \LaTeX's able handling of numbering.
\section{Figures}
The ``\verb|figure|'' environment should be used for figures. One or
more images can be placed within a figure. If your figure contains
third-party material, you must clearly identify it as such, as shown
in the example below.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{sample-franklin}
\caption{1907 Franklin Model D roadster. Photograph by Harris \&
Ewing, Inc. [Public domain], via Wikimedia
Commons. (\url{https://goo.gl/VLCRBB}).}
\Description{The 1907 Franklin Model D roadster.}
\end{figure}
Your figures should contain a caption which describes the figure to
the reader. Figure captions are placed {\itshape below} the figure.
Your figures should {\bfseries also} include a description suitable
for screen readers, to assist the visually-challenged to better
understand your work.
\subsection{The ``Teaser Figure''}
A ``teaser figure'' is an image, or set of images in one figure, that
are placed after all author and affiliation information, and before
the body of the article, spanning the page. If you wish to have such a
figure in your article, place the command immediately before the
\verb|\maketitle| command:
\begin{verbatim}
\begin{teaserfigure}
\includegraphics[width=\textwidth]{sampleteaser}
\caption{figure caption}
\Description{figure description}
\end{teaserfigure}
\end{verbatim}
\section{Citations and Bibliographies}
The use of \BibTeX\ for the preparation and formatting of one's
references is strongly recommended. Authors' names should be complete
--- use full first names (``Donald E. Knuth'') not initials
(``D. E. Knuth'') --- and the salient identifying features of a
reference should be included: title, year, volume, number, pages,
article DOI, etc.
The bibliography is included in your source document with these two
commands, placed just before the \verb|\end{document}| command:
\begin{verbatim}
\bibliographystyle{ACM-Reference-Format}
\bibliography{bibfile}
\end{verbatim}
where ``\verb|bibfile|'' is the name, without the ``\verb|.bib|''
suffix, of the \BibTeX\ file containing your references.
\section{Background} \label{sec:background}
\subsection{Quantum Programs and Architectures} \label{quantum-architecture}
The typical fundamental unit of quantum information is the qubit (quantum bit). Unlike classical bits, which occupy either 1 or 0 at any given time, quantum bits may exist in a superposition of the two basis states $\ket{0}$ and $\ket{1}$. Qubits are manipulated via quantum gates, operations which are reversible and preserve a valid probability distribution over the basis states. There is a single irreversible quantum operation called measurement, which transforms the qubit to either $\ket{0}$ or $\ket{1}$ probabilistically. Pairs of qubits interact via two-qubit gates, which are generally much more expensive in terms of error rates and latency.
There are a variety of competing styles of quantum systems each with a hardware topology specifying the relative location of the machine's qubits. This topology indicates between which pairs of qubits two-qubit interactions may be performed.
Typical quantum hardware does not readily support long-range multi-qubit operations but does provide a mechanism for moving qubits, either by swapping qubits (in the case of nearest neighbor or 2D-grid devices), teleportation via photon mediated entanglement, physically moving qubits (as in ion-trap devices), \reviewaddition{or a resonant bus (as in superconducting devices)}. Interacting qubits which are distant generate additional latency which is undesirable for near-term qubits with limited coherence time (the expected lifetime of a qubit before an error). \reviewaddition{These machines have expected error rates on the order of 1 in every 100-1000 two-qubit gates \cite{ionq, ibm_error}, and non-local communication has error on average 10-100x worse.}
In this paper, we are motivated by a specific set of architectures or extensions to such architectures, as in \cite{schuster_machine, ion1, ion2, ion3}. In these devices, qubits are arranged into several regions of high connectivity \reviewaddition{with expensive communication between the clusters, referred to as non-local communication.} These devices naturally lend themselves to mapping techniques which utilize partitioning algorithms.
\begin{figure}
\centering%
\scalebox{\figscale}{
\input{figs/example-circuit.qcircuit}%
}%
\\%
\scalebox{\figscale}{%
\resizebox{\linewidth}{!}{\input{figs/moment-graphs.tikz}}}%
\caption{(Top) An example of a quantum program with single-qubit gates not shown. The inputs are on the left and time flows to the right toward the outputs. The two-qubit operations here are CNOT (controlled-NOT).
(Bottom) The graph representations of the quantum circuit of the above circuit. On the far left is the total interaction graph where each edge is weighted by the total number of interactions for the whole circuit. To the right is the sequence of time slice graphs, where an edge is only present if the qubits interact in the time slice. The sum of all time slice graphs is the total interaction graph.}
\label{fig:sample_program}
\end{figure}
Quantum programs are often represented as circuit diagrams, for example the one in Figure \ref{fig:sample_program}a. We define a \textit{time slice} in a quantum program as a set of operations which are parallel in the circuit representation of the program. We express time slices as a function of both the circuit representation and limitations of the specific architecture. We also define a \textit{time slice range} as a set of contiguous time slices; we also refer to them as \textit{slices} and when no length is specified, it will be assumed to be of length 1.
For evaluation, we consider two primary metrics: the \textit{width} and the \textit{depth} of a circuit. The width is the total number of qubits used and the depth, or the run time, is the total number of time slices required to execute the program. Qubit movement operations which are inserted in order \reviewaddition{to move interacting qubits into the same partition} contribute to the overall depth of the circuit.
We consider two abstract representations of quantum programs: the total interaction graph and a sequence of time slice interaction graphs, examples of which are found in Figure \ref{fig:sample_program}b. In both representations, each qubit is a vertex and edges between qubits indicate two-qubit operations acting on these qubits. In the total interaction graph, edges are weighted by the total number of interactions between pairs of qubits. In time slice graphs, an edge with weight 1 exists only if the pair of qubits interact at that time slice.
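As a rough illustration of how these two representations can be built
(a sketch in plain Python, independent of our actual toolflow), each
two-qubit gate is recorded as a tuple $(t, q_1, q_2)$ of its time slice
and the qubits it acts on:
\begin{verbatim}
from collections import defaultdict

def interaction_graphs(gates, num_slices):
    """Total interaction graph and per-slice graphs.

    gates: iterable of (t, q1, q2) tuples, one per two-qubit gate.
    Both outputs map an unordered qubit pair to its edge weight.
    """
    total = defaultdict(int)
    slices = [defaultdict(int) for _ in range(num_slices)]
    for t, q1, q2 in gates:
        edge = frozenset((q1, q2))
        total[edge] += 1     # weight = number of interactions overall
        slices[t][edge] = 1  # unit weight if the pair interacts at t
    return total, slices
\end{verbatim}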
\subsection{Graph Partitioning}
\subsubsection*{\textbf{Static Partitioning}}
Finding graph partitions is a well studied problem \cite{fiduccia1982linear, park1995algorithms, kernighan1970efficient, hendrickson1995multi} and is used frequently in classical architecture. In this paper, we consider a variant of the problem which fixes the total number of partitions and bounds the total number of elements in each partition. Specifically, given a fixed number of partitions $k$, a maximum partition size $p$, and an undirected weighted graph $G$ with $\abs{V(G)} \le k \cdot p$, we want to find a $k$-way assignment of the vertices to partitions such that the weight of edges between vertices in different partitions is minimized. This can be rephrased in terms of \reviewaddition{statically} mapping a quantum circuit to the aforementioned architectures. Let the total interaction graph be $G$ and let $k$ and $p$ be fixed by the topology of the architecture. Minimizing the edge weight between partitions corresponds to minimizing the total number of swaps which must be executed.
Solving for an optimal $k$-way partition is known to be hard \cite{partition_hardness}, but there exist many algorithms which find approximate solutions \cite{kernighan1970efficient, park1995algorithms, fiduccia1982linear}. There are several heuristic solvers such as in \cite{METIS, graph1} which can be used to find an approximate $k$-way partition of a graph. However, they often cannot make guarantees about the size of the resulting partitions, preventing us from using them for the fixed size partitioning problem.
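For reference, the quantity being minimized is easy to state in code;
the following sketch (our notation, not a solver) checks the fixed
partition-size constraint and evaluates the crossing weight of a
candidate assignment:
\begin{verbatim}
from collections import Counter

def cut_weight(graph, assign, k, p):
    """Weight of edges crossing partitions for a k-way assignment.

    graph: dict mapping frozenset({u, v}) -> weight.
    assign: dict mapping vertex -> partition index in range(k).
    Returns None if some partition exceeds the size bound p.
    """
    sizes = Counter(assign.values())
    if any(sizes[i] > p for i in range(k)):
        return None  # violates the fixed partition-size constraint
    return sum(w for edge, w in graph.items()
               if len({assign[v] for v in edge}) > 1)
\end{verbatim}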
\subsubsection*{\textbf{Partitioning Over Time}}
Rather than considering a single graph to be partitioned, we instead consider the problem of generating a \textit{sequence} of assignments of qubits to clusters, one for each moment of the circuit. We want to minimize the total number of differences between consecutive assignments, naturally corresponding to minimizing the total number of non-local communications between clusters. This problem is much less explored than the prior approach. Partitioning in this way guarantees interacting qubits will be placed in the same partition, making the schedule for the input program immediate. In the case of a static partition, which gives only the initial mapping, a further step is needed to generate a schedule.
\subsubsection*{\textbf{Optimal Compilation and Exact Solvers}}
It is too computationally expensive to find a true optimal solution for even reasonably sized input programs. Constraint-based solvers have been used recently to look for optimal and near-optimal solutions \cite{murali, uwsic_spatial_arch1, uwisc_spatial_arch2}. Unfortunately, these approaches will not scale in the near-term, let alone to larger, error-corrected devices. We explored the use of these solvers but found them to be too slow. Finding a static mapping with SMT is impractical with more than 30 to 40 qubits, and SMT partitioning over time is impractical once the product of the number of qubits and the depth exceeds roughly 40.
\section{Experimental Setup} \label{sec:benchmarks}
All experiments were run on an Intel(R) Xeon(R) Silver 4100 CPU at 2.10 GHz with 128 GB of RAM and 32 cores running Ubuntu 16.04.5. Each test was run on a single core. Our framework runs on Python 3.6.5 using Google's Cirq framework for circuit processing and for implementing our benchmarks \cite{cirq}. For testing exact solvers, we used the Z3 SMT solver \cite{z3}, though results could not be obtained because Z3 never completed on benchmarks of the size tested.
\subsection{Benchmarks}
We benchmark the performance of our circuit mapping algorithms on some common sub-circuits used in many algorithms (for example Shor's and Grover's) and, for comparison, on random circuits. Our selection of benchmarks covers a wide variety of internal structure. For every benchmark, we use a representative cluster-based architecture with 100 qubits arranged in 10 clusters of 10 qubits each, though our methods are not limited to this size. We sweep over the number of qubits used from 50 to 100, where for a few benchmarks the remaining qubits are available for use as either clean or dirty ancilla\footnote{An ancilla is a temporary quantum bit used often to reduce the depth or gate count of a circuit. ``Clean'' indicates the initial state of the ancilla is known while ``dirty'' means the state is unknown.}.
\subsubsection*{\textbf{Generalized Toffoli Gate}}
The Generalized Toffoli gate ($C^nU$) is an $n$-controlled $U$ gate for any single qubit unitary $U$ and is well studied \cite{cnx1, cnx2,cnx3,cnx4,cnx5,cnx6}. A $C^nX$ gate works by performing an $X$ gate on the target conditioned on all control qubits being in the $\ket{1}$ state. There are many known decompositions \cite{GidneyBlogPost, He_circuit, Barenco} both with and without the use of ancilla. A complete description of generating these circuits is given by \cite{cnx_decomps}, which provides a method for using clean ancilla.
\subsubsection*{\textbf{Multi-Target Gate}}
The multi-target gate performs a single-qubit gate on many targets conditioned on a single control qubit being in the $\ket{1}$ state. This is useful in several applications such as one quantum adder design \cite{cnx4} and can also be used in the implementation of error correcting codes \cite{ecc}. These circuits can be generated with different numbers of ancilla (both clean and dirty), as given by \cite{cnx_decomps}.
\subsubsection*{\textbf{Arithmetic Circuits}}
Arithmetic circuits in quantum computing are typically used as subcircuits of much larger algorithms like Shor's factoring algorithm and are well studied \cite{cnx3, cnx4, rev_mult}. Many arithmetic circuits, such as modular exponentiation, lie either at the border or beyond the range of NISQ era devices, typically requiring either error correction or large numbers of data ancilla to execute. We examine two types of quantum adders - the Cuccaro Adder and the QFT Adder - as representatives of a class of highly structured and highly regular arithmetic circuits \cite{cuccaro2004adder, qft_adder}.
\subsubsection*{\textbf{Random Circuit}}
The gates presented above have a lot of regular structure when decomposed into circuits. We want to contrast this with circuits with less structure.
We create these random circuits by picking some probability $p$ and some number of samples and generate an interaction between two qubits with probability $p$ for each sample. These circuits have the same structure as QAOA solving a min-cut problem on a random graph with edge probability $p$, so these circuits are a realistic benchmark.
\subsection{Circuit to Hardware}
We begin with a quantum program which is specified at the gate level, consisting of one- and two-qubit gates. We then generate the total interaction and time slice graphs, where we assume gates are inserted at the earliest possible time. Any further optimization, such as via commutativity or template matching, should be done prior to mapping the program to hardware. We also take the specifications of the hardware, such as the number of clusters and the maximum size of the clusters, which constrain possible mappings.
We use rOEE as our algorithm for Fine Grained Partitioning. Therefore, we pass the total interaction graph to a static partitioning algorithm to obtain a good starting assignment. This serves as a seed to rOEE rather than starting with a random assignment, which may introduce unnecessary starting communication. To the time slice graphs, we apply the lookahead function to obtain the lookahead graphs. We run rOEE on this set of graphs to obtain an assignment sequence such that at every time slice qubits which interact appear in the same bucket. This assignment sequence describes what non-local communication is added before each slice. Finally, we compute the cost and insert the necessary movement operations into the circuit \reviewaddition{to move interacting qubits into the same partition}; the result is a path. As a byproduct, by generating a partitioning over time, we obtain a schedule of operations to be performed.
\section{Conclusion} \label{sec:conclusion}
As an alternative to using \reviewaddition{near-optimal} graph partitioning algorithms to find a single static assignment for an entire circuit, we show that considering the locality in a circuit during mapping gives a reduction in the total non-local communication required when running a quantum circuit. There is a natural restriction in using static mappings, suggesting the problem of mapping qubits to cluster-based architectures has a different structure than partitioning a single graph for minimum weight between the partitions. Our modification to OEE no longer attempts to optimize the weights at every time slice. It is much more effective in practice to guide the partitioning based on heuristics and not to find the optimal value for every time slice. Optimality at every time slice does not correspond to a global reduction in non-local communication overhead.
We propose to use similar schemes for other cluster-based quantum hardware, especially those based on internally connected clusters. In our model, the different clusters of the architecture are also very well connected, but the approach is not limited to only this specific instance of a clustered architecture.
\reviewaddition{Our proposed algorithm produces partitions based on a simplifying assumption about the connectivity of the clusters because the cost of non-local communication is substantially more expensive than any in-cluster operations. Our method can be adapted to other cluster-based architectures by first applying our partitioning algorithm to obtain good clusters of operations and then adding a device-specific scheduling algorithm for scheduling much cheaper in-cluster operations.}
A relaxed version of a heuristic, with well-chosen lookahead functions, outperforms a well-selected initial static mapping. Using lookahead weights has been explored previously, as in \cite{paler1}, and more can be done to better choose the lookahead function, for example based on a metric of circuit regularity. Techniques for mapping which attempt to solve for near-optimal mappings will not scale, and instead heuristics will be the dominant approach. Our approach is computationally tractable and adaptable to changes in machine architecture, such as additional or varied size clusters.
Non-local communication overhead in quantum programs makes up a large portion of all operations performed; therefore, minimizing non-local communication is critical. In recent hardware \cite{mount2016scalable}, the cost of moving between clusters makes non-trivial computation impossible with current standards for mapping qubits to hardware. Reducing this hardware bottleneck or finding algorithms to reduce the non-local communication is critical for quantum computation. We reduce this cost substantially in cluster-based architectures \reviewaddition{(see Table \ref{tab:est_cost})}.
\section{Introduction} \label{introduction}
Quantum computing aims to provide significant speedup to many problems by taking advantage of quantum mechanical properties such as superposition and entanglement \cite{quantum_ml, quantum_chemistry, quantum_optimization}. Important applications such as Shor's integer factoring algorithm \cite{Shor} and Grover's unordered database search algorithm \cite{Grover} provide potentially exponential and quadratic speedups, respectively.
\reviewaddition{Current quantum hardware of the NISQ era \cite{preskill_nisq}, which has on the order of tens to hundreds of physical qubits, is insufficient to run these important quantum algorithms. Scaling these devices even to a moderate size with low error rates has proven extremely challenging. Manufacturers of quantum hardware such as IBM and IonQ have had only limited success in extending the number of physical qubits present on a single contiguous piece of hardware. Issues on these devices such as crosstalk error scaling with the number of qubits or increased difficulty in control will limit the size this single-chip architecture can achieve \cite{bruzewicz2019trapped, brown2016co}}.
\reviewaddition{Due to these challenges, as well as developing technology for communicating between different quantum chips \cite{blakestad2009high, wallraff2018deterministic}, we expect quantum hardware to scale via a modular approach, similar to how a classical computer can be scaled by increasing the number of processors, not just the size of the processors. Two of the leading quantum technologies, ion trap and superconducting physical qubits, are already beginning to explore this avenue and experimentalists project modularity will be the key to moving forward \cite{brecht2016multilayer, devoret2013superconducting, duan2010colloquium, bapat2018unitary, maslov2018outlook, monroe2013scaling, hucul2017spectroscopy}. One such example for ion traps is shown in Figure \ref{fig:modular-ion}, where many trapped ion devices are connected via a single central optical switch. Technology such as resonant busses in superconducting hardware or optical communication techniques in ion trap devices will enable a more distributed approach to quantum computing, having many smaller, well-connected devices with sparser and more expensive non-local connections between them. Optimistically, given current technology, we expect these non-local communication operations in the near term to have somewhere between 5-100x higher latency than in-cluster communication.}
\reviewaddition{With cluster-based approaches becoming more prominent, new compiler techniques for mapping and scheduling of quantum programs are needed. As the size of executable computations increases, it becomes more and more critical to employ program mappings exhibiting both the adaptivity of dynamic techniques and the global optimization of static techniques. Key to realizing both advantages is to simplify the problem. Since non-local communication is dominant, we focus on only non-local costs. This simplification, along with static knowledge of all control flow, allows us to map a program in many time slices with substantial lookahead for future program behavior. This approach would not be computationally tractable on a non-clustered machine.}
\begin{figure}
\centering
\quad\qquad
\scalebox{\figscale}{%
\input{figs/plot-static-vs-best-bar.tikz}}
\caption{Non-local communication overhead in circuits mapped to cluster-based machines. Our new mapping scheme FGP-rOEE \reviewaddition{reduces the number of operations added for non-local communication} on all benchmarks.}
\label{fig:com_costs_results}
\end{figure}
For devices with many modular components, mapping quantum programs translates readily to a graph partitioning problem with the goal of minimizing edge crossings between partitions. This approach is standard in many classical applications, such as high-performance parallel computing \cite{vlsi_partitioning, classical_partitioning, hpc_graph_partitioning}, with the goal of minimizing total latency. Here latency is approximated by the total number of times qubits must be shuttled between different regions of the device. Graph partitioning is known to be hard and heuristics are the dominant approach \cite{fiduccia1982linear, park1995algorithms, kernighan1970efficient, hendrickson1995improved, heuristic1}.
\reviewaddition{While this problem is related to many problems in distributed or parallel computing, there are a few very important distinctions. In a typical quantum program, the control flow is statically known at compile time, meaning all interactions between qubits are known. Furthermore, the no-cloning theorem states we cannot make copies of our data, meaning non-local communication between clusters is \textit{required} to interact data qubits. Finally, any additional non-local operations affect not only latency as they would classically but are directly related to the probability a program will succeed since operations in quantum computing are error prone and therefore reducing non-local communication is especially critical for successful quantum program execution.}
Our primary contribution is the development of a complete system for mapping quantum programs to near-term cluster-based quantum architectures via graph partitioning techniques where qubit interaction in-cluster is relatively free compared to expensive out-of-cluster interaction. Our primary goal is to minimize the communication overhead by reducing the number of low-bandwidth, high-latency operations such as moving qubits which are required in order to execute a given quantum program. Rather than partitioning the circuit once to obtain a generally good global assignment of the qubits to clusters, we find a sequence of assignments, one for each time slice in the circuit. This fine-grained approach is much less studied, especially for this class of architectures. With our techniques, we reduce the total number of non-local communication operations by 89.8\% in the best case and 60.9\% in the average case; Figure \ref{fig:com_costs_results} shows a few examples of circuits compiled statically versus with our methods.
The rest of the paper is organized as follows: \reviewaddition{In Section \ref{sec:background}, we introduce the basics of quantum circuits and graph partitioning.} In Section \ref{sec:mapping}, we introduce our proposed methodology for mapping qubits to the clusters of these modular systems, specifically a method for \textit{fine-grained partitioning}. In Section \ref{sec:lookahead}, we introduce a method for applying lookahead weights to tune what is considered \textit{local} at each time slice and evaluate their effect on non-local communication. In Section \ref{sec:benchmarks}, we introduce the benchmarks we test on and present our explicit toolflow for taking quantum programs to a sequence of mappings \reviewaddition{which guarantee interacting qubits are moved into the same partition before each time slice using non-local communication}. In Section \ref{sec:results}, we present our results and provide a brief discussion, and in Section \ref{sec:prior}, we present a summary of related work for hardware mapping. We conclude in Section \ref{sec:conclusion}.
\begin{figure}
\centering
\scalebox{\figscale}{%
\includegraphics[width=\columnwidth,keepaspectratio=true]{figs/modular-ion.png}}\vspace*{-.2in}%
\caption{An example modular architecture of qubits in individual ion traps connected with optics proposed by Monroe et al \cite{modular-ion}. Communication between traps is supported by photon-mediated entanglement. Similar communication for superconducting qubits \cite{yale-modular} can facilitate modular architectures for that technology.}
\label{fig:modular-ion}
\end{figure}
\section{Lookahead Weights} \label{sec:lookahead}
Finding a suitable lookahead weight function to use in Fine Grained Partitioning is necessary to maximize the benefit gained from choosing our swaps appropriately between time slices. We only require the lookahead function to be monotonically decreasing and non-negative. Throughout this section, we denote our lookahead weight function as $D$.
\subsection{Natural Candidates}
We explore a few natural candidate weighting functions from the huge space of possible functions. In each of the functions we explore below, we vary a stretching factor or scale $\sigma$ which can be tuned for the given circuit, providing a trade-off between local and global information.
\subsubsection*{\textbf{Constant Function}}
\[ D(n) = \begin{cases}
1 & n\leq \sigma \\
0 & n > \sigma
\end{cases}
\]
A constant function captures a fixed amount of local information in the circuit. This is just the number of times the pair of qubits interact in the next $\sigma$ time slices. For $\sigma = 0$, this function corresponds to no lookahead applied.
\subsubsection*{\textbf{Exponential Decay}}
\[ D(n) = 2^{-n/\sigma}
\]
An exponential is a natural way to model a decaying precedence. When $\sigma\le 1$, any interaction will always have a weight at least as high as the sum of interactions after it.
\begin{table*}[]
\caption{\reviewaddition{A subset of our benchmarks. Clean multi-control has a maximum size of 87. With more than 87 data qubits and fewer than 13 clean ancilla, the depth of the multi-control decomposition is too large to run on these cluster-based machines with predicted error rates.}}
\centering
\reviewaddition{
\input{figs/benchmarks-table.tex}
}
\label{tab:benchmarks-table}
\end{table*}
\subsubsection*{\textbf{Gaussian Decay}}
\[ D(n) = e^{-n^2/\sigma^2}
\]
Similar to an exponential, a Gaussian is natural to model decaying precedence with more weight given to local interactions.
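For concreteness, the three candidates can be written as one-line
functions of the slice offset $n$ and the scale $\sigma$ (a sketch; in
our experiments only the choice of function and $\sigma$ matters):
\begin{verbatim}
import math

def constant(n, sigma):
    return 1.0 if n <= sigma else 0.0

def exponential(n, sigma):
    return 2.0 ** (-n / sigma)

def gaussian(n, sigma):
    return math.exp(-n**2 / sigma**2)
\end{verbatim}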
\subsection{Evaluating Lookahead Functions}
To evaluate the choice of lookahead function as well as choice of $\sigma$, we study Fine Grained Partitioning using rOEE with all of the above candidate functions with varying $\sigma$ on benchmarks of various types: those with lots of local structure (a quantum ripple carry adder), those with very little structure (a random circuit), and those which lie somewhere in between (a Generalized Toffoli decomposition).
In Figure \ref{fig:lookahead-bar}, we show an example of a circuit which benefits from having a large scale $\sigma$, the Cuccaro Adder \cite{cuccaro2004adder}. In contrast, all of the random benchmarks benefit from having small $\sigma$ values, functions which decay quickly even for small $n$.
We also compare the different natural lookahead functions we described in the previous section on some representative benchmarks in Figure \ref{fig:lookahead-results}. In these figures, we see the exponential decay has a clear benefit over the rest in the structured circuits of the Multi-Control gate and the Cuccaro Adder. In random circuits, there seems to be no clear benefit to any of the lookahead functions, so long as they have some small lookahead scaling factor. So, we use exponential decay with $\sigma=1$ for our primary benchmarks in Section \ref{sec:benchmarks}.
\section{Mapping Qubits to Clusters} \label{sec:mapping}
We define an \textit{assignment} as a set of partitions of the qubits, usually at a specific time slice. We present algorithms which take a quantum circuit and output a \textit{path}, defined as a sequence of assignments of the qubits with the condition that every partitioning in the sequence is \textit{valid}. An assignment is valid if each pair of interacting qubits in a time slice are located within the same partition. \reviewaddition{Finally, we define the \textit{non-local communication} between consecutive assignments as the total number of operations which must be executed to transition the system from the first assignment to the second assignment.} The total communication of a path is the sum over all communication along the path.
\subsection{Computing Non-local Communication}
To compute the non-local communication overhead between consecutive assignments of $n$ qubits, we first construct a directed graph with multiple edges, where the nodes in the graph are the partitions and the edges indicate a qubit moving from partition $i$ to partition $j$. We extract all 2-cycles from this graph and remove those edges from the graph. We proceed by extracting all 3-cycles, and so on, recording the number of $k$-cycles extracted as $c_k$. When there are no cycles remaining, the total number of remaining edges is $r$, and the total communication overhead $C$ is given by
$$C = r + \sum_{k=2}^n (k-1)\cdot c_k$$
The remaining edges indicate a qubit swapping with an unused qubit. We repeat this process for every pair of consecutive assignments in the path to compute the total non-local communication of the path. These cycles specify where qubits will be moved with non-local communication.
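As a rough illustration, the computation can be sketched as follows
(a naive exhaustive cycle search, which is adequate for the small
number of clusters considered here; assignments are plain dictionaries
mapping qubits to partitions):
\begin{verbatim}
from collections import Counter

def communication_cost(prev, curr):
    """C = r + sum_k (k-1)*c_k for two consecutive assignments."""
    # Multiset of directed edges: partition a qubit leaves -> enters.
    edges = Counter((prev[q], curr[q]) for q in prev
                    if prev[q] != curr[q])

    def find_cycle(length):
        # Depth-first search for a cycle using exactly `length` edges.
        def dfs(start, node, path):
            if len(path) == length:
                return path if node == start else None
            for (u, v), mult in edges.items():
                if u == node and path.count((u, v)) < mult:
                    found = dfs(start, v, path + [(u, v)])
                    if found:
                        return found
            return None
        for (u, v), mult in edges.items():
            if mult > 0:
                cycle = dfs(u, v, [(u, v)])
                if cycle:
                    return cycle
        return None

    cost = 0
    for k in range(2, len(prev) + 1):
        while True:
            cycle = find_cycle(k)
            if cycle is None:
                break
            for e in cycle:
                edges[e] -= 1
            cost += k - 1              # a k-cycle costs k-1 operations
    return cost + sum(edges.values())  # leftovers swap with unused qubits
\end{verbatim}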
\subsection{Baseline Non-local Communication} \label{baseline}
As a baseline, we consider a \textit{Static Mapping} \reviewaddition{with an owner-computes model}, which takes into account the full set of qubit interactions for the circuit, providing a generally good assignment of the qubits for the entire duration of the program, called the static assignment. At each time step in the circuit, a good static assignment ensures, on average, qubits are not \textit{too far} from other qubits they will interact with frequently.
\reviewaddition{We find the assignment which requires the fewest number of swaps from the static assignment but has each pair of interacting qubits in a common partition. \reviewaddition{These assignments form} a path for the computation. We refer to this method of path generation by the partitioning algorithm used to produce the static assignment; for example, Static Mapping with OEE (Overall Extreme Exchange, discussed further below) is referred to as Static-OEE.}
\subsection{Fine Grained Partitioning}
The primary approach we developed to dynamically map a circuit to hardware is \textit{Fine Grained Partitioning} (FGP). In this algorithm, we find an assignment at every time slice using the time slice graphs. By default, these time slice graphs give only immediately local information about the circuit but have no knowledge about upcoming interactions. Alone, they only specify the constraints of which qubits interact in that time slice. The key advantage for this method is using \textit{lookahead weights}. The main idea is to construct modified time slice graphs capturing more structure in the circuit than the default time slice graphs. We refer to these graphs as time slice graphs with lookahead weights, or \textit{lookahead graphs}.
\begin{figure}
\centering
\scalebox{\figscale}{%
\scalebox{0.8}{\input{figs/lookahead-example.tikz}}}
\caption{An example of a time slice graph with lookahead weights based on the circuit in Figure \ref{fig:sample_program}. We take the graph from the left and add weight to the edges of qubits that interact in the future. In this case, we take the weight equal to the number of times the qubits will interact in the future.}
\label{fig:lookahead}
\end{figure}
To construct the lookahead graph at time $t$, we begin with the original time slice graph and give the edges present infinite weight. For every pair of qubits we add the weight
$$w_t(q_i,q_j) = \sum_{t< m\le T} I(m,q_i,q_j)\cdot D(m-t)$$
to their edge, where $D$ is some monotonically decreasing, non-negative function, which we call the lookahead function, and $I(m,q_i,q_j)$ is an indicator that is 1 if $q_i$ and $q_j$ interact in time slice $m$ and 0 otherwise, and $T$ is the number of time slices in the circuit. The new time slice graphs consider the remainder of the circuit, more heavily weighting sooner interactions. The effectively infinite weight on edges between interacting qubits is present to guarantee any assignment will place interacting qubits into the same partition. An example is shown in Figure~\ref{fig:lookahead}.
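A direct transcription of this construction is sketched below, with
the per-slice graphs represented as in the earlier sketches and a
large constant standing in for the effectively infinite weight:
\begin{verbatim}
def lookahead_graph(slices, t, D, inf=1e9):
    """Time slice graph at t with lookahead weights added.

    slices: list of per-slice graphs, dict frozenset({qi, qj}) -> weight.
    D: lookahead function (monotonically decreasing, non-negative).
    """
    graph = {edge: inf for edge in slices[t]}  # interacting pairs
    for m in range(t + 1, len(slices)):
        for edge in slices[m]:
            # I(m, qi, qj) = 1 exactly when the edge appears in slice m
            graph[edge] = graph.get(edge, 0.0) + D(m - t)
    return graph
\end{verbatim}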
The final mapping of the qubits in our model is obtained by partitioning each of these time slices. Iteratively, we find the next assignment with a partitioning algorithm, seeded with the assignment obtained from the previous time slice. The first time slice can be seeded randomly or with the static assignment (presented in Section~\ref{baseline}). The new weights in the time slice graphs will force any movement necessary in the partitioning algorithm. Together, these assignments give us a valid path for the circuit to be mapped onto our hardware.
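Putting the pieces together, the fine-grained pass reduces to a short
loop; \texttt{partition} is a placeholder for whichever seeded
partitioning routine is used (OEE or the relaxed variant introduced
below), so the names here are illustrative rather than our exact
implementation:
\begin{verbatim}
def fine_grained_partitioning(slices, D, initial, partition):
    """Produce one assignment per time slice (a path).

    initial: seed assignment, e.g. from a static partitioning
             of the total interaction graph.
    partition: callable(graph, seed) -> assignment, e.g. rOEE.
    """
    path = []
    seed = initial
    for t in range(len(slices)):
        graph = lookahead_graph(slices, t, D)
        seed = partition(graph, seed)  # seeded with previous slice
        path.append(seed)
    return path
\end{verbatim}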
\subsection{Choosing the Partitioning Algorithm}
We assume full connectivity within clusters and the ability to move between clusters. These assumptions give us the liberty to tap into well studied partitioning algorithms. The foundation of many partitioning algorithms is largely considered to be the Kernighan-Lin heuristic for partitioning graphs with bounded partition sizes \cite{kernighan1970efficient, fiduccia1982linear, park1995algorithms}. The KL heuristic selects pairs of vertices in a graph to exchange between partitions based on the weights between the vertices themselves and the total weight between the vertices and the partitions.
We consider a natural extension of the KL algorithm, Overall Extreme Exchange (OEE), presented by Park and Lee \cite{park1995algorithms}. The OEE algorithm finds a sequence of pairs of vertices to exchange and makes as many exchanges as give it an overall benefit. Using OEE, the Fine Grained Partitioning scheme often over-corrects (see Figure \ref{fig:partitioner_results}). If a qubit needs to interact in another partition, then it can ``drag along'' a qubit it is about to interact with, because OEE attempts to minimize weight between partitions regardless of its relation to the previous or next time slice graphs. Choosing an optimal partitioning algorithm would not give better solutions to our non-local communication based mapping problem. Instead, we consider a more relaxed version of a partitioning algorithm using the KL heuristic.
\subsubsection*{\textbf{Relaxing the Partitioning Algorithm}}
We provide a relaxed version of the algorithm better suited to generating a path over time, called relaxed-OEE (rOEE). We run OEE until the partition is valid for the time slice (all interacting qubits are in the same partition) and then make no more exchanges. This is similar in approach to finding the time slice partitions in our Static Mapping approaches. It is critically important that we make our exchange choices using lookahead weights applied to the time slice graphs. Choosing without information about the upcoming circuit provides no insight into which qubits are beneficial to exchange. As a side benefit, making this change strictly speeds up OEE, an already fast heuristic algorithm. Although a strict asymptotic time bound for OEE is difficult to prove, rOEE never took more than a few seconds on any instance it was given.
With such a significant non-local communication overhead improvement (see Figure \ref{fig:partitioner_results}), this relaxed KL partitioning algorithm is much better suited for the problem at hand. It has the ability to take into account local structure in the circuit and avoid over correcting and swapping qubits unnecessarily.
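A hedged sketch of the relaxation is given below;
\texttt{best\_exchange} is a placeholder for the usual OEE/KL step that
proposes the vertex pair whose swap most reduces the crossing weight of
the lookahead graph, and the early stopping condition is the only
change relative to OEE:
\begin{verbatim}
def is_valid(slice_graph, assign):
    # Valid when every pair interacting in this slice shares a partition.
    return all(len({assign[v] for v in edge}) == 1
               for edge in slice_graph)

def relaxed_oee(lookahead_graph_t, slice_graph_t, seed,
                best_exchange, max_passes=1000):
    """rOEE: exchange qubits only until the time slice becomes valid."""
    assign = dict(seed)
    for _ in range(max_passes):
        if is_valid(slice_graph_t, assign):
            break  # stop early: no further weight optimization
        # best_exchange is a hypothetical OEE/KL helper (not shown here)
        a, b = best_exchange(lookahead_graph_t, assign)
        assign[a], assign[b] = assign[b], assign[a]
    return assign
\end{verbatim}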
\section{\reviewaddition{Motivation}} \label{sec:motivation}
\reviewaddition{Current quantum hardware, which has on the order of tens of physical qubits, is insufficient to run important quantum algorithms. Scaling these devices even to a moderate size with low error rates has proven extremely challenging. Manufacturers of quantum hardware such as IBM and IonQ have had only limited success in extending the number of physical qubits present on a single contiguous piece of hardware. Issues on these devices such as crosstalk error scaling with the number of qubits or increased difficulty in control will limit the size this single-chip architecture can achieve \cite{bruzewicz2019trapped, brown2016co}}.
\reviewaddition{Due to these challenges, as well as developing technology for communicating between different quantum chips \cite{blakestad2009high, wallraff2018deterministic}, we expect quantum hardware to scale via a modular approach, similar to how a classical computer can be scaled by increasing the number of processors, not just the size of the processors. Two of the leading quantum technologies, ion trap and superconducting physical qubits, are already beginning to explore this avenue and experimentalists project modularity will be the key to moving forward \cite{brecht2016multilayer, devoret2013superconducting, duan2010colloquium, bapat2018unitary, maslov2018outlook, monroe2013scaling, hucul2017spectroscopy}. Technology such as resonant busses in superconducting hardware or optical communication techniques in ion trap devices will enable this more distributed approach to quantum computing, having many smaller, well-connected devices with sparser and more expensive non-local connections between them. Optimistically, given current technology, we expect these non-local communication operations in the near term to have somewhere between 5-100x higher latency than in-cluster communication.}
\reviewaddition{With cluster-based approaches becoming more prominent, new compiler techniques for mapping and scheduling of quantum programs are needed. Furthermore, as the size of executable computations increases, it becomes more and more critical to employ program mappings that exhibit both the adaptivity of dynamic techniques and the global optimization of static techniques. Key to realizing both advantages is to simplify the problem. Since non-local communication is dominant, we can focus on only non-local costs. This simplification, along with static knowledge of all control flow, allows us to map a program in many time slices with substantial lookahead for future program behavior. This approach would not be computationally tractable on a non-clustered machine.}
\section{Related Work} \label{sec:prior}
Current quantum hardware is extremely restricted and has prompted a great deal of research aimed at making the most of current hardware conditions. This usually amounts to a few main categories of optimization. The first is circuit optimization at a high level to reduce the number of gates or depth via template matching as in \cite{rw1-template-matching, rw-template-rewriting} or via other optimization techniques as in \cite{optimization-qiskit, automated_optimization}. Other work focuses on optimization at the device level, such as by breaking the circuit model altogether as in \cite{YunongPaper} or by simply improving pulses via Quantum Optimal Control \cite{qoc}.
At an architectural level, optimization has been studied for many different types of hardware with various topologies. The general strategy in most of these works is to reduce SWAP counts with the same motivation as this work, as in \cite{intel1, ai1, siraichi, optimization-qiskit, paler1, paler2, automatic_layout}. Much of this work focuses primarily on linear nearest neighbor (LNN) architectures or 2D lattice architectures as in \cite{lnn1, lnn2, lnn3, lnn4, 2d1}. Some work has focused on ion trap mappings as in \cite{ion_trap_mapping1}, though the architecture of this style of device resembles more closely that of a 2D architecture. Some work has recently focused on optimization around specific error rates in near term machines as in \cite{murali, li-ding-xie}. Many of these techniques promise an extension to arbitrary topologies but are not specifically designed to accommodate cluster-based architectures. Work by \cite{qc_paritioning} has explored using graph partitioning to reduce swap counts in near term machines, but their focus is on LNN architectures exclusively. Other work focuses on architectures of the more distant future, namely those with error correction such as in \cite{future1, future2, future3}.
\section{Results and Discussion} \label{sec:results}
We run our mapping algorithms on each of our benchmark circuits. The results are shown in Figure \ref{fig:partitioner_results}.
Baseline mapping and the original version of OEE perform worse than our best scheme on every benchmark tested. Baseline mapping uses the global structure of the graph, but often maintains this structure too rigidly throughout the execution of the circuit. This lack of local awareness and the rigid nature of the Static Mapping limit its usefulness. Most out-of-the-box graph partitioning algorithms are designed to only minimize the edge weight between partitions; this tends to over-correct for local structure in the circuit. FGP can overcome this limitation with its choice of partitioning algorithm. By relaxing the partitioning algorithm and not requiring local optimality---moving qubits only until all interacting pairs are together---we require far fewer non-local operations.
The most noticeable changes between FGP-OEE and FGP-rOEE are on the clean multi-control gate with many controls and on the Cuccaro adder. Here, there are often consecutive, overlapping operations with little parallelism. With this structure, after the first operation is performed, the original OEE algorithm will exchange qubits to comply with the next time slice for the next operation. OEE is required to separate qubits which will later interact. To minimize the total crossing weight between partitions, more qubits are shuffled around, usually towards this displaced qubit. In rOEE, this reshuffle optimization never takes place because we terminate once \reviewaddition{each pair of interacting qubits in a time slice is placed in a common partition}. The reshuffling hurts the overall non-local communication when running the circuit because of how often qubits are displaced from their common interaction partners. In rOEE, not reshuffling keeps the majority of the qubits in sufficiently good spots, and the displaced qubit has the opportunity to move back with its interaction partners shortly afterwards.
\begin{figure*}[h!]
\centering
\input{figs/plot-final-results.tikz}
\caption{The non-local communication overhead for our benchmark circuits mapped by each mapping algorithm. The x-axis is the number of qubits used in the circuit. \reviewaddition{The y-axis is the number of non-local communication operations inserted to make the circuit executable in our hardware model.} In Clean multi-control, Clean multi-target, and Dirty multi-target, the remainder of the 100 qubits are used as ancilla (clean or dirty, as determined by the circuit name). FGP-rOEE outperforms all other mapping algorithms on all but the multi-target circuits, and shows substantial improvement over the static baseline. As the size of the circuit increases, rOEE tends to outperform by a greater margin, indicating it scales better into the future.}
\label{fig:partitioner_results}
\end{figure*}
We include the algorithm Fixed Length Slicing (FLS) as an alternative not described in detail in this paper. It is a slower method which explores grouping time slices at fixed intervals. Fixed Length Slicing was consistently the best performing time-slice-range based mapping algorithm, so we present it in our results. FLS-OEE only beats FGP-rOEE on some instances of the multi-target benchmarks and consistently performs worse on all other benchmarks.
In Figure \ref{fig:com_costs_results}, we show the percentage of operations used for non-local communication for each of the benchmark circuits, and in Table \ref{tab:improvement-table} we show the percent improvement of our algorithm over the baseline. On average, we save over 60\% of the non-local communication operations added. When each non-local communication operation is implemented in hardware, the amount of time each takes is significantly longer than the operations between the qubits in the clusters \cite{mount2016scalable}. Based on current communication technology, we expect these non-local communication operations to take anywhere from 5x to 100x longer than local in-cluster operations. Furthermore, the choice in technology limits how many of these expensive operations can be performed in parallel.
In Table \ref{tab:est_cost} we compute the estimated running time based on this ratio of costs and show that by substantially reducing the non-local communication via FGP-rOEE, we can drastically reduce the expected run time. We compare our algorithm to the baseline when non-local communication can be performed in parallel (such as in optically connected ion trap devices) and when it is forced to occur sequentially (as when using a resonant bus in superconducting devices). Based on current technology, a 5-10x multiplier is optimistic while 100x is realistic in the near term.
\begin{table}[]
\caption{Comparing Static-OEE against FGP-rOEE over all benchmarked instances. We obtain improvement across the board with the worst case still reducing non-local communication by 22.6\%.}
\label{tab:improvement-table}
\centering
\input{figs/improvement-table.tex}
\end{table}
\begin{table}[]
\caption{Estimated execution time of the clean multi-control benchmark with 76 data qubits and 24 ancilla. Two-qubit gates take 300ns \cite{ibm_error} and the multiplier indicates how many times longer non-local communication operations take.}
\label{tab:est_cost}
\centering
\input{figs/estimated_execution_table.tex}
\end{table}
In FeSe, it has been reported that the Fermi pockets in the nematic state are extremely small and shallow. Figure\,S1(a) shows a schematic illustration of the Fermi surface of FeSe in the nematic state in the unfolded Brillouin zone proposed in Ref.~[15]. The Fermi surface consists of a hole pocket at the zone center and an electron pocket at the zone corner. Green, red, and blue areas represent the Fermi-surface regions dominated by $d_{yz}$, $d_{xz}$ and $d_{xy}$ orbital characters, respectively. Note that we use the coordinate system in which the two-dimensional Fe lattice is taken as the principal unit cell and the nearest-neighbor Fe-Fe distance is larger along the $b_{\rm Fe}$ axis ($y$ direction) than along the $a_{\rm Fe}$ axis ($x$ direction). The Fermi energies $\varepsilon_F^{h(e)}$ of the hole (electron) pockets are extraordinarily small, $\varepsilon_F^{h} \sim 10$ - 15\,meV and $\varepsilon_F^{e} \sim$ 5 - 10\,meV.
In Fig.\,S1(b), we show a schematic figure on the amplitude of the superconducting gap $\Delta$ at the hole pocket. It has been shown that $\Delta$ is highly anisotropic in FeSe and the largest superconducting gap is observed for the flat portion with dominant $d_{xz}$ character.
\begin{figure}[h]
\includegraphics[width=0.485\linewidth]{FigS_FS.pdf}
\end{figure}
\noindent
{Figure S1. (a) Schematic figure of the Fermi surface of FeSe in the nematic state. (b) In-plane anisotropy of the superconducting gap amplitude $\Delta$ at the hole pocket. The orange shade represents the amplitude of $\Delta$.}
\section{Sample characterization}
Single crystals of FeSe were grown by the vapor transport method.
Particular attention was paid to select a crystal of extraordinarily high quality. By measuring the STM topography, we confirmed that the investigated surface is atomically clean with only 2-3 defects per 10,000 Fe atoms [Fig.\,S2(a)]. This same single crystal was further cleaved into two pieces. One (smaller) piece ($\sim 250 \times 100 \times 5$\,$\mu$m$^3$) was used for the resistivity measurements by directly soldering the four indium contacts. The other (larger) piece ($\sim$\,30\,$\mu$g) was used for the magnetic torque and the heat capacity measurements. In Fig.~S2(b), we show the temperature dependence of resistivity. The crystal exhibits zero resistivity at $T_c = 9.0$\,K. In the inset of Fig.~S2(b), we plot the temperature dependence of the magnetization $M$ measured by a SQUID magnetometer. A sharp superconducting transition in the $M$-$T$ curve demonstrates that the sample is homogeneous. In Fig.~S2(c), we show zero-field superconducting-gap spectra taken on several FeSe surfaces obtained by different cleaves. The samples are all taken from the same batch. Spectra are averaged over a field of view wide enough to flatten the short-wavelength inhomogeneity due to quasiparticle interference. Even fine structures of the spectra are quantitatively reproduced. Small variations among the spectra may be due to differences in the exact nature of the tip used in different runs. Such cleave-to-cleave reproducibility guarantees that our samples are uniform and that the diminishing superconductivity at $H^* < H_{c2}$ is not due to sample inhomogeneity.
\newpage
\begin{figure}[h]
\includegraphics[width=0.80\linewidth]{FigS1.pdf}
\end{figure}
\noindent
{Figure S2. Sample characterization of FeSe.} (a) Constant-current STM topographic image of FeSe obtained at 90\,mK taken over the same field of view (FOV: $100 \times 100$ nm$^2$) of the SI-STM images shown in Figs\,3(a)-(d). Feedback conditions are set-point current $I=100$\,pA and set-point bias voltage $V=+20$\,mV. (b) Temperature dependence of resistivity measured on the same single-crystalline sample used for STM. The inset shows the temperature dependence of the magnetization under field-cooling and zero-field-cooling conditions in a magnetic field of 1\,Oe applied along the $c$-axis. (c) Zero-field superconducting-gap spectra averaged over the FOVs, which were taken on several FeSe surfaces obtained by different cleaves. Each spectrum is normalized at the value at +6\,meV and shifted vertically for clarity. Measurement conditions are summarized in Table S1. The spectrum \#1 has been measured for the same FOV investigated in this work.
\section{Torque magnetometry}
Magnetic torque $\tau$ was measured by the piezo-resistive micro-cantilever technique down to 0.38\,K and up to $\sim16$\,T. A tiny single crystal was carefully mounted onto the tip-less piezo-resistive lever (PRS-L450-F30-TL-STD, SCL-Sensor. Tech.) which forms an electrical bridge circuit. The field is slightly tilted away from the $c$ axis.
In Figs.\,S3(a) and (b), we show the field dependences of $\rho$ and $\tau$, respectively. We determined $H_{irr}$ from the onset field of nonzero resistivity with a 0.5\,n$\Omega$\,cm criterion, and from the field at which the hysteresis loop of the magnetic torque closes to the level of 0.3\% of the whole signal.
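For definiteness, the two criteria can be stated algorithmically; the short Python sketch below is purely illustrative (it is not part of the analysis code, and the function names and data arrays are placeholders), with the threshold values quoted above.
\begin{verbatim}
# Illustrative sketch only: extraction of H_irr from rho(H) and tau(H)
# using the criteria quoted in the text (0.5 nOhm cm; 0.3% of the signal).
import numpy as np

def h_irr_from_resistivity(H, rho, criterion=0.5e-9):
    # H (T, increasing sweep), rho (Ohm cm): onset of nonzero resistivity
    above = np.where(rho > criterion)[0]
    return H[above[0]] if above.size else np.nan

def h_irr_from_torque(H_up, tau_up, H_down, tau_down, level=0.003):
    # closing field of the hysteresis loop between up and down sweeps
    tau_down_on_up = np.interp(H_up, H_down[::-1], tau_down[::-1])
    hysteresis = np.abs(tau_up - tau_down_on_up)
    threshold = level * (tau_up.max() - tau_up.min())
    still_open = np.where(hysteresis > threshold)[0]
    return H_up[still_open[-1]] if still_open.size else np.nan
\end{verbatim}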
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.35\linewidth]{FigS_Hirr.pdf}
\label{fig:Hirr}
\end{center}
\end{figure}
\noindent
Figure S3. (a) Magnetic-field dependence of the resistivity $\rho$. (b) Magnetic-field dependence of the magnetic torque $\tau$.
\section{Heat capacity}
The heat capacity of the tiny single crystal of FeSe used for the STM and torque measurements was measured by the long-relaxation method \cite{Wang01,Taylor07}.
With a tiny amount of grease, the sample was mounted onto the bare chip Cernox sensor, which is used as a thermometer and a heater. The sensor is suspended from the cold stage by gold-coated glass fibers such that it is weakly linked to the cold stage. The heat capacity of the crystal is obtained by subtracting the addenda from the total heat capacity measured with the sample.
\section{SI-STM}
SI-STM experiments were performed with an ultrahigh vacuum dilution-fridge-based STM equipped with a 17.5\,T superconducting magnet~\cite{MachidaRSI}. We used a tungsten tip prepared by electrochemical etching. The tip was cleaned by field evaporation using a field-ion microscope, followed by controlled indentation into a clean Au(100) surface. The clean sample surface was obtained by vacuum cleaving at liquid nitrogen temperature. All data were taken in the constant-current mode with the feedback conditions of $I = 100$\,pA and $V = 20$\,mV. $dI/dV$ spectra were taken by a standard lock-in technique with a bias modulation of 0.21\,mV$_{\rm rms}$. Whenever we changed the magnetic field, the sample was heated above $T_c$ to ensure a uniform vortex distribution in the sample.
\begin{table}
\caption{Measurement conditions for the spectra shown in Fig. S2(c). $V_{\rm mod}$ denotes bias modulation amplitude.
}
\begin{tabular}{cccccc}
\hline
Spectrum & FOV size (nm$^2$) & $I$ (pA) & $V$ (mV) & $V_{\rm mod}$ (mV$_{\rm rms}$)& $T$ (K) \\
\hline
\#1 & $100 \times 100$ & 100 & 20 & 0.21 & 0.09 \\
\#2 & $160 \times 160$ & 100 & 20 & 0.21 & 1.5 \\
\#3 & $50 \times 50$ & 100 & 10 & 0.11 & 1.5 \\
\#4 & $100 \times 100$ & 100 & 20 & 0.21 & 1.5 \\
\#5 & $160 \times 160$ & 100 & 20 & 0.21 & 1.5 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{FigS3.pdf}
\end{center}
\end{figure}
\vspace{-5pt}
\noindent
{Figure S4. Fourier-transformed spectroscopic images at 90\,mK.}
Complete dataset of Fourier-transformed spectroscopic images $dI(E,{\bm r})/dV/(I(E,{\bm r})/V)$ across $H^*$. Normal-state quasiparticle interference signals appear along ${\bm q}_b$, whereas density-of-states modulations associated with vortices are observed along ${\bm q}_a$. Note that we adopt the coordinate system $|{\bm a}_{\rm Fe}|<|{\bm b}_{\rm Fe}|$.
\newpage
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{FigS4.pdf}
\end{center}
\end{figure}
\noindent
{Figure S5. Superconducting signals in ${\bm q}$ space.}
Superconducting signals obtained by subtracting $dI(E,{\bm r})/dV/(I(E,{\bm r})/V)$ at 16.5\,T $>H_{c2}^c$ from the ones taken under $H<H_{c2}^c$. The signals are confined below the superconducting-gap energy and disappear above $H^*$.
\end{document}
|
1,108,101,564,250 | arxiv | \section{Introduction}
\label{intro}
Despite extensive studies during the last decades,
the physics of the light scalar mesons $a_0(980)$
($I^G(J^{PC}) = 1^-(0^{++})$), $f_0(980)$ and $f_0(600) \equiv
\sigma$ ($I^G(J^{PC}) = 0^+(0^{++})$) is still far from being completely
understood. In particular, there are doubts whether a simple quark
model can explain their properties;
see, e.g., the review in~\cite{PDG_2008}.
The dominant decay channels of the scalar mesons are known to be
$\pi^+ \pi^-$, $\pi^0 \pi^0$ for the $f_0 (980)$ and $\sigma$ mesons,
and $\pi^0 \eta$ for the $a_0(980)$ meson.
Much experimental attention has already been paid to the radiative
decays of the $\phi$ meson: $\phi(1020) \to \gamma a_0 \to \gamma \pi\eta$
~\cite{Aloisio:2002bsa,Ambrosino:2009py}
and $\phi(1020) \to \gamma f_0 \;(or \; \gamma \sigma)\to \gamma \pi\pi$
~\cite{KLOEres,KLOEres:07}
(see also the KLOE summary in~\cite{KLOE:2009:scalarsummary} and
results from Novosibirsk~\cite{CMD2res,SNDres,Achasov:2000ym}).
Such measurements are a good source of
information about the scalar meson properties~\cite{Achasov_Ivanchenko}.
Various models
have been proposed to describe these
decays, see, e.g.,~\cite{Achasov_Ivanchenko,Close:1992ay,Ivashyn:2007yy,Oller:2002na,Bramon:2002iw},
to mention a few. The calculated decay widths turn out to be very
sensitive to the model ingredients; however, the experimental data are
still insufficient to unambiguously discriminate between the
models.
In the case of the neutral final state (FS), i.e.,
$\pi^0\pi^0\gamma$ and $\pi^0\eta \gamma$, the cross section is
determined solely by the final-state radiation (FSR) mechanism, since
there is no initial-state radiation (ISR) contribution resulting
in the same final state. Despite the lower value of the cross
section, compared to the charged pion case ($e^+e^-\to
\pi^+\pi^-\gamma$), processes with the neutral-meson FS are an
invaluable source of information on complicated hadron dynamics.
In this paper we describe the differential cross section
of the $e^+ e^-$ annihilation
to a pair of neutral pseudoscalar mesons and one photon in the FS,
\begin{equation}
e^+ (p_+) \; e^- (p_-) \to \gamma^\ast \to P_1 (p_1) \; P_2 (p_2) \; \gamma (k).
\label{eq:reaction_P1P2}
\end{equation}
The pseudoscalar mesons ($J^{PC} = 0^{-+}$) are denoted by $P_1 P_2
\equiv \pi^0 \pi^0$ and $\pi^0 \eta$. In Section~\ref{fsr_model} we
present a formalism for the differential cross section, which is the main
task of this paper. We provide formulae that are more
general than those of
Refs.~\cite{Dubinsky:2004xv,Isidori:2006we,Achasov:1999wr}:
the non-integrated expressions are
given as well as those integrated over the angles. This provides a
convenient basis for implementing the results in Monte Carlo generators,
e.g., in FASTERD~\cite{Shekhovtsova:2009yn} (based on the general
structure given in Ref.~\cite{Dubinsky:2004xv}) or
PHOKHARA~\cite{Grzelinska:2008eb}.
Our framework is consistent with symmetries of the strong
and electromagnetic interactions.
It incorporates a model-dependent description of the FSR
only through the explicit form of the Lorentz-invariant functions
$f_{1,2,3}$ and has a model-independent tensor decomposition.
In Sections~\ref{section_scal} and~\ref{section_double} we calculate the
FS hadronic tensor. It is the second goal of the paper to provide
such a description in terms of functions $f_{1,2,3}$. Our model
relies on the Lagrangian of Resonance Chiral Theory
($\mathrm{R\chi T}$~)~\cite{EckerNP321}. The $\mathrm{R\chi T}$~ is a consistent extension of Chiral
Perturbation Theory to the region of energies near 1 GeV, which
introduces the explicit resonance fields and exploits the idea of
resonance saturation. One of the advantages of the $\mathrm{R\chi T}$~ Lagrangian at
leading order (LO), which makes it
convenient for the present study, is that, having a good predictive
power, it contains very few free parameters compared with other
phenomenological models. In order to get good agreement with data,
we relax the rigor of $\mathrm{R\chi T}$~ and include some $SU(3)$ symmetry breaking
effects (e.g., use realistic masses of vector mesons) and mixing
phenomena (e.g., a G-parity-violating $\phi\omega\pi^0$ transition).
The loop contributions follow from the model Lagrangian. For
example, the kaon loop in the $\phi f_0 \gamma$ transition, which is often
considered as a purely phenomenological ingredient, in the present model is a
direct consequence of the $\mathrm{R\chi T}$~ Lagrangian. In order to simplify the
formulae, some numerically irrelevant loop contributions are omitted. In
addition, the resonance exchanges in the loops are not considered to
avoid problems with renormalizability.
We consider in detail the following intermediate states with scalar
and vector resonances, which lead to the same FS $P_1 P_2 \gamma$:
\begin{eqnarray}
&& \text{ scalar decay, (Section~\ref{section_scal})}\nonumber \\
e^+e^-&\to& \gamma^\ast \to
S\gamma\to P_1P_2\gamma
\label{fsr_proc_scal}
\\
e^+e^-&\to& \gamma^\ast \to V \to S\gamma\to P_1P_2\gamma
\nonumber
\\
&& \text{ vector contribution, (Section~\ref{section_double})} \nonumber
\\
\label{fsr_proc_vec}
e^+e^-&\to& \gamma^\ast \to V P_{1,2}\to
P_1P_2\gamma
\\
\nonumber e^+e^-&\to& \gamma^\ast \to V_a \to V_b P_{1,2}\to P_1P_2\gamma
\end{eqnarray}
where $S$ ($J^{PC} = 0^{++}$) is an intermediate scalar meson
($S=f_0$, $\sigma$ for the $\pi^0 \pi^0$ FS and $S=a_0$ for
$\pi^0\eta$).
Only the lowest nonet of vector mesons
($V, \ V_a, \ V_b =\rho$, $\omega$ and $\phi$) is taken into account.
We are interested in the center-of-mass energy $\sqrt{s}$ range
from the threshold up to $M_\phi$.
This framework may also be used in the dedicated
case of $\sqrt{s}= M_\phi$, giving, e.g., a description of the $\phi$
radiative decays.
For the quantitative illustration of our approach,
in Section~\ref{section_numer} we show the
numerical results for the values of $\sqrt{s} = 1$~GeV and
$\sqrt{s} = M_\phi$.
The meson-pair invariant mass distributions are of interest,
and for $\sqrt{s} = M_\phi$ they are compared with available
results from KLOE.
We demonstrate the interplay of the
contributions~(\ref{fsr_proc_scal})
and~(\ref{fsr_proc_vec}).
Conclusions follow in Section~\ref{section_conlus}.
\section{General structure of the FSR cross section}
\label{fsr_model}
For a generic reaction $e^+ e^- \to \gamma P_1 P_2$ we define 4-momenta as
shown in Fig.~\ref{fig:e+e-generic-scheme}:
\begin{eqnarray}
p&=& p_1 + p_2 , \quad \quad l = p_1 - p_2, \\
Q&=& p_+ + p_- = k + p_1 + p_2 . \nonumber
\end{eqnarray}
The masses of
pseudoscalars are $m(P_1)=m_1, \ m(P_2)=m_2$.
\begin{figure}
\begin{center}
\resizebox{0.29\textwidth}{!}{%
\includegraphics{fig-01.eps}
}
\end{center}
\caption{Generic scheme for electron-positron annihilation into two
particles with final state radiation
}
\label{fig:e+e-generic-scheme}
\end{figure}
The cross section of the FSR process can be written as
\begin{eqnarray}
\label{sect_fsr}
d\sigma_{F} &=& \frac{1}{2s(2\pi)^5}C_{12}
\nonumber\\&&\nonumber\!\!\!\!\!\!\!\!\!\!\!\!\times
\int
\delta^4(Q-p_1-p_2-k) \overline{|M_{FSR}|^2}
\frac{d^3p_1 \, d^3p_2\, d^3k}{8E_1E_2\,\omega} \\
& = & C_{12} N\int \overline{|M_{FSR}|^2} d\cos\theta \, d\phi \, dm_{1\gamma}^2
\, dp^2 ,
\\\nonumber N& = &
\frac{1}{(2\pi)^4}\;\frac{1}{64s^2} , \nonumber
\end{eqnarray}
where $s=Q^2$, $\theta$ and $\phi$ are the polar and azimuthal angles of the
photon, respectively, and $m_{1\gamma}^2=(k+p_1)^2$. The factor
$C_{12}=1/2$ for $\pi^0 \pi^0$ in the final
state and $C_{12}=1$ for $\pi^0 \eta$. The matrix element
$M_{FSR}$ is
\begin{equation}
M_{FSR}=\frac{e}{s}M^{\mu\nu} \; \bar u(-p_+)\gamma_\mu
u(p_-)\epsilon^\ast_{\nu} ,
\end{equation}
where $e = \sqrt{4\pi \alpha} \approx \sqrt{4\pi /137}\approx 0.303$ and
the FSR tensor $M^{\mu\nu}$ can be decomposed into three
gauge-invariant independent tensors:
\begin{eqnarray}
\label{eqn:fsr}
&&M^{\mu \nu }(Q,k,l)\equiv -ie^{2}(\tau_{1}^{\mu \nu }f_{1}
+\tau_{2}^{\mu\nu}f_{2}+\tau _{3}^{\mu \nu }f_{3}) , \\
&&\tau _{1}^{\mu\nu}=k^{\mu }Q^{\nu }-g^{\mu \nu }k\cdot Q,
\nonumber \\
&&\tau _{2}^{\mu\nu}
=k\cdot l(l^{\mu }Q^{\nu }-g^{\mu \nu }Q\cdot l)
+l^{\nu }(k^{\mu }Q\cdot l-l^{\mu }k \cdot Q), \; \nonumber \\
&&
\tau _{3}^{\mu \nu }
=Q^{2}(g^{\mu \nu }k\cdot l-k^{\mu }l^{\nu})+Q^{\mu }(l^{\nu }k\cdot Q-Q^{\nu }k\cdot l)
\nonumber
\end{eqnarray}
with the Lorentz-invariant functions
\begin{equation}
f_i \equiv f_i (Q^2, k \cdot Q, k \cdot l),
\end{equation}
$i=1,2,3$.
If $m_1 = m_2$, these tensors coincide with those of Ref.~\cite{Dubinsky:2004xv,Drechsel:1996ag}.
One may also find a similar approach in~\cite{Achasov:1999wr,EidelmanKuraev,ArbuzovKuraev}.
We emphasize that the decomposition~(\ref{eqn:fsr}) is
model independent; the model dependence is contained in an
explicit form of functions $f_i$ only.
Notice that the scalar products can be written in terms of
the invariant masses:
\begin{eqnarray}
k\cdot Q &=& (s-p^2)/2, \nonumber
\\
k\cdot l &=& m_{1\gamma}^2 - m_1^2 - k\cdot Q, \nonumber
\\
Q\cdot l &=& k\cdot l + s\delta/2
,
\end{eqnarray}
where $\delta\equiv {2(m_1^2-m_2^2)}/{s}$.
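These relations map directly onto code; the short Python sketch below is illustrative only (the function name is ours) and returns the scalar products and $l^2$ from the invariants $s$, $p^2$ and $m_{1\gamma}^2$.
\begin{verbatim}
# Sketch: scalar products entering f_i from s = Q^2, p2 = p^2 and
# m1g2 = m_{1 gamma}^2, for meson masses m1, m2 (natural units).
def scalar_products(s, p2, m1g2, m1, m2):
    delta = 2.0 * (m1**2 - m2**2) / s
    kQ = 0.5 * (s - p2)              # k.Q
    kl = m1g2 - m1**2 - kQ           # k.l
    Ql = kl + 0.5 * s * delta        # Q.l
    l2 = 2.0 * (m1**2 + m2**2) - p2  # l^2
    return kQ, kl, Ql, l2, delta
\end{verbatim}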
For the matrix element squared and averaged over the
$e^+e^-$ polarizations we obtain
\begin{eqnarray}
\label{aik}
\overline{|M_{FSR}|^{2}} &=&\frac{e^{6}}{s^{2}}\biggl[%
a_{11}|f_{1}|^{2}+2a_{12}\mathrm{Re}(f_{1}f_{2}^{\ast
})+a_{22}|f_{2}|^{2} \nonumber
\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
+\; 2\; a_{23}\;\mathrm{Re}(f_{2}f_{3}^{\ast
})+a_{33}|f_{3}|^{2}+2a_{13}\mathrm{Re}(f_{1}f_{3}^{\ast })\biggr], \label{fsr}
\end{eqnarray}
with the coefficients
\begin{equation}
a_{ik} \equiv (\frac{s}{2}g_{\mu\rho}-p_{+ \mu} p_{- \rho}-p_{+
\rho} p_{- \mu})\tau_i^{\mu\nu}\tau_k^{\rho\lambda} g_{\nu\lambda},
\end{equation}
equal to
\begin{eqnarray}
a_{11} &=&\frac{1}{4}s \left(t_{1}^{2}+t_{2}^{2} \right) , \nonumber\\
a_{22} &=&\frac{1}{8} \biggl[ sl^{4}(t_{1}+t_{2})^{2}+4l^{2}
\bigl(%
u_{1}{}^{2} \left( s^{2}+s(t_{1}+t_{2})+t_{2}^{2} \right)
\nonumber\\&&
+u_{2}{}^{2} \left( s^{2}+s(t_{1}+t_{2})+t_{1}^{2} \right)
\nonumber \\
&&+ 2u_{1}u_{2} \left( s^{2}+s(t_{1}+t_{2})-t_{1}t_{2} \right)
\bigr)
\nonumber\\&&
+8s(u_{1}^{2}+u_{2}^{2})(u_{1}+u_{2})^{2} \biggr] \nonumber \\
& - & \bigl( 4u_{1}^2+ 4u_{2}^2 +
l^2(2s+t_{1}+t_{2})\bigr) \frac{s^2(u_1+u_2)\delta}{4}
\nonumber\\&&
+
\bigl( l^2s+2u_{1}^2+ 2u_{2}^2 \bigr) \frac{s^3\delta^2}{8}
, \nonumber\\
a_{33}&=&-\frac{s^{2}%
}{2} \bigl( t_{1}t_{2}l^{2}+2(u_{1}+u_{2})(u_{2}t_{1}+u_{1}t_{2})
\nonumber\\&&
-\delta s (u_{2}t_{1}+u_{1}t_{2})\bigr)
,
\nonumber
\end{eqnarray}
\begin{eqnarray}
a_{12} &=&\frac{1}{8}\biggl[
sl^{2}(t_{1}+t_{2})^{2}+4u_{1}^{2}(s^{2}+st_{2}+t_{2}^{2})
\nonumber\\&&
+4u_{2}^{2}(s^{2}+st_{1}+t_{1}^{2})
+4u_{1}u_{2}(2s^{2}+s(t_{1}+t_{2})-2t_{1}t_{2})
\nonumber\\&&
+2s^2 \left( t_1u_2+t_2u_1+2s(u_1+u_2) \right) \delta+s^4\delta^2\biggr], \nonumber\\
a_{13} &=&\frac{s}{4} \biggl[%
(u_{1}+u_{2})(st_{1}+st_{2}+t_{1}t_{2})-u_{1}t_{2}^{2}-u_{2}t_{1}^{2}
\nonumber\\&&
-\frac{\delta}{2}(t_1+t_2)s^2 \biggr],
\nonumber \\
a_{23} &=&\frac{s}{4}\biggl[%
l^{2}(u_{1}t_{2}-u_{2}t_{1})(t_{1}-t_{2})-2s(u_{1}+u_{2})^{3}
\nonumber\\&&
+2(u_{1}+u_{2})(u_{1}-u_{2})(t_{2}u_{1}-u_{2}t_{1})
\nonumber
\\
&&+\delta s \left( u_1u_2(4s+t_{1}+t_{2})+u_{1}^2(2s-t_2)+u_{2}^2(2s-t_2)
\right)
\nonumber\\&&
-\frac{\delta^2}{2}s^3(u_{1}+u_{2})%
\biggr] ,
\label{aik_coeff}
\end{eqnarray}
where
\begin{eqnarray}
\label{eq:scalars}
t_{1}&\equiv &
(p_{-}-k)^{2}-m^2_e=-2p_{-}\cdot k,
\nonumber\\
t_{2}&\equiv&
(p_{+}-k)^{2}-m^2_e=-2p_{+}\cdot k, \nonumber \\
u_{1} &\equiv &l\cdot p_{-}, \;\; u_{2}\equiv l\cdot p_{+}.
\end{eqnarray}
For numerical calculations
the relation $l^2 = 2(m_1^2 + m_2^2) - p^2$ may
be useful.
The Eqs.~(\ref{sect_fsr}) and~(\ref{aik}),
with the explicit expressions~(\ref{aik_coeff})
and~(\ref{eq:scalars}),
fix the whole model-independent part of the
differential cross section.
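As a practical cross-check of the lengthy expressions~(\ref{aik_coeff}), the contraction defining $a_{ik}$ can also be evaluated numerically from the tensors of Eq.~(\ref{eqn:fsr}). The following NumPy sketch is illustrative only (metric $g=\mathrm{diag}(+,-,-,-)$; function names are ours).
\begin{verbatim}
# Sketch: numerical evaluation of a_ik from tau_i^{mu nu} and the lepton
# tensor  s/2 g_{mu rho} - p+_{mu} p-_{rho} - p+_{rho} p-_{mu}.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])      # metric tensor
mdot = lambda a, b: a @ g @ b             # Minkowski scalar product

def tau_tensors(Q, k, l):
    kQ, kl, Ql, Q2 = mdot(k, Q), mdot(k, l), mdot(Q, l), mdot(Q, Q)
    t1 = np.outer(k, Q) - g * kQ
    t2 = kl * (np.outer(l, Q) - g * Ql) + np.outer(Ql * k - kQ * l, l)
    t3 = Q2 * (g * kl - np.outer(k, l)) + np.outer(Q, kQ * l - kl * Q)
    return t1, t2, t3

def a_coefficients(Q, k, l, p_plus, p_minus):
    s = mdot(Q, Q)
    pp, pm = g @ p_plus, g @ p_minus      # covariant lepton momenta
    L = 0.5 * s * g - np.outer(pp, pm) - np.outer(pm, pp)
    taus = tau_tensors(Q, k, l)
    return np.array([[np.einsum('mr,mn,nl,rl->', L, ti, g, tk)
                      for tk in taus] for ti in taus])
\end{verbatim}
Comparing the output of such a routine with Eqs.~(\ref{aik_coeff}) for random momenta obeying $Q=p_++p_-=k+p_1+p_2$ provides a useful consistency check before implementing the cross section in a Monte Carlo generator.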
It is worth illustrating a relation of these formulae to
the partial differential cross section.
Taking into account the corresponding factors and integrating the
coefficients $a_{ik}$ over the angular variables of the final-meson
phase space we have
\begin{eqnarray}
\label{eq:dsigma_dm2dp2}
\nonumber
\frac{d\sigma}{dm^2_{1\gamma}dp^2} &=&
\frac{\alpha^3 C_{12}}{32 s}
\left(A_{11}|f_{1}|^{2}+2A_{12}\mathrm{Re}(f_{1}f_{2}^{\ast
})+A_{22}|f_{2}|^{2} \right.
\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
\left. +2A_{23}\mathrm{Re}(f_{2}f_{3}^{\ast
})+A_{33}|f_{3}|^{2}+2A_{13}\mathrm{Re}(f_{1}f_{3}^{\ast }) \right) ,
\end{eqnarray}
where
\begin{eqnarray}
A_{11}&=&\frac{4 x^2}{3} ,
\nonumber\\
A_{12} &=&
\frac{2s}{3} \bigl[ (x_1-x_2)^2+x^2(\sigma-1+x)-2\delta
(x_1-x_2)+\delta^2 \bigr] ,
\nonumber\\
A_{13}&=& -\frac{4s}{3}x(x_1-x_2-\delta)
\nonumber\\
A_{23} &=& -\frac{2s^2}{3}(x_1-x_2)(\delta-x_1+x_2)^2 ,
\nonumber\\
A_{22}&=&\frac{s^2}{3} \bigl[ (x_1-x_2)^4+2(x_1-x_2)^2(1-x)(\sigma-1+x)
\nonumber\\&&
+2x^2(\sigma-1+x)^2
\nonumber \\
&&-2\delta(x_1-x_2) \left( (x_1-x_2)^2+(\sigma-1+x)(x_1+x_2) \right)
\nonumber\\&&
+\delta^2 \left( (x_1-x_2)^2+2(\sigma-1+x) \right) \bigr],
\nonumber \\
A_{33}&=& \frac{2s^2}{3} \bigl[
(x_1-x_2)^2(1+x)-x^2(\sigma-1+x)
\nonumber\\&&
+\delta(\delta-(2+x)(x_1-x_2)) \bigr]
\label{eq_aik_integr}
,
\end{eqnarray}
and
\begin{eqnarray}
x &=& \frac{s-p^2}{s}, \text{\hspace{0.2cm}} x_1=\frac{2E_1}{\sqrt{s}}=
\frac{p^2+m_{1\gamma}^2-m_2^2}{s} ,
\nonumber\\
x_2 &=& \frac{2E_2}{\sqrt{s}}=
\frac{s+m_2^2-m_{1\gamma}^2}{s} , \text{\hspace{0.2cm}}
\sigma=\frac{2(m_1^2+m_2^2)}{s} .
\end{eqnarray}
For the case $m_1=m_2$ Eq.~(\ref{aik_coeff}) reduces to Eq.~(17)
of Ref.~\cite{Dubinsky:2004xv}.
Also the results (\ref{eq:dsigma_dm2dp2}),~(\ref{eq_aik_integr}) coincide with Eqs.~(2.7),~(2.8)
of~\cite{Isidori:2006we}.
However, for an MC generator, the expressions~(\ref{sect_fsr}) and~(\ref{aik})
with coefficients $a_{ik}$ are more convenient than~(\ref{eq:dsigma_dm2dp2}).
Integrating Eq.~(\ref{eq:dsigma_dm2dp2}) over
$m_{1\gamma}^2$ one obtains the distribution of the invariant mass
$\sqrt{p^2}$ of two pseudoscalar mesons:
\begin{eqnarray}
\label{eq:dsigma_dp2}
\frac{d\sigma}{d \sqrt{p^2}} &=&
2\sqrt{p^2} \int_{(m_{1\gamma}^2)_{min}}^{(m_{1\gamma}^2)_{max}}
d m_{1\gamma}^2 \left( \frac{d \sigma}{d m_{1\gamma}^2\; d p^2} \right)
.
\end{eqnarray}
The bounds of integration
over $m_{1\gamma}^2$ at the fixed value of $p^2$ are determined by
\begin{eqnarray}
\label{eq:m1gamma:limits}
(m_{1\gamma}^2)_{max/min} &=&
\frac{s(p^2\sigma+s\delta)}{4p^2}
\nonumber\\
&&+\frac{s-p^2}{2}
\Biggl(1\pm\sqrt{1-\frac{s\sigma}{p^2}+\frac{s^2\delta^2}{4p^4}}\Biggr)
.
\end{eqnarray}
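In a numerical implementation, Eqs.~(\ref{eq:dsigma_dp2}) and~(\ref{eq:m1gamma:limits}) may be coded, for instance, as in the following Python sketch (illustrative only; \texttt{dsig\_dm2dp2} stands for a user-supplied implementation of Eq.~(\ref{eq:dsigma_dm2dp2})).
\begin{verbatim}
# Sketch: limits of Eq. (m1gamma-limits) and quadrature of Eq. (dsigma_dp2).
# Valid above threshold, p2 >= (m1+m2)^2.
import numpy as np
from scipy.integrate import quad

def m1g2_limits(s, p2, m1, m2):
    delta = 2.0 * (m1**2 - m2**2) / s
    sigma = 2.0 * (m1**2 + m2**2) / s
    root = np.sqrt(1.0 - s * sigma / p2 + (s * delta)**2 / (4.0 * p2**2))
    common = s * (p2 * sigma + s * delta) / (4.0 * p2)
    return (common + 0.5 * (s - p2) * (1.0 - root),
            common + 0.5 * (s - p2) * (1.0 + root))

def dsigma_dsqrtp2(s, p2, m1, m2, dsig_dm2dp2):
    lo, hi = m1g2_limits(s, p2, m1, m2)
    integral, _ = quad(lambda m1g2: dsig_dm2dp2(m1g2, p2), lo, hi)
    return 2.0 * np.sqrt(p2) * integral
\end{verbatim}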
At the $\phi$-meson peak ($s=M_\phi^2$) one can present the
results in terms of the branching ratio for the $\phi \to P_1 P_2
\gamma$ decay, which is related to the cross section as follows:
\begin{eqnarray}
\label{eq:phi-br}
\frac{d B(\phi\to P_1 P_2\gamma)}{d\sqrt{p^2}}&=&\frac{M_\phi^2}{12\pi B(\phi\to
e^+e^-)}
\nonumber\\&&\times
\frac{d \sigma (e^+e^- \to P_1 P_2 \gamma)}{d\sqrt{p^2}} , \end{eqnarray}
where the $\phi \to e^+ e^-$ branching ratio $B(\phi\to e^+e^-)$
is used as an input. In the context of this paper, the branching
ratio $B(\phi\to P_1 P_2 \gamma)$ is useful for comparison of model predictions with
available data.
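The conversion of Eq.~(\ref{eq:phi-br}) is straightforward to implement; the following Python sketch is illustrative only, and the numerical inputs quoted in it are indicative PDG-level values rather than the ones used in our fits.
\begin{verbatim}
# Sketch of Eq. (phi-br); natural units, cross section in GeV^-2 per GeV.
import numpy as np

M_PHI = 1.019461   # GeV, phi-meson mass (indicative value)
B_EE  = 2.97e-4    # B(phi -> e+ e-), indicative value

def dB_dsqrtp2(dsigma_dsqrtp2_val):
    return M_PHI**2 / (12.0 * np.pi * B_EE) * dsigma_dsqrtp2_val
\end{verbatim}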
\section{Scalar contribution}\label{section_scal}
\begin{figure}
\begin{center}
\resizebox{0.29\textwidth}{!}{%
\includegraphics{fig-02.eps}
}
\end{center}
\caption{Scheme of $e^+ e^- \to S\gamma \to P_1 P_2 \gamma$ subprocess
}
\label{fig:e+e-scalar-scheme}
\end{figure}
In this Section
we consider in detail the transition amplitudes
\begin{eqnarray} \label{fsr_proc_pi0pi0_vec}
&&\gamma^\ast\to f_0\gamma \to \pi^0\pi^0 \gamma , \nonumber
\\
&&\gamma^\ast\to \sigma \gamma \to \pi^0\pi^0 \gamma , \nonumber
\\
\label{fsr_proc_pi0eta_vec}
&&\gamma^\ast\to a_0\gamma \to \pi^0\eta \gamma
\end{eqnarray}
for the $\pi^0\pi^0 \gamma$ and $\pi^0\eta \gamma$ final states,
respectively.
They contribute to $e^+ e^- \to S\gamma \to P_1 P_2 \gamma$ as
illustrated in Fig.~\ref{fig:e+e-scalar-scheme}.
To describe the processes~(\ref{fsr_proc_pi0pi0_vec}) we use
the Lagrangian of $\mathrm{R\chi T}$~~\cite{EckerNP321}
at the linear-in-resonance level, following~\cite{Ivashyn:2007yy,Ivashyn:2009te}.
The basic features of the Lagrangian framework of the~$\mathrm{R\chi T}$~
are sketched in~\ref{App:A}.
We emphasize that both light isoscalar scalar resonances, $f_0$ and $\sigma$,
are included in the formalism in a natural way.
Throughout this section we work in the tensor representation for spin-$1$
particles~\cite{EckerNP321,EckerPLB223}.
In the present work we take into account
the pseudoscalar decay constants splitting ($f_\pi \neq f_K$)
which was discussed in the same context
in Ref.~\cite{Ivashyn:2009te}.
The interaction of pseudoscalars with the photon field $B^\mu$
in~$\mathrm{R\chi T}$~ is identical to that of scalar QED.
We shall now discuss the interaction terms of the
Lagrangian~(\ref{lagr:vec:master})
relevant to the processes~(\ref{fsr_proc_pi0pi0_vec})
(cf.~\cite{Ivashyn:2007yy}).
For the vector mesons in the even-intrinsic-parity sector one has
\begin{eqnarray}
\mathcal{L}_{\gamma V}
&=&
e F_V F^{\mu \nu} \bigl(
\frac{1}{2}\rho^0_{\mu\nu} + \frac{1}{6}\omega_{\mu\nu} -
\frac{1}{3\sqrt{2}}\phi_{\mu\nu} \bigr),
\label{eq:F3}
\end{eqnarray}
\begin{eqnarray}
\label{eq:F4} \mathcal{L}_{VPP} & = & {i} G_V \big[
\frac{1}{f_\pi^2}\;(2\ \rho^0_{\mu\nu}
\partial^\mu\pi^+
\partial^\nu \pi^- )
\nonumber
\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
+ \frac{1}{f_K^2}
( \rho^0_{\mu\nu} + \omega_{\mu\nu} - \sqrt{2}\phi_{\mu\nu} )
(\partial^\mu K^+\partial^\nu K^- )
\nonumber
\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
+ \frac{1}{f_K^2}
(- \rho^0_{\mu\nu} + \omega_{\mu\nu} - \sqrt{2}\phi_{\mu\nu} )
( \partial^\mu K^0\partial^\nu \bar{K}^0 )
\big],
\end{eqnarray}
\begin{eqnarray}
\label{eq:F5}
\mathcal{L}_{\gamma V PP} &=& -\frac{e F_V}{f_\pi^2}
\partial^\mu B^\nu \rho_{\mu \nu}^0 \ \pi^+ \pi^-
\nonumber
\\
&& -\frac{e F_V}{2 f_K^2}
\partial^\mu B^\nu \left(\rho_{\mu \nu}^0 + \omega_{\mu \nu} - \sqrt{2} \phi_{\mu
\nu}\right)\ K^+ K^- \nonumber
\\
&&- \frac{2e G_V}{f_\pi^2} B^\nu \rho_{\mu \nu}^0 \left(
\pi^+\partial^\mu \pi^-
+ \pi^- \partial^\mu\pi^+\right)
\nonumber
\\
&&- \frac{e G_V}{f_K^2} B^\nu \left(\rho_{\mu \nu}^0 +
\omega_{\mu \nu} - \sqrt{2} \phi_{\mu \nu}\right)
\nonumber\\&&\times
\left( K^+ \partial^\mu K^- + K^-
\partial^\mu K^+ \right) ,
\end{eqnarray}
where $F^{\alpha\beta}$ stands for the electromagnetic field
tensor and $V^{\mu \nu}$ for the vector field in the tensor
representation, $F_V$ and $G_V$ are the model parameters
(see~\ref{App_B} for numerical values). Vertex functions for
Eqs.~(\ref{eq:F3})--(\ref{eq:F5}) are shown in
Table~\ref{Table:v3}.
\begin{table*}
\caption{The vertices from Resonance Chiral Lagrangian
terms~(\ref{eq:F3})-(\ref{eq:F5}).
The dashed line stands for pseudoscalar meson (momentum~$l$),
double solid --- for vector meson,
wavy line --- for photon (momentum~$q$).}
\label{Table:v3}
\begin{center}
\begin{tabular}
{|c|p{23pt}|p{23pt}|p{23pt}|p{35pt}|p{35pt}|p{35pt}|p{45pt}|p{45pt}|c|}
\hline
{Diagram} &
\multicolumn{3}{c|}{\resizebox{0.17\textwidth}{!}{\includegraphics{fig-tab-01-1.eps}}} &
\multicolumn{3}{c|}{\resizebox{0.15\textwidth}{!}{\includegraphics{fig-tab-01-2.eps}}} &
\multicolumn{3}{c|}{\resizebox{0.15\textwidth}{!}{\includegraphics{fig-tab-01-3.eps}}} \\
{Vertex function} &
\multicolumn{3}{c|}{$e F_V \left[ g_{\nu\lambda}q_\mu - g_{\nu\mu} q_\lambda \right]$} &
\multicolumn{3}{c|}{$\frac{G_V}{2 f_P^2} \left[ l^-_\mu l^+_\lambda - l^+_\mu l^-_\lambda\right]$} &
\multicolumn{3}{c|}{$\frac{e G_V}{2 f_P^2} \left[ g_{\nu\lambda}(l^- + l^+)_\mu - g_{\nu\mu} (l^- + l^+)_\lambda \right]$} \\
{ } &
\multicolumn{3}{c|}{} &
\multicolumn{3}{c|}{} &
\multicolumn{3}{c|}{$+ \frac{e F_V}{4 f_P^2} \left[ g_{\nu\lambda}q_\mu - g_{\nu\mu} q_\lambda \right]$} \\
\hline
&
\centering $\rho$ & \centering $\omega$ & \centering $\phi$ &
\centering $\rho$ & \centering $\omega$ & \centering $\phi$ &
\centering $\rho$ & \centering $\omega$ & $\phi$ \\
\hline
$\pi^\pm$ ($f_P = f_\pi$)
&
\multicolumn{3}{c|}{} &
\centering$2$ & \centering$0$ & \centering$0$ &
\centering$2$ & \centering$0$ & $0$
\\
$K^\pm$ ($f_P = f_K$)
&
\multicolumn{3}{c|}{} &
\centering$1$ & \centering$1$ & \centering$-\sqrt{2}$ &
\centering$1$ & \centering$1$ & $-\sqrt{2}$\\
$K^0$ ($f_P = f_K$)
&
\multicolumn{3}{c|}{} &
\centering$-1$ & \centering$1$ & \centering$-\sqrt{2}$ &
\centering$0$ & \centering$0$ & $0$ \\
\hline
&
\centering$\frac{1}{2}$ & \centering$\frac{1}{6}$ & \centering$\frac{-1}{3 \sqrt{2}}$ &
\multicolumn{6}{c|}{}\\
\hline
\end{tabular}
\end{center}
\end{table*}
The Lagrangian terms for scalar and pseudoscalar meson
interactions, which follow from~(\ref{lagr:master}) are
\begin{eqnarray}
\label{eq:Lb}
\nonumber
\mathcal{L}_{scalar}
&=&
\sum_{S} S \Bigl(
\frac{1}{f_\pi^2}\frac{g_{S\pi\pi}}{2}\stackrel{\rightarrow}{\pi}^2 +
\frac{1}{f_\pi^2}\frac{g_{S\eta\eta}}{2}\eta^2
+ \frac{1}{f_\pi^2}g_{S\pi\eta} \pi^0 \eta
\\&&
\nonumber
+ \frac{1}{f_K^2}g_{SKK} \left(K^+K^- +(-1)^{I_S} K^0\bar{K}^0 \right)
\\&&
+\frac{1}{f_\pi^2}(\hat{g}_{S\pi\pi}/2)(\partial_\mu\stackrel{\rightarrow}{\pi})^2
\nonumber\\&&
+ \frac{1}{f_\pi^2}(\hat{g}_{S\eta\eta}/2)(\partial_\mu\eta)^2
+ \frac{1}{f_\pi^2}\hat{g}_{S \pi^0 \eta}\partial_\mu\pi^0 \partial^\mu\eta
\nonumber\\&&
+ \frac{1}{f_K^2}\hat{g}_{SKK} \left( \partial_\mu K^+\partial^\mu K^-
+(-1)^{I_S} \partial_\mu K^0 \partial^\mu \bar{K}^0 \right)
\nonumber\\&&
+ \frac{1}{f_\pi^2}g_{S\gamma\pi\pi} eB_\mu \pi^+
\stackrel{\leftrightarrow}{\partial_\mu}\pi^-
\nonumber\\&&
+ \frac{1}{f_K^2}g_{S\gamma KK}eB_\mu K^+ \stackrel{\leftrightarrow}{\partial_\mu}
K^-
\nonumber\\&&
+ \frac{1}{f_\pi^2}g_{S\gamma\gamma\pi\pi}e^2B_\mu B^\mu \pi^+ \pi^-
\nonumber\\&&
+ \frac{1}{f_K^2}g_{S\gamma\gamma KK}e^2B_\mu B^\mu K^+ K^- \Bigr).
\end{eqnarray}
(interactions with $\eta^\prime$ are omitted here for brevity).
Here $S$ stands for any scalar field, $a_0$, $f_0$ or $\sigma$,
and $P$ for a pseudoscalar field, $\stackrel{\rightarrow}{\pi}= \pi^0, \pi^\pm$ or
$K^\pm$, $K^0$, $\bar{K}^0$ and $\eta$.
We have introduced the effective couplings $g_{S \pi \pi }$,
$g_{S \eta\eta}$, etc. listed in
Table~\ref{table:generalscalarcouplings}, ${I_S}=0$ for $f_0$ and
$\sigma$ and ${I_S}=1$ for $a_0$.
Couplings are expressed in terms of the model parameters $c_d$, $c_m$
and $\theta$,
see also the expression~(\ref{eq:eta-coefficients}) for the $C_{q,s}$ coefficients.
The Lagrangian~(\ref{eq:Lb}) leads to the vertices shown in
Fig.~\ref{fig:v1}.
\begin{figure}
\begin{center}
\resizebox{0.49\textwidth}{!}{%
\includegraphics{fig-03.eps}
}
\end{center}
\caption{The vertices corresponding to
the Lagrangian~(\ref{eq:Lb}).
The dotted line stands for a scalar meson $S$,
the dashed one --- for a pseudoscalar~$P$.
Couplings are shown in
Table~\ref{table:generalscalarcouplings}.
} \label{fig:v1}
\end{figure}
Given this set of interaction terms, the leading
contribution to the $\gamma^\ast \gamma S$ vertex comes from
the one-loop diagrams~\cite{Ivashyn:2007yy}.
The mechanism of the $\phi$ meson decay via the
kaon loop was first considered in a different formalism
in~\cite{Achasov_Ivanchenko} and
is consistent with the data~\cite{KLOE:2009:scalarsummary}.
We would like to stress
that in the current approach the loop mechanism is a predicted subprocess
following directly from the Lagrangian, rather than an assumption.
In particular, for the case of the $\pi^0 \pi^0 \gamma$
final state both the kaon and pion loops contribute.
The latter are very important in the region of the $\rho$ resonance
(recall that the $\gamma^\ast$ invariant mass
$\sqrt{s}$ is not constrained to the $\phi$ meson mass).
When working with the three-point vertex functions $\gamma^\ast \gamma
S$, we factorize the kaon-loop part in the $a_0$ case
and separately the pion-loop and kaon-loop part for $f_0$ and $\sigma$,
as illustrated in Fig.~\ref{fig:g-g-S-scheme}
(see~\ref{App_C} for details).
The $\gamma^\ast (Q^\mu)\to \gamma(k^\nu) S (p)$
amplitude reads
\begin{eqnarray}
T^{\mu\nu} &=& -i e^2 (Q^\nu k^\mu - g^{\mu\nu}Q\cdot k)
F_{S\gamma^\ast\gamma}(p^2,\!Q^2).
\end{eqnarray}
The $\gamma^\ast (Q^2)\to \gamma S (p^2)$ transition
form factors (FF's) have the form
\begin{eqnarray}
F_{f_0\gamma^\ast\gamma}(p^2,\!Q^2) &=& G_{f_0\gamma^\ast\gamma}^{(\pi)}(p^2,\!Q^2)
+ G_{f_0\gamma^\ast\gamma}^{(K)}(p^2,\!Q^2),
\\
F_{\sigma\gamma^\ast\gamma}(p^2,\!Q^2) &=& G_{\sigma\gamma^\ast\gamma}^{(\pi)}(p^2,\!Q^2)
+ G_{\sigma\gamma^\ast\gamma}^{(K)}(p^2,\!Q^2),
\\
F_{a_0\gamma^\ast\gamma}(p^2,\!Q^2) &=& G_{a_0\gamma^\ast\gamma}^{(K)}(p^2,\!Q^2)
,
\end{eqnarray}
where the terms
\begin{eqnarray}
\nonumber
\!\!\!\!G_{S\gamma^\ast\gamma}^{(\pi)}(p^2,\!Q^2)
&\!=&\! \frac{G_{S\pi\pi}(p^2)}{2\pi^2\; m_\pi^2}
I\!\left(\frac{Q^2}{m_\pi^2},\frac{p^2}{m_\pi^2}\right)\!F_{em}^{\pi} (Q^2),
\\
\label{scalar-two-photon-ff-1}
\!\!\!\!G_{S\gamma^\ast\gamma}^{(K)}(p^2,\!Q^2)
&\!=& \!\frac{G_{S K K}(p^2)}{2\pi^2 \; m_K^2}
I\!\left(\frac{Q^2}{m_K^2}, \frac{p^2}{m_K^2}\right)\!F_{em}^{K} (Q^2),
\end{eqnarray}
for $S=f_0,\sigma$, and
\begin{eqnarray}
\label{scalar-two-photon-ff-3}
\!\!\!\!G_{a_0\gamma^\ast\gamma}^{(K)}(p^2,\!Q^2)
&\!=& \!\frac{G_{a_0 KK}(p^2)}{2\pi^2 \; m_K^2}
I\!\left(\frac{Q^2}{m_K^2}, \frac{p^2}{m_K^2}\right)\!F_{em}^{K} (Q^2)
\end{eqnarray}
follow from~(\ref{eq:F3})--(\ref{eq:Lb}),
and the pion and kaon electromagnetic form factors,
$F_{em}^{\pi} (Q^2)$ and $F_{em}^{K} (Q^2)$,
follow from~(\ref{eq:F3}) and~(\ref{eq:F4}).
The terms
\begin{eqnarray}
G_{S KK}(p^2) &\equiv & 1/f_K^2 \left( \hat{g}_{S KK} (m_K^2\!-\!p^2/2) + g_{S KK} \right), \nonumber\\
G_{S \pi \pi}(p^2) &\equiv & 1/f_\pi^2 \left( \hat{g}_{S \pi \pi} (m_\pi^2 - p^2/2) + g_{S\pi \pi} \right),
\label{eq:SPP-ffs:1}
\end{eqnarray}
for $S= f_0,\sigma$ and
\begin{eqnarray}
G_{a_0 KK}(p^2) &\equiv & 1/f_K^2 \left( \hat{g}_{a_0KK} (m_K^2 - p^2/2) + g_{a_0KK} \right), \nonumber\\
G_{a_0 \pi \eta}(p^2) &\equiv & 1/f_\pi^2 \left(
\hat{g}_{a\pi\eta} (m_\eta^2 + m_\pi^2 - p^2)/2 + g_{a\pi\eta} \right)
\label{eq:SPP-ffs:2}
\end{eqnarray}
have the meaning of momentum-dependent $SPP$ vertices.
The expression for $I(a,b)$ in~(\ref{scalar-two-photon-ff-1})--(\ref{scalar-two-photon-ff-3})
coincides with that of~\cite{Close:1992ay,Bramon:2002iw}
and for convenience is given in~\ref{App_C}.
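In a numerical implementation the momentum-dependent $SPP$ vertices of Eqs.~(\ref{eq:SPP-ffs:1}) and~(\ref{eq:SPP-ffs:2}) may be coded directly, e.g., as in the following illustrative Python sketch (the couplings $\hat{g}$, $g$ and the decay constants are the inputs of Table~\ref{table:generalscalarcouplings} and~\ref{App_B}; function names are ours).
\begin{verbatim}
# Sketch of the momentum-dependent SPP vertices, Eqs. (SPP-ffs).
def G_S_KK(p2, g_hat, g, f_K, m_K):
    return (g_hat * (m_K**2 - 0.5 * p2) + g) / f_K**2

def G_S_pipi(p2, g_hat, g, f_pi, m_pi):
    return (g_hat * (m_pi**2 - 0.5 * p2) + g) / f_pi**2

def G_a0_pieta(p2, g_hat, g, f_pi, m_pi, m_eta):
    return (g_hat * 0.5 * (m_eta**2 + m_pi**2 - p2) + g) / f_pi**2
\end{verbatim}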
The scalar meson contribution relevant to
the $\pi^0 \pi^0$ final state is
\begin{eqnarray}
\label{f1-scalar-f0}
f_1^{S,\, \pi^0 \pi^0} &=& \sum_{S=f_0,\;\sigma}
D_{S}(p^2) G_{S\pi\pi}(p^2) \left(
G_{S\gamma^\ast\gamma}^{(\pi)}(p^2,Q^2) \right.
\nonumber\\&&
\quad \quad \quad\left. +
G_{S\gamma^\ast\gamma}^{(K)}(p^2,Q^2) \right),
\end{eqnarray}
and in the $\pi^0 \eta$ case one has
\begin{eqnarray}
\label{f1-scalar-a0}
\!\!\!\!f_1^{S,\, \pi^0 \eta} &=& D_{a_0}(p^2) G_{a_0 \pi\eta}(p^2)
G_{a_0 \gamma^\ast\gamma}^{(K)}(p^2,Q^2).
\end{eqnarray}
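Schematically, Eqs.~(\ref{f1-scalar-f0}) and~(\ref{f1-scalar-a0}) can be assembled as in the following illustrative Python sketch, in which the scalar propagator (defined below), the $SPP$ vertices and the loop form factors enter as user-supplied functions; the function and argument names are ours.
\begin{verbatim}
# Sketch of Eqs. (f1-scalar): scalar-exchange contribution to f_1.
def f1_scalar_pi0pi0(p2, Q2, D, G_Spipi, G_loop_pi, G_loop_K):
    # sum over S = f0, sigma; all arguments are callables of p2 (and Q2)
    total = 0.0 + 0.0j
    for S in ('f0', 'sigma'):
        total += (D(S, p2) * G_Spipi(S, p2)
                  * (G_loop_pi(S, p2, Q2) + G_loop_K(S, p2, Q2)))
    return total

def f1_scalar_pi0eta(p2, Q2, D, G_a0pieta, G_loop_K):
    return D('a0', p2) * G_a0pieta(p2) * G_loop_K('a0', p2, Q2)
\end{verbatim}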
We use the scalar meson propagator $D_S(p^2)$ in the
form~\cite{Ivashyn:2009te}
\begin{eqnarray}
D_{S}^{-1}(p^2)&=& p^2 - M_S^2 + M_S\; \Im\!\mathit{m}\!\left(
\tilde{\Gamma}_{S,\; {tot}}(M_S^2) \right)
\nonumber\\
&&+ i\, \sqrt{p^2}\;
\tilde{\Gamma}_{S,\; {tot}}(p^2)
\end{eqnarray}
with
\begin{eqnarray}
\tilde{\Gamma}_{tot,S}(p^2)&=&
\tilde{\Gamma}_{S\to\pi\pi}(p^2)
+
\tilde{\Gamma}_{S\to K \bar{K}}(p^2)
, \ \ S=f_0,\sigma
\nonumber \\
\tilde{\Gamma}_{tot,a_0}(p^2)&=& \Gamma_{a_0\to \pi \eta}(p^2)
+ \tilde{\Gamma}_{a_0 \to K \bar{K}}(p^2)
.
\end{eqnarray}
Contributions of heavy particles to the total widths, e.g.,
$\Gamma_{f_0\to \eta\eta}(p^2)$, are neglected. The modified widths
$\tilde{\Gamma}$ in the above expressions are defined similarly to the
tree-level decay widths given in~\ref{App_B}, see Eqs.~(\ref{width:ape}),
but with the analytic continuation \begin{equation} \sqrt{f(p^2)} = e^{i\;
\mathrm{Arg}(f(p^2))/2}\sqrt{|f(p^2)|} , \end{equation} see Ref.~\cite{Ivashyn:2009te}.
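In code, the propagator and the analytic continuation just described may be sketched as follows (illustrative only; $\tilde{\Gamma}_{S,\,tot}(p^2)$ enters as a user-supplied function).
\begin{verbatim}
# Sketch of D_S(p^2) with an energy-dependent width and the analytic
# continuation  sqrt(f) = exp(i Arg(f)/2) sqrt(|f|).
import numpy as np

def sqrt_ac(f):
    f = complex(f)
    return np.exp(0.5j * np.angle(f)) * np.sqrt(abs(f))

def D_S(p2, M_S, gamma_tot):
    inv = (p2 - M_S**2 + M_S * np.imag(gamma_tot(M_S**2))
           + 1j * sqrt_ac(p2) * gamma_tot(p2))
    return 1.0 / inv
\end{verbatim}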
By construction, the functions $f_1$ in~(\ref{f1-scalar-f0}),~(\ref{f1-scalar-a0})
are of the chiral order $\mathcal{O}(p^6)$: the diagrams of
Fig.~\ref{fig:g-g-S-scheme} are $\mathcal{O}(p^4)$ and
$SPP$ transition is $\mathcal{O}(p^2)$.
\begin{table}
\caption{Effective couplings for scalar mesons~\cite{Ivashyn:2009te}
(to be used with vertices of Fig.~\ref{fig:v1}).
Model parameters are $c_d$ and $c_m$; the
scalar octet-singlet mixing angle $\theta$ is defined in Eq.~(\ref{eq:multiplet_sc});
$\eta^\prime$ couplings are omitted; singlet couplings $\tilde{c}_d$ and $\tilde{c}_m$
are related to $c_d$ and $c_m$ in the large-$N_c$ approximation.
Notice that the entries relevant to the $\eta$ meson
correct the results of Table 9 in Ref.~\cite{Ivashyn:2007yy}.
}
\label{table:generalscalarcouplings}
\begin{center}
\begin{tabular}{rcl}
\hline\noalign{\smallskip}
$g_{f\pi\pi}$ &=& $- 2 \, c_m \, m_\pi^2 (2\, \cos \theta - \sqrt{2} \, \sin \theta)/\sqrt{3}$,
\\
$g_{f\eta\eta}$ &=& $- c_m (2\, (C_{s}^2(2 m_K^2 - m_\pi^2) + C_{q}^2 m_\pi^2) \, \cos \theta$
\\&&
$+ \sqrt{2}\, (C_{s}^2(4 m_K^2 - 2 m_\pi^2) - C_{q}^2 m_\pi^2) \, \sin \theta)/\sqrt{3}$,\\
$g_{fKK}$ &=& $- c_m \, m_K^2(4 \, \cos \theta + \sqrt{2}\, \sin \theta)/\sqrt{3}$ .\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\hat{g}_{f\pi\pi}$ &=& $2 \,c_d (2 \cos \theta - \sqrt{2}\, \sin \theta)/\sqrt{3}$,
\\
$\hat{g}_{f\eta\eta}$ &=& $c_d(2 (C_{q}^2 + C_{s}^2) \cos \theta$
\\&&
$- \sqrt{2} (C_{q}^2 - 2 C_{s}^2) \sin \theta)/\sqrt{3} $ ,\\
$\hat{g}_{fKK}$ &=& $c_d(4 \cos \theta + \sqrt{2} \sin \theta)/\sqrt{3}$.
\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$g_{{\sigma}\pi\pi}$ &=& $- 2 \, c_m \, m_\pi^2 (\sqrt{2}\, \cos \theta + 2 \, \sin \theta)/\sqrt{3}$,
\\
$g_{{\sigma}\eta\eta}$ &=& $- c_m (-\sqrt{2}\, (C_{s}^2(4 m_K^2 - 2 m_\pi^2) - C_{q}^2 m_\pi^2) \, \cos \theta$
\\&&
$+ 2\, (C_{s}^2(2 m_K^2 - m_\pi^2) + C_{q}^2 m_\pi^2) \, \sin \theta)/\sqrt{3}$,\\
$g_{{\sigma}KK}$ &=& $- c_m \, m_K^2( - \sqrt{2} \, \cos \theta + 4\, \sin \theta)/\sqrt{3}$ .\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\hat{g}_{{\sigma}\pi\pi}$ &=& $2 \,c_d (\sqrt{2} \cos \theta + 2\, \sin \theta)/\sqrt{3}$,
\\
$\hat{g}_{{\sigma}\eta\eta}$ &=& $c_d(\sqrt{2} (C_{q}^2 - 2 C_{s}^2) \cos \theta$
\\ &&
$+ 2 (C_{q}^2 + C_{s}^2) \sin \theta)/\sqrt{3} $ ,\\
$\hat{g}_{{\sigma}KK}$ &=& $c_d(- \sqrt{2} \, \cos \theta + 4\, \sin \theta)/\sqrt{3}$.
\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$g_{aKK}$ &=& $- \sqrt{2} \, c_m m_K^2$, \\
$g_{a\pi\eta}$ &=& $-2 \sqrt{2}\, C_{q} \,c_m \, m_\pi^2$ ,\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\hat{g}_{aKK}$ &=& $\sqrt{2} \, c_d$ ,\\
$\hat{g}_{a\pi\eta}$ &=& $2 \sqrt{2} \, C_{q} \, c_d$
.\\
\noalign{\smallskip}\hline\noalign{\smallskip}
&&
$g_{f\pi\eta}$ = $\hat{g}_{f\pi\eta}$ = $g_{\sigma\pi\eta}$ =
$\hat{g}_{\sigma\pi\eta}$ = 0
,\\
&&
$g_{a\pi\pi}$ = $\hat{g}_{a\pi\pi}$ = $g_{a\eta\eta}$ = $\hat{g}_{a\eta\eta}$ = 0
.\\
\noalign{\smallskip}\hline
$g_{S\gamma\pi\pi}$ &=& $- i \hat{g}_{S\pi\pi}$, \hfill
$g_{S\gamma KK}$ = $ - i \hat{g}_{SKK}$ ,\nonumber\\
$g_{S\gamma\gamma\pi\pi}$ &=& $\hat{g}_{S\pi\pi}$, \hfill
$g_{S\gamma\gamma KK}$ = $\hat{g}_{SKK}$ \nonumber
\\
\noalign{\smallskip}\hline\noalign{\smallskip}
\end{tabular}
\end{center}
\end{table}
\section{Vector contribution}\label{section_double}
For $\gamma^\ast\to (\cdots)
\to\pi^0\pi^0\gamma$ the vector contribution mechanisms
are listed in Table~\ref{table:vector:contr:list} and the
corresponding diagrams are shown in Fig.~\ref{fig_vec}.
For the odd-intrinsic-parity vector-vector-pseudoscalar and
vector-photon-pseudoscalar interactions we use the chiral
Lagrangian in the vector formulation for spin-$1$ fields. As shown
in~\cite{EckerPLB237}, the use of vector formulation for $1^-$
fields ensures the correct behavior of Green functions to
order $\mathcal{O}(p^6)$, while the tensor formulation would
require additional local terms (see also discussion in
the Appendix~F of~\cite{Dubinsky:2004xv}). We choose Lagrangians of
Ref.~\cite{EckerPLB237,Prades}, that are $\mathcal{O}(p^2)$ and
$\mathcal{O}(p^3)$, for construction of the vector $\gamma V P $ and
double-vector $V V P$ contribution to $f_i$.
General Lagrangian terms are given in~\ref{App:A}.
In the exact $SU(3)$ limit, the $\gamma V$
interaction can be written as
\begin{equation}
\mathcal{L}_{\gamma V} = - e f_V \partial^\mu B^\nu \bigl(
\tilde{\rho}^0_{\mu\nu} + \frac{1}{3}\tilde{\omega}_{\mu\nu} -
\frac{\sqrt{2}}{3}\tilde{\phi}_{\mu\nu} \bigr)
\label{eq:vector_gamma_V}
\end{equation}
with $\tilde{V}_{\mu \nu} \equiv \partial_\mu V_\nu -
\partial_\nu V_\mu$ and $f_V = F_V / M_\rho$ is the coupling for
the vector representation of the spin-1 fields~\cite{EckerPLB223}.
The interactions of vector mesons in the odd-intrinsic-parity sector read
\begin{eqnarray}
\mathcal{L}_{V\gamma P}&=& -\frac{4\sqrt{2} e h_V}{3
f_\pi}\epsilon_{\mu\nu\alpha\beta} \partial^\alpha B^\beta \biggl[ (
\rho^{0\mu} +3\omega^\mu + 3\varepsilon_{\omega\phi} \phi^\mu )
\partial^\nu \pi^0\nonumber \\ \label{lagr_vgp}
&&\!\!\!\!\!\!\!\!\!\!\!\!
+ \bigl[ (3 \rho^{0 \mu} + \omega^\mu)C_q + 2 \phi^\mu C_s \bigr] \partial^\nu \eta
\biggr],
\end{eqnarray}
\begin{eqnarray}
\nonumber
\mathcal{L}_{VVP}&=&-\frac{4\sigma_V}{f_\pi}\epsilon_{\mu\nu\alpha\beta}
\biggl[
\pi^0 \partial^\mu \omega^\nu \partial^\alpha \rho^{0\beta}
\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
+
\pi^0 \varepsilon_{\omega\phi}
\partial^\mu\phi^\nu \partial^\alpha \rho^{0\beta}
+
\pi^0 \varepsilon^\prime
\partial^\mu \omega^\nu \partial^\alpha \phi^{\beta}
\nonumber
\\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
+ \eta \bigl[ (\partial^\mu\rho^{0\nu} \partial^\alpha
\rho^{0\beta}+
\partial^\mu \omega^{\nu} \partial^\alpha \omega^{\beta} )
\frac{1}{2}\,C_q
\nonumber \\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!
- \partial^\mu \phi^{\nu}\partial^\alpha \phi^{\beta}
\frac{1}{\sqrt{2}} \, C_s +
\varepsilon_{\omega\phi} \partial^\mu \phi^{\nu} \partial^\alpha
\omega^{\beta} (C_q + C_s)\bigr] \biggr],
\label{lagr_vvp}
\end{eqnarray}
where $\epsilon_{\mu \nu \alpha \beta}$ is the totally antisymmetric
Levi-Civita tensor.
As before, we omit the $\eta^\prime$ meson.
\begin{figure*}
\begin{center}
\resizebox{0.9\textwidth}{!}{%
\includegraphics{fig-04-1.eps}
}
\vspace{25pt}
\resizebox{0.9\textwidth}{!}{%
\includegraphics{fig-04-2.eps}
}
\vspace{10pt}
\end{center}
\caption{Scheme for the $\gamma^\ast \gamma f_0$ and $\gamma^\ast \gamma \sigma$ (top) and
$\gamma^\ast \gamma a_0$ (bottom) transition.
Each ``loop blob'' corresponds to a set of diagrams following
from the Lagrangian, as explicitly shown in~\cite{Ivashyn:2007yy}
}
\label{fig:g-g-S-scheme}
\end{figure*}
As is also seen from (\ref{lagr_vgp}) and (\ref{lagr_vvp}), the
transitions $\gamma \phi \pi^0$, $\phi \rho^0 \pi^0$ and $\phi
\omega \eta$ are proportional to the small parameter
$\varepsilon_{\omega\phi}$, responsible for the $u \bar{u} +
d\bar{d}$ component in the physical $\phi$ meson. The parameter
$\varepsilon^\prime$ is responsible for the G-parity-violating
$\phi\omega\pi^0$ vertex, caused by isospin breaking. The coupling
constants $f_V$, $h_V$ and $\theta_V$ are model parameters.
Numerical values for all parameters are given in~\ref{App_B}.
Due to a similar structure of the $\mathcal{L}_{V P \gamma}$ and $\mathcal{L}_{V V
P}$ interactions, the processes $\gamma^\ast\to V P_{1,2}\to P_1
P_2\gamma$ (one-vector-meson exchange) and $\gamma^\ast\to V_a
\to V_b P_{1,2} \to P_1 P_2 \gamma$ (double-vector-meson exchange)
can be described together. For this purpose it is convenient to
introduce the form factors $F_{\gamma^\ast V P}(Q^2)$ which
describe the transitions $\gamma^\ast (Q^2) \to V P$ including both
these mechanisms. Of course, the vector resonance enters
off-mass-shell.
For the $\gamma^\ast\to V \pi^0 $ transition we obtain
\begin{eqnarray}
\label{eq:FFs_SU3_pi0}
F_{\gamma^\ast \rho \pi} (Q^2)&=& \frac{4}{3
f_\pi} \big[\sqrt{2} h_V - \sigma_V f_V {Q^2} D_\omega (Q^2)
\nonumber\\&&
+
\varepsilon_{\omega\phi} \sqrt{2} \sigma_V f_V
{Q^2} D_\phi (Q^2) \bigr], \\
F_{\gamma^\ast \omega \pi}(Q^2)&=& \frac{4}{f_\pi} \bigl[\sqrt{2} h_V
- \sigma_V f_V {Q^2} D_\rho (Q^2)
\nonumber\\&&
+
\varepsilon^\prime \frac{\sqrt{2}}{3} \sigma_V f_V {Q^2} D_\phi (Q^2)
\bigr], \nonumber \\
F_{\gamma^\ast \phi \pi}(Q^2) &=& \varepsilon_{\omega\phi}
\frac{4}{f_\pi} \bigl[\sqrt{2} h_V
- \sigma_V f_V {Q^2} D_\rho (Q^2) \bigr]
. \nonumber
\end{eqnarray}
The vector meson $V= \rho, \omega, \phi$ propagators are
\begin{eqnarray}
\label{vector-propagator-simple} D_V(Q^2) &= &[Q^2 - M_V^2 + i
\sqrt{Q^2} \Gamma_{tot, V} (Q^2)]^{-1} .
\end{eqnarray}
with an energy-dependent width for the $\rho$ meson
\begin{eqnarray}
\Gamma_{tot, \rho}(Q^2) &=& \frac{G_V^2 M_\rho^2 }{48 \pi f_\pi ^4 Q^2}
\biggl[ \bigl(Q^2 - 4 m_\pi^2 \bigr)^{3/2}
\theta\bigl(Q^2 - 4 m_\pi^2 \bigr) \nonumber \\&&+ \frac{1}{2}
\bigl(Q^2 - 4 m_K^2 \bigr)^{3/2}\theta\bigl(Q^2 - 4 m_K^2 \bigr) \biggr]
\label{eq:C1}
\end{eqnarray}
and the constant widths for the $\omega$ and $\phi$ mesons.
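For reference, the propagator~(\ref{vector-propagator-simple}) and the running width~(\ref{eq:C1}) translate into the following illustrative Python sketch (couplings and masses are the inputs of~\ref{App_B}; function names are ours).
\begin{verbatim}
# Sketch of Eqs. (vector-propagator-simple) and (C1).
import numpy as np

def gamma_rho(Q2, G_V, f_pi, M_rho, m_pi, m_K):
    pref = G_V**2 * M_rho**2 / (48.0 * np.pi * f_pi**4 * Q2)
    w = 0.0
    if Q2 > 4.0 * m_pi**2:
        w += (Q2 - 4.0 * m_pi**2)**1.5
    if Q2 > 4.0 * m_K**2:
        w += 0.5 * (Q2 - 4.0 * m_K**2)**1.5
    return pref * w

def D_V(Q2, M_V, gamma_V):
    # gamma_V: constant for omega and phi; gamma_rho(Q2, ...) for the rho
    return 1.0 / (Q2 - M_V**2 + 1j * np.sqrt(Q2) * gamma_V)
\end{verbatim}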
In terms of these FF's we find the contribution to the functions $f_i$
(see Eq.~(\ref{eqn:fsr})) coming from the processes (\ref{fsr_proc_vec}).
For the $\pi^0 \pi^0 \gamma$ final state one obtains:
\begin{eqnarray}
\label{eq:delta-f1_pi0_pi0}
f_{1}^{V} &=& -\frac{1}{4} \sum_{V=\rho, \omega }
F_{\gamma^\ast V \pi} (Q^2) F_{\gamma^\ast V \pi} (0)
\nonumber\\&&\!\!\!\!\!\!\!\!\!\!
\times
\bigl[ ({k\cdot Q +l^2}) \bigl(D_{V}(R^2_{+}) + D_{V}(R^2_{-})
\bigr)
\\
&& \nonumber + 2 k \cdot l
\bigl( D_{V}(R^2_{+}) - D_{V}(R^2_{-}) \bigr) \bigr] ,
\nonumber \\
f_{2}^{V} &= & \frac{1}{4} \sum_{V =\rho, \omega}
F_{\gamma^\ast V\pi}(Q^2) F_{\gamma^\ast V \pi} (0)
\bigl[ D_{V} (R^2_{+})+ D_{V} (R^2_{-}) \bigr]
, \nonumber \\
f_{3}^{V} &=& -\frac{1}{4} \sum_{V =\rho, \omega}
F_{\gamma^\ast V \pi}(Q^2) F_{\gamma^\ast V \pi} (0)
\bigl[ D_{V}(R^2_{+}) - D_{V}(R^2_{-}) \bigr], \nonumber
\end{eqnarray}
where the contribution proportional to $F_{\gamma^\ast \phi \pi}(Q^2)
F_{\gamma^\ast \phi \pi}(0) \propto \varepsilon_{\omega\phi}^2$ has
been neglected. The momenta are defined as
\begin{equation}
R^2_{\pm} = (1/4) (Q^2 + l^2 +2 k\cdot Q \pm 2(k\cdot l+Q\cdot l) ),
\end{equation}
or equivalently $R^2_{+} = (k+ p_{1})^2$ and
$R^2_{-} = (k+ p_{2})^2$.
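In a numerical implementation Eq.~(\ref{eq:delta-f1_pi0_pi0}) may be organized as in the following illustrative Python sketch, with $F_{\gamma^\ast V\pi}$ and $D_V$ supplied externally and the $\phi$ term neglected, as in the text; the function and argument names are ours.
\begin{verbatim}
# Sketch of Eq. (delta-f1_pi0_pi0): vector contribution to f_1,2,3
# for the pi0 pi0 gamma final state.
def fi_vector_pi0pi0(Q2, kQ, kl, Ql, l2, FF, DV):
    Rp2 = 0.25 * (Q2 + l2 + 2.0 * kQ + 2.0 * (kl + Ql))
    Rm2 = 0.25 * (Q2 + l2 + 2.0 * kQ - 2.0 * (kl + Ql))
    f1 = f2 = f3 = 0.0 + 0.0j
    for V in ('rho', 'omega'):
        c = FF(V, Q2) * FF(V, 0.0)
        Dp, Dm = DV(V, Rp2), DV(V, Rm2)
        f1 += -0.25 * c * ((kQ + l2) * (Dp + Dm) + 2.0 * kl * (Dp - Dm))
        f2 +=  0.25 * c * (Dp + Dm)
        f3 += -0.25 * c * (Dp - Dm)
    return f1, f2, f3
\end{verbatim}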
Similarly, for the $\gamma^\ast\to V \eta $ transition we obtain FF's
\begin{eqnarray}
\label{eq:FFs_SU3_eta}
F_{\gamma^\ast \rho \eta}(Q^2) & =
& C_q
F_{\gamma^\ast \omega \pi}(Q^2), \\
F_{\gamma^\ast \omega \eta}(Q^2) & = &
C_q
F_{\gamma^\ast \rho \pi}(Q^2), \nonumber \\
F_{\gamma^\ast \phi \eta}(Q^2) &=&
2\;C_s\; \frac{4}{3 f_\pi} \big[ \sqrt{2}
h_V - \sigma_V f_V {Q^2} D_\phi (Q^2) \big] \nonumber\\
&& - \varepsilon_{\omega\phi} (C_q+C_s) \frac{4}{3 f_\pi} \sigma_V f_V
{Q^2} D_\omega (Q^2) .
\nonumber
\end{eqnarray}
Correspondingly, the contribution to the functions $f_i$ for the $\pi^0
\eta \gamma$ final state is
\begin{eqnarray}
\label{eq:delta-f1_pi0_eta}
f_{1}^{V} &=& -\frac{1}{4} \sum_{V=\rho, \omega, \phi }
\Bigl\{
F_{\gamma^\ast V \pi} (0) F_{\gamma^\ast V \eta} (Q^2)
\nonumber\\&&\times
\bigl[ ({k\cdot Q
+l^2}) D_{V}(R^2_{+}) +
2 k \cdot l D_{V}(R^2_{+}) \bigr] \nonumber \\
&& + F_{\gamma^\ast V \eta} (0) F_{\gamma^\ast V \pi} (Q^2)
\nonumber\\&&\times
\bigl[ ( k\cdot Q +l^2 )
D_V (R^2_{-}) - 2 k \cdot l D_V (R^2_{-}) \bigr] \Bigr\},
\nonumber \\
f_{2}^{V} &= & \frac{1}{4} \sum_{V=\rho, \omega, \phi } \Bigl\{
F_{\gamma^\ast V \pi} (0) F_{\gamma^\ast V \eta}(Q^2) D_{V} (R^2_{+}) \Bigr.
\nonumber\\&&\Bigl.
+ F_{\gamma^\ast V \eta} (0) F_{\gamma^\ast V \pi}(Q^2)
D_{V} (R^2_{-}) \Bigr\},
\nonumber \\
f_{3}^{V} &= & -\frac{1}{4} \sum_{V=\rho, \omega, \phi } \Bigl\{
F_{\gamma^\ast V \pi} (0) F_{\gamma^\ast V \eta}(Q^2) D_{V} (R^2_{+}) \Bigr.
\nonumber\\&&\Bigl.
-
F_{\gamma^\ast V \eta} (0) F_{\gamma^\ast V \pi}(Q^2) D_{V} (R^2_{-})
\Bigr\}.
\end{eqnarray}
\section{Numerical results}
\label{section_numer}
In this section we present the numerical
results obtained in our framework.
The model-dependent ingredients, namely, the functions
$f_{1,2,3}$ are given in Sections~\ref{section_scal}
and~\ref{section_double}.
The values of the model parameters, which we used in our numerical
results, are listed in~\ref{App_B}. The masses of vector
and pseudoscalar mesons are taken from~\cite{PDG_2008}. The
coupling of vector mesons to a pseudoscalar and photon $h_V$ is
estimated from the tree-level decay width. The scalar meson
couplings and mass parameters were found from the
fit~\cite{Ivashyn:2009te}.
\subsection{Scalar mesons and $\phi$ radiative decay}
As discussed above, in $e^+e^-$ annihilation
to $\pi^0 \pi^0 \gamma$ and $\pi^0 \eta \gamma$ both the
scalar~(\ref{fsr_proc_scal}) and vector~(\ref{fsr_proc_vec}) decay mechanisms
contribute to the observed events.
The KLOE Collaboration has reported data
on the invariant mass distributions~\cite{Aloisio:2002bsa,KLOEres}
at $\sqrt{s} = M_\phi$, in which the vector meson contribution
has been subtracted. In~\cite{Ivashyn:2009te} we performed a
combined fit of ${d}B(\phi\to a_0 \gamma \to\pi^0\eta\gamma)/{d
\sqrt{p^2}}$ and ${d}B(\phi\to (f_0,
\sigma)\gamma\to\pi^0\pi^0\gamma)/{d \sqrt{p^2}}$ to the KLOE 2002
data~\cite{Aloisio:2002bsa,KLOEres}, considering only scalar meson
contributions. We have found the inclusion of the $\sigma$ meson
into the framework important, and have fixed the numerical values
of scalar meson couplings and
mass parameters within the model, for
more detail see~\cite{Ivashyn:2009te}. In Fig.~\ref{fig:num:dBdm}
we show our model results for ${d}B(\phi\to S\gamma\to
P_1P_2\gamma)/{d \sqrt{p^2}}$, eq.~(\ref{eq:phi-br}), at $\sqrt{s}
= M_\phi$. In this and subsequent plots we use the notation
$m_{\pi^0\pi^0}$ and $m_{\eta\pi^0}$ for $\sqrt{p^2}$. Note that
only the scalar meson contribution to the $P_1P_2\gamma$ final
state is plotted in this Figure. The plot for the
$\pi^0\pi^0\gamma$ final state shows a rather good
fit~\cite{Ivashyn:2009te} to the KLOE 2002 data~\cite{KLOEres},
where both $f_0$ and $\sigma$ are taken into account.
In 2009 the new KLOE data~\cite{Ambrosino:2009py} on the
$\pi^0\eta\gamma$ channel appeared. A comparison of the model
prediction for $\phi\to a_0\gamma\to\pi^0\eta\gamma$ with these
new data is also shown in Fig.~\ref{fig:num:dBdm} (bottom). We
leave a refined fit of these new data for the future. Notice that if
one adds vector contributions to $\sigma(e^+e^-\to
\eta\pi^0\gamma)$ according to
Table~\ref{table:vector:contr:list}, then the shape of the
invariant mass distribution, calculated from
eq.~(\ref{eq:phi-br}), changes: cf.~Fig.~\ref{fig:num:dBdm}
(bottom) and Fig.~\ref{fig:pieta:tot}. It turns out that the 2009
KLOE data~\cite{Ambrosino:2009py} are better described by the
total contribution rather than by the scalar part alone. Note that
in Refs.~\cite{Ambrosino:2009py,SNDres} it was claimed that the
$\phi\to\pi^0\eta\gamma$ decay is dominated by the $\phi\to
a_0\gamma$ mechanism and the vector contribution is very small:
$B(e^+e^-\to VP \to \eta\pi^0\gamma) \lesssim 10^{-6}$.
\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-05.eps}
}
\end{center}
\caption{The vector, $\gamma^\ast \to V P_1 \to P_1 P_2 \gamma$,
and double vector, $\gamma^\ast \to V_a \to V_b P_1 \to P_1 P_2
\gamma$, contributions }\label{fig_vec}
\end{figure}
\begin{table}
\caption{Mechanisms of the vector contribution. Notice that some of
the channels, suppressed due to small parameters, can be enhanced in the
vicinity of the corresponding resonance (e.g.
$\gamma^\ast\to\phi\to\omega\pi^0$, see the text)}
\label{table:vector:contr:list}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline & Dominant & Suppressed
\\
\hline \noalign{\smallskip} \noalign{\smallskip}
\multicolumn{3}{l}{in $\gamma^\ast\to (\cdots) \to\pi^0\pi^0\gamma$
:}
\\
\noalign{\smallskip} \hline 1-vector & $(\rho^0 \pi^0)$,
$(\omega \pi^0)$ & $(\phi\pi^0)$
\\
\hline 2-vector & \!$(\omega\!\to\!\rho^0\!\pi^0)$,
\!$(\rho^0\!\to\!\omega\!\pi^0)$ & $(\phi \to\rho^0\pi^0)$,
\!$(\phi\!\to\!\omega\!\pi^0)$
\\
&&$(\rho^0\!\to\!\phi\!\pi^0)$
\\
\hline \noalign{\smallskip} \noalign{\smallskip}
\multicolumn{3}{l}{in $\gamma^\ast\to (\cdots) \to\pi^0\eta\gamma$ :
}
\\
\noalign{\smallskip}\hline 1-vector &
$(\rho\pi^0)$, $(\omega\pi^0)$ & $(\phi\pi^0)$
\\
& $(\rho\eta)$, $(\omega\eta)$ & $(\phi\eta)$
\\
\hline 2-vector & $(\rho\to\omega\pi^0)$,
$(\omega\to\rho\pi^0)$ & $(\rho\to\phi\pi^0)$,
$(\phi\to\rho\pi^0)$
\\
&
$(\rho\to\rho\eta)$, $(\omega\to\omega\eta)$
& $(\phi\to\phi\eta)$, $(\phi\to\omega\eta)$
\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-06-1.eps}
}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-06-2.eps}
}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-06-3.eps}
}
\end{center}
\caption{Invariant mass distributions
in the $e^+e^-$ annihilation
to $\pi^0 \pi^0 \gamma$ (top panel)
and $\pi^0 \eta \gamma$ (middle and bottom panel)
for $\sqrt{s}=M_\phi$.
Data are from~\cite{KLOEres} (top),~\cite{Aloisio:2002bsa} (middle)
and~\cite{Ambrosino:2009py} (bottom)
}
\label{fig:num:dBdm}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-07.eps}
}
\end{center}
\caption{Invariant mass distributions
in the $e^+e^-$ annihilation to
$\pi^0 \eta \gamma$ for $\sqrt{s}=M_\phi$,
where the total contribution (vector and scalar)
is taken into account (cf.~Fig.~\ref{fig:num:dBdm} (bottom)).
Data are from~\cite{Ambrosino:2009py}.
}\label{fig:pieta:tot}
\end{figure}
\subsection{The $\gamma^* \to \rho \to\omega\pi$ and
$\gamma^*\to \phi \to\omega\pi$ contribution}
\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-08.eps}
}
\end{center}
\caption{Partial differential cross section
of $e^+e^-$ annihilation to
$\pi^0 \pi^0 \gamma$ for $\sqrt{s}=M_\phi$ due to the
$\gamma^*\to\rho \to\omega\pi$ mechanism compared to
$\gamma^*\to(\rho,\ \phi)\to\omega\pi$
}\label{fig:pipi:epsprime}
\end{figure}
For the moment, to follow the KLOE analysis~\cite{KLOEres:07}, we neglect the
$G$-parity-violating vertex $\phi \omega \pi^0$, i.e.,
we set $\varepsilon^\prime = 0$.
For illustration we introduce the constant
$C_{\omega\pi}^\rho$~\cite{KLOEres:07,Shekhovtsova:2009yn}.
This constant can be obtained in terms of form
factors~(\ref{eq:FFs_SU3_pi0})
\begin{eqnarray}
\frac{C^\rho_{\omega\pi}(s)}{16 \pi \alpha} & = & -\frac{1}{4}\;
F_{\gamma^\ast \omega \pi}(s)\; F_{\gamma^\ast \omega \pi}(0),
\end{eqnarray}
leading to
\begin{eqnarray} C^\rho_{\omega\pi} & = & -16\pi \alpha \;
\frac{4\sqrt{2} h_V}{f_\pi^2}\; \left( \sqrt{2} h_V - \sigma_V f_V s
D_\rho(s) \right)
\nonumber\\
& \approx & (0.597 - 0.542\;i) \ \text{GeV}^{-2}
\end{eqnarray}
at $\sqrt{s}=M_\phi$. The KLOE result~\cite{KLOEres:07} for the same
constant is \
$C^\rho_{\omega\pi} = 0.850$ GeV$^{-2}$ ($\sqrt{s}=M_\phi$). Thus our
prediction for the absolute value, $\left|C^\rho_{\omega\pi}\right| =
0.751$ GeV$^{-2}$, which includes only the $\gamma^*(\to\rho)\to\omega\pi$
mechanism, is smaller than that of KLOE by about $15\%$.
This difference can be attributed to the $\rho^\prime=\rho(1450)$ meson which
is not included in the present calculation. To estimate the role of the
$\rho^\prime$ in the constant $C^\rho_{\omega\pi}$, we follow
Ref.~\cite{Dumm:2009va} (Eqs.~(32), (33)):
\begin{eqnarray}
C^\rho_{\omega\pi} & = & -16\pi \alpha
\; \frac{4\sqrt{2} h_V}{f_\pi^2}\; ( \sqrt{2} h_V \\
&-& \sigma_V f_V \frac{s}{1+\beta_{\rho^\prime}} (D_\rho(s)+\beta_{\rho^\prime}
D_{\rho^\prime}(s)) )
\nonumber \\
& \approx & (1.06 - 0.69\;i) \ \text{GeV}^{-2} \nonumber
\end{eqnarray}
for $\beta_{\rho^\prime}=-0.25$, $M_{\rho^\prime}=1.465$ GeV,
$\Gamma_{\rho^\prime}(M^2_{\rho^\prime})=400$ MeV and obtain
$\left|C^\rho_{\omega\pi}\right| = 1.27$ GeV$^{-2}$.
Next we turn on the parameter $\varepsilon^\prime$ responsible for the
$G$-parity-violating $\phi\omega\pi^0$ vertex and check how the $
C^\rho_{\omega\pi}$ value changes. Omitting $\rho^\prime$ we have
\begin{eqnarray}
C^\rho_{\omega\pi} & = & -16\pi \alpha \; \frac{4\sqrt{2} h_V}{f_\pi^2}\;
( \sqrt{2} h_V - \sigma_V f_V s D_\rho(s)
\\ & + &\frac{\sqrt{2}}{3}\sigma_V f_V s \varepsilon^\prime D_\phi(s) ) \approx
(0.52 - 0.72\;i) \ \text{GeV}^{-2} \nonumber
\end{eqnarray}
and obtain $\left|C^\rho_{\omega\pi}\right| = 0.892$ GeV$^{-2}$. In
making this estimate the value $\varepsilon^\prime = -0.0026$ has been
chosen~\footnote{Of course the experimental decay width
$\phi\to\omega\pi$ determines only the absolute value of this
parameter.}. Apparently, the present model with the lowest nonet of
vector mesons, supplemented with the $G$-parity-violating effect, allows
one to obtain the value for $C^\rho_{\omega\pi}$ close to the KLOE value
0.850 GeV$^{-2}$. Influence of the $\varepsilon^\prime$ parameter
on the cross section is presented in Fig.~\ref{fig:pipi:epsprime}.
Therefore, the difference between the $ C^\rho_{\omega\pi}$ value
originating from the $\gamma^*(\to\rho)\to\omega\pi$ mechanism,
and the value measured by KLOE may be explained by the
$\rho^\prime$ meson and/or $G$-parity-violating contribution. To
clarify further this issue, an analysis of data at $s=1 \
{\text{GeV}}^2$ will be essential~\footnote{At $s=1 \ {\text
{GeV}}^2$ the $G$-parity-violating vertex is suppressed, whereas
the $\rho^\prime$ mechanism survives. Therefore, any difference in
the values of $C^\rho_{\omega\pi}$ at two energies, $s=1 \ {\text
{GeV}}^2$ and $s=M_\phi^2$, would indicate sizeable
$G$-parity-violating effects.}.
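The three estimates of $C^\rho_{\omega\pi}$ given above differ only in the content of the $\rho$-channel factor and can be reproduced with a short routine such as the following illustrative Python sketch; all couplings, propagators and the $\rho^\prime$ parameters are to be taken from~\ref{App_B} and from the values quoted above (function and argument names are ours).
\begin{verbatim}
# Sketch: C^rho_{omega pi}(s) with an optional rho' admixture and an
# optional G-parity-violating phi term (both switched off by default).
import numpy as np

ALPHA = 1.0 / 137.036

def C_rho_omegapi(s, h_V, sigma_V, f_V, f_pi, D_rho,
                  D_rhoprime=None, beta=0.0, D_phi=None, eps_prime=0.0):
    pref = -16.0 * np.pi * ALPHA * 4.0 * np.sqrt(2.0) * h_V / f_pi**2
    rho_part = D_rho(s)
    if D_rhoprime is not None:
        rho_part = (D_rho(s) + beta * D_rhoprime(s)) / (1.0 + beta)
    val = np.sqrt(2.0) * h_V - sigma_V * f_V * s * rho_part
    if D_phi is not None:
        val += np.sqrt(2.0) / 3.0 * sigma_V * f_V * s * eps_prime * D_phi(s)
    return pref * val
\end{verbatim}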
\subsection{The $\gamma^* \to \phi \to\rho\pi$
and $\gamma^*\to \omega \to\rho\pi$ contributions}
In a similar manner one can define $C_{\rho\pi}(s)$:
\begin{eqnarray}
\label{eq:Crhopi}
- 16\pi\alpha \frac{1}{4}\; F_{\gamma^\ast \rho \pi}(s)\; F_{\gamma^\ast
\rho \pi}(0) &=&
C_{\rho\pi}(s)
\\
&=& C^{res}_{\rho\pi} D_\phi(s)
+C^\omega_{\rho\pi} \; ,
\nonumber
\end{eqnarray}
where
\begin{eqnarray}
C^\omega_{\rho\pi} & = & -16\pi \alpha \; \frac{4\sqrt{2}
h_V}{9f_\pi^2}\; ( \sqrt{2} h_V - \sigma_V f_V s D_\omega(s) )
\nonumber \\
\label{eq:C-omega-rhopi}
& \approx & (0.091 - 0.002\;i) \ \text{GeV}^{-2}
\end{eqnarray}
and
\begin{eqnarray} C^{res}_{\rho\pi} &=& -16\pi \alpha \; \frac{4\sqrt{2}
h_V}{9f_\pi^2}\; \sqrt{2} \; \sigma_V \; \varepsilon_{\omega\phi} \; f_V\; s
\nonumber\\
\label{eq:C-res-rhopi}
&\approx& - 0.0052.
\end{eqnarray}
The KLOE values for these constants are
$C^{res}_{\rho\pi}\approx - 0.0057$ and $C^\omega_{\rho\pi} =
0.26 \ \text{GeV}^{-2}$.
However, in the experiment, they are entangled
and one has to compare the total contributions.
Using the values~(\ref{eq:C-omega-rhopi})
and~(\ref{eq:C-res-rhopi}) we have
$|C_{\rho\pi}(M_\phi^2)| \approx 1.2$, which
is in reasonable agreement with the KLOE fit
$|C_{\rho\pi}(M_\phi^2)| \approx 1.3$.
\subsection{Full model prediction for the cross section}
The interference of the leading vector-resonance contributions, $(\rho\pi)$
and $(\omega\pi)$, is presented in Fig.~\ref{fig:vec-interf:Mphi}.
One can see that the interference is destructive.
\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-09.eps}
}
\end{center}
\caption{Vector and double-vector decay contributions to $d
\sigma/d \sqrt{p^2}$ of $e^+e^-\to \pi^0 \pi^0 \gamma$ at
$\sqrt{s}=M_\phi$ in the approximation
$\varepsilon_{\omega\phi}=0.058$, $\varepsilon^\prime=-0.0026$. The
($\phi\pi$) channel is negligible and not shown in the plot }
\label{fig:vec-interf:Mphi}
\end{figure}
\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-10-1.eps}
}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-10-2.eps}
}
\end{center}
\caption{Differential cross section $d \sigma/d \sqrt{p^2}$
of the $e^+e^-$ annihilation
to $\pi^0 \pi^0 \gamma$ (top panel)
and $\pi^0 \eta \gamma$ (bottom panel)
for $\sqrt{s}=M_\phi$}
\label{fig:num:Mphi}
\end{figure}
The interplay of the scalar~(\ref{fsr_proc_scal})
and vector decay~(\ref{fsr_proc_vec}) contributions
to $d \sigma/d \sqrt{p^2}$ is shown in
Fig.~\ref{fig:num:Mphi} (for $\sqrt{s}=M_\phi$).
One observes a complicated interference between vector and scalar contributions.
We see that for the $\pi^0\pi^0\gamma$ final state the vector
contribution is of the same size as the scalar one, whereas for the
$\pi^0\eta\gamma$ final state it is much smaller than the scalar
contribution.
Notice that there exist off-peak ($\sqrt{s} = 1$~GeV) data
collected by KLOE. At this energy the $\phi$ meson decays are strongly suppressed
and the total cross section is determined by the vector
contribution only.
In order to support the related experimental activity
and provide model estimates, we include
this case in our numerical calculations. The corresponding results
are presented in Fig.~\ref{fig:num:1GeV}.
\section{Conclusions}
\label{section_conlus}
We presented a general framework for the model-independent decomposition
of the differential cross section for the
final-state radiation in the reactions
$e^+e^- \to \pi^0\pi^0\gamma$ and $e^+e^- \to \pi^0\eta \gamma$,
for which the ISR contribution is absent and the leading-order
cross section is determined solely by the FSR mechanism.
\begin{figure}
\begin{center}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-11-1.eps}
}
\resizebox{0.45\textwidth}{!}{%
\includegraphics{fig-11-2.eps}
}
\end{center}
\caption{Differential cross section $d \sigma/d \sqrt{p^2}$
of the $e^+e^-$ annihilation
to $\pi^0 \pi^0 \gamma$ (top panel)
and $\pi^0 \eta \gamma$ (bottom panel)
for $\sqrt{s}=1$~GeV}
\label{fig:num:1GeV}
\end{figure}
We calculated the explicit form of the functions $f_i$, which
carry the model-dependent information about the processes.
Scalar resonance, vector and double
vector meson exchange contributions are considered. Notice that
all the relative phases are fixed from the Lagrangian of Resonance Chiral Theory.
The only exception is the sign of the $\varepsilon^\prime$ parameter, which
is related to a rare $\phi\to\omega\pi$ decay.
The Lagrangian is taken at the linear-in-resonance level in the
even-intrinsic-parity sector and at the bilinear-in-resonance level in
the odd-intrinsic-parity sector.
For agreement with data, the $\mathrm{R\chi T}$~ Lagrangian with the lowest nonet of
vector and scalar mesons~\cite{EckerNP321} was extended by including some
$SU(3)$ symmetry breaking effects.
At the same time, we tried to keep
the number of model parameters
as small as possible, using additional constraints.
The model parameters for the scalar sector were obtained from the
fit~\cite{Ivashyn:2009te} to the KLOE
data~\cite{Aloisio:2002bsa,KLOEres}.
As a by-product, we also obtained predictions for various transition
form factors: \ $\gamma^\ast \gamma S$, $SPP$, $\gamma^\ast VP$ and
$\gamma^\ast PP$. These expressions follow directly from the Lagrangian,
and the corresponding parameters are fixed to a large extent.
The numerical results for the differential cross section $d
\sigma/d \sqrt{p^2}$ are given for two cases: $\sqrt{s} = 1$~GeV
and $\sqrt{s}=M_\phi$ and demonstrate an interplay of the scalar
and vector decay contributions. The influence of the scalar and
vector contributions on the cross section is studied in detail.
The main conclusions of the numerical studies are the following:
\begin{itemize}
\item for the $\pi^0\eta\gamma$ final state the vector
contribution is much smaller than the scalar one at
$\sqrt{s}=M_\phi$ whereas for the $\pi^0\pi^0\gamma$ channel the
vector and scalar contributions are of the same size;
\item among the vector contributions to the $\pi^0\pi^0\gamma$ channel
the leading one comes from the
$\gamma^\ast(\to(\rho;\phi))\to\omega\pi$ mechanism; comparing to
the KLOE fit~\cite{KLOEres:07} we have concluded that about $85\%$
of this contribution is caused by the $\rho$ intermediate state,
and the rest can be explained either by the $\rho(1450)$ or by the
G-parity-violating process: $\gamma^\ast\to\phi\to\omega\pi$. New
experimental data at $\sqrt{s} = 1$~GeV can help to clarify which
of these two mechanisms is responsible for the rest;
\item at $\sqrt{s} = 1$~GeV the scalar contribution is
suppressed and the total cross section is determined only
by the vector contribution both for the
$\pi^0\pi^0\gamma$ and $\pi^0\eta\gamma$ channels.
\end{itemize}
Finally, we would like to emphasize that the developed approach
allows one to obtain the cross section and branching fraction
close to the experimental results. The main advantage of this
approach is the small number of model parameters.
The proposed framework can be implemented in a Monte Carlo generator
for the inspection of the fully differential characteristics of
the reaction, and is thus useful for data analysis and a detailed comparison
of various models.
{
\acknowledgement
\noindent{\it Acknowledgements. }
We would like to thank
Zurab Silagadze for
his comments on~\cite{Achasov:1999wr}
and for providing us with a copy
of~\cite{EidelmanKuraev}.
This paper profited from discussions with
Henryk Czy\.z.
S.E., A.K. and O.S. acknowledge partial support by the INTAS grant 05-1000008-8328
``Higher order effects in $e^+ e^-$ annihilation and muon
anomalous magnetic moment''.
S.E.~acknowledges partial support by RFFI grant 09-02-01143.
S.I.~was supported in part by Research Training Network EU-MRTN-CT-2006-035482
(FLAVIAnet).
G.P.~is grateful to the MIT Center of Theoretical Physics for hospitality
while this work was being written.
\endacknowledgement
}
\section{Introduction} \label{sec: intro}
The energy-level properties of excited nuclei (called the nuclear level scheme), which include the level energies, spins, parities, and gamma-rays associated with the excited levels, are important for the study of nuclear structure physics, nuclear reactions, and nuclear astrophysics. The level schemes of nuclei in the mass region $A\sim 150 \div 154$ are of particular interest because the nuclear deformation in this region was predicted to change drastically with only slight variations of $A$ \cite{1969Sm04,1979Re04}. Nuclei in this mass region are also called transitional nuclei. For example, $^{150}$Sm and $^{152}$Sm have very different level schemes as the former has the vibrational/quasi-vibrational characteristics, whereas the latter follows the rotational ones \cite{1968Lure}. Similarly, the level spectrum of $^{152}$Sm shows both rotational and vibrational behaviors, whereas that of $^{154}$Sm exhibits the strong rotational properties, indicating that this nucleus is strongly deformed \cite{1964Robert}. Moreover, two odd nuclei, $^{151}$Sm and $^{153}$Sm, which fall, respectively, between the two sets ($^{150}$Sm, $^{152}$Sm) and ($^{152}$Sm, $^{154}$Sm) are expected to be affected by the interplay between the rotational and vibrational bands \cite{1971Be41}. Therefore, the level schemes of $^{151,153}$Sm odd nuclei have been an interesting subject of many experimental and theoretical studies. The present paper focuses on the experimental study of the level scheme of $^{153}$Sm by using thermal neutron-capture reaction.
The level scheme of $^{153}$Sm has been studied by using different nuclear reactions and techniques \cite{Helmer2006} and all the experimental data have been compiled in the ENSDF library \cite{ENSDF}. For instance, the level scheme of $^{153}$Sm in the low-energy (low-spin) region (below 1.53 MeV) was studied by using the $\beta^-$ decay of $^{153}$Pm as well as the decay from the isomeric state of $^{153}$Sm to its ground state \cite{1971KiZC,1983MaYP,1995Gr19,1997Gr09}. These experiments detected in total 25 excited levels, 17 of which have the unique spin values within the interval of $[\frac{1}{2},\frac{9}{2}]\hbar$. The high-spin part in the level scheme of $^{153}$Sm was measured by using the heavy-ion capture reactions, in which a total number of 28 excited levels, 25 of which have the unique spin values falling into the range of $[\frac{11}{2},\frac{41}{2}]\hbar$, was reported \cite{1979Re04,1999As05,2000Ha59}. However, the above experiments have not covered the excited levels, whose energy and spin are in the regions of $[1.5,4.0]$ MeV and $[\frac{1}{2},\frac{3}{2}]\hbar$, respectively. In these regions, several transfer reactions such as $^{151}$Sm$(t, p)$ \cite{2005Bu21}, $^{152}$Sm$(d, p)$ \cite{1965Ke09,1972Ka07,1997GoZn}, $^{154}$Sm$(d, t)$ \cite{1971Be41,1972Ka07,1997GoZn}, $^{154}$Sm$(p, d)$ \cite{1997GoZn, 1997Bl11}, $^{152}$Sm($\alpha,^{3}$He) \cite{1984Li02}, $^{154}$Sm($^3$He, $\alpha$) \cite{1997GoZn}, and $^{154}$Eu($t, \alpha$) \cite{1985Ma26} have been employed and a considerable number of excited levels of $^{153}$Sm within the spin range of $[\frac{1}{2},\frac{11}{2}]\hbar$ has been explored. Most importantly, by using the $^{152}$Sm$(d, p)$ reaction, 132 excited levels below 3.929 MeV and 56 excited levels below 1.991 MeV in the level scheme of $^{153}$Sm have been deduced in Refs. \cite{1965Ke09} and \cite{1972Ka07}, respectively. Although the data reported in Refs. \cite{1965Ke09} and \cite{1972Ka07} agree with each other, their uncertainties are quite high (about 10 keV or higher). The reason is that within the framework of the transfer reactions, the excited levels are indirectly deduced from the energy and momentum distributions of the reaction products (charged particles), instead of the direct way, that is, from the gamma transitions of the excited levels. The latter were also not reported in Refs. \cite{1965Ke09} and \cite{1972Ka07}.
Apart from the above ion-induced experiments, the neutron-capture reactions also play an important role in the construction of the $^{153}$Sm level scheme. In fact, by using the ($n_{th},\gamma$) and ($n$ = 2 keV, $\gamma)$ reactions ($n_{th}$ means the thermal neutron with energy of 0.025 eV), Refs. \cite{1971Be41,1969Sm04,1969Re04,1997GoZn} have thoroughly investigated the level scheme of $^{153}$Sm by means of the bent-crystal, conversion-electron, and Ge detector spectrometers. Among these, the first two spectrometers, which were used to measure the low-energy gamma rays, focused on the low-energy part (below 0.4 MeV) of the $^{153}$Sm level scheme, whereas the last one was used to detect the high-energy gamma rays and to consequently deduce the feeding levels corresponding to the observed gamma rays. Moreover, through the gamma spectrum measured by the Ge detectors, 35 gamma rays emitted from the compound state of $^{153}$Sm via ($n_{th},\gamma$) reaction were reported in Refs. \cite{1971Be41,1969Sm04,1969Re04}. Similarly, Ref. \cite{1997GoZn} has detected 31 gamma rays via ($n$ = 2 keV, $\gamma)$ reaction. Many excited levels, whose energies range from 0 to approximately 2.7 MeV, were also deduced from the gamma rays detected in Ref. \cite{1997GoZn}. In general, the number of gamma rays that can be detected by the conventional Ge detector spectrometer is restricted by the high Compton background of the gamma spectrum as well as the energy resolution of the Ge detector. Besides, the gamma spectrum of $^{153}$Sm obtained from the ($n,\gamma$) reaction is always influenced by $^{150}$Sm because the thermal neutron-capture cross section of $^{149}$Sm is much higher than that of $^{152}$Sm (see, e.g., Table \ref{tab1}).
Given the limitations of the works mentioned above, it is necessary to improve the level scheme of $^{153}$Sm, especially in the energy region from 0.5 MeV to about 5.0 MeV. One of the possibilities is to perform the $^{152}$Sm($n_{th},\gamma$) reaction using an advanced $\gamma-\gamma$ coincidence technique together with the Ge(Li) detectors (also called the (n, 2$\gamma$) technique or the method of digital summation amplitudes of coincident pulses) \cite{boneva1991}. This technique, which has advantages in identifying the correlated gamma transitions and in subtracting most of the Compton background, allows us to detect the two-step gamma cascades (TSC) decaying from the compound state to the low-energy final levels and can therefore be used to deduce many new excited levels in $^{153}$Sm within the energy region from 0.5 MeV to approximately 5.0 MeV and the spin range of $[\frac{1}{2}, \frac{3}{2}]\hbar$. Indeed, by using the above technique, we have successfully studied the updated level scheme of $^{172}$Yb via $^{171}$Yb($n_{th},\gamma$) reaction \cite{anh2017}. In particular, we have detected in the level scheme of $^{172}$Yb several new excited levels and the corresponding gamma transitions, whose data do not currently exist in the ENSDF library, especially in the intermediate energy region from 3 to 5 MeV.
The goal of the present paper is to update the level scheme of $^{153}$Sm via the ($n_{th},\gamma$) reaction by using the $\gamma-\gamma$ coincidence technique. The energy and spin regions to be covered by this experiment are $[0.52,5.3]$ MeV and $[\frac{1}{2}, \frac{3}{2}]\hbar$, respectively. In addition, by combining our newly updated levels with those presently available in the ENSDF library, we are able to construct the new total and partial (within spin range of $[\frac{1}{2}, \frac{3}{2}]\hbar$) cumulative numbers of discrete levels, which are later used to test the predictive power of various nuclear level density (NLD) models. At the same time, these new cumulative curves have also been compared with those extracted from the NLD data obtained by using the Oslo method \cite{OsloMethod}.
\section{Experimental Method}
The $^{152}$Sm($n_{th},\gamma$) reaction was carried out at Dalat Nuclear Research Institute (Vietnam) using the thermal neutron beam from the tangential channel of Dalat Nuclear Research Reactor. The thermal neutron beam, which was obtained by using the filtered technique, has a size of 2.5 cm and a flux of 1.7 $\times$ 10$^5$ n.cm$^{-2}$.s$^{-1}$ at the irradiated position. This beam configuration is sufficient for the present experiment as discussed, e.g., in Ref. \cite{anh2017}. The experimental setup and measurement using the $\gamma-\gamma$ coincidence spectrometer with two HPGe detectors are the same as those presented in Ref. \cite{anh2017} (except the target nucleus), so we do not repeat them here.
The target nucleus $^{152}$Sm is in the form of a 583 mg Sm$_2$O$_3$ powder. This target, which was put in a plastic bag, was then measured at the center of the thermal neutron beam for approximately 661 hours. The isotopic content of the target, which was provided by the JSC Isotope supplier with a quality certificate issued under Contract No. 704/08625142/25/30-16, together with the thermal neutron-capture cross sections ($\sigma_{th}$) of all the isotopic components \cite{ncs} are given in Table \ref{tab1}.
\begin{table}[h!]
\caption{Isotopic content of the target used in the present experiment.}
\begin{tabular}{c| c| c}
Isotope & Percentage (\%) & $\sigma_{th}$ (barn) \cite{ncs}\\ \hline
$^{152}$Sm & 98.7 & 206 $\pm$ 3 \\
$^{144}$Sm & 0.01 & 1.64 $\pm$ 0.10 \\
$^{147}$Sm & 0.06 & 57 $\pm$ 3 \\
$^{148}$Sm & 0.07 & 2.4 $\pm$ 0.6 \\
$^{149}$Sm & 0.13 & 40140 $\pm$ 600 \\
$^{150}$Sm & 0.20 & 100 $\pm$ 4 \\
$^{154}$Sm & 0.83 & 8.5 $\pm$ 0.5 \\
\end{tabular}
\label{tab1}
\end{table}
Table \ref{tab1} shows that the $^{144,148,154}$Sm isotopes have both concentration and $\sigma_{th}$ values significantly smaller than those of $^{152}$Sm. Consequently, their influence on the spectroscopic data is negligible. For the $^{147,150}$Sm isotopes, although their $\sigma_{th}$ values are comparable with that of $^{152}$Sm, their impact on the spectroscopic data is still small because of their tiny percentages. The only samarium isotope that has a considerable influence on the spectroscopic data is $^{149}$Sm because it has a noticeable $\sigma_{th}$ value, namely $\sigma_{th}$ of $^{149}$Sm is $\sim$ 198 times higher than that of $^{152}$Sm. Therefore, although the percentage of $^{149}$Sm is $\sim$ 759 times less than that of $^{152}$Sm, its contribution to the coincidence events, caused by the thermal neutron capture of $^{149}$Sm, is only $\sim$ 3.8 times less than that of $^{152}$Sm, implying that approximately 20\% of all the detected coincidence events will be affected by the excited compound $^{150}$Sm nucleus. Fortunately, the two-step cascades caused by $^{150}$Sm can be distinguished from those of $^{153}$Sm by using the $\gamma-\gamma$ coincidence method because their summation energies (the total energy of two gamma rays) are different. For instance, the summation energies of the cascades of $^{150}$Sm detected within the present experiment range from $\sim$ 6.0 MeV to its neutron binding energy $B_n=$ 7.9867 MeV \cite{AME2016}, whereas those of $^{153}$Sm vary from $\sim$ 5.2 MeV to 5.87 MeV as clearly seen in Fig. \ref{sum153}.
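The contamination estimate quoted above follows directly from Table~\ref{tab1} if the capture rate of each isotope is taken to be proportional to the product of its abundance and $\sigma_{th}$. The following Python sketch (illustrative only) reproduces these numbers.
\begin{verbatim}
# Relative capture rates from Table 1 (abundance in %, sigma_th in barn).
isotopes = {
    "Sm-152": (98.7, 206.0),
    "Sm-149": (0.13, 40140.0),
}
rate = {k: a / 100.0 * s for k, (a, s) in isotopes.items()}

ratio = rate["Sm-152"] / rate["Sm-149"]
fraction_150Sm = rate["Sm-149"] / (rate["Sm-149"] + rate["Sm-152"])
print(f"rate(152Sm)/rate(149Sm) ~ {ratio:.1f}")          # ~3.9, cf. ~3.8 in the text
print(f"events feeding 150Sm    ~ {fraction_150Sm:.0%}")  # ~20%
\end{verbatim}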
For every detected coincidence event, the energies absorbed by the two HPGe detectors are recorded. The gamma cascades, which come from the decays of the compound state, go through different intermediate levels, and reach the ground state and some defined final levels, can be identified in the form of appropriate peaks appearing in the summation spectrum. The latter is obtained by counting the number of events per interval of total energy absorbed by the two HPGe detectors.
\begin{figure}[h]
\centering
\includegraphics[width = 12.9cm]{fig1.pdf}
\caption{\label{sum153} Experimental summation spectrum of $^{153}$Sm. The final energies $E_f$ are marked on top of their corresponding peaks. The notation SE denotes the single-escape peaks.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=11cm]{fig2.pdf}
\caption{ \label{overlap} (Color online) Illustration of the gating windows used to reduce the contribution of the overlapped peaks. This figure shows the overlap of the summation peaks between the ground and 7.535 keV excited states.}
\end{figure}
The most instructive part of the summation spectrum of $^{153}$Sm is shown in Fig. \ref{sum153}. In this figure, all the gamma cascades decaying from the compound state to the ground state and 15 final states, whose energies are 7.535, 35.844, 90.875, 126.412, 127.298, 182.902, 276.713, 321.113, 356.686, 362.286, 404.129, 405.470, 414.924, 450.050 and 481.088 keV\footnote{It should be noted that the very precise energy values of the final levels given in the present paper are taken from Ref. \cite{Helmer2006}.}, can be identified based on their corresponding peaks. By gating on the appropriate peak, the TSC spectrum corresponding to the gamma cascades from the compound state to a given final level is obtained. Figure \ref{sum153} also shows some overlaps between different groups of states, whose energies are not much different, e.g. (0, 7.535 keV), (414.924, 404.129, and 405.470 keV), etc. The gamma cascades coming from these overlapping peaks are indistinguishable because of the restricted energy resolution of the HPGe detectors used in the present experiment. However, these overlaps can possibly be reduced by a special selection of the gating window as illustrated in Fig. \ref{overlap}. It can be seen in Fig. \ref{overlap} that an overlapped peak of two states can be fitted by two Gaussian functions, whose width and centroid position are different. Thus, the overlapped region can be easily identified if the gating window is divided into two regions. The first region is set between the lines (1) and (2) corresponding, respectively, to the head-tail and maximum positions of the first Gaussian. The second region is chosen between the lines (3) and (4), which correspond to the maximum and end-tail positions of the second Gaussian, respectively. Once the overlapped region is identified (see the overlapped area in Fig. \ref{overlap}), its contribution can be easily removed from the TSC spectrum. As a result, the contribution of the overlapped regions to the obtained TSC spectra is found to be less than 5\%. However, it should be noted that the above approach cannot be applied if the energies of the overlapped peaks are notably close to each other, namely when the difference between the energies of the two peaks is smaller than 0.8 FWHM (Full Width at Half Maximum), e.g. the following pairs of final levels (126.412, 127.298) keV and (404.129, 405.470) keV.
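As an illustration of the gating procedure described above, the following Python sketch fits a doublet in the summation spectrum with two Gaussians on a linear background and defines the two gating windows. The function names, the $3\sigma$ choice for the ``head-tail'' and ``end-tail'' positions, and the initial-guess mechanism are illustrative assumptions, not the actual analysis code used in this work.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def doublet(x, a1, c1, s1, a2, c2, s2, b0, b1):
    """Two Gaussians on a linear background."""
    g1 = a1 * np.exp(-0.5 * ((x - c1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - c2) / s2) ** 2)
    return g1 + g2 + b0 + b1 * x

def fit_doublet(x, y, guess):
    """x: summation energy (keV), y: counts; guess: initial parameters."""
    popt, _ = curve_fit(doublet, x, y, p0=guess)
    a1, c1, s1, a2, c2, s2, b0, b1 = popt
    # Gating windows as described in the text (3 sigma taken as "tail"):
    window1 = (c1 - 3.0 * s1, c1)   # head-tail to maximum of the first Gaussian
    window2 = (c2, c2 + 3.0 * s2)   # maximum to end-tail of the second Gaussian
    return popt, window1, window2
\end{verbatim}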
All the measured TSC spectra are shown in Fig. \ref{tsc}. Due to the low statistics, the TSC spectra corresponding to the final levels at 276.713, 356.686, 362.286, and 450.050 keV have not been analyzed yet. Although the energy resolutions of the two HPGe detectors used in the present experiment are slightly different, the obtained TSC spectra are mirror symmetric because an algorithm for improving the digital resolution \cite{resolution} has been applied. The vicinity regions around each summation peak are gated to create a corresponding background spectrum. The latter is then subtracted from the spectrum obtained from the gating of the peak region, thus leading to some negative values in the TSC spectra in Fig. \ref{tsc}.
A pair of peaks, which are symmetric within a TSC spectrum, represents a gamma cascade. The peak positions and areas correspond to the transition energies and intensities, respectively. In order to construct the nuclear level scheme, we consider the gamma transitions that appear in more than one TSC spectrum to be the primary transitions. In addition, a transition is also considered as primary if it is currently determined as primary in the ENSDF library \cite{ENSDF}.
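The selection rule for primary transitions stated above can be expressed compactly as follows; the 1 keV matching tolerance is an illustrative choice only.
\begin{verbatim}
def find_primaries(tsc_spectra, tol=1.0):
    """tsc_spectra: dict {final_level_keV: list of gamma energies (keV)}.
    Returns the gamma energies that appear in more than one TSC spectrum."""
    primaries = []
    levels = list(tsc_spectra)
    for li in levels:
        for e in tsc_spectra[li]:
            n_seen = sum(
                any(abs(e - e2) <= tol for e2 in tsc_spectra[lj])
                for lj in levels
            )
            if n_seen > 1 and not any(abs(e - p) <= tol for p in primaries):
                primaries.append(e)
    return sorted(primaries)
\end{verbatim}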
As for the spin of the levels, the possible spins of an observed intermediate level are often evaluated by using the following formula
\begin{equation}
\label{eq1}
\textrm{max}(J_i-L,J_f-L) \leq J \leq \textrm{min}(J_i+L,J_f+L),
\end{equation}
where $J_i, J$, and $J_f$ are spins of the initial, intermediate, and final levels, respectively, whereas $L$ is the multipolarity. Within the present work, we assume that all the observed transitions are dipole ($L=1$). This assumption is made because the probability of detecting the dipole transition is much higher than that of the quadrupole ($L=2$) \cite{blatt1991book}.
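For the dipole case, Eq.~(\ref{eq1}) reduces to a simple intersection of spin intervals. A minimal Python sketch (with the obvious physical constraint $J\geq\frac{1}{2}$ added) reproduces the assignments used below:
\begin{verbatim}
from fractions import Fraction

def allowed_spins(j_i, j_f, L=1):
    """Intermediate spins allowed by Eq. (1), in steps of 1 hbar, J >= 1/2."""
    lo = max(j_i - L, j_f - L, Fraction(1, 2))  # J >= 1/2 is a physical constraint
    hi = min(j_i + L, j_f + L)
    spins, j = [], lo
    while j <= hi:
        spins.append(j)
        j += 1
    return spins

# Capture state J_i = 1/2; final level 90.875 keV with J_f = 5/2:
print(allowed_spins(Fraction(1, 2), Fraction(5, 2)))  # [3/2] -> unique assignment
# Final level 35.844 keV with J_f = 3/2:
print(allowed_spins(Fraction(1, 2), Fraction(3, 2)))  # [1/2, 3/2]
\end{verbatim}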
\begin{figure}[h]
\centering
\includegraphics[width=12.9cm]{fig3.pdf}
\caption{Two-step cascade spectra of $^{153}$Sm obtained for different final states $E_f$.}
\label{tsc}
\end{figure}
\section{Results and Discussion}
\subsection{Level scheme of $^{153}$Sm \label{nls}}
We have identified in total 576 gamma transitions corresponding to 386 gamma cascades, which are associated with the decays from the compound state to the ground state and 11 final levels (see Table \ref{tab2}). The latter are 7.535 ($\frac{5}{2}^+$), 35.844 ($\frac{3}{2}^-$), 90.875 ($\frac{5}{2}^-$), 126.412 ($\frac{1}{2}^-$), 127.298 ($\frac{3}{2}^-$), 182.902 ($\frac{5}{2}^-$), 321.113 ($\frac{3}{2}^+$), 404.129 ($\frac{1}{2}^-$), 405.470 ($\frac{3}{2}^-$), 414.924 ($\frac{1}{2}^+$), and 481.088 ($\frac{3}{2}^+$) keV. Based on these observed cascades, we have determined 103 primary gamma transitions corresponding to 103 intermediate levels and 299 secondary transitions emitted from these levels. Among the above primary transitions, 99 transitions have been deduced since they appear in more than one TSC spectrum. The remaining 4 transitions, whose energies are 4329.1, 4420.1, 4769.6, and 5133.2 keV, are also considered as the primary ones even though they appear in only one TSC spectrum, because these transitions are found to be the same as the primary transitions that currently exist in the ENSDF library \cite{Helmer2006}.
Since the compound state of $^{153}$Sm has the spin of $\frac{1}{2}\hbar$, by using Eq. (\ref{eq1}) together with an assumption that all the observed transitions are dipole, we are able to tentatively assign a unique spin value of $\frac{3}{2}\hbar$ for 53 intermediate levels, which correspond to the gamma transitions emitted from the compound state to 3 final levels with the spins of $\frac{5}{2}\hbar$, namely the 7.535 ($\frac{5}{2}^+$), 90.875 ($\frac{5}{2}^-$), and 182.902 ($\frac{5}{2}^-$) keV levels. For the remaining 50 levels, which relate to the gamma transitions emitted from the compound state to the final levels with the spin of $\frac{1}{2}\hbar$ or $\frac{3}{2}\hbar$, their spin values cannot be uniquely deduced. Consequently, a possible spin range from $\frac{1}{2}\hbar$ to $\frac{3}{2}\hbar$ has tentatively been assigned to these levels.
The assumption that all the observed transitions are dipole is made based on the following experimental evidence. First, among all the transitions coming from the compound state (see the ($n,\gamma$) datasets for thermal and 2-keV neutrons in Ref. \cite{Helmer2006}), we found only 2 transitions which are not dipole, namely the 5506.4 and 5861.4 keV transitions to the 362.286 ($\frac{5}{2}^+$) and 7.535 ($\frac{5}{2}^+$) keV levels, respectively. These transitions, however, have considerably lower intensities than the other primary transitions. Moreover, the 5506.4 keV transition has only been found in Ref. \cite{1969Sm04}, whereas that of 5861.4 keV has only been detected in the form of a doublet with the strong transition of 5868.4 keV in Refs. \cite{1969Sm04, 1969Re04, 1971Be41}, which has not been reproduced within the framework of the ($n,\gamma$) experiment with 2-keV neutrons \cite{1997GoZn}. Second, within the low-excitation-energy part of the $^{153}$Sm level scheme, quadrupole transitions have rarely been reported. In fact, only a few quadrupole transitions currently exist in the ENSDF library, such as the 223.173 and 278.17 keV transitions coming from the 276.713 and 405.470 keV levels, respectively. All of them have energies below the energy threshold of the present work (520 keV for both transition and excitation energy). This evidence supports the validity of the assumption above and consequently the reliability of the spin assignment within the present work, although the assumption remains restrictive and the spin assignments of the present work cannot be considered definite.
By comparing the $^{153}$Sm level scheme obtained within the present work with that extracted from the ENSDF library \cite{Helmer2006}, we have found that 29 primary gamma transitions and 42 intermediate levels are the same within their uncertainties, whereas only 8 secondary transitions are the same as those existing in the ENSDF library. The remaining 74 primary gamma transitions, 61 intermediate levels, and 291 secondary transitions are therefore considered as the new data obtained within the present experiment.
In particular, the $^{153}$Sm level scheme obtained within the present work agrees well with that obtained within the previous studies using the same $^{152}$Sm($n_{th},\gamma$) reaction \cite{1997GoZn,1971Be41,1969Sm04,1969Re04}. For the energy region below 5300 keV, which is the maximum gamma energy that can be detected within the present experiment (because the energy thresholds of the detectors were set to be around 520 keV), we have reproduced 19 out of 24 primary transitions that were previously reported in Refs. \cite{1997GoZn,1971Be41,1969Sm04,1969Re04}. Among the 5 unreproduced transitions, 2 transitions, whose energies are 5220.4 and 5283.9 keV, were reported in Ref. \cite{1971Be41}, whereas 2 transitions with the energies of 4850 and 4864.0 keV were detected in Ref. \cite{1969Sm04}. These transitions were found a very long time ago and have not been reproduced by other experiments. The remaining 4505.6 keV transition was reported with a slightly different energy of 4506.6 $\pm$ 1.0 keV in Ref. \cite{1969Re04} or 4505.8 $\pm$ 0.4 keV in Ref. \cite{1971Be41}, or 4506.5 $\pm$ 0.6 keV in Ref. \cite{1969Sm04}. This 4505.6 keV transition might therefore be the same as the 4507.4 $\pm$ 0.4 keV transition observed within the present work as well as the 4507.41 keV transition obtained from the ($n,\gamma$) experiment with the 2-keV neutron source in Ref. \cite{1997GoZn}. In general, we have reproduced 22 out of 26 levels that were reported by the previous ($n_{th},\gamma$) experiments within the excitation energy above 600 keV in Refs. \cite{1997GoZn,1971Be41,1969Sm04,1969Re04}.
The result of the $^{153}$Sm level scheme obtained within the present work also agrees well with the neutron capture experiment using a 2-keV neutron source, namely 22 out of 24 primary transitions within the gamma energy range of 520 to 5300 keV and 23 out of 29 levels within the excitation energy region of 600 to 2000 keV reported in Ref. \cite{1997GoZn} have been replicated within the present experiment. Among the remaining unreproduced levels, 4 levels, whose energies are 1675.8, 1723.5, 1737.5, and 1751.4 keV, have been determined in Ref. \cite{1997GoZn} without any populating gamma transitions. In addition, all the levels reported in Ref. \cite{1997GoZn} with the assigned spins of $\frac{1}{2}\hbar$ or $\frac{3}{2}\hbar$ are fully in agreement with those deduced from the present study.
Furthermore, our data are also consistent with those obtained within the ion-induced experiments, in particular the $^{152}$Sm($d,p$) \cite{1965Ke09,1972Ka07,1997GoZn}, $^{154}$Sm($p,d$) \cite{1997GoZn,1997Bl11}, and $^{154}$Sm($d,t$) \cite{1972Ka07,1997GoZn,1971Be41} reactions. Below 2000 keV, 24 excited levels found in the present work are supported by at least one of the experiments employing the ion-induced reactions. Similarly, 44 excited levels found within the present experiment agree with those extracted from the ion-induced reactions within their uncertainties (see the excited levels with the superscript denotation ''e'' in Table \ref{tab2}). It should be noted here that the uncertainties of the data obtained within the ion-induced experiments are often in the range of 8 to 18 keV, which are much larger than those obtained within the present work. Therefore, we consider that two levels are the same only if their discrepancy is less than 1.5 keV, that is, if a level deduced from the present experiment agrees with one deduced from the ion-induced experiments but the discrepancy between the two levels is larger than 1.5 keV, it is considered as a new level.
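The level-matching criterion described above can be written compactly as follows (energies in keV; a purely illustrative helper, not the actual analysis code).
\begin{verbatim}
def classify_levels(new_levels, known_levels, tol=1.5):
    """Split the levels of this work into 'same' and 'new' using the
    1.5 keV matching criterion against the reference (e.g. ENSDF) levels."""
    same, new = [], []
    for e in new_levels:
        if any(abs(e - e_ref) < tol for e_ref in known_levels):
            same.append(e)
        else:
            new.append(e)
    return same, new
\end{verbatim}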
\begin{longtable}{@{\extracolsep\fill}ccccc|p{2.5cm}ccc@{}}
\caption{Gamma-cascade transition energies and absolute intensities obtained from the $^{152}$Sm($n_{th},\gamma$) reaction. Primary transitions and intermediate levels corresponding to each gamma cascade are determined if possible. Comparisons between the data obtained within the present work and those extracted from the ENSDF library are made. Detailed explanation is given at the end of the table.}\\
\multicolumn{5}{c|}{Present Work}&\multicolumn{4}{c}{ENSDF}\\[0.05 cm] \hline
$E_1$ & $E_i$ & $J_i$ & $E_2$ & $I_{\gamma\gamma}$ & $E_f$ & $E_1$\textsuperscript{a} & $E_i$\textsuperscript{b} & $J_i$ \\
\hline
\endfirsthead
\caption[]{(continue)}\\
\multicolumn{5}{c|}{Present Work}&\multicolumn{4}{c}{ENSDF}\\[0.05 cm] \hline
$E_1$ & $E_i$ & $J_i$ & $E_2$ & $I_{\gamma\gamma}$ & $E_f$ & $E_1$\textsuperscript{a} & $E_i$\textsuperscript{b} & $J_i$ \\
\hline
\endhead
\input{tab.tex}\hline
\multicolumn{9}{p{12cm}}{Present work: experimental data obtained from the present work.}\\
\multicolumn{9}{p{12cm}}{ENSDF: data taken from the ENSDF library \cite{Helmer2006}.}\\
\multicolumn{9}{p{12cm}}{$E_{1}$: energy (in keV) of the primary gamma transition.}\\
\multicolumn{9}{p{12cm}}{$E_{2}$: energy (in keV) of the secondary gamma transition.}\\
\multicolumn{9}{p{12cm}}{$E_i$: energy (in keV) of the intermediate level.}\\
\multicolumn{9}{p{15cm}}{$I_{\gamma\gamma}$: absolute intensity of the cascade normalized to 10$^6$ decays. Uncertainties of the normalization factors are not taken into account.}\\
\multicolumn{9}{p{15cm}}{$E_f$: energy (in keV) of the final level. Spin and parity of the final level are given in the parentheses.}\\
\multicolumn{9}{p{12cm}}{$J_i$: tentative spin (in $\hbar$) of the corresponding level.}\\
\multicolumn{9}{p{15cm}}{Throughout the table, the uncertainty for numeric values is given next to the corresponding value (in the $italic$ type) and referred to the last digits of the value, e.g. 12.1 {\it 23} means 12.1 $\pm$ 2.3.}\\
\multicolumn{9}{p{15cm}}{The experimental data within the present work, which agree with those existed in the ENSDF library, are highlighted in the {\bf bold} type.}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{a} data taken from the ($n,\gamma$) with thermal and 2-keV neutron datasets in Ref. \cite{Helmer2006}.}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{b} data taken from the Adopted Level dataset in Ref. \cite{Helmer2006}.}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{c} unresolved final levels: 126.412 ($\frac{1}{2}^-$) or 127.298 ($\frac{3}{2}^-$).}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{d} unresolved final levels: 404.129 ($\frac{1}{2}^-$) or 405.470 ($\frac{3}{2}^-$).}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{e} energy of the observed level, which agrees with those obtained from the ion-induced $^{152}$Sm($d,p$) and/or $^{154}$Sm($p,d$) and/or $^{154}$Sm($d,t$) reactions within their uncertainty. It is noted that the superscript denotation ''e'' is not marked if the discrepancy between the observed level and that presented in the ENSDF library is less than 1.5 keV.}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{f} the values of the 630.20 keV level and its spin are taken from the ($n,\gamma$) experiments.}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{g} the spin value of $\frac{1}{2}\hbar$ was assigned to the 695.80 keV level in the ENSDF library based on the strong supports from the $l$-transfer and vector analyzing power in the $(d,t) $ particle-transfer reaction, whereas the present work suggests a different spin value, namely $\frac{3}{2}\hbar$. Our suggestion for this level is made based on its weak 604.8 keV dipole transition to the 90.875 keV ($\frac{5}{2}^-$) state. In the case the 604.8 keV transition is quadrupole, the spin of $\frac{1}{2}\hbar$ must be assigned to the 695.7 keV level found within the present work.}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{h} this level can not be distinguished from the 734.7 keV ($\frac{1}{2}^+$) level within the present experiment.}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{i} this level can not be distinguished from the 984.3 keV ($\frac{3}{2}^-$) level within the present experiment.}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{j} the observed levels of 2494.7 \textit{10} and 2497.1 \textit{9} keV both agree with the 2496.6 \textit{12} keV state within their experimental uncertainties. Thus, there is a possibility that these three levels are all the same.}\\
\multicolumn{9}{p{15cm}}{\textsuperscript{x} the gamma cascades, which we are not able to identify as the primary transitions within the present work.}\\
\hline
\label{tab2}
\end{longtable}
Table \ref{tab2} presents the absolute intensities normalized to 10$^6$ captures together with the statistical uncertainties of all 386 measured cascades. The normalization factor is determined based on the absolute intensities of the 4697.2 and 5117.8 keV primary transitions (i.e., the 4697.4 and 5118.3 keV transitions within the present work) taken from the ENSDF data \cite{Helmer2006} together with their branching ratios. The latter are determined from the gating spectrum of the primary transitions mentioned above. Since the energy threshold of the present experiment is 520 keV, we are not able to identify the branches whose secondary-transition energy is less than 520 keV. Therefore, our cascade intensities may contain a certain systematic error.
In general, the present experiment reproduces most of the ENSDF data obtained from the neutron capture and ion-induced reactions. This consistency obviously proves the reliability of the data obtained within the present study.
Thanks to the coincidence technique, the influence of $^{150}$Sm on the spectroscopic information of $^{153}$Sm, which limits the number of data obtained from the neutron-capture experiment using the conventional HPGe detector \cite{1971Be41,1969Sm04, 1969Re04}, has been considerably reduced within the present experiment. This technique also reduces the peak overlaps, which are very common in the analysis of conventional prompt gamma spectra, especially for nuclei with a complicated level scheme such as $^{153}$Sm. The reason is that the coincidence technique detects only the intermediate levels in a narrow spin range from $J_i-1$ to $J_i+1$ ($J_i$ is the spin of the compound state) and the detected gamma transitions are distributed over multiple TSC spectra. As a result, we are able to detect more important information on the level scheme of $^{153}$Sm, which is not currently available in the ENSDF library.
\subsection{Cumulative number of levels}
\subsubsection{Experimental cumulative number of levels} \label{cump1}
\begin{figure}[h]
\includegraphics[width=17.2cm]{fig4}
\caption{\label{cul} (Color online) Total (a) and partial (b) cumulative numbers of levels obtained by using the NLD data in Ref. \cite{Oslo} (estimated data) and ENSDF data in Ref. \cite{Helmer2006} in comparison with those obtained from ``This work 1'' and ``This work 2'' (see the explanation in the text).}
\end{figure}
\begin{figure}[h]
\includegraphics[width=12cm]{fig5}
\caption{\label{nld} (Color online) Total level density obtained by counting the numbers of discrete levels in the ENSDF, ``This work 1'' and ``This work 2'' versus the NLD data taken from Ref. \cite{Oslo}.}
\end{figure}
Since several new energy levels have been detected within the present experiment, we are able to construct the total and partial cumulative numbers of levels, which are, by definition, the numbers of excited levels that fall within specific energy and spin ranges. These cumulative numbers are constructed by combining the adopted levels taken from the ENSDF \cite{Helmer2006} with those obtained within the present work (Table \ref{tab2}). For the latter, however, there are unassigned intermediate levels corresponding to 87 gamma cascades as shown in Table \ref{tab2} with the superscript denotation ''x''. Therefore, we have constructed two cumulative curves denoted by ``This work 1'' and ``This work 2'' (see Fig. \ref{cul}). ``This work 1'' is created by assuming that, in each of the 87 cascades, the gamma transition with the higher energy is the primary one, whereas that with the lower energy is the secondary one. ``This work 2'' is generated by using the opposite assumption, namely the gamma transitions with lower (higher) energies are considered as the primary (secondary) ones. It is obvious that ``This work 1'' is always higher than ``This work 2'', regardless of their total or partial cumulative curves, because ``This work 1'' contains the primary gamma transitions, whose energies are higher than those in ``This work 2'' (Fig. \ref{cul}). Here, it should be noted that the assumption for ``This work 1'' should be much more reliable than that for ``This work 2'' because within two-step cascades one often observes a primary transition whose energy is higher than that of the secondary one (see e.g. the data reported in the ENSDF library \cite{ENSDF}). Consequently, the real cumulative curve should probably be very close to ``This work 1''.
The total and partial cumulative numbers of levels within the present work are also compared with those obtained by using the NLD data in Ref. \cite{Oslo}. The total cumulative curve in this case is calculated by using the conventional formula \cite{gcmodel}
\begin{equation}
N(E_x) = \int_{0}^{E_x} \rho(E)dE ~, \label{ne_total}
\end{equation}
where $\rho(E)$ is the experimental NLD taken from Ref. \cite{Oslo}. As for the partial cumulative curve for the spin range $J= [\frac{1}{2},\frac{3}{2}]\hbar$, it should be calculated using the same Eq. (\ref{ne_total}) but the $J$-dependent NLD $\rho(E,J)$ must be used instead of the total NLD $\rho(E)$. However, there exists in the literature only the total NLD extracted by using the Oslo method $\rho(E)$ in Ref. \cite{Oslo}. The latter was extracted from the gamma spectra of the $^{154}$Sm$(p, d\gamma)^{153}$Sm reaction, which were later normalized using the discrete levels taken from the ENSDF library \cite{ENSDF} as well as the NLD data at the neutron binding energy (see e.g., Fig. 3 of Ref. \cite{Oslo}). Therefore, in order to estimate the $\rho(E,J)$ values, we have manually multiplied $\rho(E)$ by a factor, which is determined as the ratio between the number of levels with spins $J =\frac{1}{2}$ and $\frac{3}{2}\hbar$ and the total number of levels existing in the ENSDF library \cite{Helmer2006}. This factor is found to be about 0.27 for $^{153}$Sm. The obtained $\rho(E,J)$ is then used to calculate the partial cumulative curve $N(E_x,J)$ for $J= [\frac{1}{2}, \frac{3}{2}] \hbar$. For the sake of simplicity, the corresponding results, namely the total and partial cumulative curves estimated using the NLD data in Ref. \cite{Oslo}, are called the estimated data/curves hereafter. It is seen in Figs. \ref{cul}(a) and (b) that such an estimation seems to be valid for the low-energy region (below 1 MeV) as both estimated curves for the total and partial cumulative numbers of levels are in excellent agreement with the ENSDF data. It is obvious that the spin distribution is not constant over the excitation energy. Thus, the estimated data presented in Fig. \ref{cul}(b) may not be correct in the high-energy region above 1 MeV. Since the spin distribution changes very slightly when the excitation energy is low, we believe that our deduction is acceptable with a negligible error for the energy region from 1 MeV to 2 MeV. It is interesting to see in Fig. \ref{cul}(b) that ``This work 1'' almost coincides with the estimated data in the energy region from 0 to about 1.8 MeV, above which the data obtained from our estimation might be no longer valid. ``This work 2'' and ENSDF curves agree with the estimated data up to about 1 MeV only. This result strongly supports the validity of the assumption for ``This work 1'', which is the most common assumption used in the two-step cascade experiments as explained above. This assumption can also be confirmed by comparing the total NLD in Ref. \cite{Oslo} with those obtained from the ENSDF, ``This work 1'', and ``This work 2'' (Fig. \ref{nld}). It is clearly seen in Fig. \ref{nld} that the total NLDs taken from the ENSDF and ``This work 2'' only agree with the data of Ref. \cite{Oslo} below 1 MeV, whereas the agreement between ``This work 1'' and Ref. \cite{Oslo}'s data is extended up to about 1.2 MeV, as indicated by the two arrows in Fig. \ref{nld}.
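The construction of the cumulative curves via Eq.~(\ref{ne_total}), both from discrete levels and from the rescaled NLD, can be sketched as follows; the spin fraction of 0.27 is the value quoted above, and the function names are illustrative.
\begin{verbatim}
import numpy as np

def cumulative_from_levels(level_energies, e_grid):
    """N(E_x) obtained by directly counting discrete levels (energies in MeV)."""
    levels = np.sort(np.asarray(level_energies))
    return np.searchsorted(levels, e_grid, side="right")

def cumulative_from_nld(e, rho, e_grid, spin_fraction=1.0):
    """Eq. (2): N(E_x) = integral_0^Ex rho(E) dE, optionally rescaled by the
    fraction of levels with J = 1/2, 3/2 (0.27 for 153Sm as quoted in the text)."""
    rho_j = spin_fraction * np.asarray(rho, dtype=float)
    # Cumulative trapezoidal integration of rho_j over the energy grid e.
    n_cum = np.concatenate(
        ([0.0], np.cumsum(0.5 * (rho_j[1:] + rho_j[:-1]) * np.diff(e)))
    )
    return np.interp(e_grid, e, n_cum)
\end{verbatim}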
The results obtained from ``This work 1'' as shown in Figs. \ref{cul} and \ref{nld} indicate two significant contributions of the new levels found within the present work. The first contribution is that for the total NLD, the maximum excitation energy $E_{\rm max}$, defined as the energy threshold below which most of the excited levels have been observed, is now extended to about 1.2 MeV, instead of 1.0 MeV as obtained from the ENSDF data \cite{Helmer2006} (Figs. \ref{cul}(a) and \ref{nld}). The second contribution is associated with the value of $E_{\rm max}$ for the spin range of [$\frac{1}{2},\frac{3}{2}$]$\hbar$, which has been increased up to about 1.8 MeV (Fig. \ref{cul}(b)). It is evident that the NLD calculated by counting the numbers of discrete levels has been widely considered as the most reliable data, which are often used for the normalization of the experimentally extracted data \cite{OsloMethod} as well as different NLD model calculations \cite{hfbcs,hfb}. However, the present ENSDF library provides the reliable NLD up to about 1 MeV only. By including our new data, we are able to obtain, for the first time, the reliable NLD data up to about 1.2 MeV and 1.8 MeV for the total and partial (within the spin range of [$\frac{1}{2},\frac{3}{2}$]$\hbar$) NLDs, respectively. This second contribution is therefore the most important contribution of the present work.
\subsubsection{Comparison with theoretical models}
\begin{figure}[h]
\includegraphics[width=16cm]{fig6}
\caption{\label{culm} (Color online) Comparison between the experimental total (a) and partial (b) cumulative numbers of levels and those predicted by two phenomenological NLD models.}
\end{figure}
The cumulative number of levels is very helpful for verifying the predictive power of the NLD models. In Fig. \ref{culm}, we compare our experimental cumulative curve (This work 1) with two phenomenological NLD models, namely the back-shifted Fermi gas (BSFG) and constant temperature (CT). The functional forms of these two models are taken from Ref. \cite{egidy1988}, that is
\begin{eqnarray}
&&\rho_{CT}(E,J)=f(J)\rho_{CT}(E)=f(J)\frac{1}{T}e^{(E-E_0)/T}~, \label{ct} \\
&&\rho_{BSFG}(E,J)=f(J)\rho_{BSFG}(E)=f(J) \frac{e^{2\sqrt{a(E-E_1)}}}{12\sqrt2\sigma a^{1/4}(E-E_1)^{5/4}}~, \label{fg} \\
&&f(J)=e^{-J^2/2\sigma^2} - e^{-(J+1)^2/2\sigma^2} \simeq \frac{2J+1}{2\sigma^2}e^{-(J+\frac{1}{2})^2/2\sigma^2}~, \label{fj}
\end{eqnarray}
where $\sigma_{CT}=0.98A^{0.29}$ and $\sigma^2_{BSFG} = 0.0146A^{5/3}\frac{1+\sqrt{1+4a(E-E_1)}}{2a}$ define the spin cut-off parameters with $E_1$ and $a$ being the back-shifted energy and level density parameters, respectively. Two parameters $E_0$ and $T$ in Eq. (\ref{ct}) are the energy shift and constant temperature, whereas the function $f(J)$ in Eq. (\ref{fj}) is the conventional spin distribution of the NLD \cite{gcmodel}. The free parameters $a, E_1, E_0$, and $T$ of the BSFG and CT are often adjusted to fit the total cumulative number of levels as well as the NLD determined from the experimentally averaged neutron-resonance spacing data ($D_{0}$ value) \cite{egidy2005}. The values of these free parameters taken from Ref. \cite{egidy2005} (see also Table \ref{tab3}) were used to calculate $\rho_{CT}(E)$, $\rho_{BSFG}(E)$, $\rho_{CT}(E,J)$, and $\rho_{BSFG}(E,J)$ ($J=$[$\frac{1}{2},\frac{3}{2}$]$\hbar$). The total and $J$-dependent cumulative numbers of levels are then calculated making use of Eq. (\ref{ne_total}). The results shown in Fig. \ref{culm}(b) indicate that the CT model with parameters taken from Ref. \cite{egidy2005} fits our experimental data (This work 1) well for the spin range of [$\frac{1}{2},\frac{3}{2}$]$\hbar$, but it is higher than our experimental total cumulative curve (Fig. \ref{culm}(a)). The reason is that the parameters of the CT model taken from Ref. \cite{egidy2005} were given based on the analysis of 21 excited levels below 0.49 MeV within the spin range of [$\frac{1}{2},\frac{9}{2}$]$\hbar$ (close to the spin range of [$\frac{1}{2},\frac{3}{2}$]$\hbar$ within the present work), whereas below 0.49 MeV, there must be in total 37 excited levels within a much larger spin range of [$\frac{1}{2},\frac{19}{2}$]$\hbar$ as in the ENSDF library \cite{Helmer2006}. Consequently, while the CT model describes well the experimental $J$-dependent cumulative curve, it is unable to describe the total one. For the BSFG model with the free parameters taken from the same Ref. \cite{egidy2005}, it completely fails to describe both the total and $J$-dependent experimental cumulative curves (see Fig. \ref{culm}). The above results of the CT and BSFG models clearly demonstrate that the prediction of the phenomenological NLD models depends strongly on the values of their free parameters. For instance, by re-fitting the results of the BSFG model to our total and $J$-dependent experimental cumulative data, we obtain the different sets of free parameters as reported in Table \ref{tab3}. To obtain a reliable predictive power, one should therefore use the microscopic NLD models instead of the phenomenological ones.
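For completeness, Eqs.~(\ref{ct})--(\ref{fj}) can be implemented in a few lines. The parameter values below are those of Ref.~\cite{egidy2005} listed in Table~\ref{tab3}, and the Python sketch is illustrative only.
\begin{verbatim}
import numpy as np

A = 153  # mass number of 153Sm

def f_spin(J, sigma2):
    """Spin distribution f(J), Eq. (5)."""
    return np.exp(-J**2 / (2 * sigma2)) - np.exp(-(J + 1)**2 / (2 * sigma2))

def rho_ct(E, T=0.61, E0=-2.06):
    """Constant-temperature total NLD (MeV^-1), Eq. (3)."""
    return np.exp((E - E0) / T) / T

def sigma2_bsfg(E, a=17.76, E1=-1.08):
    """Energy-dependent BSFG spin cut-off parameter (squared)."""
    return 0.0146 * A**(5.0 / 3.0) * (1.0 + np.sqrt(1.0 + 4.0 * a * (E - E1))) / (2.0 * a)

def rho_bsfg(E, a=17.76, E1=-1.08):
    """Back-shifted Fermi-gas total NLD (MeV^-1), Eq. (4)."""
    U = E - E1
    sigma = np.sqrt(sigma2_bsfg(E, a, E1))
    return np.exp(2.0 * np.sqrt(a * U)) / (12.0 * np.sqrt(2.0) * sigma * a**0.25 * U**1.25)

# Partial NLD for J = 1/2 and 3/2 within the CT model (sigma_CT = 0.98 A^0.29).
E = np.linspace(0.1, 2.0, 200)
sigma2_ct = (0.98 * A**0.29) ** 2
rho_partial_ct = rho_ct(E) * (f_spin(0.5, sigma2_ct) + f_spin(1.5, sigma2_ct))
\end{verbatim}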
\begin{table}[h]
\caption{Values of the free parameters obtained within the CT and BSFG models presented in Fig. \ref{culm}.}
\begin{tabular}{p{6cm}|c|c|c|c}
Model & \multicolumn{2}{c|}{CT} & \multicolumn{2}{c}{BSFG} \\ \hline
Parameter & $E_0$ (MeV) & $T$ (MeV) & $a$ (MeV$^{-1}$) & $E_1$ (MeV) \\ \hline
Parameters from \cite{egidy2005}& $-2.06 \pm 0.29 $ & $0.61 \pm 0.03$ & $17.76 \pm 0.28$ & $-1.08 \pm 0.13$ \\
Fitted to This work 1 in Fig. \ref{culm}(a)& - & - & $3.51 \pm 0.28$ & $-12.09 \pm 1.24 $ \\
Fitted to This work 1 in Fig. \ref{culm}(b)& - & - & $12.73 \pm 0.16$ & $-3.49 \pm 0.07 $ \\
\end{tabular}
\label{tab3}
\end{table}
Within the present paper, three microscopic NLD models have been selected, namely the Hartree-Fock BCS (HFBCS) \cite{hfbcs}, the Hartree-Fock-Bogoliubov plus combinatorial method (HFBC) for the positive (HFBC $\pi^+$) and negative (HFBC $\pi^-$) parities \cite{hfb}, and the recent exact pairing plus independent-particle model at finite temperature (EP+IPM) \cite{epipm}. The HFBCS and HFBC data are accessible from RIPL-2 \cite{ripl2} and RIPL-3 \cite{ripl3}, respectively. These models have been considered to be the most up-to-date microscopic theoretical models for the NLD. Figure \ref{nld1} shows the total NLD $\rho(E)$ obtained within the HFBCS, HFBC, and EP+IPM in comparison with the experimental data. This figure indicates that while the HFBCS agrees with the experimental data only in the very low-energy region (below 0.5 MeV), both the HFBC and EP+IPM offer a good fit to the measured data. Moreover, the HFBC cannot describe the data below 0.5 MeV, whereas the EP+IPM, in general, agrees with both low- and high-energy data. Consequently, one can easily see in Fig. \ref{culm1} that only the EP+IPM can describe both the experimental total and partial cumulative curves. This result of the EP+IPM is not surprising because this model has successfully been used to describe the NLD data of not only hot $^{170-172}$Yb \cite{epipm} and $^{60-62}$Ni \cite{epipm1} nuclei but also several hot rotating $A \sim 200$ isotopes \cite{epipm2}. In addition, the EP+IPM does not use any fitting parameters as discussed in Refs. \cite{epipm,epipm2,epipm3}, whereas the HFBCS and HFBC often employ some parameters (see e.g., Eqs. (17) and (18) of Ref. \cite{hfbcs} or Eq. (25) of Ref. \cite{hfb}) fitted to the experimental total cumulative data at low energy and the $D_0$ value at energy $E = B_n$. The above results, once again, confirm the microscopic nature and universality of the EP+IPM NLD model proposed in Ref. \cite{epipm}. In other words, the presently updated data provide a good test for both phenomenological and microscopic NLD models.
\begin{figure}[h]
\includegraphics[width=8.6cm]{fig7}
\caption{\label{nld1} (Color online) Comparison between the total NLDs obtained within different microscopic NLD models and the experimental data taken from Ref. \cite{Oslo}.}
\end{figure}
\begin{figure}[h]
\includegraphics[width=17.2cm]{fig8}
\caption{\label{culm1} (Color online) Total (a) and partial (b) cumulative numbers of levels obtained within different microscopic NLD models in comparison with the experimental data obtained within the present work (This work 1) and those calculated from the experimental NLD data in Ref. \cite{Oslo} (estimated data).}
\end{figure}
\section{Conclusion}
The present paper studies the excited levels of the $^{153}$Sm nucleus populated in the thermal neutron-capture reaction using the $\gamma-\gamma$ coincidence technique and high resolution HPGe detectors. The coincidence technique together with the highly enriched target for the $^{152}$Sm isotope allows us to significantly reduce the influence of the excited $^{150}$Sm nucleus in the observed gamma spectrum. In addition, the statistics of the measured data are rather high within the framework of coincidence measurements. As a result, we are able to detect many new energy levels and their corresponding gamma transitions, namely 74 primary gamma transitions, 61 intermediate levels, and 291 secondary transitions. The tentative spin value of 53 observed levels is found to be $\frac{3}{2}\hbar$, whereas the remaining levels are tentatively adopted to be in the spin range of $[\frac{1}{2},\frac{3}{2}]\hbar$.
By combining the updated energy levels with those obtained from the ENSDF library, we have constructed the new total and partial (within the spin range of $[\frac{1}{2},\frac{3}{2}]\hbar$) cumulative numbers of levels and compared the obtained data with those calculated from the experimental NLD data extracted by using the Oslo method (estimated data) as well as the predictions of different phenomenological and microscopic NLD models. The good agreement between our new cumulative curves and the estimated data allows us to deduce the values of the maximum excitation energy $E_{\rm max}$, which is defined as the energy threshold below which most of the excited levels have been observed, to be extended to around 1.2 and 1.8 MeV for the total and partial (spins of $[\frac{1}{2},\frac{3}{2}]\hbar$) NLD data, respectively. These values of $E_{\rm max}$ are higher than the corresponding values obtained by using the data presently available in the ENSDF library. Moreover, the newly constructed cumulative curves also agree well with the recent microscopic exact pairing plus independent-particle model at finite temperature in which no fitting parameter has been employed.
All the results obtained within the present work are important as they provide updated information on the nuclear level structure and represent a step forward toward complete level schemes of excited compound nuclei.
\begin{acknowledgments}
N.N.A, N.X.H, P.D.K, H.H.T, N.Q.H acknowledge the support by the National Foundation for Science and Technology Development (NAFOSTED) of Vietnam through Grant No. 103.04-2017.323. They would also like to thank the Ministry of Science and Technology of Vietnam for the financial support through the project coded KC05.08/16-20. Sincere thanks are given to Prof. Vuong Huu Tan (former Chairman of Vietnam Atomic Energy Institute) and Prof. Nguyen Nhi Dien (former Director of Dalat Nuclear Research Institute) for their important decisions and support in implementing the neutron beams at Dalat Nuclear Research Reactor, which have been continuously used for the nuclear structure study in Vietnam.
\end{acknowledgments}
\section{Introduction}\label{sec:intro}
In this paper, we propose what we believe is the first fully-distributed method for the estimation of all the quantities and parameters needed by a team of ground (planar) mobile robots to collectively manipulate an unknown load. In particular, the proposed algorithm provides the estimation of the kinematic parameters (equivalent to the grasp matrix), the dynamic parameters (relative position of the center of mass, mass, and rotational inertia) and the kinematic state of the load (velocity of the center of mass and rotational rate).
Most of the work presented in the literature is based on the assumption of a-priori knowledge of the inertial parameters of the load, although this assumption does not always hold in real-world scenarios~\cite{1989-KimZhe,1992-SchCan,1991-WalFreMar,2002-SzePluBid,2013-SieDerHir}. Thus, collective manipulation tasks would benefit from the implementation of on-line estimation strategies of the inertial parameters of unknown loads for at least two reasons: first, existing control strategies, such as force control and pose estimation, could be effectively applied with satisfactory performance and a reduced control effort. Second, time-varying loads could be effectively manipulated, toward the implementation of adaptive or event-driven control strategies in uncertain environments.
Furthermore, similarly to other applications in multi-robot systems,
a distributed and decentralized implementation of such estimation strategies would provide robustness and scalability.
The research on estimation of inertial parameters is at an early stage, and the main limitations of the existing approaches are centralization and the use of absolute position and acceleration measurements, which are hard and costly to obtain, especially if accurate and noise-free information is needed. Moreover, centralized strategies are notoriously poorly scalable and not robust, due to the existence of a single point of failure~\cite{2005-YonAriTsu,2008-KubKroWah,2013-ErhHir,2012-MarKarHu_Kra}.
The algorithm that we propose has the following properties:
\begin{inparaenum}[\it (i)]
\item there is no central processing unit;
\item each robot is only able to exchange information with its neighbors in the communication network;
\item the communication network is only required to be connected (e.g., a simple line in the worst case);
\item each robot is able only to perform local sensing and computation;
\item the amount of memory and number of computations per step needed by each local instance of the algorithm do not depend on the number of robots but only on the number of communication neighbors.
\end{inparaenum}
The only assumptions that are needed are that each robot is endowed with a planar manipulator that is able to exert and measure the local force applied to the load and to measure the velocity of the contact point. Any other measurement (such as, e.g., position, distance, acceleration, and gyro measurements) is not available to the robots. Furthermore, nothing is known about the manipulated load.
The approach is totally distributed, and relies on the geometry of the rigid body kinematics, the rigid body dynamics, on nonlinear observation, and on consensus strategies. It is based on a sequence of steps that is proven to converge in finite time, after which all the robots will agree on the estimation of all the following quantities, characteristic of the load: its mass, its rotational inertia, the relative position of the contact point of each robot with respect to the geometric center of the contact points, the relative position of the load center of mass with respect to the geometric center, the velocity of the center of mass and the object angular rate.
This paper builds on and expands preliminary results presented in the conference papers~\cite{2014k-FraPetRiz,2015b-FraPetRiz}. Major improvements concern:
\begin{inparaenum}[\it (i)]
\item largely streamlined problem and algorithm formalizations, including a much clearer formalism for the algorithm and an explicit use of the grasp matrix,
\item improvement of the algorithm performance (e.g., improving the estimate of the load angular velocity),
\item consideration of the manipulation torques,
\item local control strategies that guarantee the feasibility of the approach, and
\item a full illustrative simulation of the whole approach.
\end{inparaenum}
\section{Model and Problem Statement}\label{sec:probStat}
\begin{figure}[t]
\centering
\includegraphics[width=0.88\columnwidth]{./figures/MRM_ProblemStatement-youbot}
\caption{Top view of a team of five mobile manipulators performing a planar manipulation task. Each robot can exert a force and torque on the object by means of a planar manipulator (only force is displayed in the picture), and can only measure the velocity of its contact point.}
\label{fig:ProbStat}
\end{figure}
In this section, we formally define the problem of distributively estimating \emph{all} the parameters and the time-varying quantities needed for a decentralized team of $n$ ground mobile manipulators to cooperatively move an unknown \emph{load} $B$ mounted on a cart, as depicted in
Fig.~\ref{fig:ProbStat} from the top.
Since the load moves on the floor it is more convenient to cast the problem in 2D.
Any spatially-related quantity which will be introduced in the following should be considered as the projection of the corresponding 3D quantity on the horizontal plane.
We denote the inertial frame with $\mathcal{W}=O-\mathbf{xy}$
and the load body frame with $\mathcal{B}=C-\mathbf{x}_B\mathbf{y}_B$, where $C$ is the center of mass (CoM) of $B$. We indicate with $\mathbf{p}_C\in\mathbb{R}^2$ and $\mathbf{v}_C=\dot{\mathbf{p}}_C$ the position and velocity of $C$ expressed in $\cal W$, respectively, and with $\omega\in\mathbb{R}$ the intensity of the load angular velocity%
, hereafter called simply \emph{angular rate}.
The dynamics of the manipulated load is that of a rigid body
\begin{equation}
\mathbf{M} \dot{\boldsymbol{\nu}} + \mathbf{g} = \mathbf{u},
\label{eq:dynamics_load}
\end{equation}
where $\boldsymbol{\nu}=(\mathbf{v}_C^T\;\omega)^T\in\mathbb{R}^3$ is the twist of $B$; $\mathbf{M}={\rm diag}(m,m,J)\in\mathbb{R}^{3\times 3}$ is the inertia matrix, with $m>0$ and $J>0$ being the mass and the rotational inertia of the load, respectively; $\mathbf{g}\in\mathbb{R}^3$ is the wrench resulting from the environmental forces such as friction or gravitation (in our setting we assume $\mathbf{g} = \mathbf{0}$); and $\mathbf{u}\in\mathbb{R}^3$ denotes the external wrench applied by the robots to $B$, which will be characterized in the following.
All the previous quantities are expressed in $\mathcal{W}$.
Each robot $i$ contributes to the manipulation tasks with one end effector.
The extension to the more general case of multiple arms per robot is however straightforward.
We denote with $\mathbf{u}_i=(\mathbf{f}_i^T\;\tau_i)^T\in\mathbb{R}^3$ the wrench exerted by the end-effector of robot $i$, expressed in $\mathcal{W}$, where $i=1 \ldots n$. The force $\mathbf{f}_i\in\mathbb{R}^2$ is applied to a contact point $C_i$ of $B$ and lies on the plane $\mathbf{xy}$, and $\tau_i\in \mathbb{R}$ is the intensity of a torque applied about the normal direction to the plane $\mathbf{xy}$.
We assume, naturally, that contact points do not overlap, i.e., $C_i\neq C_j$ for all $i\neq j$.
The total external wrench applied to~$B$ is given by
\begin{equation}
\mathbf{u} = \sum_{i=1}^n \mathbf{G}_i \mathbf{u}_{i} = \mathbf{G} \bar{\mathbf{u}}, %
\end{equation}
where $\mathbf{G}_i\in\mathbb{R}^{3\times 3}$ is the partial grasp matrix, $\mathbf{G}\in\mathbb{R}^{3\times 3 n}$ is the grasp matrix, and $\bar{\mathbf{u}} = \left({\mathbf{u}_1}^T \, \dots \, {\mathbf{u}_n}^T\right)^T$ is the stacked applied wrench that groups the generalized contact force components transmitted through the contact points~\cite{2008-PraTri}.
The partial grasp matrix is defined as $\mathbf{G}_i = \mathbf{P}_i \bar{\mathbf{R}}_i$, where
\begin{equation}
\mathbf{P}_i =
\begin{pmatrix}
\mathbf{I}_{2\times 2} & {\bf 0}_{2\times 1} \\
\left[(\mathbf{p}_{C_i} - \mathbf{p}_C)^\perp\right]^T & 1 \\
\end{pmatrix},
\end{equation}
and $\bar{\mathbf{R}}_i=\mathbf{I}_{3\times 3}$, in our setting, for all $i=1 \ldots n$.
Here $\mathbf{p}_{C_i}\in\mathbb{R}^2$ is the position of $C_i$ in $\cal W$.
The operator $(\cdot)^\perp$ is defined by
\begin{align}
\mathbf{q}^{\perp}= Q \mathbf{q}=
\underbrace{\begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix}}_{=Q}
\begin{pmatrix}
q^{x}\\ q^{y}
\end{pmatrix}
=
\begin{pmatrix}
-q^{y}\\ q^{x}
\end{pmatrix},
\end{align}
that is to say, $\mathbf{q}^\perp$ is equal to $\mathbf{q}$ rotated by an angle of $\pi/2$. %
The dynamics~\eqref{eq:dynamics_load} of the manipulated load is then given by %
\begin{equation}
\begin{pmatrix}
\dot{\mathbf{v}}_C\\
\dot \omega
\end{pmatrix}
=
\sum_{i=1}^{n}
\begin{pmatrix}
m^{-1}\mathbf{I}_{2\times 2} & {\bf 0}_{2\times 1} \\
J^{-1}\left[(\mathbf{p}_{C_i} - \mathbf{p}_C)^\perp\right]^T & J^{-1} \\
\end{pmatrix}
\begin{pmatrix}
\mathbf{f}_i\\
\tau_i
\end{pmatrix}.
\label{eq:dynamics_1}
\end{equation}
Let $\mathbf{p}_{G}\in\mathbb{R}^2$ represent the position of the geometric center $G$ of the contact points in $\mathcal{W}$, i.e.,
\[
\mathbf{p}_{G} = \frac{1}{n}\sum_{i=1}^n \mathbf{p}_{C_i}.
\]
We compactly define $\mathbf{z}_i = \mathbf{p}_{C_i} - \mathbf{p}_G$, and $\mathbf{z}_C = \mathbf{p}_G - \mathbf{p}_C$.
Thus, substituting $\mathbf{p}_{C_i} - \mathbf{p}_{C} = \mathbf{z}_i +\mathbf{z}_C $ in~\eqref{eq:dynamics_1} we obtain
\begin{equation}
\begin{pmatrix}
\dot{\mathbf{v}}_C\\
\dot \omega
\end{pmatrix}
=
\sum_{i=1}^{n}
\begin{pmatrix}
m^{-1}\mathbf{I}_{2\times 2} & {\bf 0}_{2\times 1} \\
J^{-1}{\mathbf{z}_i^\perp}^T & J^{-1} \\
\end{pmatrix}
\begin{pmatrix}
\mathbf{f}_i\\
\tau_i
\end{pmatrix} +
\begin{pmatrix}
{\bf 0}_{2\times 1}\\
J^{-1}{\mathbf{z}_C^\perp}^T \\
\end{pmatrix}
\sum_{i=1}^n\mathbf{f}_i.
\label{eq:dynamics_rewritten}
\end{equation}
Based on the dynamics~\eqref{eq:dynamics_rewritten}, one is easily convinced that, in order to effectively manipulate the load by controlling its velocity $\mathbf{v}_C$ and angular rate $\omega$, it is of fundamental importance that each robot $i$ has an estimate of the constant parameters $m$ and $J$, the time-varying vectors $\mathbf{z}_i(t)$ and $\mathbf{z}_C(t)$, and the quantities to be controlled, i.e., $\mathbf{v}_C(t)$ and $\omega(t)$.
\smallskip
We are now ready to formally state the addressed problem:
\begin{problem*}[Distributed Estimation for Cooperative Manipulation]\label{prob:main}
Given $n$ robots communicating through an
ad-hoc network
and manipulating an unknown load $B$; assume that each robot~$i$ can only
\begin{enumerate}
\item locally measure the velocity $\mathbf{v}_{C_{i}}$ of the contact point $C_{i}$,
\item locally know the applied wrench $\mathbf{u}_{i}$ acting on $C_{i}$,
\item communicate with its one-hop neighbors denoted with $\mathcal{N}_i$.
\end{enumerate}
Design a \emph{fully-distributed algorithm} such that each robot~$i$ is able to estimate the following six quantities:
\begin{enumerate}
\item the (constant) mass $m$ of the load,
\item the (constant) rotational inertia $J$ of the load,
\item the (time-varying) relative position $\mathbf{z}_i(t)$ of the contact point $C_i$ w.r.t. the contact-points geometric center $G$,
\item the (time-varying) relative position $\mathbf{z}_C(t)$ of the CoM of $B$ with respect to $G$,
\item the (time-varying) velocity $\mathbf{v}_C(t)$ of the CoM of $B$, and
\item the (time-varying) angular rate $\omega(t)$.
\end{enumerate}
\end{problem*}
In this work, we consider a rather strict definition of distributed algorithm, which requires that the complexity of the computations performed locally by each robot (in terms of number of elementary operations and size of the input/output data) be constant with respect to the number of robots $n$, as is done, e.g., in~\cite{2013e-RobFraSecBue} and other works. A distributed algorithm following this definition is highly scalable with $n$. %
In the next sections we shall constructively prove that the Problem is solvable as long as the communication network is connected, i.e., there exists a multi-hop communication path from any robot to any other robot.
\section{Algorithm Overview}\label{sec:overview}
An overview of the proposed distributed estimation algorithm is given in the scheme of Fig.~\ref{fig:AlgoOverview}. Each rectangular box in the scheme corresponds to a computation performed locally by each robot~$i$. Each circle, instead, corresponds to a consensus-like distributed algorithm that is used to compute the only five global quantities that we shall prove to be needed in the distributed estimation process. The number of these global quantities is independent of the number of robots, and they can be estimated resorting to standard distributed algorithms. Therefore, the overall distributed nature of the approach is ensured. The convergence of these distributed algorithms requires only that the overall communication graph is connected (no all-to-all communication is required). The same applies to our distributed estimation algorithm.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{./figures/overall_draw_vert}
\caption{Overview of the proposed distributed algorithm. \textbf{Top} (dashed blue box): this is a \emph{purely kinematic} phase, only the velocity measurements and the rigid body kinematics are used. After this phase, the estimates of the time-varying quantities $\mathbf{z}_i(t)$ and $\omega(t)$ (in blue) become available to each robot~$i$. \textbf{Bottom} (dashed red box): this is a \emph{dynamical phase}, also the knowledge of the forces and the rigid body dynamics are used. After this phase, the quantities $J$, $\mathbf{z}_C(t)$, $\mathbf{v}_C(t)$, and $m$ (in red) become available to each robot~$i$. %
}
\label{fig:AlgoOverview}
\end{figure}
To better understand the overall functioning of the algorithm, it is convenient to think of it as composed of a \emph{purely kinematic} phase, followed by a \emph{dynamical} one. In the former, only the rigid body kinematic constraint and the velocity measurements are used. After this phase, each robot~$i$ is able to estimate the time-varying quantities $\mathbf{z}_i(t)$ and $\omega(t)$.
In the latter, the knowledge of the local wrench and the rigid body dynamics is also used. After this phase, each robot~$i$ is able to estimate the remaining quantities, i.e., $J$, $\mathbf{z}_C(t)$, $\mathbf{v}_C(t)$, and $m$.
The two phases are described in Sections~\ref{sec:kyn_phase} and~\ref{sec:dyn_phase}.
All the estimation blocks are cascaded; therefore, convergence or inconsistency issues present in feedback estimation structures do not affect this scheme.
\section{Kinematic Phase}\label{sec:kyn_phase}
The objective of this phase
is to distributively compute an estimate of the time-varying quantities $\mathbf{z}_i(t)$ and $\omega(t)$. In the following we shall show how this is possible using only the measured velocities and the rigid body kinematic constraint.
The basic idea of this phase is to split the estimations of $\mathbf{z}_i(t)$ and $\omega(t)$ in two parts. The former is common to both estimations and consists essentially of the estimation of $\mathbf{z}_{ij}(t)=\mathbf{p}_{C_i}(t)-\mathbf{p}_{C_j}(t)$. This part is described in Sec.~\ref{sec:est_z_ij}. The latter comprises two separate estimators of $\mathbf{z}_i(t)$ and $\omega(t)$ and is described in Sec.~\ref{sec:est_z_i_omega}.
The reason for passing through the estimation of the $\mathbf{z}_{ij}$'s is briefly explained in the following. In~\cite{2012-AraCarSagCal}, a distributed algorithm is proposed that allows the estimation of the centroid of the positions of a network of robots by only measuring the relative positions between pairs of communicating robots.
This algorithm can be used
to distributively estimate $\mathbf{z}_i(t)$ if each pair of communicating robots
knew the relative position of the contact points $C_i$ and $C_j$, i.e., $\mathbf{z}_{ij}(t)$.
Nevertheless (see the problem statement in Sec.~\ref{sec:probStat}), each robot only measures the velocity of its contact point and not its position. Our first contribution is to show that, thanks to the rigid body constraint, it is possible to estimate $\mathbf{z}_{ij}(t)$ only resorting to the measurements $\mathbf{v}_{C_i}(t)$ and $\mathbf{v}_{C_j}(t)$.
Hereinafter, we shall avoid explicitly noting the time dependence whenever it is clear from the context, in order to enhance readability.
\subsection{Estimation of $\mathbf{z}_{ij}(t)$}\label{sec:est_z_ij}
The time-varying vector $\mathbf{z}_{ij}(t)$ that we want to estimate has to obey the \emph{nonlinear} rigid body constraint
\begin{align}\label{eq:rigid_body_dist}
\mathbf{z}_{ij}^T\mathbf{z}_{ij} = {\rm const}.
\end{align}
This leads us to the main idea behind the estimation of $\mathbf{z}_{ij}(t)$, that is, to exploit the fact that even if the direction of $\mathbf{z}_{ij}(t)$ may vary in time, the norm $\|\mathbf{z}_{ij}\|$ is always constant. Thus, the problem can be broken down into two parts: first, the retrieval of the time-varying direction of $\mathbf{z}_{ij}(t)$, and then the estimation of the constant quantity $\|\mathbf{z}_{ij}\|$.
The time derivative of both sides of~\eqref{eq:rigid_body_dist} results in
\begin{align}\label{eq:rigid_body_vel}
\dot{\mathbf{z}}_{ij}^T\mathbf{z}_{ij} = 0,
\end{align}
which means that $\mathbf{z}_{ij}$ is parallel to $\dot{\mathbf{z}}_{ij}^\perp = Q\dot{\mathbf{z}}_{ij}$.
We can then explicitly decompose $\mathbf{z}_{ij}$ in two factors
\begin{align}\label{eq:z_ij_decomp}
\mathbf{z}_{ij}
= d_{ij} \vec{\mathbf{y}}_{ij},
\end{align}
where $\vec{\mathbf{y}}_{ij}= \dot{\mathbf{z}}_{ij}^\perp\big/ \|\dot{\mathbf{z}}_{ij}^\perp\|\in\mathbb{S}^1$ is the unit vector denoting the time-varying oriented line (axis) along which $\mathbf{z}_{ij}$ lies, and $d_{ij}\in\mathbb{R}$ is the coordinate of $\mathbf{z}_{ij}$ on $\vec{\mathbf{y}}_{ij}$.
Let each robot send to its neighbors the (measured) velocity of its contact point using the available one-hop communication links. Then, each robot~$i$ can compute the velocity difference
\begin{align}
\dot{\mathbf{z}}_{ij} = \mathbf{v}_{C_i}-\mathbf{v}_{C_j},
\label{eq:local_vel_difference}
\end{align}
and its orthogonal vector $\dot{\mathbf{z}}_{ij}^\perp $, for each $j\in{\cal N}_i$. As a consequence, $\vec{\mathbf{y}}_{ij}$ is actually locally available to each robot~$i$, $\forall j\in{\cal N}_i$. This is the first milestone of our algorithm, which is formally stated in the following result.
\begin{result}\label{res:y_ij}
The axis $\vec{\mathbf{y}}_{ij}$ along which $\mathbf{z}_{ij}$ lies is directly computed from local measurements and one-hop communication as
\[
\vec{\mathbf{y}}_{ij}
=
\dfrac{{\dot{\mathbf{z}}}_{ij}^\perp}{\|\dot{\mathbf{z}}_{ij}^\perp\|}
=
\frac{ (\mathbf{v}_{C_i} - \mathbf{v}_{C_j})^\perp }{ \| \mathbf{v}_{C_i} - \mathbf{v}_{C_j}\| }
\]
as long as $\|\dot{\mathbf{z}}_{ij}\|=\|\mathbf{v}_{C_i} - \mathbf{v}_{C_j}\|\neq 0$.
\end{result}
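For concreteness, the following minimal Python sketch (the function and variable names are our own, purely illustrative choices) shows how robot~$i$ could compute $\vec{\mathbf{y}}_{ij}$ from its measured velocity and the velocity communicated by neighbor~$j$; the threshold \texttt{eps} plays the role of the noise threshold discussed at the end of this subsection.
\begin{verbatim}
import numpy as np

Q = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # 90-degree rotation, i.e., the "perp" operator

def axis_y_ij(v_ci, v_cj, eps=1e-6):
    """Unit axis y_ij along which z_ij lies, or None if the relative
    velocity is too small to give a reliable direction."""
    z_dot = np.asarray(v_ci, dtype=float) - np.asarray(v_cj, dtype=float)
    norm = np.linalg.norm(z_dot)
    if norm < eps:             # ||dz_ij/dt|| ~ 0: keep the previous estimate
        return None
    return (Q @ z_dot) / norm  # y_ij = perp(dz_ij/dt) / ||dz_ij/dt||
\end{verbatim}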
\medskip
In order to obtain the sought $\mathbf{z}_{ij}$,
only the estimation of $d_{ij}$ is left.
Due to the rigid body constraint~\eqref{eq:rigid_body_dist}
\begin{align}
|d_{ij}| = \|\mathbf{z}_{ij}\| = {\rm const}
\end{align}
holds, i.e., $d_{ij}$ is either equal to $\|\mathbf{z}_{ij}\|$ or to $-\|\mathbf{z}_{ij}\|$, depending on whether $\vec{\mathbf{y}}_{ij}$ and $\mathbf{z}_{ij}$ have the same or the opposite direction.
However, since in~\eqref{eq:z_ij_decomp} both $\mathbf{z}_{ij}(t)$ and $\vec{\mathbf{y}}_{ij}(t)$ are continuous functions of time (for $\vec{\mathbf{y}}_{ij}$ this holds in any open interval in which $\|\dot{\mathbf{z}}_{ij}\|\neq 0$), we have that
\[
\mbox{sign}(d_{ij}) = {\rm const} \;\; \forall t\in T \quad \text{as long as} \quad \|\dot{\mathbf{z}}_{ij}\| \neq 0 \;\; \forall t\in T.
\]
Thus, in any time interval $T$ in which $\|\dot{\mathbf{z}}_{ij}\|\neq 0$ and under the reasonable assumption that the input wrenches are continuous in $T$ we can differentiate both sides of~\eqref{eq:z_ij_decomp}, thus obtaining
\begin{align}\label{eq:diff_eq:z_ij_decomp}
\dot{\mathbf{z}}_{ij} = d_{ij} \dot{\vec{\mathbf{y}}}_{ij},
\end{align}
which is a linear estimation problem that the robot~$i$ can locally solve to estimate the sought $d_{ij}$. In fact,
among the quantities that appear in~\eqref{eq:diff_eq:z_ij_decomp},
robot~$i$ knows the quantity $\dot{\mathbf{z}}_{ij}$ and the time integral of $\frac{\rm d}{\rm d t}\vec{\mathbf{y}}_{ij}$, i.e., $\vec{\mathbf{y}}_{ij}$. Therefore, the estimate of $d_{ij}$ can be carried out resorting to a standard online linear estimation technique described, e.g., in~\cite{1991-SloLi_}, and recapped in the Appendix of~\cite{2014k-FraPetRiz}.
This technique also has the property of averaging out possible measurement noise. To this aim, the time interval $T$ can be tuned on the basis of the noise level that has to be averaged out in the velocity measurements.
We observe that after the first estimation of $d_{ij}$ there is no need to further estimate $|d_{ij}|$, since this is a constant quantity. Thus, the only signal to keep track of is ${\rm sign}(d_{ij})$. This can be easily done by implementing two linear observers of the dynamic system~\eqref{eq:diff_eq:z_ij_decomp}: one assuming ${\rm sign}(d_{ij})=1$ and another assuming ${\rm sign}(d_{ij})=-1$. Then, it is sufficient to select, at each time step, the sign of the observer that provides the best estimate in terms of, e.g., measurement residual.
To conclude the description of the algorithm, whenever $\|\dot{\mathbf{z}}_{ij}\|=0$, the last estimate of $\mathbf{z}_{ij}$ is kept frozen. In a real implementation, the introduction of a suitable threshold to cope with possible noise is recommended.
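As an illustration of the above procedure, the following Python sketch estimates the signed coordinate $d_{ij}$ by a simple batch least-squares fit of the relation~\eqref{eq:diff_eq:z_ij_decomp}, with $\dot{\vec{\mathbf{y}}}_{ij}$ approximated by finite differences. This is a simplified stand-in for the filtered online scheme of~\cite{1991-SloLi_} (in particular, the sign of $d_{ij}$ falls out of the fit directly rather than being tracked by two parallel observers); the names and the sampling scheme are our own assumptions.
\begin{verbatim}
import numpy as np

def estimate_d_ij(y_axes, z_dots, dt):
    """Signed coordinate d_ij from dz_ij/dt = d_ij * d(y_ij)/dt, using a
    batch least-squares fit over K samples collected while ||dz_ij|| != 0.

    y_axes : (K, 2) array of unit axes y_ij(t_k)   (see axis_y_ij above)
    z_dots : (K, 2) array of measured v_Ci - v_Cj
    dt     : sampling interval
    """
    y_axes = np.asarray(y_axes, dtype=float)
    z_dots = np.asarray(z_dots, dtype=float)
    y_dot = np.diff(y_axes, axis=0) / dt        # finite-difference d(y_ij)/dt
    z_dot = z_dots[1:]                          # align with the differences
    den = np.sum(y_dot * y_dot)                 # sum_k ||y_dot_k||^2
    return np.sum(y_dot * z_dot) / den if den > 0 else 0.0
\end{verbatim}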
We summarize the results of this section in the following.
\begin{result}\label{res:z_ij}
The vector $\mathbf{z}_{ij}$ is estimated locally by robots $i$ and $j$ via the separate computation of its two factors:
\begin{itemize}
\item $\vec{\mathbf{y}}_{ij}$ (time-varying axis) computed directly from $\mathbf{v}_{C_i} - \mathbf{v}_{C_j}$ (see Result~\ref{res:y_ij})
\item $d_{ij}$ (constant coordinate) computed from $\mathbf{v}_{C_i} - \mathbf{v}_{C_j}$ solving~\eqref{eq:diff_eq:z_ij_decomp} via filtering and online linear least squares.
\end{itemize}
\end{result}
This part of the algorithm is synthetically shown in the blocks $1,2,3$, and $4$ of the diagram of Fig.~\ref{fig:AlgoOverview}.
\subsection{Estimation of $\mathbf{z}_{i}(t)$ and $\omega(t)$}\label{sec:est_z_i_omega}
As anticipated before, the estimate of $\mathbf{z}_{ij}(t)$ provides a straightforward way to estimate $\mathbf{z}_i$, as
depicted in the block $5$ of the diagram of Fig.~\ref{fig:AlgoOverview}, and
recapped in the following result.
\begin{result}\label{res:z_i}
Once the estimate of $\mathbf{z}_{ij}(t)$ is available to each robot~$i$, $\forall j\in{\cal N}_i$, each robot~$i$ estimates $\mathbf{z}_{i}$ by using the centroid estimation algorithm described in~\cite{2012-AraCarSagCal}.
\end{result}
In order to estimate the angular rate $\omega$, we use the following relation from rigid body kinematics
\begin{align}
\omega \mathbf{z}_{ij} = -\dot{\mathbf{z}}_{ij}^\perp, \label{eq:rigid_body_omega}
\end{align}
where
$\dot{\mathbf{z}}_{ij}^\perp$ is computed locally from~\eqref{eq:local_vel_difference}, and $\mathbf{z}_{ij}$ is locally estimated, as shown in Sec.~\ref{sec:est_z_ij}.
Multiplying both sides of~\eqref{eq:rigid_body_omega} by $\mathbf{z}_{ij}^T$, we obtain that, for each pair of communicating robots $i$ and $j$, an estimate of $\omega$ is directly given by
\begin{align}
\omega = -\left(\mathbf{z}_{ij}^T\dot{\mathbf{z}}_{ij}^\perp\right)\left(\mathbf{z}_{ij}^T\mathbf{z}_{ij}\right)^{-1}. \label{eq:omega_estimation}
\end{align}
\begin{result}\label{res:w_estim}
$\omega$ is locally computed using~\eqref{eq:omega_estimation}, where $\dot{\mathbf{z}}_{ij}$ comes from direct measurement and one-hop communication and $\mathbf{z}_{ij}$ from Result~\ref{res:z_ij}.
\end{result}
This part of the algorithm is synthetically shown in the block $6$ of the diagram of Fig.~\ref{fig:AlgoOverview}.
The use of~\eqref{eq:omega_estimation} provides robot~$i$ with as many estimates of $\omega$ as the number of its neighbors $j\in{\cal N}_i$. In case of noiseless velocity measurements all those estimates are identical. In case of noisy velocity measurements, this redundancy can be exploited to average out the noise either at the local level (e.g., by averaging the different estimates corresponding to each neighbor) or at the global level (by, e.g., using some dynamic consensus strategy among all the robots~\cite{2010-ZhuMar}). For the sake of presentation clarity we refrain from presenting these minor details here.
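The local-level averaging mentioned above can be sketched in Python as follows (names are our own, purely illustrative; the first argument collects the estimates of $\mathbf{z}_{ij}$ from Result~\ref{res:z_ij}, one per neighbor).
\begin{verbatim}
import numpy as np

Q = np.array([[0.0, -1.0], [1.0, 0.0]])   # perp operator

def estimate_omega(z_ij_hats, v_ci, v_cjs):
    """Average the per-neighbor angular-rate estimates
       omega = -(z_ij . perp(dz_ij)) / (z_ij . z_ij)   (eq. omega_estimation)."""
    omegas = []
    for z_ij, v_cj in zip(z_ij_hats, v_cjs):
        z_ij = np.asarray(z_ij, dtype=float)
        z_dot = np.asarray(v_ci, dtype=float) - np.asarray(v_cj, dtype=float)
        omegas.append(-(z_ij @ (Q @ z_dot)) / (z_ij @ z_ij))
    return float(np.mean(omegas))
\end{verbatim}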
\section{Dynamical Phase}\label{sec:dyn_phase}
The objective of this phase (corresponding to the part boxed by a red dashed line in the block diagram of Fig.~\ref{fig:AlgoOverview}) is to estimate the remaining quantities, i.e., the (constant) rotational inertia $J$, the (time-varying) $\mathbf{z}_C(t)$ and $\mathbf{v}_C(t)$, and the (constant) mass $m$. The order in which they are estimated follows a dependency hierarchy.
This phase makes use of the velocity measurements $\mathbf{v}_{C_i}$, the force inputs $\mathbf{f}_{i}$, as well as the rigid body kinematics and dynamics.
The basic operations of this phase are summarized in the following:
\begin{enumerate}
\item \emph{(estimation of $J$)} exploit the knowledge of $\mathbf{z}_i$ to apply a particular input wrench that cancels the effect of $\mathbf{z}_C$ in~\eqref{eq:dynamics_rewritten}, thus, obtaining a reduced dynamics in which $J$ is the only unknown; then, estimate $J$ using linear least squares;
\item \emph{(estimation of $\mathbf{z}_C(t)$)} use all the previously estimated quantities, the rotational dynamics in~\eqref{eq:dynamics_rewritten}, and the rigid body constraint to recast the estimation of $\mathbf{z}_C$ to a nonlinear observation problem that can be solved by each robot resorting to local computation;
\item \emph{(estimation of $\mathbf{v}_C(t)$)} use rigid body kinematics to compute $\mathbf{v}_C(t)$ from all the quantities estimated so far;
\item \emph{(estimation of $m$)} use a distributed estimation of the total force produced by the robots and $\mathbf{v}_C(t)$ to finally estimate the constant $m$ using linear least squares.
\end{enumerate}
\subsection{Estimation of $J$}\label{sec:est_J}
Since $J$ is a constant quantity, our strategy is to impose a particular wrench for a short time interval in order to let its estimate converge close enough to the real value.
After this finite time interval any wrench can be applied again.
Let us isolate the rotational dynamics from~\eqref{eq:dynamics_rewritten}
\begin{align}
\dot{\omega} &= \frac{1}{J}\sum_{i=1}^n {\mathbf{z}_i^\perp}^T \mathbf{f}_i + \frac{1}{J}{\mathbf{z}_C^\perp}^T \sum_{i=1}^n \mathbf{f}_i + \frac{1}{J} \sum_{i=1}^n\tau_i
\label{eq:rotational_dynamics}
\end{align}
where:
\begin{inparaenum}[\em (i)]
\item $J$ is the constant to be estimated;
\item $\omega(t)$ is locally known by each robot thanks to Result~\ref{res:w_estim};
\item $\mathbf{z}_i(t)$ is locally known by each robot thanks to Result~\ref{res:z_i};
\item $\mathbf{f}_i$ and $\tau_i$ are locally known by each robot, since they are applied by the robot;
\item $\mathbf{z}_C$ is still unknown.
\end{inparaenum}
If we were able to eliminate $\mathbf{z}_C$ from~\eqref{eq:rotational_dynamics},
then $J$ would become the only unknown in~\eqref{eq:rotational_dynamics}. %
It is easy to verify that the influence of $\mathbf{z}_C$ in~\eqref{eq:rotational_dynamics} is eliminated if each robot~$i$ applies a force $\mathbf{f}_i$ such that $\sum_{i=1}^n \mathbf{f}_i=0$.
A clever choice is to set $\mathbf{f}_i=k_z\mathbf{z}_i^\perp$, where $k_z\neq 0$ is an arbitrary constant. In fact this choice implies
\begin{align*}
\sum_{i=1}^n \mathbf{f}_i &= k_z\sum_{i=1}^n \mathbf{z}_i^\perp = k_z\sum_{i=1}^n (\mathbf{p}_{C_i}-{\mathbf{p}_{G}})^\perp = k_z Q \sum_{i=1}^n (\mathbf{p}_{C_i}-{\mathbf{p}_{G}}) = 0.
\end{align*}
Note that this force can be computed by each robot in a distributed way, since $\mathbf{z}_i$ is locally known thanks to Result~\ref{res:z_i}.
By applying $\mathbf{f}_i=k_z\mathbf{z}_i^\perp$ the rotational dynamics~\eqref{eq:rotational_dynamics} becomes
\begin{align*}
\dot{\omega} &=
\frac{k_z}{J}\sum_{i=1}^n \|\mathbf{z}_i\|^2 + \frac{1}{J}\sum_{i=1}^n \tau_i.
\end{align*}
In order to further simplify the distributed computation, let us also impose $\tau_i=0$, $\forall i=1\ldots n$, limited to the time interval in which $J$ is estimated. Hence~\eqref{eq:rotational_dynamics} further simplifies to
\begin{align}
\dot{\omega} = J^{-1} \; k_z\sum_{i=1}^n \|\mathbf{z}_i\|^2.\label{eq:J_lls_problem}
\end{align}
Equation~\eqref{eq:J_lls_problem} expresses a linear relation in which the only unknown is the proportionality factor $J^{-1}$. In fact, $\omega$ and $k_z\Vert \mathbf{z}_i\Vert^2$ are locally known to each robot $i$, which means that the constant number $k_z{\sum_{i=1}^n \Vert \mathbf{z}_i\Vert^2}$ can be computed distributively through an average consensus~\cite{2007-OlsFaMu} right after the moment in which each robot is able to estimate $\mathbf{z}_i$ (block $7$ in the diagram of Fig.~\ref{fig:AlgoOverview}).
Therefore, the estimation of $J$ is recast in~\eqref{eq:J_lls_problem} as a linear least squares estimation problem that can be solved resorting to the same strategy used to estimate $d_{ij}$ in~\eqref{eq:diff_eq:z_ij_decomp}
(block $8$ in the diagram of Fig.~\ref{fig:AlgoOverview}). A summary follows.
\begin{result}
Each robot distributively computes the constant sum $k_z\sum_{i=1}^n \|\mathbf{z}_i\|^2$ using $\mathbf{z}_i$ from Result~\ref{res:z_i} and average consensus. Then, each robot~$i$ applies a force $\mathbf{f}_i=k_z\mathbf{z}_i^\perp$ for a given time interval, and the moment of inertia $J$ is distributively computed by solving the linear least squares problem~\eqref{eq:J_lls_problem}.
\end{result}
If noise is present, at this stage each robot may have a slightly different estimate of $J$. Thus, a standard average consensus algorithm is executed to average out the noise and improve the estimate of $J$.
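A minimal sketch of this step is given below, assuming for illustration that $\omega$ is sampled at discrete times and that the constant $S=k_z\sum_{i=1}^n\|\mathbf{z}_i\|^2$ has already been agreed upon via average consensus; the batch line fit is a simplified stand-in for the online filtered least squares actually used.
\begin{verbatim}
import numpy as np

def estimate_J(times, omega_samples, S):
    """Estimate J from eq. (J_lls_problem): omega_dot = S / J, with
    S = k_z * sum_i ||z_i||^2 obtained beforehand via average consensus.
    Since omega(t) is affine in t under this input, a least-squares line
    fit of omega against t returns the slope S / J (nonzero as long as
    k_z != 0 and the contact points are not all coincident)."""
    slope = np.polyfit(np.asarray(times, dtype=float),
                       np.asarray(omega_samples, dtype=float), 1)[0]
    return S / slope
\end{verbatim}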
\subsection{Estimation of $\mathbf{z}_C$}\label{sec:est_z_C(t)}
The main idea behind the estimation of the time-varying quantity $\mathbf{z}_C(t)$ is to rewrite~\eqref{eq:rotational_dynamics} in order to let only the following kind of quantities appear (in addition to $\mathbf{z}_C(t)$):
\begin{itemize}
\item global quantities that can be distributively estimated;
\item local quantities available from the problem setting (measures or inputs) or from the previous results.
\end{itemize}
We shall show the possibility of this rewriting and also that the estimation of $\mathbf{z}_C(t)$ boils down to a solvable nonlinear observation problem.
Let us first decompose the local force $\mathbf{f}_i(t)$ into two parts and recall two important identities
\begin{align}
\mathbf{f}_i(t) = \frac{1}{n} &\sum_{i=1}^n \mathbf{f}_i(t) + \Delta \mathbf{f}_i(t) =\mathbf{f}_{\text{\rm mean}}(t) + \Delta \mathbf{f}_i(t),\label{eq:f_mean}\\
&\sum_{i=1}^n {\mathbf{z}_i^\perp}^T=0 \quad\text{and}\quad
\sum_{i=1}^n \Delta \mathbf{f}_i=0.\label{eq:zero_ident}
\end{align}
We can then rewrite~\eqref{eq:rotational_dynamics}, exploiting~\eqref{eq:f_mean} and~\eqref{eq:zero_ident}, as
\begin{align}
\notag \dot \omega = & \frac{1}{J} \left(\sum_{i=1}^n {\mathbf{z}_i^\perp}^T\right) \mathbf{f}_{\text{\rm mean}} + \frac{n}{J}{\mathbf{z}_C^\perp}^T \mathbf{f}_{\text{\rm mean}}(t)+ \frac{1}{J} \sum_{i=1}^n {\mathbf{z}_i^\perp}^T\Delta \mathbf{f}_i +\\
+& \, \frac{1}{J}{\mathbf{z}_C^\perp}^T \sum_{i=1}^n\Delta \mathbf{f}_i
+ \frac{1}{J}\sum_{i=1}^n \tau_i =\notag\\
=&
\underbrace{\frac{n}{J}{\mathbf{z}_C^\perp}^T \mathbf{f}_{\text{\rm mean}}}_{{\mathbf{z}_C^\perp}^T \bar{\mathbf{f}}}
+
\underbrace{\frac{1}{J} \sum_{i=1}^n {\mathbf{z}_i^\perp}^T \Delta \mathbf{f}_i}_{\eta_1} + \underbrace{\frac{1}{J}\sum_{i=1}^n \tau_i}_{\eta_2}, \notag%
\end{align}
i.e.,
\begin{align}
\dot \omega = \,
{\mathbf{z}_C^\perp}^T \bar{\mathbf{f}}
+ \eta_1 + \eta_2.
\label{eq:omegadot}
\end{align}
The global quantities
\begin{itemize}
\item $\bar{\mathbf{f}}= \dfrac{n}{J}\mathbf{f}_{\rm mean}$,
\item $\eta_1=J^{-1} \sum_{i=1}^n {\mathbf{z}_i^\perp}^T \Delta \mathbf{f}_i = J^{-1} \sum_{i=1}^n {\mathbf{z}_i^\perp}^T \mathbf{f}_i$, and
\item $\eta_2=J^{-1}\sum_{i=1}^n \tau_i$
\end{itemize}
can be all distributively estimated in parallel using three instances of the dynamic consensus algorithm~\cite{2010-ZhuMar} (blocks $9$, $10$, and $11$ in the diagram of Fig.~\ref{fig:AlgoOverview}).
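As an illustration, a first-order dynamic average-consensus update of the kind alluded to above can be sketched as follows; this is a generic stand-in, not the specific algorithm of~\cite{2010-ZhuMar}, and the names and step size are our own choices.
\begin{verbatim}
import numpy as np

def dyn_consensus_step(x_i, r_i_new, r_i_old, x_neighbors, eps):
    """One step of a first-order dynamic average-consensus update:
         x_i <- x_i + eps * sum_j (x_j - x_i) + (r_i_new - r_i_old).
    If every robot initializes x_i = r_i(0), the states x_i track the
    network-wide average of the (slowly varying) local signals r_i."""
    x_i = np.asarray(x_i, dtype=float)
    coupling = eps * sum(np.asarray(x_j, dtype=float) - x_i
                         for x_j in x_neighbors)
    return x_i + coupling + (np.asarray(r_i_new, dtype=float) -
                             np.asarray(r_i_old, dtype=float))
\end{verbatim}
For instance, running one such instance with local signal $r_i=\frac{n}{J}\mathbf{f}_i$ makes the local states track $\bar{\mathbf{f}}$, and analogously for $\eta_1$ and $\eta_2$.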
The global quantity $\omega(t)$ is known thanks to Result~\ref{res:w_estim}.
The only unknown in~\eqref{eq:omegadot} is $\mathbf{z}_C(t)$.
Define $\eta = \eta_1 + \eta_2$. The following result holds:
\begin{result}\label{res:recast_omega_dyn}
The rotational dynamics is given by
\begin{align}
\dot \omega = {\mathbf{z}_C^\perp}^T \bar{\mathbf{f}} + \eta \label{eq:omega_dyn_z_C}
\end{align}
where $\omega(t)$ is known thanks to Result~\ref{res:w_estim} and $\bar{\mathbf{f}}(t)$ and $\eta(t)$ are locally known to each robot through distributed computation.
\end{result}
It is clear that, in~\eqref{eq:omega_dyn_z_C}, $\omega$ and $\mathbf{z}_C(t)$ play the role of state variables and $\bar{\mathbf{f}}$ and $\eta$ are the inputs.
In order to complete~\eqref{eq:omega_dyn_z_C} with the dynamics of $\mathbf{z}_C(t)$ we recall that
$\mathbf{z}_C$ is a constant-norm vector, rigidly attached to the object, hence
\begin{align}
\dot{\mathbf{z}}_C= \omega\,\mathbf{z}_C^\perp.\label{eq:zC_dyn}
\end{align}
Combining~\eqref{eq:omega_dyn_z_C} and~\eqref{eq:zC_dyn}, we obtain the nonlinear system
\begin{align}
\left\{
\begin{aligned}
\dot x_{1} &= -x_{2}x_{3} \\
\dot x_{2} &= x_{1}x_{3}\\
\dot x_{3} & = x_{1}u_2-x_{2}u_1 + u_3\\
y &= x_3
\end{aligned}
\right. ,
\label{eq:nonlinear_sys_zC}
\end{align}
where $\mathbf{z}_{C}=(z_{C}^x\;z_{C}^y)^T=(x_{1}\;x_{2})^T$ is the unknown part of the state vector, $\omega = x_{3}$ is the measured part of the state vector and, consequently, can be considered as the system output, and $\bar{\textbf{f}}=(\bar{f}_x\;\bar{f}_y)^T = (u_1 \; u_2)^T$,
$\eta = u_3$ are known inputs.
\begin{result}\label{res:reduced_prob_z_C}
Estimating $\mathbf{z}_C(t)$ is equivalent to observing the state $(x_{1}\;x_{2})^T$ of the nonlinear system~\eqref{eq:nonlinear_sys_zC} with known output~$y=x_3=\omega$ and known inputs $u_1=\bar{f}_x$, $u_2=\bar{f}_y$, and $u_3=\eta$.
\end{result}
In~\cite{2015b-FraPetRiz}, we studied the observability of~\eqref{eq:nonlinear_sys_zC}:
\begin{prop}
If $x_3 \not\equiv 0 $ and $\left( u_1 \; u_2\right)^T \not\equiv \mathbf{0}$, then system~\eqref{eq:nonlinear_sys_zC} is locally observable in the sense of~\cite{1977-HerKre}.
\end{prop}
\begin{proof} Given in~\cite{2015b-FraPetRiz}.
\end{proof}
Note that the applied torques $\tau_i$, for $i=1 \ldots n$ (which are included in $u_3$) have no influence on the observability of $\mathbf{z}_C(t)$. In~\cite{2015b-FraPetRiz}, we also proposed a nonlinear observer for the system~\eqref{eq:nonlinear_sys_zC}, which is recapped in the following result.
\begin{prop}\label{prop:observer}
Consider the following dynamical system
\begin{equation}
\label{eq:cm_observer_eq}
\begin{aligned}
\dot {\hat x}_1 &= -\hat{x}_2 x_3 + u_2(y - \hat{x}_3)\\
\dot {\hat x}_2 &= \hat x_1 x_3 - u_1(y - \hat{x}_3)\\
\dot {\hat x}_3 &= \hat x_1 u_2 - \hat x_2 u_1 + k_e(y-{\hat x}_3) + u_3,
\end{aligned}
\end{equation}
where $k_e>0$.
If $y(t)\not\equiv 0$ and $\left( u_1(t) \; u_2(t)\right)^T \not\equiv \mathbf{0}$, then~\eqref{eq:cm_observer_eq} is an asymptotic observer for~\eqref{eq:nonlinear_sys_zC}, \textit{i.e.}, defining $\hat{\mathbf{x}}= ({\hat x}_1 \ {\hat x}_2 \ {\hat x}_3)^T$ and $\mathbf{x}= ( x_1 \ x_2 \ x_3)^T$, one has that $\hat{\mathbf{x}}(t) \rightarrow \mathbf{x}(t)$ asymptotically.
\end{prop}
\begin{proof}
Given in~\cite{2015b-FraPetRiz}.
\end{proof}
Thanks to Proposition~\ref{prop:observer} we can state the following result:
\begin{result}\label{res:z_C_estim}
The relative position of the CoM w.r.t. the center of the grasping points, i.e., $\mathbf{z}_C(t)$, is distributively computed by using the observer~\eqref{eq:cm_observer_eq} and thanks to the local knowledge of $n$, $J$, $\omega$, $\mathbf{f}_{\text{\rm mean}}$, and $\sum_{i=1}^n {\mathbf{z}_i^\perp}^T\Delta \mathbf{f}_i$ from the previous results.
\end{result}
The estimation of $\mathbf{z}_C(t)$ just described is synthetically shown in the blocks $9,10,11$ (dynamic consensus algorithms) and $12$ (the observer introduced in~\eqref{eq:cm_observer_eq}) of the diagram of Fig.~\ref{fig:AlgoOverview}.
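For illustration, a forward-Euler discretization of the observer~\eqref{eq:cm_observer_eq} can be written as follows; the integration scheme, step size, and names are our own assumptions, and any standard ODE integrator could be used instead.
\begin{verbatim}
import numpy as np

def observer_step(x_hat, y, u, k_e, dt):
    """Forward-Euler step of the observer (eq. cm_observer_eq).
    x_hat = (x1, x2, x3) ~ estimate of (z_C^x, z_C^y, omega)
    y     = measured output (the locally estimated omega, i.e. x3)
    u     = (u1, u2, u3) = (f_bar_x, f_bar_y, eta)"""
    x1, x2, x3 = x_hat
    u1, u2, u3 = u
    e = y - x3                               # output-injection error
    dx1 = -x2 * y + u2 * e                   # x3 = y is the measured state
    dx2 =  x1 * y - u1 * e
    dx3 =  x1 * u2 - x2 * u1 + k_e * e + u3
    return np.array([x1 + dt * dx1, x2 + dt * dx2, x3 + dt * dx3])
\end{verbatim}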
\subsection{Estimation of $\mathbf{v}_C$}\label{sec:est_v_C(t)}
The velocity of the center of mass $\mathbf{v}_C(t)$ is estimated locally by each robot~$i$ exploiting the rigid body constraint
\begin{align*}
\frac{\rm d}{\rm dt}(\mathbf{p}_{C} - \mathbf{p}_{C_i}) = \omega(\mathbf{p}_{C} - \mathbf{p}_{C_i})^{\perp},
\end{align*}
which can be rewritten as
\begin{align}
\mathbf{v}_{C}(t) &= \mathbf{v}_{C_i}(t) - \omega(t)(\mathbf{z}_C(t) + \mathbf{z}_i(t))^{\perp},
\label{eq:velocity_com}
\end{align}
whose rhs elements are all known since:
\begin{itemize}
\item $\mathbf{v}_{C_i}(t)$ is locally measured by robot $i$
\item $\omega(t)$, $\mathbf{z}_C(t)$, and $\mathbf{z}_i(t)$ are known by each robot $i$ thanks to Results~\ref{res:w_estim},~\ref{res:z_C_estim}, and~\ref{res:z_i}, respectively.
\end{itemize}
\begin{result}\label{res:v_C_estim}
The CoM velocity $\mathbf{v}_C(t)$ is distributively computed using~\eqref{eq:velocity_com} and the knowledge of $\mathbf{v}_{C_i}(t)$, $\omega(t)$, $\mathbf{z}_C(t)$, $\mathbf{z}_i(t)$.
\end{result}
The block $13$ in the diagram of Fig.~\ref{fig:AlgoOverview} represents this part.
\subsection{Estimation of $m$}\label{sec:est_m}
The estimation of the mass $m$ of the load is a straightforward consequence of the estimation of the velocity and total (average) force. In fact, rewriting~\eqref{eq:dynamics_1} as %
$\dot{\mathbf{v}}_C = \frac{1}{m}\sum_{i=1}^n \mathbf{f}_i = \frac{n}{m}\mathbf{f}_{\rm mean}$,
we obtain
\begin{align}
\dot{\mathbf{v}}_C = m^{-1}\; n\,\mathbf{f}_{\rm mean},
\label{eq:lls_mass}
\end{align}
where
\begin{itemize}
\item $n$ is known,
\item $\mathbf{f}_{\text{\rm mean}}$ is distributively estimated from $\mathbf{f}_i$ using dynamic consensus (as in Result~\ref{res:recast_omega_dyn}),
\item $\mathbf{v}_C$ is known locally by each robot $i$ thanks to Result~\ref{res:v_C_estim}.
\end{itemize}
Thus the problem is recast as the linear least square estimation problem~\eqref{eq:lls_mass} that can be solved resorting to the same strategy used to estimate $d_{ij}$ in~\eqref{eq:diff_eq:z_ij_decomp} and $J$ in~\eqref{eq:J_lls_problem}.
\begin{result}\label{res:m_estim}
The mass $m$ is distributively computed from the knowledge of $\mathbf{v}_C$, $n$, and $\mathbf{f}_{\rm mean}$ by solving the online linear least squares problem~\eqref{eq:lls_mass} via filtering.
\end{result}
The block $14$ in the diagram of Fig.~\ref{fig:AlgoOverview} represents this part.
\section{Case Study: Utility of Simple Local Rules }\label{sec:SimpleLocalRules}
In the previous sections, we have shown how to distributively estimate all the quantities that are needed, e.g., to precisely control a planar load with multiple mobile robots. Apart from the phase in which $J$ is estimated (Sec.~\ref{sec:est_J}), in all the other phases we did not suggest any control input to move the load and perform the estimation. The user of the algorithm is free to use any control input, as long as it ensures the observability of the quantities to be estimated. In particular, we have seen that the observability is related to two conditions: non-zero angular rate $\omega$ and non-zero average force $\mathbf{f}_{\rm mean}$. In each phase, either one or both of the two conditions are needed to ensure a convergent estimation.
In the following, we prove that an extremely basic control strategy satisfies, under very mild conditions, the aforementioned observability requirements. Furthermore, this control strategy: %
\begin{inparaenum}[\it (i)]
\item can be implemented resorting only to local perception and communication (it is, therefore, distributed); and
\item does not require the knowledge of the parameters and quantities that are the objectives of the distributed estimation (it is estimation-`agnostic'). Hence, it can be applied during the estimation process and independently of it.
\end{inparaenum}
\begin{prop} \label{prop:bounded_nonvanishing_omega}
Assume that the following local control rule is used: $\mathbf{f}_i=\mathbf{f}^*= {\rm const}$, $\tau_i=0$, $\forall i=1 \ldots n$, and denote by $\omega_0$ the angular rate at $t=0$. Then
\begin{enumerate}
\item $\omega(t)$ remains bounded, in particular:
\begin{align}
|\omega(t)| \le \sqrt{\omega_0^2 + 4nJ^{-1} \|\mathbf{f}^*\| \|\mathbf{z}_C\|} \quad \forall t\geq 0\label{eq:omega_bound}
\end{align}
\item $\exists {\bar t} \geq 0$ such that $\omega$ becomes identically $0$, $\forall t\ge {\bar t}$, if and only if the following condition is verified
\begin{align}
2 n \, \textbf{z}_C(0)^T\textbf{f}^* - J\omega^2_0 = 2 n \, \|\textbf{z}_C\|\|\mathbf{f}^*\|.
\label{eq:critical_init_cond}
\end{align}
\end{enumerate}
Therefore, this control law is suitable for the estimation process, apart from the zero-measure case given by~\eqref{eq:critical_init_cond}.
\label{PropositionVII2}
\end{prop}
\begin{proof}
In order to prove~\eqref{eq:omega_bound}, consider the scalar quantity $\alpha = \omega^2 - 2 n J^{-1} \textbf{z}_C^T \textbf{f}^*$ and take its derivative with respect to time. Exploiting~\eqref{eq:omega_dyn_z_C} and~\eqref{eq:zC_dyn} we obtain $\dot \alpha = 0$,
i.e., $\alpha$ is an invariant along the system trajectories when $\mathbf{f}_i = \mathbf{f}^*= {\rm const}$, $\forall i=1 \ldots n$. In particular $\alpha(t) = \alpha(0)$, which implies
\begin{align}
\omega^2(t) &= \omega^2_0 - 2 n J^{-1} \textbf{z}_C^T(0) \textbf{f}^* + 2 n J^{-1} \textbf{z}_C^T(t) \textbf{f}^* \label{eq:equal_omega}\\
&= \omega^2_0 + 2 n J^{-1} ( \textbf{z}_C(t) - \textbf{z}_C(0))^T \textbf{f}^* \notag\\
&\le \omega^2_0 + 2 n J^{-1} \| \textbf{z}_C(t) - \textbf{z}_C(0)\| \|\textbf{f}^*\| \notag\\
&\le \omega^2_0 + 4 n J^{-1} \| \textbf{z}_C\| \| \textbf{f}^* \|,
\label{eq:ineq_omega}
\end{align}
which proves~\eqref{eq:omega_bound}. Note that to derive~\eqref{eq:ineq_omega} we used the fact that $\| \textbf{z}_C\|$ is constant over time.
In order to prove~\eqref{eq:critical_init_cond} we impose that $0=\omega({\bar t})=\dot \omega({\bar t}) = \ddot \omega({\bar t}) = \ldots$ for all the derivatives.
Imposing that $\omega({\bar t})=0$ in~\eqref{eq:equal_omega} for $t={\bar t}$ we have
\begin{align}
0 = \omega^2_0 - 2 n J^{-1} \textbf{z}_C^T(0) \textbf{f}^* + 2 n J^{-1} \textbf{z}_C^T({\bar t}) \textbf{f}^*.
\label{eq:invariant_at_T}
\end{align}
Setting $\dot \omega({\bar t}) = 0$, $\mathbf{f}_i=\mathbf{f}^*$, $\tau_i=0$, $\forall i=1 \ldots n$ in~\eqref{eq:omegadot} we obtain
\begin{align}
{\textbf{z}_C^\perp({\bar t})}^T\mathbf{f}^* = 0,
\quad
\Rightarrow
\quad
{\textbf{z}_C^T({\bar t})}\mathbf{f}^* = \|\textbf{z}_C\|\|\mathbf{f}^*\|.
\label{eq:z_C_f_start_prodoct}
\end{align}
Substituting~\eqref{eq:z_C_f_start_prodoct} in~\eqref{eq:invariant_at_T} gives
\begin{align}
0 = \omega^2_0 - 2 n J^{-1} \textbf{z}_C^T(0) \textbf{f}^* + 2 n J^{-1} \|\textbf{z}_C\|\|\mathbf{f}^*\|, \label{eq:invariant_at_T_II}
\end{align}
that, reordered, gives~\eqref{eq:critical_init_cond}. The proof is concluded by noticing that $\omega({\bar t})=\dot \omega({\bar t})=0$ implies (see~\eqref{eq:omega_dyn_z_C} and~\eqref{eq:zC_dyn}) that all the higher order derivatives of $\omega$ at ${\bar t}$ are zero as well.
\end{proof}
The use of the simple local control action of Proposition~\ref{prop:bounded_nonvanishing_omega} ensures the sought observability conditions as long as the zero-measure condition~\eqref{eq:critical_init_cond} does not hold. However, it causes the load CoM velocity to grow linearly over time (see, e.g.,~\eqref{eq:lls_mass}). Therefore, it is wise to modify that control action by periodically changing the direction of the common force (i.e., switching between $\mathbf{f}^*$ and $-\mathbf{f}^*$ on a periodic basis). In this way, the CoM velocity will oscillate, bounded around zero.
It is also important to have available a control strategy that is able to stop the load motion if needed (like, e.g., at the end of all the estimation phases). In the following result we show that another simple control strategy can be effectively used for braking or stopping purposes.
\begin{prop} \label{prop:stopping_motion}
Assume that the following local control rule is used: $\mathbf{f}_i=-b \mathbf{v}_{C_i}$, $\tau_i=0$, $\forall i=1 \ldots n$, with $b>0$. Then, both $\omega$ and $\mathbf{v}_C$ converge asymptotically to zero with a convergence rate that is proportional to $b$.
\end{prop}
\begin{proof}
From~\eqref{eq:velocity_com} and using the first identity in~\eqref{eq:zero_ident} it is straightforward to derive the following two identities
\begin{align}
\sum_{i=1}^n\mathbf{v}_{C_i} = n\mathbf{v}_{C} + n\omega\mathbf{z}_{C}^\perp\;;
\quad
\mathbf{v}_{C_i} = \mathbf{v}_{C} + \omega(\mathbf{z}_{C} + \mathbf{z}_i)^\perp,
\notag
\end{align}
which can be used to obtain, respectively
\begin{align}
&\quad \quad \quad n\mathbf{f}_{\rm mean} = -b \sum_{i=1}^n \mathbf{v}_{C_i} = -bn\mathbf{v}_{C} -nb\omega\mathbf{z}_{C}^\perp, \quad
\label{eq:fstar_damped}\\
\eta &= -\frac{b}{J}\sum_{i=1}^n\mathbf{z}_i^{\perp^T}\mathbf{v}_{C_i} =
-\frac{b}{J}
\Bigg[
(\mathbf{v}_{C} + \omega\mathbf{z}_{C}^\perp )^T
\underbrace{\sum_{i=1}^n\mathbf{z}_i^\perp}_{=0}
+
\omega\sum_{i=1}^n\mathbf{z}_i^{\perp^T} \mathbf{z}_i^\perp
\Bigg]\notag\\
&=
-\frac{b}{J}\omega\sum_{i=1}^n\|\mathbf{z}_i\|^2.\label{eq:eta_damped}
\end{align}
Plugging~\eqref{eq:fstar_damped} and~\eqref{eq:eta_damped} in~\eqref{eq:omega_dyn_z_C} and~\eqref{eq:lls_mass} we obtain
\begin{align}
J \dot \omega &=
-bn
\left(
{\mathbf{z}_C^\perp}^T \mathbf{v}_{C}
+
\omega{\mathbf{z}_C^\perp}^T \mathbf{z}_{C}^\perp
\right)
-b\omega\sum_{i=1}^n\|\mathbf{z}_i\|^2
\notag\\
m \dot{\mathbf{v}}_C &= -bn
\left(
\mathbf{v}_{C} + \omega\mathbf{z}_{C}^\perp
\right).
\notag
\end{align}
Take
$V=\dfrac{J\omega^2+m\|\mathbf{v}_C\|^2}{2}$ as Lyapunov candidate.
We obtain
\begin{align}
\dot V &=
-bn
\left(
\mathbf{v}_{C}^T\mathbf{v}_{C}
+
{2\omega\mathbf{z}_C^\perp}^T \mathbf{v}_{C}
+
\omega^2{\mathbf{z}_C^\perp}^T \mathbf{z}_{C}^\perp
\right)
-b\omega^2\sum_{i=1}^n\|\mathbf{z}_i\|^2
= \notag\\
& = -b
\left(
n\|
\mathbf{v}_{C}
+
\omega\mathbf{z}_C^\perp
\|^2
+
\omega^2\sum_{i=1}^n\|\mathbf{z}_i\|^2
\right) < 0 \quad \forall [\mathbf{v}_{C}^T \,\omega]\neq \mathbf{0}^T
\notag
\end{align}
which proves the statement of the proposition.
\end{proof}
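The local control rules used throughout the estimation process (the zero-net-force rule of Sec.~\ref{sec:est_J}, the constant common force of Proposition~\ref{prop:bounded_nonvanishing_omega} with periodic sign switching, and the braking rule of Proposition~\ref{prop:stopping_motion}) can be summarized in the following illustrative Python sketch; the mode names, default gains, and switching period are our own choices.
\begin{verbatim}
import numpy as np

Q = np.array([[0.0, -1.0], [1.0, 0.0]])   # perp operator

def local_force(mode, t, z_i_hat, v_ci, f_star,
                k_z=1.0, b=1.0, T_switch=5.0):
    """Illustrative local control rules (tau_i = 0 in all cases):
      'inertia': f_i =  k_z * perp(z_i)  -- zero net force, used to estimate J
      'excite' : f_i = +/- f_star        -- constant common force, sign switched
                                            every T_switch seconds to keep v_C bounded
      'brake'  : f_i = -b * v_Ci         -- damping rule of Prop. stopping_motion"""
    if mode == 'inertia':
        return k_z * (Q @ np.asarray(z_i_hat, dtype=float))
    if mode == 'excite':
        sign = 1.0 if int(t / T_switch) % 2 == 0 else -1.0
        return sign * np.asarray(f_star, dtype=float)
    if mode == 'brake':
        return -b * np.asarray(v_ci, dtype=float)
    raise ValueError("unknown mode: " + str(mode))
\end{verbatim}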
\section{Numerical Test}\label{sec:numRes}
In order to give an idea of how the whole algorithm works in practice, we ran a basic simulation of the estimation algorithm in which a planar load with $m=50$\,kg and $J=86.89$\,kg\,m$^{2}$ is manipulated by a team of $n=10$ mobile manipulators communicating over a simple line-topology network (i.e., the least connected and most challenging one).
The velocity measurements are corrupted by zero-mean Gaussian noise with covariance $\boldsymbol\Sigma_{i} = \sigma^{2} \mathbf{I}_{2\times 2}$, where $\sigma = 0.3$ m/s.
The signals needed to understand and evaluate the execution of the entire algorithm are shown in Fig.~\ref{fig:estimation}.
In the first step each robot applies an arbitrary force and executes the procedure described in Sec.~\ref{sec:est_z_ij} to estimate the relative distances between contact points. Due to the presence of noise, the estimation is kept `frozen' every time the signal-to-noise ratio is too small, i.e., in this simulation, whenever $\|\dot{\mathbf{z}}_{ij}\| \leq 0.5$ m/s.
The first plot of Fig.~\ref{fig:estimation} reports the convergence to zero of the Estimation Error Relative Distance (EERD) index, which is defined as EERD$(t)=\sum_{i=1}^{n-1} \sum_{j=i}^n G(i,j) [(\mathbf{z}_{ij}(t)-\hat{\mathbf{z}}_{ij}(t))^T(\mathbf{z}_{ij}(t)-\hat{\mathbf{z}}_{ij}(t))]^{1/2}$, where $\hat{\star}$ is used from now on to indicate the estimate of ``$\star$''.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{./figures/estim.pdf}
\caption{One illustrative simulation of the whole estimation algorithm described in Secs.~\ref{sec:kyn_phase} and~\ref{sec:dyn_phase}. From top to the bottom, respectively: the trend of the EERD index, the estimates of $\omega$, the trend of the EEC index, the estimates of $J$, the observations of $\mathbf{z}_C$, the estimates of $\mathbf{v}_C$, and the estimates of $m$.}
\vspace{-2mm}
\label{fig:estimation}
\end{figure}
Starting from $t=10$\,s each robot applies the control rules given in the Propositions~\ref{prop:bounded_nonvanishing_omega}~and~\ref{prop:stopping_motion} that guarantee both the observability and boundedness of $\left[\mathbf{v}_C^T \,\, \omega\right]^T$. At $t=10$\,s, %
$\omega$ and $\mathbf{z}_i$ start to be estimated, as described in Sec.~\ref{sec:est_z_i_omega}. The second and third plots of Fig.~\ref{fig:estimation} report $\omega$ and $\hat{\omega}$, and the Estimation Error CoM relative distance index (EEC), respectively, where EEC$(t)=\sum_{i=1}^{n} [(\mathbf{z}_{i}(t)-\hat{\mathbf{z}}_{i}(t))^T(\mathbf{z}_{i}(t)-\hat{\mathbf{z}}_{i}(t))]^{1/2}$.
Subsequently, at $t=20$\,s, the first step of the dynamical phase is executed, as described in Sec.~\ref{sec:est_J}. First each robot runs an average consensus in order to locally estimate the constant value $k_z{\sum_{i=1}^n \Vert \mathbf{z}_i\Vert^2}$. Then, at $t=30$\,s each robot $i$ runs a least square estimation of $J$ using also the knowledge of $\hat{\omega}$.
Each robot checks the convergence of the least squares estimation evaluating the variance of the estimator~\cite{2005-Gee}.
Then, starting at $t=40$\,s, the local estimates are exchanged over the network and an average consensus is run to agree on a common estimate $\hat{J}=85.67$\,kg\,m$^2$ (fourth plot in Fig.~\ref{fig:estimation}). Then the angular rate is brought to zero (Proposition~\ref{prop:stopping_motion}).
Afterwards, at $t=80$\,s each robot starts
the nonlinear observation of $\mathbf{z}_C$ described in Sec.~\ref{sec:est_z_C(t)}. The observer errors reach zero at about $t=135$\,s, as visible in the fifth plot of Fig.~\ref{fig:estimation}.
The estimate $\hat{\mathbf{v}}_C$ is then computed using~\eqref{eq:velocity_com} (sixth plot of Fig.~\ref{fig:estimation}), which then allows computing $\hat{m}$, as explained in Sec.~\ref{sec:est_m}, by a preliminary collection of samples and local least squares estimation. An average consensus, starting at $t=180$\,s, then allows reaching an accurate estimate of $m$ at $t=200$\,s (seventh and last plot of Fig.~\ref{fig:estimation}).
The duration of the entire algorithm is $200$\,s, of which a large portion is needed to collect samples to run the local least squares and the consensus algorithms for the constant parameters $m$, $J$ and $d_{ij}$. The durations of these phases depend on the noise level. Ideally, in absence of noise, a single sample would be sufficient to perform the estimation, while in the real, noisy, case a trade-off between robustness~\cite{1991-SloLi_} and duration of the estimation phase is requested.
Finally, the convergence time of $\hat{\mathbf{z}}_C$ can also be shortened by acting on the value of $k_e$ in~\eqref{eq:cm_observer_eq}, and the gains of the consensus algorithms can be tuned in order to speed up the agreement~\cite{2007-OlsFaMu}.
\section{Conclusions}\label{sec:concl}
In this paper, we propose a fully-distributed method for the estimation of the parameters needed by a team of ground (planar) mobile robots to collectively manipulate an unknown load. The proposed algorithm provides the estimation of the kinematic and dynamic parameters, as well as the estimate of the kinematic state of the load, i.e., velocity of the center of mass and rotational rate.
The approach is totally distributed, and relies on the geometry of the rigid body kinematics, on the rigid body dynamics, on nonlinear observations, and on consensus strategies. It is based on a sequence of steps that is proven to converge in finite time, and at the end of the procedure all the robots will agree on the estimation of parameters.
The only requirements are related to the communication network, which is only required to be connected, and to the capability of each robot to control the local force applied to the load while measuring the velocity of its contact point. An illustrative simulation has been run to confirm the effectiveness of our approach.
Extension to the manipulation of 3D objects and experimental tests could be
topics to be addressed in
future works.
\bibliographystyle{IEEEtran}
The moduli space $\mathscr{F}$ of polarized K3 surfaces is often constructed as the arithmetic quotient of a Hermitian symmetric domain, and comes with a natural Baily-Borel compactification $\mathscr{F} \subset \mathscr{F}^*$. A long standing problem has been to compare this compactification with other compactifications which carry a more geometric meaning, such as those coming from Geometric Invariant Theory (GIT). In particular, if $\mathfrak{M}$ denotes a GIT compactification, there is often a birational period map $\mathfrak{p}: \mathfrak{M} \dashrightarrow \mathscr{F}^*$ thanks to the global Torelli theorem for K3 surfaces, and a natural question is whether this map can be resolved in a modular way.
A conjectural generalization building off work of Shah \cite{Sha80} and Looijenga \cite{Loo2,Loo1} is proposed by Laza and O'Grady in \cite{LO16}. When $\mathscr{F}$ is a Type IV locally symmetric variety associated to a lattice of the form $U^2 \oplus D_{N-2}$ (e.g. hyperelliptic quartic K3s when $N=18$, quartic K3s when $N=19$, or double EPW sextics when $N=20$), they conjecture a systematic way to resolve the period map $\mathfrak{p}$ via a series of birational transformations governed by certain divisors present in $\mathscr{F}^*$. They confirm their conjectures in the case of hyperelliptic quartic K3 surfaces in \cite{LO} (i.e. when $N=18$); we briefly review some of their results.
Let $C$ be a smooth curve in $\mathbb{P}^1\times\mathbb{P}^1$ of bidegree $(4,4)$, and let $\pi: X_C \to \mathbb{P}^1 \times \mathbb{P}^1$ be the double cover of the quadric surface branched along $C$. The resulting surface $X_C$ is a smooth hyperelliptic polarized K3 surface of degree $4$, whose polarization is given by the pullback $\pi^*(\mathcal{O}_{\mathbb{P}^1}(1) \boxtimes \mathcal{O}_{\mathbb{P}^1}(1))$. The corresponding period domain gives a moduli space $\mathscr{F} \subset \mathscr{F}^*$. If $\mathfrak{M}:=|\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(4,4)|\mathbin{/\mkern-6mu/}\mathrm{Aut}(\mathbb{P}^1\times\mathbb{P}^1)$ denotes the GIT quotient of $(4,4)$ curves on $\mathbb{P}^1 \times \mathbb{P}^1$, then there is a birational period map $\mathfrak{p}: \mathfrak{M} \dashrightarrow \mathscr{F}^*$. In \cite{LO}, Laza and O'Grady described the birational map $\mathfrak{p}$ as a series of explicit wall crossings. Let $\lambda$ denote the Hodge line bundle on $\mathscr{F}$, and let $\Delta$ = $H/2$, where $H$ is the Heegner divisor parametrizing periods of K3 surfaces which are double covers of a quadric cone.
In this setting, Laza-O'Grady show that one can interpolate between $\mathscr{F}^*$ and $\mathfrak{M}$ by considering $\mathscr{F}(\beta) := \mathrm{Proj} R(\lambda + \beta \Delta)$ and varying $ 0 \leq \beta \leq 1$. One aspect of their proof is a variation of GIT (VGIT) study on the moduli space of $(2,4)$-complete intersection curves in $\mathbb{P}^3$. Denoting this space by $\mathfrak{M}(t)$, the authors show that each step $\mathscr{F}(\beta)$ can be realized as the VGIT moduli space $\mathfrak{M}(t)$ for some specific $t(\beta)$.
If $c\in (0,\frac{1}{2})$ is a rational number, then $(\mathbb{P}^1 \times \mathbb{P}^1, cC)$ is a log Fano pair, and so K-stability provides a natural framework to construct alternative compactifications of the moduli space of smooth $(4,4)$ curves. This framework is established in \cite{ADL}, where we constructed proper good moduli spaces parametrizing $\mathbb{Q}$-Gorenstein smoothable K-polystable log Fano pairs $(X, cD)$, where $D$ is a rational multiple of $-K_X$ and $c$ is a rational number. Furthermore, we showed that the moduli spaces undergo wall crossings as the weight $c$ varies. Let $\overline{\mathcal{K}}_{c}$ be the connected component of the moduli stack parametrizing K-semistable log Fano pairs which admit $\mathbb{Q}$-Gorenstein smoothings to $(\mathbb{P}^1 \times \mathbb{P}^1, c C)$, where $C$ is a $(4,4)$ curve. By \cite{ADL}, the moduli stack $\overline{\mathcal{K}}_c$ admits a proper good moduli space $\overline{K}_c$. The goal of this paper is to show that this K-moduli space $\overline{K}_c$, and the wall crossings obtained by varying the weight vector $c$, coincide with the wall crossings given by the VGIT $\mathfrak{M}(t)$ under the correspondence $t=\frac{3c}{2c+2}$. In particular, varying the weight $c$ on the K-moduli space $\overline{K}_c$ interpolates between $\mathfrak{M}$ and $\mathscr{F}^*$, and gives the intermediate spaces an alternative modular meaning.
\begin{theorem}\label{mthm:thmintro}\leavevmode
Let $\overline{\mathcal{K}}_c$ be the moduli stack parametrizing K-semistable log Fano pairs $(X,cD)$ admitting $\mathbb{Q}$-Gorenstein smoothings to $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ where $C$ is a smooth $(4,4)$ curve. Let $\mathscr{M}$ be the GIT quotient stack of $(4,4)$ curves on $\mathbb{P}^1 \times \mathbb{P}^1$. Let $\mathscr{M}(t)$ be the VGIT quotient stack of $(2,4)$ complete intersection curves in $\mathbb{P}^3$ of slope $t$ (see Definition \ref{def:VGIT}).
\begin{enumerate}
\item Let $c \in (0, \frac{1}{8})$ be a rational number. Then there is an isomorphism of Artin stacks $\overline{\mathcal{K}}_c \cong \mathscr{M}$. In particular, a $(4,4)$-curve $C$ on $\mathbb{P}^1\times\mathbb{P}^1$ is GIT (poly/semi)stable if and only if $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ is K-(poly/semi)stable.
\item Let $c \in (0, \frac{1}{2})$ be a rational number. Then there is an isomorphism of Artin stacks $\overline{\mathcal{K}}_c\cong \mathscr{M}(t)$ with $t=\frac{3c}{2c+2}$. Moreover, such isomorphisms commute with the wall crossing morphisms for K-moduli stacks $\overline{\mathcal{K}}_c$ and GIT moduli stacks $\mathscr{M}(t)$.
\end{enumerate}
\end{theorem}
We note here that the comparison between K-moduli spaces and (V)GIT moduli spaces in various explicit settings has been studied before, such as \cite{MM93, OSS16, SS17, LX19, Fuj17, GMGS18, ADL}.
Combining Theorem \ref{mthm:thmintro} with the main results in \cite{LO}, we obtain the following isomorphisms between moduli spaces and their natural polarizations. In particular, the wall crossing morphisms between our K-moduli spaces $\overline{K}_c$ form a natural interpolation of the period map $\mathfrak{p}:\mathfrak{M}\dashrightarrow \mathscr{F}^*$. For an explicit description of K-moduli wall crossings, see Remarks \ref{rem:walls-value} and \ref{rem:walls-detail}.
\begin{thm}\label{mthm:spaceiso}
Let $\overline{K}_c$ be the good moduli space parametrizing K-polystable log Fano pairs $(X,cD)$ admitting $\mathbb{Q}$-Gorenstein smoothings to $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ where $C$ is a smooth $(4,4)$ curve. Let $\mathfrak{M}(t)$ be the VGIT quotient space of $(2,4)$ complete intersection curves in $\mathbb{P}^3$ of slope $t$ (see Definition \ref{def:VGIT}). Then for any rational number $c\in (0,\frac{1}{2})$, we have
\[
\overline{K}_c\cong \mathfrak{M}(t)\cong \mathscr{F}(\beta), \quad \textrm{where }t=\frac{3c}{2c+2} \textrm{ and }\beta=\min\left\{1,\frac{1-2c}{6c}\right\}.
\]
Moreover, the CM $\mathbb{Q}$-line bundle on $\overline{K}_c$, the VGIT polarization on $\mathfrak{M}(t)$, and the Laza-O'Grady polarization on $\mathscr{F}(\beta)$ (i.e. the push forward of $\lambda+\beta\Delta$ under $\mathscr{F}\dashrightarrow\mathscr{F}(\beta)$) are all proportional up to positive factors.
\end{thm}
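The two formulas in Theorem \ref{mthm:spaceiso} are compatible with the Laza-O'Grady parameter change $t(\beta)=\frac{1}{4\beta+2}$ of Theorem \ref{thm:LOwallcrossings} below; indeed, solving $\frac{3c}{2c+2}=\frac{1}{4\beta+2}$ gives $\beta=\frac{1-2c}{6c}$, and $\frac{1-2c}{6c}\geq 1$ exactly when $c\leq\frac{1}{8}$, which accounts for the truncation $\beta=\min\{1,\frac{1-2c}{6c}\}$.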
As a consequence of the above theorems and \cite[Theorem 1.1(iv)]{LO}, we identify the final K-moduli space $\overline{K}_{\frac{1}{2}-\epsilon}$ with Looijenga's semitoric compactification $\widehat{\sF}$ of $\mathscr{F}$. In part (1) of the following theorem, we give an alternative proof of \cite[Second part of Theorem 1.1(iv)]{LO} using K-stability. Part (2) suggests that $\mathscr{F}^*$ can be viewed as a moduli space of log Calabi-Yau pairs as expected in \cite[Conjecture 1.8]{ADL}.
\begin{thm}\label{mthm:slcK3}
Let $0<\epsilon,\epsilon'\ll 1$ be two sufficiently small rational numbers. Then we have isomorphisms $\overline{K}_{\frac{1}{2}-\epsilon}\cong \mathfrak{M}(\frac{1}{2}-\epsilon')\cong \widehat{\sF}$. Moreover, we have the following.
\begin{enumerate}
\item The moduli space $\mathfrak{M}(\frac{1}{2}-\epsilon')$ parametrizes quartic hyperelliptic K3 surfaces with semi-log canonical singularities.
\item The Hodge line bundle over $\overline{K}_{\frac{1}{2}-\epsilon}$ is semiample with ample model $\mathscr{F}^*$.
\end{enumerate}
\end{thm}
Finally, we discuss some partial generalizations of Theorem \ref{mthm:thmintro} to higher degree curves on $\mathbb{P}^1\times\mathbb{P}^1$.
\begin{thm}\label{mthm:alldeg}
Let $d\geq 3$ be an integer.
Let $\overline{\mathcal{K}}_{d,c}$ be the moduli stack parametrizing K-semistable log Fano pairs $(X,cD)$ admitting $\mathbb{Q}$-Gorenstein smoothings to $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ where $C$ is a smooth $(d,d)$ curve. Let $\mathscr{M}_d$ be the GIT quotient stack of $(d,d)$ curves on $\mathbb{P}^1 \times \mathbb{P}^1$. Let $\mathscr{M}_d(t)$ be the VGIT quotient stack of $(2,d)$ complete intersection curves in $\mathbb{P}^3$ of slope $t\in (0, \frac{2}{d})$ (see Definition \ref{def:VGIT-alldeg}).
\begin{enumerate}
\item Let $c \in (0, \frac{1}{2d})$ be a rational number. Then there is an isomorphism of Artin stacks $\overline{\mathcal{K}}_{d,c} \cong \mathscr{M}_d$. In particular, $C$ is GIT (poly/semi)stable on $\mathbb{P}^1\times\mathbb{P}^1$ if and only if $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ is K-(poly/semi)stable.
\item Let $c \in (0, \frac{4-\sqrt{2}}{2d})$ be a rational number. Then there is an isomorphism of Artin stacks $\overline{\mathcal{K}}_{d,c}\cong \mathscr{M}_d(t)$ with $t=\frac{6c}{dc+4}$. Moreover, such isomorphisms commute with the wall crossing morphisms for K-moduli stacks $\overline{\mathcal{K}}_{d,c}$ and GIT moduli stacks $\mathscr{M}_d(t)$.
\end{enumerate}
\end{thm}
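As a consistency check: specializing $d=4$ in $t=\frac{6c}{dc+4}$ gives $t=\frac{6c}{4c+4}=\frac{3c}{2c+2}$, recovering the slope of Theorem \ref{mthm:thmintro}. Moreover $t=\frac{6c}{dc+4}$ is increasing in $c$ and tends to $\frac{2}{d}$ as $c\to\frac{2}{d}$, matching the slope range $t\in(0,\frac{2}{d})$ of Definition \ref{def:VGIT-alldeg}.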
\subsection*{Organization}
This paper is organized as follows. In Section \ref{sec:prelim} we recall the definitions of K-stability, normalized volumes, and the CM line bundle. We also recall the main results of \cite{ADL}, and define the relevant moduli functor. In Section \ref{sec:LO}, we recall the background on K3 surfaces and review the main results of \cite{LO}. In Section \ref{sec:surfaces}, we determine which surfaces can appear as degenerations of $\mathbb{P}^1 \times \mathbb{P}^1$ on the boundary of the K-moduli spaces. Key ingredients are Theorems \ref{thm:indexbound} and \ref{thm:surfaces}, which bound the Gorenstein indices of singular surfaces using normalized volumes. In Section \ref{sec:main}, we compare the GIT compactification with the K-stability compactification, and study the wall crossings that appear for K-moduli. In particular, we present the proofs of Theorems \ref{mthm:thmintro}, \ref{mthm:spaceiso}, and \ref{mthm:slcK3}. These are achieved by the index estimates mentioned above, computation of CM line bundles, and a modification of Paul-Tian's criterion \cite{PT06} to work over non-proper bases. Note that the VGIT of $(2,4)$-complete intersections in $\mathbb{P}^3$ for a general slope does not provide a $\mathbb{Q}$-Gorenstein flat log Fano family over a proper base, but only such a family over the complete intersection locus as a quasi-projective variety. As a consequence, the usual Paul-Tian criterion cannot be applied directly. In order to resolve this issue, we trace the change of K/VGIT stability conditions along their wall crossings, and argue that their polystable replacements indeed coincide. Finally, in Section \ref{sec:generaldegree}, we discuss some generalizations for higher degree curves on $\mathbb{P}^1 \times \mathbb{P}^1$ and prove Theorem \ref{mthm:alldeg}.
\subsection*{Acknowledgements}
We would like to thank David Jensen, Radu Laza, Zhiyuan Li, Xiaowei Wang, and Chenyang Xu for helpful discussions. This material is based upon work supported
by the National Science Foundation under Grant No. DMS-1440140 while the authors were in residence at the Mathematical Sciences Research Institute in Berkeley, California,
during the Spring 2019 semester. The authors were supported in part by the American Institute of Mathematics as part of the AIM SQuaREs program. KA was partially supported by an NSF Postdoctoral Fellowship. KD was partially supported by the Gamelin Endowed Postdoctoral Fellowship of the MSRI. YL was partially supported by the Della Pietra Endowed Postdoctoral Fellowship of the MSRI and the NSF Grant No. DMS-2001317.
\section{Preliminaries}\label{sec:prelim}
\subsection{K-stability of log Fano pairs}
We first recall necessary background to define K-stability of log Fano pairs.
\begin{defn}
Let $X$ be a normal variety and let $D$ be an effective $\mathbb{Q}$-divisor on $X$.
We say such $(X,D)$ is a \emph{log pair}.
If $X$ is projective and $-(K_X+D)$ is $\mathbb{Q}$-Cartier and ample, then the log pair $(X,D)$ is called a \emph{log Fano pair}. The variety $X$ is a \emph{$\mathbb{Q}$-Fano variety} if $(X,0)$ is a klt log Fano pair.
\end{defn}
Next, we recall the definition of K-stability of log Fano pairs.
\iffalse
\begin{defn}[\cite{Tia97, Don02}]
Let $X$ be a normal projective variety, and $L$ an ample line bundle.
\begin{enumerate}[label=(\alph*)]
\item A \emph{normal test configuration} $(\mathcal X;\mathcal L)/\mathbb{A}^1$ of $(X;L)$ consists of the following data:
\begin{itemize}
\item a normal variety $\mathcal X$ together with a flat projective morphism $\pi:\mathcal X\to \mathbb{A}^1$;
\item a $\pi$-ample line bundle $\mathcal L$ on $\mathcal X$;
\item a $\mathbb{G}_m$-action on $(\mathcal X;\mathcal L)$ such that $\pi$ is $\mathbb{G}_m$-equivariant with respect to the standard action of $\mathbb{G}_m$ on $\mathbb{A}^1$ via multiplication;
\item $(\mathcal X\setminus\mathcal X_0;\mathcal L|_{\mathcal X\setminus\mathcal X_0})$
is $\mathbb{G}_m$-equivariantly isomorphic to $(X;L)\times(\mathbb{A}^1\setminus\{0\})$.
\end{itemize}
\item Let $w_m$ be the weight of the $\mathbb{G}_m$-action on the determinant line $\det H^0(X_0, L_0^{\otimes m})$, and $N_m:=h^0(X,L^{\otimes m})$. Then we have an asymptotic expansion
\[
\frac{w_m}{mN_m}=F_0+m^{-1} F_1+m^{-2} F_2+\cdots
\]
with $F_i\in \mathbb{Q}$. The \emph{generalized Futaki invariant} of $(\mathcal X;\mathcal L)/\mathbb{A}^1$ is defined as $\mathrm{Fut}(\mathcal X;\mathcal L)=-2F_1$. More precisely, if we write
\[
N_m=a_0 m^n + a_1 m^{n-1} + O(m^{n-2}),\quad w_m=b_0 m^{n+1} + b_1 m^n + O(m^{n-1}),
\]
then $\mathrm{Fut}(\mathcal X;\mathcal L)=\frac{2(a_1 b_0-a_0 b_1)}{a_0^2}$.
\end{enumerate}
\end{defn}
\fi
\begin{defn}[\cite{Tia97, Don02, Li15, LX14, OS15}]
Let $(X,D)$ be a log Fano pair. Let $L$ be an ample line bundle on $X$ such that $L\sim_{\mathbb{Q}}-l(K_X+D)$ for some $l\in \mathbb{Q}_{>0}$.
\begin{enumerate}[label=(\alph*)]
\item A \emph{normal test configuration} $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ of $(X,D;L)$ consists of the following data:
\begin{itemize}
\item a normal variety $\mathcal X$ together with a flat projective morphism $\pi:\mathcal X\to \mathbb{A}^1$;
\item a $\pi$-ample line bundle $\mathcal L$ on $\mathcal X$;
\item a $\mathbb{G}_m$-action on $(\mathcal X;\mathcal L)$ such that $\pi$ is $\mathbb{G}_m$-equivariant with respect to the standard action of $\mathbb{G}_m$ on $\mathbb{A}^1$ via multiplication;
\item $(\mathcal X\setminus\mathcal X_0;\mathcal L|_{\mathcal X\setminus\mathcal X_0})$
is $\mathbb{G}_m$-equivariantly isomorphic to $(X;L)\times(\mathbb{A}^1\setminus\{0\})$.
\item an effective $\mathbb{Q}$-divisor $\mathcal D$ on $\mathcal X$ such that $\mathcal D$ is the Zariski closure of $D\times(\mathbb{A}^1\setminus\{0\})$ under the identification between $\mathcal X\setminus\mathcal X_0$ and $X\times(\mathbb{A}^1\setminus\{0\})$.
\end{itemize}
A normal test configuration is called a \emph{product} test configuration if \[
(\mathcal X,\mathcal D;\mathcal L)\cong(X\times\mathbb{A}^1,D\times\mathbb{A}^1;\mathrm{pr}_1^* L\otimes\mathcal{O}_{\mathcal X}(k\mathcal X_0))
\] for some $k\in\mathbb{Z}$. A product test configuration is called a \emph{trivial} test configuration if the above isomorphism is $\mathbb{G}_m$-equivariant with respect to the trivial $\mathbb{G}_m$-action on $X$ and the standard $\mathbb{G}_m$-action on $\mathbb{A}^1$ via multiplication.
\iffalse
\item For each $1\leq i\leq k$, let $\tilde{w}_{i,m}$ be the weight of the $\mathbb{G}_m$-action on the determinant line $\det H^0(D_{i,0}, L_{i,0}^{\otimes m})$, and $\tilde{N}_{i,m}:=h^0(D_{i},L_{i}^{\otimes m})$. Then we have an asymptotic expansion
\[
\tilde{N}_{i,m}=\tilde{a}_{i,0} m^{n-1}+O(m^{n-2}),\quad
\tilde{w}_{i,m}=\tilde{b}_{i,0} m^{n}+O(m^{n-1}).
\]
We define $
\tilde{a}_0=\sum_{i=1}^k c_i \tilde{a}_{i,0}$ and $\tilde{b}_0=\sum_{i=1}^k c_i \tilde{b}_{i,0}$.
The \emph{relative Chow weight} of $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ is defined as $\mathrm{CH}(\mathcal X,\mathcal D;\mathcal L):=\frac{a_0\tilde{b}_0-b_0\tilde{a}_0}{a_0^2}$.
The \emph{generalized Futaki invariant} of $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ is defined as $\mathrm{Fut}(\mathcal X,\mathcal D;\mathcal L)=\mathrm{Fut}(\mathcal X;\mathcal L)+\mathrm{CH}(\mathcal X,\mathcal D;\mathcal L)$.
\fi
\item For a normal test configuration $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ of $(X,D)$, denote its natural compactification over $\mathbb{P}^1$ by $(\overline{\cX},\overline{\cD};\overline{\mathcal{L}})$.
The \emph{generalized Futaki invariant} of $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ is defined by the following intersection formula due to \cite{Wan12, Oda13a} (see the example following this definition):
\[
\mathrm{Fut}(\mathcal X,\mathcal D;\mathcal L):=\frac{1}{(-(K_X+D))^n}\left(\frac{n}{n+1}\cdot\frac{(\bar{\mathcal L}^{n+1})}{l^{n+1}}+\frac{(\bar{\mathcal L}^n\cdot (K_{\bar{\mathcal X}/\mathbb{P}^1}+\bar{\mathcal D}))}{l^n}\right).
\]
\item The log Fano pair $(X,D)$ is said to be:
\begin{enumerate}[label=(\roman*)]
\item \emph{K-semistable} if $\mathrm{Fut}(\mathcal X,\mathcal D;\mathcal L)\geq 0$ for any normal test configuration $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ and any $l\in\mathbb{Q}_{>0}$ such that $L$ is Cartier;
\item \emph{K-stable} if it is K-semistable and $\mathrm{Fut}(\mathcal X,\mathcal D;\mathcal L)=0$ for a normal test configuration $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ if and only if it is a trivial test configuration; and
\item \emph{K-polystable} if it is K-semistable and $\mathrm{Fut}(\mathcal X,\mathcal D;\mathcal L)=0$ for a normal test configuration $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ if and only if it is a product test configuration.
\end{enumerate}
\item
Let $(X,D)$ be a klt log Fano pair. Then a normal test configuration $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ is called a \emph{special test configuration} if $\mathcal L\sim_{\mathbb{Q}}-l(K_{\mathcal X/\mathbb{A}^1}+\mathcal D)$ and $(\mathcal X,\mathcal D+\mathcal X_0)$ is plt. In this case, we say that $(X,D)$ \emph{specially degenerates to} $(\mathcal X_0,\mathcal D_0)$ which is necessarily a klt log Fano pair.
\end{enumerate}
\end{defn}
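To illustrate the intersection formula in (b), consider the trivial test configuration with $(\mathcal X,\mathcal D;\mathcal L)=(X\times\mathbb{A}^1,D\times\mathbb{A}^1;\mathrm{pr}_1^*L)$ (i.e. $k=0$ above). Its natural compactification is $(X\times\mathbb{P}^1,D\times\mathbb{P}^1;\mathrm{pr}_1^*L)$, so $\overline{\mathcal{L}}=\mathrm{pr}_1^*L$ and $K_{\overline{\cX}/\mathbb{P}^1}+\overline{\cD}=\mathrm{pr}_1^*(K_X+D)$ are pulled back from the $n$-dimensional variety $X$. Hence $(\overline{\mathcal{L}}^{n+1})=0$ and $(\overline{\mathcal{L}}^n\cdot(K_{\overline{\cX}/\mathbb{P}^1}+\overline{\cD}))=0$, and therefore $\mathrm{Fut}(\mathcal X,\mathcal D;\mathcal L)=0$, as expected.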
\begin{rem}\leavevmode
\begin{enumerate}
\item The concept of K-(semi/poly)stability of log Fano pairs can also be defined via test configurations that are possibly non-normal. For the general definitions we refer to \cite[Section 2.1]{ADL}. By \cite[Proposition 3.15]{BHJ17}, we know that generalized Futaki invariants will not increase under normalization of test configurations.
\item Odaka proved in \cite{Oda12} that any K-semistable log Fano pair is klt. By the work of Li and Xu \cite{LX14}, to test K-(poly/semi)stability of a klt log Fano pair, it suffices to test the sign of generalized Futaki invariants only on special test configurations.
\end{enumerate}
\end{rem}
The following lemma is very useful in the proof of Theorem \ref{mthm:thmintro}.
\begin{lem}\label{lem:zerofut}\leavevmode
\begin{enumerate}
\item\cite{Kem78} Let $G$ be a reductive group acting
on a polarized projective scheme $(Y,L)$.
Let $y\in Y$ be a closed point. Let $\sigma:\mathbb{G}_m\to G$
be a 1-PS. Set $y':=\lim_{t\to 0}\sigma(t)\cdot y$.
If $y$ is GIT semistable and $\mu^{L}(y,\sigma)=0$,
then $y'$ is also GIT semistable.
\item\cite[Lemma 3.1]{LWX18}
Let $(X,D)$ be a log Fano pair. Let $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$
be a normal test configuration of $(X,D)$.
If $(X,D)$ is K-semistable and $\mathrm{Fut}(\mathcal X,\mathcal D;\mathcal L)=0$,
then $(\mathcal X,\mathcal D;\mathcal L)/\mathbb{A}^1$ is a special test configuration and $(\mathcal X_0,\mathcal D_0)$ is also K-semistable.
\end{enumerate}
\end{lem}
\iffalse
\subsection{Valuative criteria for K-stability}
We now recall the \emph{valuative criteria} for K-stability due to \cite{Fuj16, Li17} (see also \cite{BX18}).
\begin{definition} Let $X$ be a normal variety of dimension $n$. We say that $E$ is a \emph{prime divisor over} $X$ if $E$ is a divisor on a normal variety $Y$ where $f: Y \to X$ is a proper birational morphism.
Let $L$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $X$. Take $m \in \mathbb{Z}_{> 0}$ such that $mL$ is Cartier and let $x \in \mathbb{R}_{\geq 0}$. If $X$ is projective, we define the \emph{volume} of $L-xE$ on $X$ as
\[\mathrm{vol}_X(L -xE) := \mathrm{vol}_Y(f^*L- xE) = \limsup_{\substack{m \to \infty \\ mL\textrm{ is Cartier}}} \dfrac{h^0(X, \mathcal{O}_X(mL-\lceil mx \rceil E))}{m^n/n!}.\]\end{definition}
\begin{remark} By \cite[Definition 1.1, Remark 1.2]{Fuj16}, the above $\limsup$ is actually a limit, the function $\mathrm{vol}_X(L - xE)$ is a monotonically decreasing continuous function which vanishes for $x$ sufficiently large, and the definition does not depend on the choice of $f: Y \to X$. \end{remark}
\begin{definition} Let $(X,D)$ be a log pair such that $K_X+D$ is $\mathbb{Q}$-Cartier, and let $E$ be a prime divisor over $X$. Assume $E$ is a divisor on $Y$ where $f: Y \to X$ is a proper birational morphism from a normal variety $Y$. We define the \emph{log discrepancy} of $E$ with respect to $(X,D)$ as
\[A_{(X,D)}(\mathrm{ord}_E) := 1 + \mathrm{ord}_E(K_Y - f^*(K_X + D)), \] where $\mathrm{ord}_E$ is the divisorial valuation measuring order of vanishing along $E$. If $(X,D)$ is a log Fano pair, we also define the following functional
\[
S_{(X,D)}(\mathrm{ord}_E):=\frac{1}{\mathrm{vol}_X(-K_X-D)}\int_{0}^{\infty} \mathrm{vol}_X(-K_X-D-tE)dt.
\]
\end{definition}
The following summarizes the valuative criteria of uniform K-stability \cite{Fuj16}, K-semistability \cite{Fuj16, Li17}, and K-stability \cite{BX18}. Part (2) of this theorem can be viewed as an alternative definition of uniform K-stability of log Fano pairs.
\begin{theorem}[\cite{Fuj16, Li17, BX18}]\label{thm:valuative}
Let $(X,D)$ be a log Fano pair.
\begin{enumerate}
\item $(X,D)$ is K-semistable (resp. K-stable) if and only if for
any prime divisor $E$ over $X$,
\[
A_{(X,D)}(\mathrm{ord}_E)\geq~(\textrm{resp. }>)~S_{(X,D)}(\mathrm{ord}_E).
\]
\item $(X,D)$ is uniformly K-stable if and only if
there exists $\epsilon>0$ such that
\[
A_{(X,D)}(\mathrm{ord}_E)\geq (1+\epsilon) S_{(X,D)}(\mathrm{ord}_E)
\]
for any prime divisor $E$ over $X$.
\end{enumerate}
\end{theorem}
\begin{definition}
Let $X$ be a $\mathbb{Q}$-Fano variety, and let $D\sim_{\mathbb{Q}}-rK_X$ be an
effective $\mathbb{Q}$-divisor. For a rational number $0<c<r^{-1}$, we say
that $(X,D)$ is \emph{$c$-K-(poly/semi)stable (resp. uniformly
$c$-K-stable)} if $(X,cD)$ is K-(poly/semi)stable
(resp. uniformly K-stable).
\end{definition}
The following is a useful interpolation result (see also \cite[Lemma 2.6]{Der16}).
\begin{prop}\label{prop:k-interpolation}\cite[Proposition 2.13]{ADL}
Let $X$ be a $\mathbb{Q}$-Fano variety, and let $D$ and $\Delta$ be effective $\mathbb{Q}$-divisors satisfying the following properties:
\begin{itemize}
\item Both $D$ and $\Delta$ are proportional to $-K_X$ under $\mathbb{Q}$-linear equivalence.
\item $-K_X-D$ is ample, and $-K_X-\Delta$ is nef.
\item The log pairs $(X,D)$ and $(X,\Delta)$ are K-(poly/semi)stable and K-semistable, respectively.
\end{itemize}
Then we have
\begin{enumerate}
\item If $D\neq 0$, then $(X,tD+(1-t)\Delta)$ is K-(poly/semi)stable for any $t\in (0,1]$.
\item If $D=0$, then $(X,(1-t)\Delta)$ is K-semistable
for any $t\in (0,1]$.
\item If $\Delta\sim_{\mathbb{Q}}-K_X$ and $(X,\Delta)$ is klt, then $(X,tD+(1-t)\Delta)$ is uniformly K-stable for any $t\in (0,1)$.
\end{enumerate}
\end{prop}
\fi
\subsection{Normalized volumes}
In this section, we consider a klt singularity $x\in (X,D)$, that is, a klt log pair $(X,D)$ with a closed point $x\in X$. Recall that a \emph{valuation $v$ on $X$ centered at $x$} is a real valuation of $\mathbb{C}(X)$ such that the valuation ring $\mathcal{O}_v$ dominates $\mathcal{O}_{X,x}$ as local rings. The set of such valuations is denoted by $\mathrm{Val}_{X,x}$.
We briefly review normalized volume of valuations as introduced by Chi Li \cite{Li18}. See \cite{LLX18} for a survey on recent developments.
\begin{defn}
Let $x\in (X,D)$ be an $n$-dimensional klt singularity.
\begin{enumerate}[label=(\alph*)]
\item The \emph{volume} is a function $\mathrm{vol}_{X,x}:\mathrm{Val}_{X,x}\to \mathbb{R}_{\geq 0}$ defined in \cite{ELS03} as
\[
\mathrm{vol}_{X,x}(v):=\lim_{k\to\infty}\frac{\dim_{\mathbb{C}}\mathcal{O}_{X,x}/\{f\in\mathcal{O}_{X,x}\mid v(f)\geq k\}}{k^n/n!}.
\]
\item The \emph{log discrepancy} is a function $A_{(X,D)}:\mathrm{Val}_{X,x}\to \mathbb{R}_{>0}\cup\{+\infty\}$ defined in \cite{JM12, BdFFU15}. If $v=a\cdot\mathrm{ord}_E$ where $a\in\mathbb{R}_{>0}$ and $E$ is a prime divisor over $X$ centered at $x$, then
\[
A_{(X,D)}(v)=a(1+\mathrm{ord}_E(K_{Y}-\pi^*(K_X+D))),
\]
where $\pi:Y\to X$ is a proper birational morphism from a normal variety $Y$ containing $E$ as a divisor. In this paper, we only deal with divisorial valuations.
\item The \emph{normalized volume} is a function $\widehat{\mathrm{vol}}_{(X,D),x}:\mathrm{Val}_{X,x}\to \mathbb{R}_{>0}\cup\{+\infty\}$ defined in \cite{Li18} as
\[
\widehat{\mathrm{vol}}_{(X,D),x}(v):=\begin{cases}
A_{(X,D)}(v)^n\cdot\mathrm{vol}_{X,x}(v) & \textrm{ if } A_{(X,D)}(v)<+\infty\\
+\infty & \textrm{ if }A_{(X,D)}(v)=+\infty
\end{cases}.
\]
The \emph{local volume} of a klt singularity $x\in (X,D)$ is defined as
\[
\widehat{\mathrm{vol}}(x,X,D):=\min_{v\in\mathrm{Val}_{X,x}}\widehat{\mathrm{vol}}_{(X,D),x}(v).
\]
Note that the existence of a normalized volume minimizer is proven in \cite{Blu18}. From \cite{LX16} we know that $\widehat{\mathrm{vol}}(x,X,D)$ can be approximated by normalized volume of divisorial valuations.
\end{enumerate}
\end{defn}
The following theorem from \cite{LL16}, which generalizes \cite[Theorem 1.1]{Fuj15} and \cite[Theorem 1.2]{Liu18}, is crucial. Note that it also follows from the valuative criterion for K-semistability by Fujita \cite{Fuj16} and C. Li \cite{Li17}.
\begin{thm}[{\cite[Proposition 4.6]{LL16}}]\label{thm:local-vol-global}
Let $(X,D)$ be a K-semistable log Fano pair of dimension $n$. Then for any closed point $x\in X$, we have
\[
(-K_X-D)^n\leq \left(1+\frac{1}{n}\right)^n\widehat{\mathrm{vol}}(x,X,D).
\]
\end{thm}
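To illustrate how Theorem \ref{thm:local-vol-global} will be used: if $(X,cD)$ is a K-semistable pair with $X$ a $\mathbb{Q}$-Gorenstein degeneration of $\mathbb{P}^1\times\mathbb{P}^1$ (so $(-K_X)^2=8$) and $D\sim_{\mathbb{Q}}-2K_X$, then $(-K_X-cD)^2=8(1-2c)^2$, and the theorem gives
\[
\widehat{\mathrm{vol}}(x,X,cD)\geq \frac{4}{9}\cdot 8(1-2c)^2=\frac{32}{9}(1-2c)^2 \quad\textrm{for every closed point } x\in X.
\]
Since the local volume of a smooth surface point equals $4$ and decreases as the singularity gets worse (for instance, a quotient singularity $\mathbb{C}^2/G$ has local volume $\frac{4}{|G|}$; see e.g. \cite{LLX18}), lower bounds of this shape constrain the singularities of $X$, and this is essentially the mechanism behind the index estimates in Section \ref{sec:surfaces}.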
\subsection{CM line bundles}
The CM line bundle of a flat family of polarized projective varieties was introduced algebraically by Tian \cite{Tia97} as a functorial line bundle over the base.
We start with the definition of CM line bundles due to Paul and Tian \cite{PT06, PT09} using the Knudsen-Mumford expansion (see also \cite{FR06}).
\begin{defn}[log CM line bundle]\label{defn:logCM}
Let $f:\mathcal X\to T$ be a proper flat morphism of connected schemes of finite type over $\mathbb{C}$ whose fibers have dimension $n$. Let $\mathcal L$ be an $f$-ample line bundle on $\mathcal X$. Let $\mathcal D_i$ ($i\in \{1,2,\cdots, k\}$) be a closed subscheme of $\mathcal X$ such that $f|_{\mathcal D_i}:\mathcal D_i\to T$ is flat of pure relative dimension $n-1$. Let $c_i\in[0,1]$ be rational numbers.
A result of Knudsen-Mumford \cite{KM76} says that there exist line bundles $\lambda_j=\lambda_{j}(\mathcal X,\mathcal L)$ on $T$ such that for all $k$,
\[
\det f_!(\mathcal L^k)=\lambda_{n+1}^{\binom{k}{n+1}}\otimes\lambda_n^{\binom{k}{n}}\otimes\cdots\otimes\lambda_0.
\]
By flatness, the Hilbert polynomial $\chi(\mathcal X_t,\mathcal L_t^k)=a_0 k^n+a_1 k^{n-1}+ O(k^{n-2})$ is independent of $t\in T$.
Then the \emph{CM line bundle} of the data $f:(\mathcal X\to T,\mathcal L)$ is defined as
\[
\lambda_{\mathrm{CM},f,\mathcal L}:=\lambda_{n+1}^{\mu+n(n+1)}\otimes\lambda_n^{-2(n+1)},
\]
where $\mu=\mu(\mathcal X,\mathcal L):=\frac{2a_1}{a_0}$.
The \emph{Chow line bundle} is defined as
\[
\lambda_{\operatorname{Chow},f,\mathcal L}:=\lambda_{n+1}.
\]
The \emph{log CM $\mathbb{Q}$-line bundle} of the data $(f:\mathcal X\to T, \mathcal L,\mathcal D:=\sum_{i=1}^k c_i\mathcal D_i)$ is defined as
\[
\lambda_{\mathrm{CM},f,\mathcal D,\mathcal L}:=\lambda_{\mathrm{CM},f,\mathcal L}-\frac{n(\mathcal L_t^{n-1}\cdot\mathcal D_t)}{(\mathcal L_t^n)}\lambda_{\operatorname{Chow},f,\mathcal L}+(n+1)\lambda_{\operatorname{Chow},f|_{\mathcal D},\mathcal L|_{\mathcal D}},
\]
where \[
(\mathcal L_t^{n-1}\cdot\mathcal D_t):=\sum_{i=1}^k c_i (\mathcal L_t^{n-1}\cdot\mathcal D_{i,t}),\quad
\lambda_{\operatorname{Chow},f|_{\mathcal D},\mathcal L|_{\mathcal D}}:=\bigotimes_{i=1}^k\lambda_{\operatorname{Chow},f|_{\mathcal D_i},\mathcal L|_{\mathcal D_i}}^{\otimes c_i}.
\]
\end{defn}
Next, we recall the concept of $\mathbb{Q}$-Gorenstein flat families of log Fano pairs.
\begin{defn}\label{defn:qgorfamily} Let $f:\mathcal X\to T$ be a proper flat morphism between normal varieties. Let $\mathcal D$ be an effective $\mathbb{Q}$-divisor on $\mathcal X$. We say $f:(\mathcal X,\mathcal D)\to T$ is a \emph{$\mathbb{Q}$-Gorenstein flat family of log Fano pairs} if the following conditions hold:
\begin{itemize}
\item $f$ has normal, connected fibers;
\item $\textrm{Supp}(\mathcal D)$ does not contain any fiber;
\item $-(K_{\mathcal X/T}+\mathcal D)$ is $\mathbb{Q}$-Cartier and $f$-ample.
\end{itemize}
We define the \emph{CM $\mathbb{Q}$-line bundle} of $f:(\mathcal X,\mathcal D)\to T$ to be $\lambda_{\mathrm{CM},f,\mathcal D}:=l^{-n}\lambda_{\mathrm{CM},f,\mathcal D,\mathcal L}$, where $\mathcal L:=-l(K_{\mathcal X/T}+\mathcal D)$ is an $f$-ample Cartier divisor on $\mathcal X$ for some $l\in\mathbb{Z}_{>0}$.
\end{defn}
We consider the following class of log Fano pairs.
\begin{definition}\label{defn:qgorsmoothable}
Let $c,r$ be positive rational numbers such that $c<\min\{1, r^{-1}\}$.
A log Fano pair $(X,cD)$ is \emph{$\mathbb{Q}$-Gorenstein smoothable} if there exists a $\mathbb{Q}$-Gorenstein flat family of log Fano pairs $\pi:(\mathcal X,c\mathcal D)\to B$ over a pointed smooth curve $(0\in B)$
such that the following holds:
\begin{itemize}
\item Both $-K_{\mathcal X/B}$ and $\mathcal D$ are $\mathbb{Q}$-Cartier and $\pi$-ample, and $\mathcal D\sim_{\mathbb{Q},\pi}-rK_{\mathcal X/B}$;
\item Both $\pi$ and $\pi|_{\mathcal D}$ are smooth morphisms
over $B\setminus\{0\}$;
\item $(\mathcal X_0,c\mathcal D_0)\cong (X,cD)$.
\end{itemize}
A $\mathbb{Q}$-Gorenstein flat family of log Fano pairs $f:(\mathcal X,c\mathcal D)\to T$ is called a \emph{$\mathbb{Q}$-Gorenstein smoothable log Fano family} if
all fibers are $\mathbb{Q}$-Gorenstein smoothable log Fano pairs and $\mathcal D$ is $\mathbb{Q}$-Cartier.
\end{definition}
The next criterion is important when checking K-stability in explicit families. It is a partial generalization of \cite[Theorem 1]{PT06} and \cite[Theorem 3.4]{OSS16}.
\begin{thm}\label{thm:paultian}\cite[Theorem 2.22]{ADL}
Let $f:(\mathcal X,\mathcal D)\to T$ be a $\mathbb{Q}$-Gorenstein flat family of log Fano pairs over a normal projective variety $T$. Let $G$ be a reductive group acting on $\mathcal X$ and $T$ such that $\mathcal D$ is $G$-invariant and $f$ is $G$-equivariant.
Assume in addition that
\begin{enumerate}[label=(\alph*)]
\item if $\mathrm{Aut}(\mathcal X_t,\mathcal D_t)$ is finite for $t\in T$ then the stabilizer subgroup $G_t$ is also finite;
\item if $(\mathcal X_t,\mathcal D_t)\cong (\mathcal X_{t'}, \mathcal D_{t'})$ for $t,t'\in T$, then $t'\in G\cdot t$;
\item $\lambda_{\mathrm{CM},f,\mathcal D}$ is an ample $\mathbb{Q}$-line bundle on $T$.
\end{enumerate}
Then $t\in T$ is GIT (poly/semi)stable with respect to the $G$-linearized $\mathbb{Q}$-line bundle $\lambda_{\mathrm{CM},f,\mathcal D}$ if $(\mathcal X_t, \mathcal D_t)$ is a K-(poly/semi)stable log Fano pair.
\end{thm}
The following proposition provides an intersection formula for log CM line bundles. For the case without divisors this was proven by Paul and Tian \cite{PT06}. The current statement follows from {\cite[Proposition 3.7]{CP18}}.
\begin{prop}\label{prop:logCM2}\cite[Proposition 2.23]{ADL}
Let $f:(\mathcal X,\mathcal D)\to T$ be a $\mathbb{Q}$-Gorenstein flat family of $n$-dimensional log Fano pairs over a normal proper variety $T$. Then
\begin{equation}\label{eq:CM-intersection}
\mathrm{c}_1(\lambda_{\mathrm{CM},f,\mathcal D})=-f_*((-K_{\mathcal X/T}-\mathcal D)^{n+1}).
\end{equation}
\end{prop}
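As a quick sanity check of \eqref{eq:CM-intersection}: for a product family $(\mathcal X,\mathcal D)=(X,D)\times T$ over a normal proper variety $T$, the class $-K_{\mathcal X/T}-\mathcal D$ is pulled back from the $n$-dimensional $X$, hence $(-K_{\mathcal X/T}-\mathcal D)^{n+1}=0$ and $\lambda_{\mathrm{CM},f,\mathcal D}$ is numerically trivial, as one expects for a family with constant fibers.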
\subsection{K-moduli spaces of log Fano pairs}
In this subsection, we gather recent results on the construction of K-moduli spaces of log Fano pairs.
In \cite{ADL}, we construct K-moduli stacks (resp. proper good moduli spaces) of $\mathbb{Q}$-Gorenstein smoothable K-semistable (resp. K-polystable) log Fano pairs $(X,cD)$ where $D \sim_{\mathbb{Q}} -rK_X$, and $c$ is a rational number.
\begin{thm}\label{thm:lwxlog}\cite[Theorem 3.1 and Remark 3.25]{ADL}
Let $\chi_0$ be the Hilbert polynomial of an anti-canonically polarized Fano manifold. Fix $r\in\mathbb{Q}_{>0}$ and a rational number $c\in (0,\min\{1,r^{-1}\})$. Consider the following moduli pseudo-functor over reduced base $S$:
\[
\mathcal{K}\mathcal M_{\chi_0,r,c}(S)=\left\{(\mathcal X,\mathcal D)/S\left| \begin{array}{l}(\mathcal X,c\mathcal D)/S\textrm{ is a $\mathbb{Q}$-Gorenstein smoothable log Fano family,}\\ \mathcal D\sim_{S,\mathbb{Q}}-rK_{\mathcal X/S},~\textrm{each fiber $(\mathcal X_s,c\mathcal D_s)$ is K-semistable,}\\ \textrm{and $\chi(\mathcal X_s,\mathcal{O}_{\mathcal X_s}(-kK_{\mathcal X_s}))=\chi_0(k)$ for $k$ sufficiently divisible.}\end{array}\right.\right\}.
\]
Then there exists a reduced Artin stack $\mathcal{K}\mathcal M_{\chi_0,r,c}$ (called a \emph{K-moduli stack}) of finite type over $\mathbb{C}$ representing the above moduli pseudo-functor. In particular, the $\mathbb{C}$-points of $\mathcal{K}\mathcal M_{\chi_0,r,c}$ parametrize K-semistable $\mathbb{Q}$-Gorenstein smoothable log Fano pairs $(X,cD)$ with Hilbert polynomial $\chi(X,\mathcal{O}_X(-mK_X))=\chi_0(m)$ for sufficiently divisible $m$ and $D\sim_{\mathbb{Q}}-rK_X$.
Moreover, the Artin stack $\mathcal{K}\mathcal M_{\chi_0,r,c}$ admits a good moduli space $KM_{\chi_0,r,c}$ (called a \emph{K-moduli space}) as a proper reduced scheme of finite type over $\mathbb{C}$, whose closed points parametrize K-polystable log Fano pairs.
\end{thm}
By \cite[Proposition 3.35]{ADL}, we know that the universal log Fano family over $\mathcal{K}\mathcal M_{\chi_0,r,c}$ provides a CM $\mathbb{Q}$-line bundle $\lambda_c$ over $\mathcal{K}\mathcal M_{\chi_0,r,c}$ which descends to a CM $\mathbb{Q}$-line bundle $\Lambda_c$ over the good moduli space $KM_{\chi_0,r,c}$.
Recently, it was shown by Xu and Zhuang that the above K-moduli spaces are projective with ample CM $\mathbb{Q}$-line bundles.
\begin{thm}\label{thm:projectivity}\cite[Theorem 7.10]{XZ19}
The CM $\mathbb{Q}$-line bundle $\Lambda_c$ over $KM_{\chi_0,r,c}$ is ample. Hence $KM_{\chi_0,r,c}$ is a projective scheme.
\end{thm}
\begin{rem}
If we drop the $\mathbb{Q}$-Gorenstein smoothable condition, then K-moduli stacks and spaces of log Fano pairs with fixed numerical conditions (such as volume and finite coefficient set) exist as Artin stacks and separated algebraic spaces, respectively. For a precise statement, see e.g. \cite[Theorem 2.21]{XZ19}. These follow from recent works of \cite{Jia17, BX18, ABHLX19, BLX19, Xu19}.
\end{rem}
The following result shows that any K-moduli stack $\mathcal{K}\mathcal M_{\chi_0,r,c}$ parametrizing two-dimensional $\mathbb{Q}$-Gorenstein smoothable log Fano pairs is always normal. For the special case of plane curves on $\mathbb{P}^2$, see \cite[Proposition 4.6]{ADL}.
\begin{thm}\label{thm:modnormal}
Let $\chi_0$ be the Hilbert polynomial of an anti-canonically polarized smooth del Pezzo surface. Fix $r\in\mathbb{Q}_{>0}$ and a rational number $c\in (0,\min\{1,r^{-1}\})$. Then the K-moduli stack $\mathcal{K}\mathcal M_{\chi_0,r,c}$ is isomorphic to the quotient stack of a smooth scheme by a projective general linear group. In particular, both $\mathcal{K}\mathcal M_{\chi_0,r,c}$ and $KM_{\chi_0,r,c}$ are normal.
\end{thm}
\begin{proof}
Fix a sufficiently divisible $m\in\mathbb{Z}_{>0}$. Set
\[
\chi(k):=\chi_0(mk),\quad \tilde{\chi}(k)=\chi_0(mk)-\chi_0(mk-r), \quad \textrm{and}\quad N_m:=\chi_0(m)-1.
\]
Recall that in \cite[Section 3.1]{ADL}, we construct a locally closed subscheme $Z^{\mathrm{klt}}$ of the relative Hilbert scheme $\mathrm{Hilb}_{\chi}(\mathbb{P}^{N_m})\times\mathrm{Hilb}_{\tilde{\chi}}(\mathbb{P}^{N_m})$ which parametrizes $\mathbb{Q}$-Gorenstein smoothable log Fano pairs $(X,cD)$ such that they are embedded into $\mathbb{P}^{N_m}$ by $|-mK_X|$ and $X$ is klt.
Denote by $Z$ the dense open subscheme of $Z^{\mathrm{klt}}$ parametrizing $(X,D)$ where both $X$ and $D$ are smooth.
Let $Z_c^\circ$ be the Zariski open subset of $Z^{\mathrm{klt}}$ parametrizing K-semistable log Fano pairs $(X,cD)$. Denote by $Z_c^{\mathrm{red}}$ the reduced scheme supported on $Z_c^\circ$. Then $\mathcal{K}\mathcal M_{\chi_0,r,c}$ is defined as the quotient stack $[Z_c^{\mathrm{red}}/\mathrm{PGL}(N_m+1)]$. Hence it suffices to show that $Z^{\mathrm{klt}}$ is smooth which would then imply that $Z_c^{\mathrm{red}}$ is smooth. The argument below is inspired by \cite[Lemma 9.7]{ADL}.
Denote by $Z^{\mathbb{Q}\mathrm{F}}$ the locally closed subscheme of $\mathrm{Hilb}_{\chi}(\mathbb{P}^{N_m})$ parametrizing $\mathbb{Q}$-Gorenstein smoothable $\mathbb{Q}$-Fano varieties $X$ that are embedded into $\mathbb{P}^{N_m}$ by $|-mK_X|$. Since we are in dimension $2$, any point $\mathrm{Hilb}(X)\in Z^{\mathbb{Q}\mathrm{F}}$ corresponds to a log del Pezzo surface $X$ with only $T$-singularities. Hence $X$ has unobstructed $\mathbb{Q}$-Gorenstein deformations by \cite[Theorem 8.2]{Hac04} and \cite[Proposition 3.1]{hp}. Thus $Z^{\mathbb{Q}\mathrm{F}}$ is a smooth scheme. Denote by $Z^{\rm sm}$ the Zariski open subset of $Z^{\mathbb{Q}\mathrm{F}}$ parametrizing smooth Fano manifolds $X$ such that there exists a smooth divisor $D\sim_{\mathbb{Q}}-rK_X$. The openness of $Z^{\rm sm}$ follows from openness of smoothness, $H^0(X,\mathcal{O}_X(D))$ being constant since $H^i(X,\mathcal{O}_X(D))=0$ for $i \geq 1$ by Kodaira vanishing, and the fact that smooth families of Fano manifolds have locally constant Picard groups. Denote by $Z^{\mathrm{bs}}:=\overline{Z^{\rm sm}}\cap Z^{\mathbb{Q}\mathrm{F}}$. Hence $Z^{\mathrm{bs}}$ is the disjoint union of some connected components of $Z^{\mathbb{Q}\mathrm{F}}$.
Denote the first projection by $\mathrm{pr}_1: Z^{\mathrm{klt}}\to \mathrm{Hilb}_{\chi}(\mathbb{P}^{N_m})$. Clearly $\mathrm{pr}_1(Z^{\mathrm{klt}})$ is contained in $Z^{\mathbb{Q}\mathrm{F}}$. We claim that $\mathrm{pr}_1(Z^{\mathrm{klt}})=Z^{\mathrm{bs}}$, and that the restriction morphism $\mathrm{pr}_1:Z^{\mathrm{klt}}\to Z^{\mathrm{bs}}$ is proper and smooth.
We first show that $\mathrm{pr}_1(Z^{\mathrm{klt}})=Z^{\mathrm{bs}}$ and $\mathrm{pr}_1:Z^{\mathrm{klt}}\to Z^{\mathrm{bs}}$ is proper. Since $Z$ is a dense open subset of $Z^{\mathrm{klt}}$, we know that
\[
Z^{\rm sm}=\mathrm{pr}_1(Z)\subset\mathrm{pr}_1(Z^{\mathrm{klt}})\subset \overline{\mathrm{pr}_1(Z)}\cap Z^{\mathbb{Q}\mathrm{F}}=\overline{Z^{\rm sm}}\cap Z^{\mathbb{Q}\mathrm{F}}=Z^{\mathrm{bs}}.
\]
Hence the surjectivity of $\mathrm{pr}_1:Z^{\mathrm{klt}}\to Z^{\mathrm{bs}}$ would follow from its properness. We will verify properness by checking the existence part of the valuative criterion. Let $0\in B$ be a pointed curve with $B^\circ:=B\setminus\{0\}$. Consider two morphisms $f^\circ: B^\circ\to Z^{\mathrm{klt}}$ and $g:B\to Z^{\mathbb{Q}\mathrm{F}}$ such that $g|_{B^\circ}=\mathrm{pr}_1\circ f^\circ$. It suffices to show that $f^\circ$ extends to $f:B\to Z^{\mathrm{klt}}$ such that $g=\mathrm{pr}_1\circ f$. We have a $\mathbb{Q}$-Gorenstein smoothable family $p:\mathcal X\to B$ induced by $g$, and a $\mathbb{Q}$-Cartier Weil divisor $\mathcal D^\circ$ on $\mathcal X^\circ:=p^{-1}(B^\circ)$ induced by $f^\circ$ whose support does not contain any fiber $\mathcal X_b$, and $\mathcal D^\circ\sim_{\mathbb{Q},B^\circ}-rK_{\mathcal X^\circ/B^\circ}$. We define $\mathcal D:=\overline{\mathcal D^\circ}$. Then, by taking Zariski closure, it is clear that $\mathcal D\sim_{\mathbb{Q}, B}-rK_{\mathcal X/B}$ since $\mathcal X_0$ is a $p$-linearly trivial Cartier prime divisor on $\mathcal X$. Thus $(\mathcal X,\mathcal D)\to B$ is a $\mathbb{Q}$-Gorenstein smoothable log Fano family. This proves the properness and surjectivity of $\mathrm{pr}_1:Z^{\mathrm{klt}}\to Z^{\mathrm{bs}}$.
Finally, we will show that $\mathrm{pr}_1:Z^{\mathrm{klt}}\to Z^{\mathrm{bs}}$ is a smooth morphism. Indeed, we will show that it is a smooth $\mathbb{P}^{N_r}$-fibration where $N_r:=\chi_0(r)-1$. If $\mathrm{Hilb}(X,D)\in \mathrm{pr}_1^{-1}(Z^{\rm sm})$, then we know that $h^0(X,\mathcal{O}_X(D))=\chi(X,\mathcal{O}_X(D))=\chi_0(r)$ since $H^i(X,\mathcal{O}_X(D))=0$ for any $i\geq 1$ by Kodaira vanishing.
Hence the fiber over $\mathrm{Hilb}(X)\in Z^{\rm sm}$ is isomorphic to $\mathbb{P}(H^0(X,\mathcal{O}_X(D)))\cong\mathbb{P}^{N_r}$. Hence we may restrict to the case when $\mathrm{Hilb}(X)\in Z^{\mathrm{bs}}\setminus Z^{\rm sm}$. Assume that $(X,D)\in Z^{\mathrm{klt}}$ is $\mathbb{Q}$-Gorenstein smoothable where $\pi: (\mathcal X,\mathcal D)\to B$ is a $\mathbb{Q}$-Gorenstein smoothing over a pointed curve $0\in B$ with $(\mathcal X_0,\mathcal D_0)\cong (X,D)$. Then by Lemma \ref{lem:flatness} below we know that $\pi_*\mathcal{O}_{\mathcal X}(\mathcal D)$ is locally free with fiber over $b\in B$ isomorphic to $H^0(\mathcal X_b,\mathcal{O}_{\mathcal X_b}(\mathcal D_b))$. Hence it is easy to conclude that for any effective Weil divisor $D'\sim D$ the pair $(X,D')$ is also $\mathbb{Q}$-Gorenstein smoothable. Since the Weil divisor class group $\mathrm{Cl}(X)$ of $X$ is finitely generated, we know that there are only finitely many Weil divisor classes $[D]$ such that $[D]= -r[K_X]$ in $\mathrm{Cl}(X)\otimes_{\mathbb{Z}} \mathbb{Q}$. Hence the fiber $\mathrm{pr}_1^{-1}(\mathrm{Hilb}(X))$ is isomorphic to a disjoint union of finitely many copies of $\mathbb{P}^{N_r}$. However, since $\mathrm{pr}_1:Z^{\mathrm{klt}}\to Z^{\mathrm{bs}}$ is proper with connected fibers over a dense open subset $Z^{\rm sm}$ and $Z^{\mathrm{bs}}$ is normal, taking Stein factorization yields that $\mathrm{pr}_1$ has connected fibers everywhere. Hence $\mathrm{pr}_1^{-1}(\mathrm{Hilb}(X))\cong\mathbb{P}^{N_r}$ for any $\mathrm{Hilb}(X)\in Z^{\mathrm{bs}}$. Therefore, $\mathrm{pr}_1$ has smooth fibers and smooth base which implies that $Z^{\mathrm{klt}}$ is Cohen-Macaulay. Hence, miracle flatness implies that $\mathrm{pr}_1$ is flat and hence smooth. The proof is finished.
\end{proof}
\begin{lem}\label{lem:flatness}
For $c,r\in\mathbb{Q}_{>0}$ with $cr<1$, let $(\mathcal X,c\mathcal D)\to B$ be a $\mathbb{Q}$-Gorenstein flat family of log Fano pairs over a smooth curve $B$ where $\mathcal D\sim_{\mathbb{Q},B}-rK_{\mathcal X/B}$ is a $\mathbb{Q}$-Cartier Weil divisor on $\mathcal X$. Then the function $B\ni b\mapsto h^0(\mathcal X_b,\mathcal{O}_{\mathcal X_b}(\mathcal D_b))$ is constant.
\end{lem}
\begin{proof}
By inversion of adjunction we know that $\mathcal X$ has klt singularities. Since $\mathcal D$ and $\mathcal D_b$ are $\mathbb{Q}$-Cartier Weil divisors on $\mathcal X$ and $\mathcal X_b$ respectively, we know that both $\mathcal{O}_{\mathcal X}(\mathcal D)$ and $\mathcal{O}_{\mathcal X_b}(\mathcal D_b)$ are Cohen-Macaulay by \cite[Corollary 5.25]{KM98}. Hence $\mathcal{O}_{\mathcal X_b}(\mathcal D_b)\cong \mathcal{O}_{\mathcal X}(\mathcal D)\otimes \mathcal{O}_{\mathcal X_b}$. By Kawamata-Viehweg vanishing, we know that $H^i(\mathcal X_b,\mathcal{O}_{\mathcal X_b}(\mathcal D_b))=0$ for any $b\in B$ and $i\geq 1$. Hence the statement follows from the semi-continuity theorem and flatness of $\mathcal{O}_{\mathcal X}(\mathcal D)$ over $B$.
\end{proof}
\section{Overview of previous results, Laza-O'Grady, and VGIT}\label{sec:LO} We refer the reader to \cite{LO16, LO18b, LO} for more details.
\subsection{Hyperelliptic K3 surfaces of degree 4}
A \emph{K3 surface} $X$ is a connected projective surface with Du Val singularities such that $\omega_X\cong\mathcal{O}_X$ and $H^1(X,\mathcal{O}_X)=0$. A K3 surface $X$ together with an ample line bundle $L$ on $X$ is called a \emph{polarized K3 surface} $(X,L)$ of degree $(L^2)$.
A polarized K3 surface $(X,L)$ is \emph{hyperelliptic} if the map $\varphi_L: X \dashrightarrow |L|^\vee$ is regular, and is a double cover of its image. All hyperelliptic quartic K3s are obtained by the following procedure (see \cite[Remark 2.1.3]{LO16}). Consider a normal quadric surface $Q \subset \mathbb{P}^3$, and $B \in |\omega_Q^{-2}|$ with ADE singularities (in particular, GIT stable when $Q\cong\mathbb{P}^1\times\mathbb{P}^1$). Then the double cover $\pi: X \to Q$ ramified over $B$ is a hyperelliptic quartic K3 with polarization $L = \pi^*\mathcal{O}_{Q}(1)$ and at worst ADE singularities.
Given a smooth $(4,4)$ curve $C$ on $\mathbb{P}^1 \times \mathbb{P}^1$, the double cover $\pi: X_C \to \mathbb{P}^1 \times \mathbb{P}^1$ ramified over $C$ is a hyperelliptic polarized K3 surface of degree 4. The polarization is given by $L_C = \pi^*\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(1,1)$. One can ask how the GIT moduli space of $(4,4)$ curves on $\mathbb{P}^1 \times \mathbb{P}^1$ compares to the moduli space of hyperelliptic K3 surfaces of degree 4 constructed via periods.
\subsection{Moduli of K3 surfaces}
Let $\Lambda$ be the lattice $U^2 \oplus D_{16}$, where $U$ is the hyperbolic plane and $D_{16}$ is the negative definite lattice corresponding to the Dynkin diagram $D_{16}$. Let $\mathscr{D} = \{ [\sigma] \in \mathbb{P}(\Lambda \otimes \mathbb{C}) \mid \sigma^2 = 0, (\sigma + \overline{\sigma})^2 > 0\}.$ Fix a connected component $\mathscr{D}^+$ of $\mathscr{D}$; it is a type IV bounded symmetric domain. Let $\Gamma(\Lambda) = O^+(\Lambda) < O(\Lambda)$ be the index two subgroup mapping $\mathscr{D}^+$ to itself. We define the locally symmetric variety $\mathscr{F} = \Gamma(\Lambda) \backslash \mathscr{D}^+$, and we let $\mathscr{F} \subset \mathscr{F}^*$ be its Baily-Borel compactification (see \cite[Section 3.1]{LZ}).
It turns out that $\mathscr{F}$ can be identified as the period space for hyperelliptic quartic K3 surfaces (see \cite[Remark 2.2.4]{LO16}). The rough idea is that $\mathscr{F}$ sits inside a larger period domain $\mathscr{F}^\prime$ which serves as a moduli space for quartic K3s, and $\mathscr{F}$ is naturally isomorphic to a divisor in $\mathscr{F}^\prime$ whose points correspond to the periods of the hyperelliptic K3s.
Let $\mathfrak{M}$ denote the GIT moduli space of $(4,4)$ curves on $\mathbb{P}^1 \times \mathbb{P}^1$. Shah proved that $(4,4)$ curves with ADE singularities are GIT-stable and, by associating to $C$ the corresponding period point of the K3 surface, one obtains a rational period map $\mathfrak{p}: \mathfrak{M} \dashrightarrow \mathscr{F}^*$ (\cite[Theorem 4.8]{Sha80}). By the Global Torelli theorem, the period map $\mathfrak{p}$ is actually birational. Laza-O'Grady show that the indeterminacy locus of $\mathfrak{p}$ is a subset of $\mathfrak{M}$ of dimension 7 (see e.g. \cite[Corollary 4.10]{LO}). The goal of Laza-O'Grady's work is to describe this birational map explicitly, as a series of flips and divisorial contractions.
The intersection of $\mathscr{F}$ with the image of the regular locus of $\mathfrak{p}$ is $\mathscr{F} \setminus H_h$, where $H_h$ is a Heegner divisor. Geometrically, it parametrizes periods of hyperelliptic K3s which are double covers of a quadric cone, and is defined as follows. A vector $w \in \Lambda$ is hyperbolic if $w^2 = -4$ and the divisibility $\mathrm{div}(w) = 2$ (the positive generator of the ideal $(w, \Lambda)\subseteq\mathbb{Z}$). The Heegner divisor $H_h \subset \mathscr{F}$ is the locus of $O^+(\Lambda)$-equivalence classes of points $[\sigma] \in \mathscr{D}^+$ such that $\sigma^\perp$ contains a hyperbolic vector.
\subsection{Results of Laza-O'Grady and VGIT for (2,4)-complete intersections in $\mathbb{P}^3$}\label{sec:LOG-VGIT}
By the work of Baily-Borel, the compact space $\mathscr{F}^*$ can always be identified with $\mathrm{Proj} R(\mathscr{F}, \lambda)$, where $\lambda$ is the Hodge line bundle on $\mathscr{F}$. If $\Delta = H_h /2$, then it was shown in \cite{LO16} that $\mathfrak{M} \cong \mathrm{Proj} R(\mathscr{F}, \lambda + \Delta)$. Let $\beta \in [0,1] \cap \mathbb{Q}$. In \cite{LO}, Laza-O'Grady prove that the ring of sections $R(\mathscr{F}, \lambda + \beta \Delta)$ is finitely generated, and therefore $\mathscr{F}(\beta) = \mathrm{Proj} R(\mathscr{F}, \lambda + \beta \Delta)$ can be viewed as a projective variety interpolating between the GIT and Baily-Borel moduli spaces. Moreover, they calculate the set of critical values, and show that the birational period map is the composition of explicitly understood divisorial contractions and flips. In fact, they show that the intermediate spaces arise from variation of GIT (VGIT). They also show that the first step in their program produces $\widehat{\sF}=\mathscr{F}(\epsilon)\to \mathscr{F}^*$ as the $\mathbb{Q}$-Cartierization of $H_h \subset \mathscr{F}^*$ for $0<\epsilon\ll 1$. In particular, this gives a small partial resolution $\widehat{\sF}$ of $\mathscr{F}^*$ which parametrizes hyperelliptic quartic K3s with slc singularities. In what follows, we review VGIT and their results in further detail.
We now introduce the VGIT $\mathscr{M}(t)$, largely modeled on \cite[Section 5]{LO}. A smooth $(2,4)$-complete intersection curve $C$ inside $\mathbb{P}^3$ determines $X_C$, a smooth hyperelliptic K3 of degree 4 (the double cover of the quadric containing $C$, branched along $C$). Let $U$ be the parameter space for all $(2,4)$-complete intersection closed subschemes in $\mathbb{P}^3$. Then $U$ has a natural action of $\mathrm{SL}(4)$, though we note that $U$ is not projective. We let $E$ be the vector bundle over $|\mathcal{O}_{\mathbb{P}^3}(2)|$ whose fiber over $Q \in |\mathcal{O}_{\mathbb{P}^3}(2)|$ is given by $H^0(Q, \mathcal{O}_Q(4))$. Then $U \subseteq \mathbb{P}(E)$ and $\textrm{codim}_{\mathbb{P}(E)}(\mathbb{P}(E) \setminus U) \geq 2$.
There is a map $\mathrm{chow}: U \to \operatorname{Chow}$ to the Chow variety parametrizing 1-dimensional cycles inside $\mathbb{P}^3$. We denote by $\operatorname{Chow}_{(2,4)}$ the closure of the image of $\mathrm{chow}$. Note then that there is a regular embedding:
\[ U \hookrightarrow \mathbb{P}(E) \times \operatorname{Chow}_{(2,4)}. \]
Next, we describe the universal family of log Fano pairs over $U$. We need this both to set up the VGIT and to compute the CM line bundle in Section \ref{sec:CM}. We begin by considering the following diagram
\begin{center}
\begin{tikzcd}
(\mathscr{X}, \mathscr{D}) \arrow[d, "f"] \arrow[r, hook] & \mathbb{P}^3 \times \mathbb{P}(E) \arrow[dl, "p_2"] \\
\mathbb{P}(E) \arrow[d, "\pi"] & \\
\mathbb{P}(H^0(\mathbb{P}^3, \mathcal{O}(2))) = \mathbb{P}^9 & \\
\end{tikzcd}
\end{center}
We let $p_1$ (resp. $p_2$) denote the first (resp. second) projections, and let $f: (\mathscr{X}, \mathscr{D}) \to \mathbb{P}(E)$ be the universal family over $\mathbb{P}(E)$, where we view $(\mathscr{X}, \mathscr{D}) \subseteq \mathbb{P}^3 \times \mathbb{P}(E)$. We let $\mathcal{Q} \subset \mathbb{P}^3 \times \mathbb{P}^9$ denote the universal family over $\mathbb{P}^9$ with morphism $\phi: \mathcal{Q} \to \mathbb{P}^9$, and let $E = \phi_* \mathcal{O}_{\mathcal{Q}}(4,0)$. Pointwise, we have
\begin{center}
\begin{tikzcd}
\mathbb{P}(H^0(Q, \mathcal{O}_Q(4))) \arrow[r,hook] \arrow[d, mapsto] & \mathbb{P}(E) \arrow[d] \\
{[Q]} \arrow[r,hook] & \mathbb{P}^9\\
\end{tikzcd}
\end{center}
Using the notation of Laza-O'Grady (see \cite[(5.2)]{LO}), we set $\eta := \pi^*\mathcal{O}_{\mathbb{P}^9}(1)$ and $\xi := \mathcal{O}_{\mathbb{P}(E)}(1)$. We recall the following result of Benoist.
\begin{prop}\cite[Theorem 2.7]{benoist}\label{prop:benoist2,4}
If $t \in \mathbb{Q}$, then the $\mathbb{Q}$-Cartier class $\eta + t\xi$ on $\mathbb{P}(E)$ is ample if and only if $t \in (0, \frac{1}{3}) \cap \mathbb{Q}$.
\end{prop}
We now set up the VGIT, following \cite[Section 5.1]{LO}. Let $\mathscr{P}$ denote the closure of $U$ in $\mathbb{P}(E) \times \operatorname{Chow}_{(2,4)}$. Let $p_1$ and $p_2$ be the first and second projections from $\mathscr{P}$ to $\mathbb{P}(E)$ and $\operatorname{Chow}_{(2,4)}$, respectively. The action of $\mathrm{SL}(4)$ on $\mathbb{P}^3$ extends to an action on $\mathscr{P}$. To construct a GIT quotient, we thus need to specify an $\mathrm{SL}(4)$-linearized ample line bundle on $\mathscr{P}$.
Fix a rational number $0 < \delta < \frac{1}{6}$. For $t \in (\delta, 1/2) \cap \mathbb{Q}$, consider the $\mathbb{Q}$-line bundle
\[N_t := \frac{1 - 2t}{1-2\delta} p_1^*(\eta + \delta \xi) + \frac{t - \delta}{2(1-2\delta)} p_2^*L_{\infty} ,\]
where $L_{\infty}$ is the restriction of the natural polarization of the Chow variety to $\operatorname{Chow}_{(2,4)}$. One can check that $N_t$ is ample for $\delta < t < \frac{1}{2}$ and semiample for $t = \frac{1}{2}$.
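One can make this explicit on $U$: by Proposition \ref{prop:Linfinity} below, $p_2^*L_\infty$ restricts to $(4\eta+2\xi)|_U$, and hence
\[
N_t|_U \;=\; \frac{1 - 2t}{1-2\delta}(\eta + \delta \xi)|_U + \frac{t - \delta}{2(1-2\delta)}(4\eta+2\xi)|_U \;=\; (\eta + t\xi)|_U,
\]
which is consistent with Theorem \ref{thm:LOmain}(1) below.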
\begin{definition}\label{def:VGIT}
Let $\delta \in \mathbb{Q}$ satisfy $0 < \delta < \frac{1}{6}$. For each $t \in (\delta, \frac{1}{2}] \cap \mathbb{Q}$, we define the VGIT quotient stack $\mathscr{M}(t)$ of slope $t$ and the VGIT quotient space $\mathfrak{M}(t)$ of slope $t$ to be
\[ \mathscr{M}(t) := [\mathscr{P}^{\rm ss}(N_t)/\mathrm{PGL}(4)], \quad \mathfrak{M}(t):=\mathscr{P}\mathbin{/\mkern-6mu/}_{N_t} \mathrm{SL}(4).\]
\end{definition}
\begin{remark}\leavevmode
\begin{enumerate}
\item Laza and O'Grady show that the VGIT quotients do not depend on the choice of $\delta$, so the lack of $\delta$ in the notation is justified (see also Theorem \ref{thm:LOmain-alldeg}(1)).
\item Since $N_t$ is only semiample for $t = \frac{1}{2}$, they define $\mathfrak{M}(\frac{1}{2})$ to be $\mathrm{Proj} R(\mathscr{P}, N_{\frac{1}{2}})^{\mathrm{SL}(4)}$, and show this is isomorphic to $\operatorname{Chow}_{(2,4)}\mathbin{/\mkern-6mu/} \mathrm{SL}(4)$.
\end{enumerate} \end{remark}
The following two results from \cite{LO} will be required to relate the VGIT moduli spaces and K-moduli spaces.
\begin{prop}{\cite[Proposition 5.4]{LO}}\label{prop:Linfinity}
Let $\mathrm{chow}: U \to \operatorname{Chow}_{(2,4)}$ be the Hilbert-Chow morphism and let $\overline{L}_{\infty}\in \mathrm{Pic} (\mathbb{P}(E))_{\mathbb{Q}}$ be the unique extension of $\mathrm{chow}^*L_\infty$ to $\mathbb{P}(E)$. Then,
\[ \overline{L}_\infty = 4\eta + 2 \xi .\]
\end{prop}
\begin{lem}{\cite[Proposition 5.11]{LO}}\label{lem:GITssU}
For each $t\in (\delta,\frac{1}{2}] \cap \mathbb{Q}$, the VGIT semistable locus $\mathscr{P}^{\rm ss}(N_t)$ of slope $t$ is a Zariski open subset of $U$.
\end{lem}
We now state the main VGIT result of \cite{LO}, noting that their results also hold for the VGIT quotient stacks. Let $\mathrm{Hilb}_{(2,4)}$ denote the closure of $U$ inside the relevant Hilbert scheme, and let $L_m$ denote the Pl\"ucker line bundle corresponding to the $m$th Hilbert point.
\begin{theorem}\cite[Theorem 5.6]{LO}\label{thm:LOmain}
Let $\delta$ be as above. The following hold:
\begin{enumerate}
\item For $t \in (\delta, \frac{1}{3})$, the moduli space $\mathfrak{M}(t) \cong \mathbb{P}(E) \mathbin{/\mkern-6mu/}_{\eta + t\xi} \mathrm{SL}(4)$.
\item For $t \in (\delta, \frac{1}{6})$, we have $\mathfrak{M}(t) \cong \mathfrak{M}$.
\item For $m \geq 4$, we have $\mathrm{Hilb}_{(2,4)}\mathbin{/\mkern-6mu/}_{L_m} \mathrm{SL}(4) \cong \mathfrak{M}(t(m))$, where $t(m) = \dfrac{(m-3)^2}{2(m^2 - 4m + 5)}$.
\item $\mathfrak{M}(\frac{1}{2})\cong \operatorname{Chow}_{(2,4)}\mathbin{/\mkern-6mu/} \mathrm{SL}(4)$.
\end{enumerate}
\end{theorem}
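For orientation, note that $t(4)=\frac{1}{10}$ and that $t(m)$ increases to $\frac{1}{2}$ as $m\to\infty$, so as $m$ grows the Hilbert quotients in (3) interpolate between the GIT quotient in (2) and the Chow quotient in (4).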
Before stating their main result, we review some results from VGIT.
\subsubsection{Variation of GIT} The general theory of Variation of GIT quotients (VGIT) can be found in \cite{Tha96, dolgachevhu}. The goal here is to compare the quotients $\mathfrak{M}(t)$ for $t \in (\delta, \frac{1}{2}) \cap \mathbb{Q}$, in particular to understand how varying the line bundle $N_t$ changes the GIT quotient. The main results of VGIT state that this interval can be subdivided into finitely many open chambers, and on each open chamber the space $\mathfrak{M}(t)$ remains unchanged (\cite[Theorem 2.4]{Tha96} and \cite[Theorem 0.2.3]{dolgachevhu}). The finitely many values where the space $\mathfrak{M}(t)$ does change are called walls. At each wall $t$, there are birational morphisms $\mathfrak{M}(t-\epsilon) \to \mathfrak{M}(t) \leftarrow \mathfrak{M}(t+\epsilon)$, and there are additionally wall-crossing rational maps $\mathfrak{M}(t-\epsilon) \dashrightarrow \mathfrak{M}(t+\epsilon)$ (\cite[Theorem 3.3]{Tha96}).
Later on, we will need the following foundational results in VGIT, and we refer the reader to the survey \cite[Sections 3 and 4]{LazaGIT}, and the references therein.
\begin{lem}\label{lem:VGITbasics}
Let $(X,\mathcal L_0)$ be a polarized projective variety. Let $G$ be a reductive group acting on $(X,\mathcal L_0)$. Let $\mathcal L$ be a $G$-linearized line bundle on $X$. For a rational number $0<\epsilon\ll 1$, consider the $G$-linearized ample $\mathbb{Q}$-line bundle $\mathcal L_{\pm}:= \mathcal L_{0}\otimes \mathcal L^{\otimes(\pm \epsilon)}$.
\begin{enumerate}
\item Let $X \mathbin{/\mkern-6mu/}_{\mathcal{L}_0} G$ and $X \mathbin{/\mkern-6mu/}_{\mathcal{L}_{\pm}} G$ denote the VGIT quotients. If $X^{\rm ss}(0)$ and $X^{\rm ss}(\pm)$ denote the respective VGIT semistable loci, then there are open inclusions $X^{\rm ss}(\pm) \subseteq X^{\rm ss}(0)$.
\item For any closed point $x\in X^{\rm ss}(0)\setminus X^{\rm ss}(\pm)$, there exists a $1$-PS $\sigma$ in $G$ such that
\[
\mu^{\mathcal L_0}(x, \sigma)=0, \quad\textrm{and}\quad \mu^{\mathcal L_\pm}(x, \sigma)<0.
\]
\end{enumerate}
\end{lem}
\begin{proof}
(1) This is the well-known semi-continuity property of semistable loci from \cite[Theorem 4.1]{Tha96} and \cite[\S3.4]{dolgachevhu} (see also \cite[Lemma 3.10]{LazaGIT}).
(2) By symmetry we may assume that $x$ is VGIT unstable with respect to $\mathcal L_+$. Hence by the Hilbert-Mumford numerical criterion, there exists a $1$-PS $\sigma_0$ in $G$ such that $\mu^{\mathcal L_+}(x, \sigma_0)<0$. Let $T$ be a maximal torus of $G$ containing $\sigma_0$. By \cite[Chapter 2, Proposition 2.14]{MFK94}, we know that there exist two rational piecewise linear functions $h_0$ and $h$ on $\mathrm{Hom}_{\mathbb{Q}}(\mathbb{G}_m, T)$ such that for any $1$-PS $\lambda$ in $T$, we have
\[
\mu^{\mathcal L_0}(x,\lambda)=h_0(\lambda), \quad \textrm{and} \quad \mu^{\mathcal L}(x,\lambda)=h(\lambda).
\]
Since $x\in X^{\rm ss}(0)$, we know that $h_0(\lambda)\geq 0$ for any $\lambda\in \mathrm{Hom}_{\mathbb{Q}}(\mathbb{G}_m, T)$. On the other hand, $\mu^{\mathcal L_+}(x,\sigma_0)=h_0(\sigma_0)+\epsilon h(\sigma_0)<0$. Hence there exists $\sigma\in \mathrm{Hom}_{\mathbb{Q}}(\mathbb{G}_m, T)$ such that $h_0(\sigma)=0$ and $h(\sigma)<0$. The proof is finished.
\end{proof}
Finally, we state the main result from \cite{LO}.
\begin{theorem}\cite[Theorem 1.1]{LO}\label{thm:LOwallcrossings}
Let $\beta\in [0,1]$, and let $t(\beta) = \dfrac{1}{4\beta +2} \in [\frac{1}{6}, \frac{1}{2}]$. The period map \[\mathfrak{p}: \mathfrak{M} \cong \mathscr{F}(1) \dashrightarrow \mathscr{F}(0) \cong \mathscr{F}^*\] is the composition of elementary birational maps with 8 critical values of $\beta$. Moreover, there is an isomorphism $\mathfrak{M}(t(\beta)) \cong \mathscr{F}(\beta)$. In particular, the intermediate spaces are the VGIT quotients described above, and are related by elementary birational maps. Finally, the map $\mathscr{F}(1/8) \to \mathscr{F}(0) \cong \mathscr{F}^*$ is the $\mathbb{Q}$-Cartierization of $H_h$.
\end{theorem}
\section{Degenerations of $\mathbb{P}^1 \times \mathbb{P}^1$ in K-moduli spaces}\label{sec:surfaces}
\subsection{K-moduli spaces of curves on $\mathbb{P}^1 \times \mathbb{P}^1$}
In this section, we will define the K-moduli spaces which generically parametrize smooth $(d,d)$-curves on $\mathbb{P}^1\times\mathbb{P}^1$.
\begin{prop}\label{prop:p1xp1kss} Let $d\geq 3$ be an integer. Let $C$ be a $(d,d)$-curve on $\mathbb{P}^1\times\mathbb{P}^1$. If $\mathrm{lct}(\mathbb{P}^1\times\mathbb{P}^1;C)>\frac{2}{d}$ (resp. $\geq \frac{2}{d}$), then the log Fano pair $(\mathbb{P}^1 \times \mathbb{P}^1, cC)$ is K-stable (resp. K-semistable) for any $c\in (0, \frac{2}{d})$. In particular, $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ is K-stable for any $c\in (0, \frac{2}{d})$ if either $C$ is smooth or $d=4$ and $C$ has at worst ADE singularities.
\end{prop}
\begin{proof} This follows from interpolation (see \cite[Proposition 2.13]{ADL} or \cite[Lemma 2.6]{Der16}), since the pair $(\mathbb{P}^1 \times \mathbb{P}^1, \frac{2}{d}C)$ is klt (resp. lc) and $\mathbb{P}^1\times\mathbb{P}^1$ is K-polystable. \end{proof}
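For the reader's convenience, we recall the relevant log canonical threshold computations: if $C$ is smooth then $\mathrm{lct}(\mathbb{P}^1\times\mathbb{P}^1;C)=1>\frac{2}{d}$ for $d\geq 3$; and the log canonical threshold of a curve at an ADE singularity is strictly greater than $\frac{1}{2}$ (it equals $\frac{1}{2}+\frac{1}{n+1}$ at an $A_n$-singularity, e.g. $1$ at a node, $\frac{5}{6}$ at a cusp, $\frac{3}{4}$ at a tacnode), which gives the assertion for $d=4$.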
We now define the K-moduli stack $\overline{\mathcal{K}}_{d,c}$ and the K-moduli space $\overline{K}_{d,c}$.
Let $\chi_0(\cdot)$ be the Hilbert polynomial of the polarized Fano manifold $(\mathbb{P}^1\times\mathbb{P}^1,-K_{\mathbb{P}^1\times\mathbb{P}^1})$, i.e. $\chi_0(m)= 4m^2 +4m+1$. Consider the K-moduli stack $\mathcal{K}\mathcal M_{\chi_0, d/2, c}$ and K-moduli space $KM_{\chi_0, d/2, c}$ where $d\geq 3$ is an integer and $c\in (0, \frac{2}{d})\cap\mathbb{Q}$.
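Indeed, $-K_{\mathbb{P}^1\times\mathbb{P}^1}=\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(2,2)$, so $\chi_0(m)=\chi(\mathbb{P}^1\times\mathbb{P}^1,\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(2m,2m))=(2m+1)^2=4m^2+4m+1$.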
\begin{prop}\label{prop:modconnected}
Let $d\geq 3$ be an integer.
The K-moduli stack $\mathcal{K}\mathcal M_{\chi_0, d/2, c}$ and K-moduli space $KM_{\chi_0, d/2, c}$ are both normal. Moreover, we have the following cases.
\begin{enumerate}
\item If $d$ is odd, then $\mathcal{K}\mathcal M_{\chi_0, d/2, c}$ is connected and generically parametrizes $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ where $C\in |\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(d,d)|$ is a smooth curve.
\item If $d$ is even, then $\mathcal{K}\mathcal M_{\chi_0, d/2, c}$ has at most two connected components. One of these components generically parametrizes $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ where $C\in |\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(d,d)|$ is a smooth curve; the other component, if it exists, generically parametrizes $(\mathbb{F}_1, cC')$ where $C'\in |\mathcal{O}_{\mathbb{F}_1}(-\frac{d}{2}K_{\mathbb{F}_1})|$ is a smooth curve on the Hirzebruch surface $\mathbb{F}_1$.
\end{enumerate}
\end{prop}
\begin{proof}
The normality of $\mathcal{K}\mathcal M_{\chi_0, d/2, c}$ and $KM_{\chi_0, d/2, c}$ is a direct consequence of Theorem \ref{thm:modnormal}. For the rest, notice that there are only two smooth del Pezzo surfaces of degree $8$ up to isomorphism: $\mathbb{P}^1\times\mathbb{P}^1$ and $\mathbb{F}_1$. In addition, they are not homeomorphic since their intersection pairings on $H^2(\cdot, \mathbb{Z})$ are not isomorphic. By Proposition \ref{prop:p1xp1kss} we know that $(\mathbb{P}^1\times\mathbb{P}^1,cC)$ where $C$ is a smooth $(d,d)$-curve is always parametrized by $\mathcal{K}\mathcal M_{\chi_0, d/2, c}$.
If $d$ is odd, then $-\frac{d}{2}K_{\mathbb{F}_1}$ is not represented by any Weil divisor since it has fractional intersection with the $(-1)$-curve on $\mathbb{F}_1$. Hence $\mathbb{F}_1$ will not appear in $\mathcal{K}\mathcal M_{\chi_0, d/2, c}$ when $d$ is odd. The proof is finished.
\end{proof}
\begin{defn}\label{defn:modulispace}
Let $d\geq 3$ be an integer.
For $c\in(0, \frac{2}{d})\cap \mathbb{Q}$, let $\overline{\mathcal{K}}_{d,c}$ denote the connected component of $\mathcal{K}\mathcal M_{\chi_0, d/2, c}$ where a general point parametrizes $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ where $C\in |\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(d,d)|$ is a smooth curve. In other words, $\overline{\mathcal{K}}_{d,c}$ is the moduli stack parametrizing
K-semistable log Fano pairs $(X,cD)$, where $X$ admits a $\mathbb{Q}$-Gorenstein smoothing to $\mathbb{P}^1 \times \mathbb{P}^1$ and the effective $\mathbb{Q}$-Cartier Weil divisor $D \sim_{\mathbb{Q}} -\frac{d}{2}K_X$. We let $\overline{K}_{d,c}$ denote the good moduli space of $\overline{\mathcal{K}}_{d,c}$. From Theorems \ref{thm:projectivity}, \ref{thm:modnormal}, and Proposition \ref{prop:modconnected} we know that $\overline{\mathcal{K}}_{d,c}$ is a connected normal Artin stack of finite type over $\mathbb{C}$, and $\overline{K}_{d,c}$ is a normal projective variety over $\mathbb{C}$.
\end{defn}
The following theorem is a direct consequence of \cite[Theorem 1.2]{ADL} and Proposition \ref{prop:p1xp1kss}.
\begin{thm}\label{thm:generalwall}
Let $d\geq 3$ be an integer.
There exist rational numbers
\[
0=c_0 <c_1<c_2<\cdots< c_k =\frac{2}{d}
\]
such that for each $0\leq i\leq k-1$ the K-moduli stacks $\overline{\mathcal{K}}_{d,c}$ are independent of the choice of $c\in (c_i,c_{i+1})$. For each $1\leq i\leq k-1$ and $0<\epsilon\ll 1$, we have open immersions
\[
\overline{\mathcal{K}}_{d,c_i-\epsilon}\hookrightarrow \overline{\mathcal{K}}_{d, c_i}\hookleftarrow \overline{\mathcal{K}}_{d,c_i+\epsilon}
\]
which induce projective birational morphisms
\[
\overline{K}_{d,c_i-\epsilon}\rightarrow \overline{K}_{d, c_i}\leftarrow \overline{K}_{d,c_i+\epsilon}.
\]
Moreover, all the above morphisms have local VGIT presentations as in \cite[(1.2)]{AFS17}.
\end{thm}
In this paper, we are mainly interested in the case when $d=4$, although some results for general $d$ are presented in Section \ref{sec:generaldegree}. We always abbreviate $\overline{\mathcal{K}}_{4,c}$ and $\overline{K}_{4,c}$ to $\overline{\mathcal{K}}_{c}$ and $\overline{K}_{c}$, respectively.
\subsection{Classification of degenerations of $\mathbb{P}^1 \times \mathbb{P}^1$}
The goal of this section is to prove Theorem \ref{thm:surfaces}, which states that if $(X,cD)$ is a pair parametrized by $\overline{\mathcal{K}}_{c}$ for some $c \in (0, \frac{1}{2})$, then $X$ is isomorphic to either $\mathbb{P}^1 \times \mathbb{P}^1$ or $\mathbb{P}(1,1,2)$. Later on, we will show (in Theorem \ref{thm:surfacesalld}) that the same is true in $\overline{\mathcal{K}}_{d,c}$ for $0 < c < \frac{4-\sqrt{2}}{2d}$ and $d \geq 3$. First we show that if $X$ is a log del Pezzo surface admitting a $\mathbb{Q}$-Gorenstein deformation to $\mathbb{P}^1 \times \mathbb{P}^1$, then $\rho(X) \leq 2$.
\begin{prop}\label{prop:rho} Let $X$ be a log del Pezzo surface. Suppose that $X$ admits a $\mathbb{Q}$-Gorenstein deformation to $\mathbb{P}^1 \times \mathbb{P}^1$. Then $\rho(X) \leq 2$. \end{prop}
\begin{proof}
Let $\mathcal X\to T$ be a $\mathbb{Q}$-Gorenstein smoothing of $X$, i.e. $0\in T$ is a smooth germ of pointed curve, $\mathcal X_0\cong X$, and $\mathcal X_t\cong \mathbb{P}^1\times\mathbb{P}^1$ for $t\in T\setminus \{0\}$. By passing to a finite cover of $0\in T$, we may assume that $\mathcal X^\circ\cong (\mathbb{P}^1\times\mathbb{P}^1)\times T^\circ$ where $\mathcal X^\circ:=\mathcal X\setminus \mathcal X_0$ and $T^\circ := T \setminus \{0\}$.
First, using \cite[Lemma 2.11]{Hac04}, we show that $\mathrm{Cl}(\mathcal{X}) \cong \mathbb{Z}^{2}$. Indeed, consider the excision exact sequence
\[ \mathbb{Z}\cdot[\mathcal{X}_0] \to \mathrm{Cl}(\mathcal{X})\to \mathrm{Cl}(\mathcal{X}^\circ) \to 0.\] Since $\mathcal{X}_0$ is the fiber over $0\in T$, it is principal after possibly shrinking $T$, so the first map is zero, which gives $\mathrm{Cl}(\mathcal{X}) \cong \mathrm{Cl}(\mathcal{X}^\circ)\cong \mathbb{Z}^{2}$.
Now we follow the proof of \cite[Proposition 6.3]{Hac04}. First note that there is an isomorphism $\mathrm{Pic}(\mathcal{X}) \to \mathrm{Pic}(X)$, and so we obtain the inequality:
\[ \rho(X) = \dim \mathrm{Pic}(X) \otimes \mathbb{Q} = \dim \mathrm{Pic}(\mathcal{X}) \otimes \mathbb{Q} \leq \dim \mathrm{Cl}(\mathcal{X}) \otimes \mathbb{Q} = 2,\]
with equality if and only if $\mathcal{X}$ is $\mathbb{Q}$-factorial. \end{proof}
A result of Hacking-Prokhorov now classifies the possible $\mathbb{Q}$-Gorenstein degenerations of $\mathbb{P}^1 \times \mathbb{P}^1$ (see \cite[Theorem 1.2]{hparxiv} and \cite[Proposition 2.6]{hp}).
\begin{prop}[Hacking-Prokhorov]\label{prop:rhoindex}
Let $X$ be a log del Pezzo surface admitting a $\mathbb{Q}$-Gorenstein smoothing to $\mathbb{P}^1\times\mathbb{P}^1$. There are two cases.
\begin{enumerate}
\item If $\rho(X)=1$, then $X$ is a $\mathbb{Q}$-Gorenstein partial smoothing of a weighted projective plane $\mathbb{P}(a^2,b^2,2c^2)$ where $(a,b,c)\in\mathbb{Z}_{>0}^3$ subject to the equation
\[
a^2+b^2+2c^2=4abc.
\]
In particular, the local index $\mathrm{ind}(x,K_X)$ is odd for any $x\in X$.
\item If $\rho(X)=2$, then $X$ only has quotient singularities of type
$\frac{1}{n^2}(1,an-1)$ where $\gcd(a,n)=1$.
\end{enumerate}
\end{prop}
Suppose $x\in X$ is a surface $T$-singularity. We denote by $\mu_x$ the \emph{Milnor number} of a $\mathbb{Q}$-Gorenstein smoothing of $x\in X$. If $x\in X$ is a cyclic quotient $T$-singularity of type $\frac{1}{en^2}(1,ena-1)$, then $\mu_x=e-1$.
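For example, a singularity of type $\frac{1}{4}(1,1)$ is the $T$-singularity with $(e,n,a)=(1,2,1)$, so $\mu_x=0$, while an ordinary node $A_1=\frac{1}{2}(1,1)$ corresponds to $(e,n,a)=(2,1,1)$ and has $\mu_x=1$.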
\begin{theorem}\label{thm:indexbound} Let $(X,cD)
$ be a K-semistable log Fano pair that admits a $\mathbb{Q}$-Gorenstein smoothing
to $(\mathbb{P}^1\times\mathbb{P}^1, cC_t)$ with $c\in (0,\frac{2}{d})$ and $C_t$ a curve of bidegree $(d,d)$.
Let $x\in X$ be any singular point.
\begin{enumerate}
\item If $d$ is even or $\mathrm{ind}(x,K_X)$ is odd, then
\[
\mathrm{ind}(x,K_X)\leq\begin{cases}
\min\{\lfloor\frac{3}{\sqrt{2}(2-cd)}\rfloor,d+1\} & \textrm{ if }\mu_x=0,\\
\min\{\lfloor\frac{3}{2(2-cd)}\rfloor,d\} & \textrm{ if }\mu_x=1.
\end{cases}
\]
\item If $d$ is odd and $\mathrm{ind}(x,K_X)$ is even, then $\rho(X)=2$, $\mu_x=0$, and
\[
\mathrm{ind}(x,K_X)\leq \min\{2\lfloor\tfrac{3}{2\sqrt{2}(2-cd)}\rfloor,2d-2\}.
\]
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\beta:=1-cd/2\in (0,1)$.
We know that an index $n$ point $x\in X$ is a
cyclic quotient singularity of type $\frac{1}{n^2}(1,na-1)$ or $\frac{1}{2n^2}(1, 2na-1)$ where $\gcd(a,n)=1$.
We know that $dK_X+2D\sim 0$ when $d$ is odd and $\frac{d}{2}K_X+D\sim 0$ when $d$ is even, so if $x\not\in D$ then
$n\mid d$ hence $n\leq d$ (in fact $n\leq \frac d{2}$ if
$d$ is even). From now on let us assume
$x\in D$. Let $(\tilde{x}\in \widetilde{X})$ be the
smooth cover of $(x\in X)$, with $\widetilde{D}$ being the
preimage of $D$. Assume $\tilde{x}\in\widetilde{X}$ has local coordinates $(u,v)$ where the cyclic group action is scaling on
each coordinate. Let $u^i v^j$ be a monomial appearing
in the equation on $\widetilde{D}$ with minimum $i+j=\mathrm{ord}_{\tilde{x}}\widetilde{D}$.
\textbf{Case 1.} Assume $d$ is even and $\mu_x=0$. Then the orbifold group of $x\in X$ has order $n^2$.
Since the finite degree formula
is true in dimension $2$ by \cite[Theorem 4.15]{LLX18}, we have
\[
\widehat{\mathrm{vol}}(\tilde{x},\widetilde{X},c\widetilde{D})
=n^2\cdot\widehat{\mathrm{vol}}(x,X,c D).
\]
On the other hand, Theorem \ref{thm:local-vol-global} implies that
\[
8\beta^2=(-K_X-cD)^2\leq \frac{9}{4}\widehat{\mathrm{vol}}(x, X, cD)=\frac{9}{4n^2}\widehat{\mathrm{vol}}(\tilde{x},\widetilde{X},c\widetilde{D}).
\]
So we have
\begin{equation}\label{eq:index1}
n\leq \frac{3\sqrt{ \widehat{\mathrm{vol}}(\tilde{x},\widetilde{X},c\widetilde{D})}}{4\sqrt{2}\beta}
\leq \frac{3(2-c~\mathrm{ord}_{\tilde{x}}\widetilde{D})}{4\sqrt{2}\beta}.
\end{equation}
In particular we have $n<\frac{3}{2\sqrt{2}\beta} $.
We know that $\mathrm{lct}_{\tilde{x}}(\widetilde{X};\widetilde{D})>
c$, and Skoda \cite{skoda} implies $\mathrm{lct}_{\tilde{x}}(\widetilde{X};\widetilde{D})\leq
\frac{2}{\mathrm{ord}_{\tilde{x}}\widetilde{D}}$, so we have
$ \mathrm{ord}_{\tilde{x}}\widetilde{D}<\frac{2}{c}$.
Since $\frac{d}{2}K_X+D\sim 0$, we have $i+(na-1)j\equiv \frac{d}{2}na\mod n^2$ which implies $i\equiv j\mod n$.
If $\beta\geq \frac{3}{2\sqrt{2}d+3}$
then $n<\frac{3}{2\sqrt{2}\beta}\leq d+\frac{3}{2\sqrt{2}}$ which implies $n\leq d+1$. Thus we may assume
$\beta<\frac{3}{2\sqrt{2}d+3}$. Then
\[
i+j= \mathrm{ord}_{\tilde{x}}\widetilde{D}<\frac{2}{c}<d+\frac{3}{2\sqrt{2}}.
\]
Hence $i+j\leq d+1$. Assume to the contrary that $n\geq d+2$. Then $i\equiv j\mod
n$ and $i+j<n$ implies that $i=j$. Hence $i+(na-1)j\equiv \frac{d}{2}na\mod n^2$ implies $i\equiv \frac{d}{2}\mod n$. But since $i\leq \frac{d+1}{2}<n$,
we know that $i=j=\frac{d}{2}$. Then \eqref{eq:index1} implies that
\[
n\leq \frac{3(2-c(i+j))}{4\sqrt{2}\beta}=\frac{6\beta}{4\sqrt{2}\beta}<2.
\]
We reach a contradiction.
\textbf{Case 2.} Assume $d$ is even and $\mu_x=1$. Then the orbifold group of $x\in X$ has order $2n^2$. By a similar argument as in Case 1, we know that
\[
8\beta^2=(-K_X-cD)^2\leq \frac{9}{4}\widehat{\mathrm{vol}}(x, X, cD)=\frac{9}{8n^2}\widehat{\mathrm{vol}}(\tilde{x},\widetilde{X},c\widetilde{D}).
\]
Hence
\begin{equation}\label{eq:index2}
n\leq \frac{3\sqrt{ \widehat{\mathrm{vol}}(\tilde{x},\widetilde{X},c\widetilde{D})}}{8\beta}
\leq \frac{3(2-c~\mathrm{ord}_{\tilde{x}}\widetilde{D})}{8\beta}.
\end{equation}
In particular we have $n<\frac{3}{4\beta}$.
If $\beta\geq \frac{3}{4d+3}$
then $n<\frac{3}{4\beta}\leq d+\frac{3}{4}$ which implies $n\leq d$. Thus we may
assume $\beta<\frac{3}{4d+3}$. Then
\[
i+j= \mathrm{ord}_{\tilde{x}}\widetilde{D}<\frac{2}{c}
<d+\frac{3}{4}.
\]
Hence $i+j\leq d$. Assume to the contrary that $n\geq d+1$.
Then $i\equiv j\mod n$ and $i+j<n$ implies $i=j$.
Hence $i+(na-1)j\equiv \frac{d}{2}na\mod n^2$ implies $i\equiv \frac{d}{2}\mod n$. But since $i\leq \frac{d}{2}<n$,
we know that $i=j=\frac{d}{2}$. Then \eqref{eq:index2} implies that
\[
n\leq \frac{3(2-c(i+j))}{8\beta}=\frac{6\beta}{8\beta}<1.
\]
We reach a contradiction.
\textbf{Case 3.} Assume $d$ is odd and $\mu_x=0$. In this case we have $dK_X+2D\sim 0$ which implies $2(i+(na-1)j)\equiv dna \mod n^2$.
If $n$ is odd, then clearly $i\equiv j\mod n$. By the same argument as Case 1, we know $i=j=\frac{d}{2}$ if $n\geq d+2$, hence a contradiction.
If $n$ is even, then we do a finer analysis. Since both $d$ and $a$ are odd, from $2(i+(na-1)j)\equiv dna \mod n^2$ we know that $i-j\equiv \frac{n}{2}\mod n$. Thus
$n\leq 2(i+j)<\frac{4}{c}=\frac{2d}{1-\beta}$. Besides, \eqref{eq:index1} implies that $n<\frac{3}{2\sqrt{2}\beta}$. Hence
\[
n<\min\left\{\frac{2d}{1-\beta},\frac{3}{2\sqrt{2}\beta}\right\}\leq \frac{2\sqrt{2}(2d)+3}{2\sqrt{2}(1-\beta)+2\sqrt{2}\beta}=2d+\frac{3}{2\sqrt{2}}.
\]
Thus $n\leq 2d$. Assume to the contrary that $n=2d$, then $i+j\geq \frac{n}{2}=d$. Hence \eqref{eq:index1} implies that
\[
2d=n\leq \frac{3(2-c(i+j))}{4\sqrt{2}\beta}\leq \frac{3(2-cd)}{4\sqrt{2}\beta}=\frac{3}{2\sqrt{2}}.
\]
We reach a contradiction. Thus we have $n\leq 2d-2$.
\textbf{Case 4.} Assume $d$ is odd and $\mu_x=1$. Then by \cite[Proposition 2.6]{hp}, we know that $\rho(X)=1$. So $n$ is odd by Proposition \ref{prop:rhoindex}. Hence $2(i+(2na-1)j)\equiv dna \mod n^2$ implies $i\equiv j\mod n$. By a similar argument as in Case 2, we know $i=j=\frac{d}{2}$ if $n\geq d+1$, hence a contradiction.
\end{proof}
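For instance, when $d=4$ and $c\in(0,\frac{1}{2})$, only part (1) applies, and it gives $\mathrm{ind}(x,K_X)\leq \min\{\lfloor\tfrac{3}{\sqrt{2}(2-4c)}\rfloor,5\}\leq 5$ if $\mu_x=0$ and $\mathrm{ind}(x,K_X)\leq 4$ if $\mu_x=1$; these are the bounds used in the proof of Theorem \ref{thm:surfaces} below.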
The index bounds in Theorem \ref{thm:indexbound} allow us to limit the surfaces that appear in pairs parameterized by the moduli stack $\overline{\mathcal{K}}_{c}$.
\begin{theorem}\label{thm:surfaces}
Let $(X,cD)$ be a K-semistable log Fano pair that admits a $\mathbb{Q}$-Gorenstein smoothing to $(\mathbb{P}^1\times\mathbb{P}^1, cC_t)$ with $c\in (0,\frac{1}{2})$ and $C_t$ a $(4,4)$ curve.
Then, $X$ must be isomorphic to either $\mathbb{P}^1 \times \mathbb{P}^1$ or $\mathbb{P}(1,1,2)$. \end{theorem}
\begin{proof} By Proposition \ref{prop:rho}, we know that $\rho(X) \leq 2$. We start with $\rho(X) = 1$. In this case, by Proposition \ref{prop:rhoindex}, we know that $X$ is a weighted projective space of the form $\mathbb{P}(a^2, b^2, 2c^2)$ where $a^2 + b^2 + 2c^2 = 4abc$, or a partial smoothing.
We begin enumerating the possible integer solutions and see that the first few are \[( a,b,c) = (1,1,1), (1,3, 1), (1, 3, 5), (11, 3, 1), \dots.\] We can exclude the last two (and any solution of higher index) by the index bound of Theorem \ref{thm:indexbound}. The first gives $\mathbb{P}(1,1,2)$ and the second gives $\mathbb{P}(1,2,9)$. We now show that the singularity $\frac{1}{9}(1,2)$ cannot appear.
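Here new solutions can be produced by Vieta mutations: if $(a,b,c)$ satisfies $a^2+b^2+2c^2=4abc$, then so do $(4bc-a,b,c)$, $(a,4ac-b,c)$, and $(a,b,2ab-c)$. For instance, $(1,3,1)$ yields $(1,3,5)$ and $(11,3,1)$, since $2\cdot 1\cdot 3-1=5$ and $4\cdot 3\cdot 1-1=11$.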
Assume to the contrary that $x\in X$ is of type $\frac{1}{9}(1,2)$. Suppose $D \sim -2K_X$ and consider a smooth covering $(\tilde{x} \in \widetilde{X}) \to (x \in X)$. Note that we may assume $x \in D$, because otherwise if $x \notin D$ then $\mathrm{ind}(x,K_X) \leq 2$, and we obtain a contradiction. Consider local coordinates of $\tilde{x} \in \widetilde{X}$ namely $(u,v)$. Let $u^i v^j$ be a monomial appearing in the equation on $\widetilde{D}$ with minimum $i+j=\mathrm{ord}_{\tilde{x}}\widetilde{D}$. Then $i + 2j \equiv 6 \mod 9$. Since we know that $(X, cD)$ is klt at $x$, we have that \[ \frac{2}{i+j} \geq \mathrm{lct}(\widetilde{D}) > c\] and so in particular $i + j < \frac{2}{c}$.
By \eqref{eq:index1} with $n=3$ and $\beta=1-2c$, we have
\[2-(i+j)c \geq 4\sqrt{2}(1-2c).\] Since this inequality holds for some $0 < c < \frac{1}{2}$, we have $i+j\leq 3$ because otherwise
\[
2-(i+j)c\leq 2-4c<4\sqrt{2}(1-2c)
\]
which contradicts the previous inequality. Since $i+2j\leq 2(i+j)\leq 6$, the congruence $i + 2j \equiv 6 \pmod 9$ forces $i+2j=6$, and together with $i+j\leq 3$ this gives $(i, j) = (0,3)$.
Consider the valuation $w$ on $\widetilde{X}$ which is the monomial valuation in the coordinates $(u,v)$ of weights $(1,2)$. In particular $w(\widetilde{D}) = 6$. Moreover, $A_{\widetilde{X}}(w) = 3$ and $\mathrm{vol}(w) = \frac{1}{2}$. Then we note that
\[
\widehat{\mathrm{vol}}(\tilde{x}, \widetilde{X}, c\widetilde{D}) \leq (A_{\widetilde{X}}(w) - c~w(\widetilde{D}))^2\mathrm{vol}(w) = \frac{(3-6c)^2}{2}.
\] By \eqref{eq:index1} we have
\[
4\sqrt{2}(1-2c)\leq \sqrt{\widehat{\mathrm{vol}}(\tilde{x}, \widetilde{X}, c\widetilde{D})}\leq \frac{3-6c}{\sqrt{2}}
\]
which gives $4\sqrt{2} \leq \frac{3}{\sqrt{2}}$, a contradiction. Thus the surface $X$ with a $\frac{1}{9}(1,2)$ singularity cannot appear. In particular, the only surface with $\rho(X) = 1$ is $X \cong \mathbb{P}(1,1,2)$.
Now we consider $\rho(X) = 2$. By Proposition \ref{prop:rhoindex}, we know that the only singular points of $X$ are of the form $\frac{1}{n^2}(1, na-1)$ with $n\leq 5$. We already excluded $\frac{1}{9}(1,2)$ so we only need to consider $n = 2, 4, 5$.
Let us consider $n=4$, namely a singularity of type $\frac{1}{16}(1, 3)$. We show that this singularity cannot occur.
As before, consider a smooth covering $(\tilde{x} \in \widetilde{X}) \to (x \in X)$ and suppose $D \sim -2K_X$. Note that we may assume $x \in D$, because otherwise if $x \notin D$ then $\mathrm{ind}(x, K_X) \leq 2$, and we obtain a contradiction. Consider local coordinates of $\tilde{x} \in \widetilde{X}$ namely $(u,v)$.
Let $u^i v^j$ be a monomial appearing in the equation on $\widetilde{D}$ with minimum $i+j=\mathrm{ord}_{\tilde{x}}\widetilde{D}$.
Then $i + 3j \equiv 8 \mod 16$, and $i + j < \frac{2}{c}$. By \eqref{eq:index1} with $n=4$ and $\beta=1-2c$, we have
\[ 4 \sqrt{2} (1-2c) \leq (2-c(i+j)).\]
Since this inequality holds for some $0 < c < \frac{1}{2}$, we have $i+j\leq 3$ for the same reason as in the case $n=3$. This contradicts $i+3j\equiv 8\pmod{16}$. In particular, a singularity of type $\frac{1}{16}(1,3)$ cannot occur.
Next let us consider $n=5$, namely a singularity of type $\frac{1}{25}(1,4)$ or $\frac{1}{25}(1,9)$. We again show that these singularities cannot occur. With the same set-up as in the previous paragraph, we have either $i+4j\equiv 10 \pmod{25}$ or $i+9j\equiv 20 \pmod{25}$. Moreover, we again have $i+j\leq 3$ for the same reason as in the cases $n=3,4$, but this contradicts the congruences. Therefore, a singularity of type $\frac{1}{25}(1,4)$ or $\frac{1}{25}(1,9)$ cannot occur.
After the above discussions, the only case left to study is $\rho(X)=2$ and $X$ has only singularities of type $\frac{1}{4}(1,1)$. If $X$ is singular, then by \cite[Table 6 and Theorem 7.15]{nakayama} (see also \cite{AN}), we know that $X$ is isomorphic to a blow up of $\mathbb{P}(1,1,4)$ at a smooth point. However, in this case $X$ admits a $\mathbb{Q}$-Gorenstein smoothing to the Hirzebruch surface $\mathbb{F}_1$ which is not homeomorphic to $\mathbb{P}^1\times\mathbb{P}^1$. This is a contradiction. Hence $X$ is smooth and isomorphic to $\mathbb{P}^1\times\mathbb{P}^1$.
\end{proof}
\begin{remark}\label{rmk:oursurfaces}
Let $(X,cD)$ be a K-semistable log Fano pair that admits a $\mathbb{Q}$-Gorenstein smoothing to $(\mathbb{P}^1\times\mathbb{P}^1, cC_t)$ with $c\in (0,\frac{1}{2})$ and $C_t$ a $(4,4)$ curve. By Theorem \ref{thm:surfaces}, this implies that $X$ is either $\mathbb{P}^1 \times \mathbb{P}^1$ or $\mathbb{P}(1,1,2)$. Therefore, there exists a closed embedding $(X,D)\hookrightarrow\mathbb{P}^3$ such that $X \in |\mathcal{O}_{\mathbb{P}^3}(2)|$ and $D\sim -2K_X$ are $(2,4)$ complete intersections inside $\mathbb{P}^3$. Hence, all K-semistable pairs $(X, cD)$ with $c \in (0,\frac{1}{2})$ are parametrized by a Zariski open subset of $U$. \end{remark}
\begin{thm}\label{thm:surfacesalld}
Let $(X,cD)$ be a K-semistable log Fano pair that admits a $\mathbb{Q}$-Gorenstein smoothing to $(\mathbb{P}^1\times\mathbb{P}^1, cC_t)$ with $c\in (0,\frac{4-\sqrt{2}}{2d})$ and $C_t$ a $(d,d)$ curve where $d \geq 3$.
Then, $X$ must be either $\mathbb{P}^1 \times \mathbb{P}^1$ or $\mathbb{P}(1,1,2)$.
\end{thm}
\begin{proof}
By Proposition \ref{prop:rho}, $\rho(X) \leq 2$. By the index bound of Theorem \ref{thm:indexbound}, for $ c < \frac{4 - \sqrt{2}}{2d}$ we know that $\mathrm{ind}(x, K_X) < 3$.
If $\rho(X)=1$, then by Proposition \ref{prop:rhoindex} we know that $X$ is Gorenstein which implies that $X\cong \mathbb{P}(1,1,2)$. If $\rho(X)=2$, then by Proposition \ref{prop:rhoindex} we know that either $X$ is smooth hence isomorphic to $\mathbb{P}^1\times\mathbb{P}^1$, or $X$ has only singularities of type $\frac{1}{4}(1,1)$. The latter case cannot happen by the end of the proof of Theorem \ref{thm:surfaces}.
Therefore, the only surfaces appearing are $\mathbb{P}^1 \times \mathbb{P}^1$ and $\mathbb{P}(1,1,2)$.
\end{proof}
\section{Wall crossings for K-moduli and GIT}\label{sec:main}
In this section, we prove Theorem \ref{mthm:thmintro}, that is, for $0<c < \frac {1}{2}$, the K-moduli stack $\overline{\mathcal{K}}_{c}$ coincides with the GIT moduli stack $\mathscr{M}(t)$ with $t=\frac{3c}{2c+2}$ (see Definition \ref{def:VGIT}). The important observation comes from Theorem \ref{thm:surfaces}: the surfaces $X$ in the pairs parametrized by $\overline{\mathcal{K}}_{c}$ are $\mathbb{P}^1 \times \mathbb{P}^1$ or $\mathbb{P}(1,1,2)$ which are quadric surfaces in $\mathbb{P}^3$, and the divisors $D$ can therefore be viewed as $(2,4)$-complete intersections in $\mathbb{P}^3$.
\subsection{The first wall crossing}
In this section, we show that GIT-(poly/semi)stability of $(4,4)$-curves on $\mathbb{P}^1\times\mathbb{P}^1$ and $c$-K-(poly/semi)stability coincide for $c < \frac 1{8}$. Moreover, we show that $c_1=\frac{1}{8}$ is the first wall for K-moduli stacks $\overline{\mathcal{K}}_c$.
\begin{defn}
A $(4,4)$-curve $C$ on $\mathbb{P}^1\times\mathbb{P}^1$ gives a point $[C] \in \mathbf{P}_{(4,4)}:= \mathbb{P}(H^0(\mathbb{P}^1\times\mathbb{P}^1, \mathcal{O}(4,4)))$. We say $C$ is \emph{GIT (poly/semi)stable} if $[C]$ is GIT (poly/semi)stable with respect to the natural $\mathrm{Aut}(\mathbb{P}^1\times\mathbb{P}^1)$-action on $(\mathbf{P}_{(4,4)}, \mathcal{O}(2))$. We define the \emph{GIT quotient stack} $\mathscr{M}$ and the \emph{GIT quotient space} $\mathfrak{M}$ as
\[
\mathscr{M}:=[\mathbf{P}_{(4,4)}^{\rm ss}/\mathrm{Aut}(\mathbb{P}^1\times\mathbb{P}^1)], \qquad
\mathfrak{M}:=\mathbf{P}_{(4,4)}^{\rm ss}\mathbin{/\mkern-6mu/} \mathrm{Aut}(\mathbb{P}^1\times\mathbb{P}^1).
\]
\end{defn}
\begin{theorem}\label{thm:firstwall} For any $0 < c < \frac 1{8}$, a curve $C \subset \mathbb{P}^1\times\mathbb{P}^1$ of bidegree $(4,4)$ is GIT-(poly/semi)stable if and only if the log Fano pair $(\mathbb{P}^1 \times \mathbb{P}^1, cC)$ is K-(poly/semi)stable. Moreover, there is an isomorphism of Artin stacks $\overline{\mathcal{K}}_{c} \cong \mathscr{M}$. \end{theorem}
\begin{proof}
We first show that K-(poly/semi)stability of $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ implies GIT (poly/semi)stability of $C$ for any $c\in (0,\frac{1}{2})$. Consider the universal family $\pi: (\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbf{P}_{(4,4)}, c\mathcal C) \to \mathbf{P}_{(4,4)}$ over the parameter space of $(4,4)$-curves on $\mathbb{P}^1\times\mathbb{P}^1$. It is clear that $\mathcal C \in |\mathcal{O}(4,4,1)|$. Hence by Proposition \ref{prop:logCM2} we have
\begin{align*}
\lambda_{\mathrm{CM}, \pi, c\mathcal C}& = -\pi_* (-K_{\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbf{P}_{(4,4)}/\mathbf{P}_{(4,4)}}-c\mathcal C)^3
= -\pi_* (\mathcal{O}(2-4c,2-4c,-c))^3 \\
& = -3\left(\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(2-4c,2-4c)^2\right) \mathcal{O}_{\mathbf{P}_{(4,4)}}(-c)
= \mathcal{O}_{\mathbf{P}_{(4,4)}}(6c(2-4c)^2),
\end{align*}
since $\mathcal{O}_{\mathbb{P}^1\times\mathbb{P}^1}(2-4c,2-4c)^2=2(2-4c)^2$. Hence the CM line bundle $\lambda_{\mathrm{CM}, \pi, c\mathcal C}$ is ample whenever $c\in (0, \frac{1}{2})$, and the statement that K-(poly/semi)stability implies GIT (poly/semi)stability follows directly from Theorem \ref{thm:paultian}.
Next we show the converse, i.e. GIT-(poly/semi)stability of $C$ implies K-(poly/semi)stability of $(\mathbb{P}^1\times\mathbb{P}^1,cC)$ for $c< \frac{1}{8}$. Indeed, using a similar argument to the proof of \cite[Theorem 5.2]{ADL}, with properness of K-moduli spaces as a key ingredient, it suffices to show that any pair $(X,D)$ appearing in the K-moduli stack $\overline{\mathcal{K}}_c$ for $c < \frac{1}{8}$ satisfies $X\cong\mathbb{P}^1\times\mathbb{P}^1$ with $D$ a $(4,4)$-curve. Since $\mathbb{P}^1\times\mathbb{P}^1$ has no non-trivial smooth degeneration, it suffices to show that $X$ is smooth. Assume to the contrary that $X$ is singular at a point $x\in X$. Then by \cite{LL16} we know that
\[
8(1-2c)^2=(-K_X-cD)^2\leq \frac{9}{4}\widehat{\mathrm{vol}}(x,X,cD)\leq \frac{9}{4}\widehat{\mathrm{vol}}(x,X)\leq \frac{9}{2}.
\]
This implies $(1-2c)^2\leq \frac{9}{16}$, i.e. $c\geq \frac{1}{8}$, which is a contradiction.
Hence, for $c < \frac{1}{8}$, a K-semistable pair $(X,cD)$ must be isomorphic to $(\mathbb{P}^1 \times \mathbb{P}^1, cC)$, where $C$ is a $(4,4)$-curve.
Summing up, the equivalence of K-(poly/semi)stability with GIT (poly/semi)stability yields a morphism $\phi: \mathscr{M}\to \overline{\mathcal{K}}_{c}$ which descends to an isomorphism $\mathfrak{M}\xrightarrow{\cong}\overline{K}_c$.
To conclude, it suffices to show that $\phi$ is an isomorphism of Artin stacks. The proof is similar to \cite[Theorem 3.24]{ADL}. Denote by $T:=\mathbf{P}_{(4,4)}^{\rm ss}$. Let $\pi:(\mathcal X,\mathcal D)\to T$ be the universal family. Recall from \cite[Section 3.1]{ADL} and Theorem \ref{thm:modnormal} that $\overline{\mathcal{K}}_c\cong [Z_{c}^\circ/\mathrm{PGL}(N_m+1)]$, where $Z_{c}^\circ$ is the K-semistable locus in the Hilbert scheme parametrizing pairs embedded by the $m$-th multiple of the anti-canonical divisor. Denote by $\pi': (\mathcal X',\mathcal D')\to T'$ the universal family over $T':=Z_c^\circ$. Let $P$ be the $\mathrm{PGL}(N_m+1)$-torsor over $T$ induced from the vector bundle $\pi_*\mathcal{O}_{\mathcal X}(-mK_{\mathcal X/T})$. Then from \cite[Proof of Theorem 3.24]{ADL} we see that there is an $\mathrm{Aut}(\mathbb{P}^1\times\mathbb{P}^1)$-equivariant morphism $\psi: P\to T'$ whose descent is precisely $\phi$. Hence, in order to show that $\phi$ is an isomorphism, it suffices to show that $\psi$ is an $\mathrm{Aut}(\mathbb{P}^1\times\mathbb{P}^1)$-torsor. Indeed, since $\pi':\mathcal X'\to T'$ is isotrivial with all fibers isomorphic to $\mathbb{P}^1\times\mathbb{P}^1$, we may find an \'etale covering $\cup_i V_i\twoheadrightarrow T'$ such that there is an isomorphism $\rho_i: \mathcal X'\times_{T'} V_i\xrightarrow{\cong} (\mathbb{P}^1\times\mathbb{P}^1)\times V_i$. Hence, by pushing forward $(\mathcal X',\mathcal D')\times_{T'} V_i$ and its natural frame from $\mathbb{P}^{N_m}$ to $(\mathbb{P}^1\times\mathbb{P}^1)\times V_i$ under $\rho_i$, we obtain a section $V_i\to P\times_{T'}V_i$ of $\psi\times_{T'}V_i$ which trivializes $\psi$. Thus the proof is finished.
\end{proof}
The following proposition shows that $c_1=\frac{1}{8}$ is the first wall of the K-moduli stacks $\overline{\mathcal{K}}_c$.
\begin{prop}\label{prop:firstwallreplace}
Let $C=4H$ where $H$ is a smooth $(1,1)$-curve on $\mathbb{P}^1\times\mathbb{P}^1$. Let $c\in (0,\frac{1}{2})$ be a rational number. Then $(\mathbb{P}^1\times\mathbb{P}^1,cC)$ is K-semistable (resp. K-polystable) if and only if $c\leq \frac{1}{8}$ (resp. $<\frac{1}{8}$). Moreover, the K-polystable degeneration of $(\mathbb{P}^1\times\mathbb{P}^1,\frac{1}{8}C)$ is isomorphic to $(\mathbb{P}(1,1,2), \frac{1}{8}C_0)$ where $C_0=4 H_0$ and $H_0$ is the section at infinity.
\end{prop}
\begin{proof}
We first show that $(\mathbb{P}^1\times\mathbb{P}^1,\frac{1}{8}C)$ is K-semistable where $(\mathbb{P}(1,1,2), \frac{1}{8}C_0)$ is its K-polystable degeneration. Choose an embedding $\mathbb{P}^1\times\mathbb{P}^1\hookrightarrow\mathbb{P}^3$ as a smooth quadric surface. Then $H$ is a hyperplane section of $\mathbb{P}^1\times\mathbb{P}^1$. Pick projective coordinates $[x_0,x_1,x_2,x_3]$ of $\mathbb{P}^3$ such that the hyperplane section through $H$ is given by $x_3=0$. Then the $1$-PS $\sigma: \mathbb{G}_m \to \mathrm{PGL}(4)$ given by $\sigma(t)[x_0,x_1,x_2,x_3]=[tx_0,tx_1,tx_2,x_3]$ provides a special test configuration of $(\mathbb{P}^1\times\mathbb{P}^1, \frac{1}{2}H)$ whose central fiber is an ordinary quadric cone with a section at infinity of coefficient $\frac{1}{2}$, i.e. isomorphic to $(\mathbb{P}(1,1,2), \frac{1}{2}H_0)$. By \cite{LL16} we know that $(\mathbb{P}(1,1,2), \frac{1}{2}H_0)$ admits a conical K\"ahler-Einstein metric hence is K-polystable. The K-semistability of $(\mathbb{P}^1\times\mathbb{P}^1,\frac{1}{8}C)$ follows from openness of K-semistability \cite{BLX19, Xu19}.
Next we show that $(\mathbb{P}^1\times \mathbb{P}^1, cC)$ is K-polystable for $c\in (0,\frac{1}{8})$. Clearly, it is K-semistable by interpolation \cite[Proposition 2.13]{ADL}. Let $(X,cD)$ be its K-polystable degeneration. By Theorem \ref{thm:firstwall}, we know that $X\cong\mathbb{P}^1\times\mathbb{P}^1$. Since $C=4H$, we have $D=4H_0$ for some $(1,1)$-curve $H_0$. If $H_0$ is reducible, then $(X,cD)$ is isomorphic to the product of two copies of $(\mathbb{P}^1, 4c[0])$. Since $(\mathbb{P}^1, 4c[0])$ is K-unstable, we know that $(X,cD)$ is also K-unstable by \cite{Zhu19}. Thus $H_0$ must be irreducible, which implies that $(\mathbb{P}^1\times\mathbb{P}^1, cC)\cong (X,cD)$ is K-polystable.
Thus the proof is finished.
\end{proof}
\begin{remark}\leavevmode
\begin{enumerate}
\item The first K-moduli wall crossing at $c_1 = \frac{1}{8}$ has the following diagram
\[
\overline{K}_{\frac{1}{8}+\epsilon}\xrightarrow{\phi_1^+} \overline{K}_{\frac{1}{8}} \xleftarrow[\cong]{\phi_1^-} \overline{K}_{\frac{1}{8}-\epsilon}= \mathfrak{M}
\]
where the composition $(\phi_1^{-})^{-1}\circ \phi_1^+: \overline{K}_{\frac{1}{8}+\epsilon}\to \mathfrak{M}$ is
the Kirwan blowup of the point $[4H]$ in the GIT quotient $\mathfrak{M}$. Across this wall, we replace the quadruple $(1,1)$ curve $4H$ on $\mathbb{P}^1 \times \mathbb{P}^1$ with GIT polystable degree $8$ curves on $\mathbb{P}(1,1,2)$ which do not pass through the singular point $[0,0,1]$. This behavior is similar to \cite[Theorem 1.3]{ADL}.
\item From Remarks \ref{rem:walls-value} and \ref{rem:walls-detail}, we will see that $c_2=\frac{1}{5}$ is the second K-moduli wall. Moreover, if a degree $8$ curve $D$ passes through the singular point of $X =\mathbb{P}(1,1,2)$, then we see that for any $c < \frac{1}{5}$ the pair $(X, cD)$ is K-unstable.
\end{enumerate} \end{remark}
\subsection{Computations on CM line bundles}\label{sec:CM}
The main goals of this section are to compute the CM line bundle of the log Fano family from Section \ref{sec:LOG-VGIT}, and to show that over the complete intersection locus $U$, the CM $\mathbb{Q}$-line bundle is proportional to the VGIT line bundle.
\begin{prop}\label{prop:CM-Z}
With the notation from Section \ref{sec:LOG-VGIT}, we have
\[
-f_*((-K_{\mathscr{X}/\mathbb{P}(E)} - c\mathscr{D})^3)=
(2-4c)^2(4c+4)\left(\eta+\frac{3c}{2c+2}\xi\right).
\]
\end{prop}
\begin{proof}
By construction we have:
\begin{align*} \mathcal{O}_{\mathbb{P}^3 \times \mathbb{P}(E)}(\mathscr{X}) &= p_1^*\mathcal{O}_{\mathbb{P}^3}(2) \otimes p_2^* \pi^* \mathcal{O}_{\mathbb{P}^9}(1); \\
\mathcal{O}_{\mathscr{X}}(\mathscr{D}) &= p_1^* \mathcal{O}_{\mathbb{P}^3}(4)\vert_{\mathscr{X}} \otimes p_2^*\mathcal{O}_{\mathbb{P}(E)}(1) \vert_{\mathscr{X}}. \end{align*}
First note that $K_{\mathscr{X}/\mathbb{P}(E)} = K_{\mathscr{X}} - f^*K_{\mathbb{P}(E)}$, and by adjunction,
\begin{align*}
K_{\mathscr{X}} &= (K_{\mathbb{P}^3 \times \mathbb{P}(E)} + \mathscr{X})|_{\mathscr{X}} \\
& = (p_1^*\mathcal{O}(-4) \otimes p_2^*\mathcal{O}(K_{\mathbb{P}(E)}) \otimes p_1^*\mathcal{O}(2) \otimes p_2^*\pi^*\mathcal{O}_{\mathbb{P}^9}(1))\vert_{\mathscr{X}} \\
&= \mathcal{O}_{\mathscr{X}}(-2) \otimes p_2^*\mathcal{O}(K_{\mathbb{P}(E)})\vert_{\mathscr{X}} \otimes p_2^*\pi^*\mathcal{O}_{\mathbb{P}^9}(1)\vert_{\mathscr{X}}
\end{align*}
So in particular we have
\[K_{\mathscr{X}/\mathbb{P}(E)} = \mathcal{O}_{\mathscr{X}}(-2) \otimes f^*\pi^*\mathcal{O}_{\mathbb{P}^9}(1) .\]
Since $\mathscr{D} = \mathcal{O}_{\mathscr{X}}(4) \otimes p_2^*\mathcal{O}_{\mathbb{P}(E)}(1) \vert_{\mathscr{X}}$, we see that
\[ \mathcal{O}_{\mathscr{X}}(-K_{\mathscr{X}/\mathbb{P}(E)} - c\mathscr{D}) = \mathcal{O}_{\mathscr{X}}(2-4c) \otimes f^* \pi^*\mathcal{O}_{\mathbb{P}^9}(-1) \otimes f^* \mathcal{O}_{\mathbb{P}(E)}(-c).\]
Let $H_Y$ denote an element of the class $\mathcal{O}_Y(1)$ for $Y = \mathscr{X}, \mathbb{P}^3, \mathbb{P}(E),$ or $\mathbb{P}^9$. We compute
\begin{align*}
-f_*(-K_{\mathscr{X}/\mathbb{P}(E)} -c\mathscr{D})^3 &= -f_*\Big(((2-4c)H_{\mathscr{X}})^3 - 3((2-4c)H_{\mathscr{X}})^2\cdot f^*(cH_{\mathbb{P}(E)}+\pi^*H_{\mathbb{P}^9}) \\
&\qquad + 3(2-4c)H_\mathscr{X} \cdot f^*(cH_{\mathbb{P}(E)}+\pi^*H_{\mathbb{P}^9})^2 - f^*(cH_{\mathbb{P}(E)}+\pi^*H_{\mathbb{P}^9})^3\Big) \\
&= -(2-4c)^3 f_*(H_{\mathscr{X}}^3) + 3(2-4c)^2 f_*(H_{\mathscr{X}}^2)\cdot (cH_{\mathbb{P}(E)}+\pi^*H_{\mathbb{P}^9})\\
&= -(2-4c)^3 \pi^*H_{\mathbb{P}^9} + 6(2-4c)^2(cH_{\mathbb{P}(E)} + \pi^*H_{\mathbb{P}^9})\\
&= (2-4c)^2(4c+4)\left(\pi^*H_{\mathbb{P}^9}+\tfrac{3c}{2c+2}H_{\mathbb{P}(E)}\right).
\end{align*}
Here we use the projection formula together with $f_*(H_{\mathscr{X}}^3)=\pi^*H_{\mathbb{P}^9}$, $f_*(H_{\mathscr{X}}^2)=2$, and the vanishing of $f_*(H_{\mathscr{X}})$ and $f_*(1)$ as divisor classes, which follow from $\mathscr{X}\sim 2p_1^*H_{\mathbb{P}^3}+p_2^*\pi^*H_{\mathbb{P}^9}$ in $\mathbb{P}^3\times\mathbb{P}(E)$. Thus the proof is finished since $\eta = \pi^*H_{\mathbb{P}^9}$ and $\xi = H_{\mathbb{P}(E)}$.\end{proof}
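Note that over a point $[Q]\in\mathbb{P}^9$ corresponding to a smooth quadric, the class $\eta$ restricts trivially to the fiber $\mathbb{P}(H^0(Q,\mathcal{O}_Q(4)))$ of $\pi$, while $\xi$ restricts to the hyperplane class; hence the formula above has fiberwise degree
\[
(2-4c)^2(4c+4)\cdot\frac{3c}{2c+2}=6c(2-4c)^2,
\]
which is consistent with the CM line bundle computed in the proof of Theorem \ref{thm:firstwall}.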
\begin{prop}\label{prop:CM-U}
Let $f_U:(\mathscr{X}_U,\mathscr{D}_U)\to U$ be the restriction of $f:(\mathscr{X},\mathscr{D})\to \mathbb{P}(E)$ over $U\subset \mathbb{P}(E)$.
We denote the CM $\mathbb{Q}$-line bundle of $f_U$ with coefficient $c$ by $\lambda_{U,c}:=\lambda_{\mathrm{CM}, f_U, c\mathscr{D}_U}$.
Denote by $\eta_U$ and $\xi_U$ the restriction of $\eta$ and $\xi$ to $U$.
Then for any $c\in [0,\frac{1}{2})$ we have
\begin{equation}\label{eq:CM-U}
\lambda_{U,c}=(2-4c)^2(4c+4)\left(\eta_U+\frac{3c}{2c+2}\xi_U\right).
\end{equation}
\end{prop}
\begin{proof}
We take $l\in\mathbb{Z}_{>0}$ sufficiently divisible such that $\mathscr{L}:=-l(K_{\mathscr{X}/\mathbb{P}(E)}+c\mathscr{D})$ is a Cartier divisor on $\mathscr{X}$. From the above computation, we see that $\mathscr{L}\sim_{f}\mathcal{O}_{\mathscr{X}}(l(2-4c))$, which implies that $\mathscr{L}$ is $f$-ample.
Denote by $\mathscr{L}_{U}:=\mathscr{L}|_{\mathscr{X}_U}$.
Since both $\mathscr{X}$ and $\mathbb{P}(E)$ are smooth projective varieties, using Grothendieck-Riemann-Roch theorem, for $q\gg 1$ we have that
\begin{align*}
\mathrm{c}_1(f_*(\mathscr{L}^{\otimes q}))& =\frac{q^{3}}{6}f_*(\mathscr{L}^3)-\frac{q^2}{4}f_*(K_{\mathscr{X}/\mathbb{P}(E)}\cdot\mathscr{L}^2)+O(q),\\
\mathrm{c}_1(f_*(\mathscr{L}^{\otimes q}\otimes\mathcal{O}_{\mathscr{X}}(-\mathscr{D})))& =\frac{q^{3}}{6}f_*(\mathscr{L}^3)-\frac{q^2}{2}f_*(\mathscr{D}\cdot\mathscr{L}^2)-\frac{q^2}{4}f_*(K_{\mathscr{X}/\mathbb{P}(E)}\cdot\mathscr{L}^2)+O(q).
\end{align*}
Thus $\mathrm{c}_1((f|_\mathscr{D})_*(\mathscr{L}|_{\mathscr{D}}^{\otimes q}))=\frac{q^2}{2}f_*(\mathscr{D}\cdot\mathscr{L}^2)+O(q)$.
Since CM line bundles are functorial, by similar arguments to \cite[Proposition 2.23]{ADL} we have that
\[
\mathrm{c}_1(\lambda_{\mathrm{CM}, f_U, c\mathscr{D}_U, \mathscr{L}_U})= -l^2 f_*((-K_{\mathscr{X}/\mathbb{P}(E)}-c\mathscr{D})^3)|_U.
\]
This implies \eqref{eq:CM-U} by Proposition \ref{prop:CM-Z}.
\end{proof}
\begin{prop}\label{prop:proportional} The CM $\mathbb{Q}$-line bundle $\lambda_{U,c}$ and the VGIT polarization $N_t$ are proportional up to a positive constant when restricted to $U$ where $t=t(c):=\frac{3c}{2c+2}$.
\end{prop}
\begin{proof}
By Proposition \ref{prop:CM-U}, we see that $\lambda_{U,c}$ is a positive multiple of $\eta_U + \frac{3c}{2c+2}\xi_U$.
By Proposition \ref{prop:Linfinity},
\begin{align*}
N_t |_U &= \frac{1 - 2t}{1-2\delta} p_1^*(\eta + \delta \xi) |_U + \frac{t - \delta}{2(1-2\delta)} p_2^*L_{\infty}|_U \\
&= \frac{1 - 2t}{1-2\delta} (\eta_U + \delta \xi_U) + \frac{t - \delta}{2(1-2\delta)} (4 \eta_U + 2 \xi_U ) \\
&= \eta_U + t \xi_U.
\end{align*}
Hence for $t = \frac{3c}{2c+2}$, we see that $\lambda_{U,c}$ is a positive multiple of $N_t |_U$. \end{proof}
\subsection{K-moduli wall crossings and VGIT}
In this section we will prove Theorem \ref{mthm:thmintro}(2) by an inductive argument on walls.
\begin{theorem}[=Theorem \ref{mthm:thmintro}(2)]\label{thm:wallscoincide}
Let $c \in (0, \frac{1}{2})$ be a rational number. Then there is an isomorphism between Artin stacks $\overline{\mathcal{K}}_c\cong \mathscr{M}(t(c))$ with $t(c)=\frac{3c}{2c+2}$. Moreover, such isomorphisms commute with wall crossing morphisms.
\end{theorem}
We first set up some notation.
Recall that the open subset $U\subset \mathbb{P}(E)$ is defined to be the locus parametrizing $(X, D)$ where $X$ is a quadric surface in $\mathbb{P}^3$ and $D$ is the complete intersection of $X$ with some quartic surface in $\mathbb{P}^3$.
Let $U_c^{\mathrm{K}}$ denote the open subset of $U$ parametrizing $c$-K-semistable log Fano pairs. Let $U_c^{\mathrm{GIT}}:=\mathscr{P}^{\rm ss}(N_{t})$ denote the VGIT semistable locus in $\mathscr{P}$ with slope $t=t(c)=\frac{3c}{2c+2}$ which is also contained in $U$ by Lemma \ref{lem:GITssU}.
We say a point $[(X,D)]\in U$ is $c$-GIT (poly/semi)stable if it is GIT (poly/semi)stable in $\mathscr{P}$ with slope $t(c)$.
By Theorem \ref{thm:generalwall}, we know that there are finitely many walls in $(0,\frac{1}{2})$ for K-moduli stacks $\overline{\mathcal{K}}_c$. Denote the sequence of VGIT walls and K-moduli walls by
\[
0=w_0<w_1<w_2<\cdots<w_{\ell}=\frac{1}{2},
\]
i.e. either $c=w_i$ is a wall for K-moduli stacks $\overline{\mathcal{K}}_c$, or $t=t(w_i)$ is a wall for VGIT moduli stacks $\mathscr{M}(t)$.
The following proposition allows us to replace K-moduli stacks $\overline{\mathcal{K}}_c$ by a quotient stack of $U_c^{\mathrm{K}}$. An essential ingredient is Theorem \ref{thm:surfaces}.
\begin{prop}\label{prop:K-stackinU}
There is an isomorphism of stacks $[U_c^{\mathrm{K}}/\mathrm{PGL}(4)]\xrightarrow{\cong} \overline{\mathcal{K}}_c$. Moreover, we have open immersions $
U_{c-\epsilon}^{\mathrm{K}}\hookrightarrow U_{c}^{\mathrm{K}}\hookleftarrow U_{c+\epsilon}^{\mathrm{K}}$
which descend (via the above isomorphisms) to the wall-crossing morphisms $\overline{\mathcal{K}}_{c-\epsilon}\hookrightarrow \overline{\mathcal{K}}_{c}\hookleftarrow\overline{\mathcal{K}}_{c+\epsilon}$.
\end{prop}
\begin{proof}
Since $U_c^{\mathrm{K}}$ parametrizes $c$-K-semistable log Fano pairs, by universality of K-moduli stacks we know that there exists a morphism $\psi: [U_c^{\mathrm{K}}/\mathrm{PGL}(4)]\to \overline{\mathcal{K}}_c$. In order to show $\psi$ is an isomorphism, we will construct the inverse morphism $\psi^{-1}:\overline{\mathcal{K}}_c\to [U_c^{\mathrm{K}}/\mathrm{PGL}(4)]$. We follow notation from Theorem \ref{thm:modnormal}. Let $T\subset Z_{c}^{\mathrm{red}}$ be the connected component where a general point parametrizes $\mathbb{P}^1\times\mathbb{P}^1$. By Definition \ref{defn:modulispace} we know that $\overline{\mathcal{K}}_c\cong [T/\mathrm{PGL}(N_m+1)]$. Let $T'=\mathrm{pr}_1(T)\subset \mathrm{Hilb}_{\chi}(\mathbb{P}^{N_m})$. By Theorems \ref{thm:modnormal} and \ref{thm:surfaces} we know that $T'$ is smooth and contains a (possibly empty) smooth divisor $H'$ parametrizing $\mathbb{P}(1,1,2)$. Moreover, both $T'\setminus H'$ and $H'$ are $\mathrm{PGL}(N_m+1)$-orbits in $\mathrm{Hilb}_{\chi}(\mathbb{P}^{N_m})$.
In order to construct $\psi^{-1}$, we will first construct a $\mathrm{PGL}(4)$-torsor $\mathcal{P}'/T'$. The argument here is similar to \cite[Proof of Theorem 5.15]{ADL}. Let $\pi:(\mathcal X,\mathcal D)\to T$ and $\pi':\mathcal X'\to T'$ be the universal families. Since $\pi'$ is an isotrivial $\mathbb{P}^1\times\mathbb{P}^1$-fibration over $T'\setminus H'$, there exists a flat quasi-finite morphism $\widetilde{T}\to T'$ from a smooth variety $\widetilde{T}$ that is \'etale away from $H'$ and whose image intersects $H'$ (unless $H'$ is empty). From the fact that $T'\setminus H'$ and $H'$ are $\mathrm{PGL}(N_m+1)$-orbits, we know that there exist $T_i'=g_i\cdot \widetilde{T}$ with $g_i\in\mathrm{PGL}(N_m+1)$ such that $\sqcup_i T_i'\to T'$ is an fppf covering. Moreover, we may assume that $\pi'\times_{T'} (T_i'\setminus H_i'):\mathcal X'_{T_i'\setminus H_i'}\to T_i'\setminus H_i'$ is a trivial $\mathbb{P}^1\times\mathbb{P}^1$-bundle for each $i$, where $H_i'=H'\times_{T'} T_i'$. Let $\mathcal L_i'$ be the Weil divisorial sheaf on $\mathcal X'_{T_i'}$ given by the Zariski closure of $\mathcal{O}(1,1)$ on $\mathcal X'_{T_i'\setminus H_i'}$. After replacing $T_i'$ by a Zariski covering, we may assume that $\mathcal L_i'^{[-2]}\cong \omega_{\mathcal X'_{T_i'}/T_i'}$. By Kawamata-Viehweg vanishing, we know that $(\pi'_{T_i'})_*\mathcal L_i'$ is a rank $4$ vector bundle over $T_i'$. Let $\mathcal{P}_i'/T_i'$ be the $\mathrm{PGL}(4)$-torsor induced by projectivized bases of $(\pi'_{T_i'})_*\mathcal L_i'$. Since the cocycle condition for $\{(\pi'_{T_i'})_*\mathcal L_i'/T_i'\}_i$ holds up to a sign $\pm 1$, we know that $\{\mathcal{P}_i'/T_i'\}$ is an fppf descent datum which descends to a $\mathrm{PGL}(4)$-torsor $\mathcal{P}'/T'$ by \cite[Tag 04U1]{stacksproject}. It is clear that $\mathcal{P}'/T'$ is $\mathrm{PGL}(N_m+1)$-equivariant. Denote by $\mathcal{P}:=\mathcal{P}'\times_{T'} T$. Hence the morphism $\mathcal{P}\to U_c^{\mathrm{K}}$ given by $(t,[s_0,s_1,s_2,s_3])\mapsto [s_0,s_1,s_2,s_3](\mathcal X_t,\mathcal D_t)$ induces $\psi^{-1}:\overline{\mathcal{K}}_c\to [U_c^{\mathrm{K}}/\mathrm{PGL}(4)]$. The proof is finished.
\end{proof}
In order to prove Theorem \ref{thm:wallscoincide}, we run an inductive argument on the walls $w_i$. The following proposition is an initial step for induction.
\begin{prop}\label{prop:induction0}
For any $c\in (0, w_1)$, we have $U_c^{\mathrm{K}}=U_c^{\mathrm{GIT}}$.
\end{prop}
\begin{proof}
Since both $U_c^{\mathrm{K}}$ and $U_c^{\mathrm{GIT}}$ are independent of the choice of $c\in (0, w_1)$, it suffices to show that they are equal for $0< c\ll 1$.
By Theorem \ref{thm:LOmain}(2), we know that $[(X,D)]\in U_c^{\mathrm{GIT}}$ if and only if $X\cong \mathbb{P}^1\times\mathbb{P}^1$ and $D$ is a GIT semistable $(4,4)$-curve.
By Theorem \ref{thm:firstwall} and Proposition \ref{prop:K-stackinU}, we know that $U_c^{\mathrm{K}}$ consists of exactly the same points as $U_c^{\mathrm{GIT}}$. Hence the proof is finished.
\end{proof}
Next, we divide each induction step into two statements as Propositions \ref{prop:induction1} and \ref{prop:induction2}.
\begin{prop}\label{prop:induction1}
Assume that for any $c\in (0, w_i)$ we have $U_c^{\mathrm{K}}=U_c^{\mathrm{GIT}}$. Then $U_{w_i}^{\mathrm{K}}=U_{w_i}^{\mathrm{GIT}}$.
\end{prop}
\begin{proof}
For simplicity, denote by $w:=w_i$. We first show that $U_w^{\mathrm{K}}\subset U_w^{\mathrm{GIT}}$. Let $[(X,D)]$ be a point in $U_w^{\mathrm{K}}$. By Proposition \ref{prop:K-stackinU}, we know that $[U_w^{\mathrm{K}}/\mathrm{PGL}(4)]\cong \overline{\mathcal{K}}_w$.
By Theorem \ref{thm:generalwall}, the K-moduli wall crossing morphism $\overline{K}_{w-\epsilon}\to \overline{K}_w$ is surjective which is induced by the open immersion $U_{w-\epsilon}^{\mathrm{K}}\hookrightarrow U_w^{\mathrm{K}}$. Hence there exists a $w$-K-polystable point $[(X_0, D_0)]\in U_w^{\mathrm{K}}$, a $(w-\epsilon)$-K-semistable point $[(X',D')]\in U_{w-\epsilon}^{\mathrm{K}}$, and two $1$-PS's $\sigma$ and $\sigma'$ of $\mathrm{SL}(4)$, such that
\begin{equation}\label{eq:induction1}
\lim_{t\to 0 }\sigma(t)\cdot [(X,D)]= [(X_0, D_0)],\qquad
\lim_{t\to 0 }\sigma'(t)\cdot [(X',D')]= [(X_0, D_0)].
\end{equation}
In other words $(X_0,D_0)$ is the $w$-K-polystable degeneration of $(X,D)$, while the existence of $(X',D')$ follows from surjectivity of $\overline{K}_{w-\epsilon}\to \overline{K}_w$.
Denote the above two special test configurations by $(\mathcal X, w\mathcal D)$ and $(\mathcal X',w\mathcal D')$ respectively. Since $(X_0, wD_0)$ is K-polystable, we know that $\mathrm{Fut}(\mathcal X',w\mathcal D')=0$. Since the generalized Futaki invariant is proportional to the GIT weight of the CM $\mathbb{Q}$-line bundle $\lambda_{U,w}$, which is in turn proportional to $N_{t(w)}|_U$ by Proposition \ref{prop:proportional}, we have that the GIT weight $\mu^{N_{t(w)}}([(X',D')], \sigma')=0$. By assumption, we have $[(X',D')]\in U_{w-\epsilon}^{\mathrm{K}}= U_{w-\epsilon}^{\mathrm{GIT}}\subset U_{w}^{\mathrm{GIT}}$. Hence Lemma \ref{lem:zerofut}(1) implies that $[(X_0,D_0)]\in U_{w}^{\mathrm{GIT}}$, which implies $[(X,D)]\in U_w^{\mathrm{GIT}}$ by openness of the GIT semistable locus. Thus we have shown that $U_w^{\mathrm{K}}\subset U_w^{\mathrm{GIT}}$.
Next we show the reverse containment $U_w^{\mathrm{GIT}}\subset U_w^{\mathrm{K}}$. Let $[(X,D)]$ be a point in $U_w^{\mathrm{GIT}}$. By almost the same argument as the previous paragraph except replacing K-stability with GIT stability, we can find $[(X_0,D_0)]\in U_w^{\mathrm{GIT}}$, $[(X',D')]\in U_{w-\epsilon}^{\mathrm{GIT}}$, and two $1$-PS's $\sigma,\sigma'$ of $\mathrm{SL}(4)$ such that \eqref{eq:induction1} holds, and
\[
\mu^{N_{t(w)}}([(X,D)], \sigma)=\mu^{N_{t(w)}}([(X',D')], \sigma')=0.
\]
Note that the surjectivity of wall-crossing morphisms in VGIT follows from \cite{LO} (see Theorem \ref{thm:LOwallcrossings}).
By assumption we have $[(X',D')]\in U_{w-\epsilon}^{\mathrm{GIT}}=U_{w-\epsilon}^{\mathrm{K}}\subset U_w^{\mathrm{K}}$.
Again using Proposition \ref{prop:proportional} we get $\mathrm{Fut}(\mathcal X',w\mathcal D';\mathcal L)=0$ where $(\mathcal X',w\mathcal D';\mathcal L)$ is the test configuration of $(X',wD',\mathcal{O}_{X'}(1))$ induced by $\sigma'$. Since $(X',wD')$ is K-semistable, by \cite[Section 8.2]{LX14} we know that $\mathcal X'$ is regular in codimension $1$. Since $\mathcal X_0'=X_0$ is Cohen-Macaulay, we know that $\mathcal X'$ is $S_2$ which implies that $\mathcal X'$ is normal. Hence Lemma \ref{lem:zerofut}(2) implies that $(X_0, wD_0)$ is K-semistable, and so is $(X,wD)$ by the openness of K-semistability \cite{BLX19, Xu19}. The proof is finished.
\end{proof}
\begin{prop}\label{prop:induction2}
Assume that for any $c\in (0, w_i]$ we have $U_c^{\mathrm{K}}=U_c^{\mathrm{GIT}}$. Then $U_{c'}^{\mathrm{K}}=U_{c'}^{\mathrm{GIT}}$ for any $c'\in (w_i, w_{i+1})$.
\end{prop}
\begin{proof}
For simplicity, denote by $w:=w_i$.
Since the K-semistable locus $U_{c'}^{\mathrm{K}}$ and the GIT semistable locus $U_{c'}^{\mathrm{GIT}}$ are independent of the choice of $c'\in (w_i,w_{i+1})$, it suffices to show that $U_{w+\epsilon}^{\mathrm{K}} = U_{w + \epsilon}^{\mathrm{GIT}}$.
We first show $U_{w+\epsilon}^{\mathrm{K}} \subset U_{w + \epsilon}^{\mathrm{GIT}}$. Assume to the contrary that $[(X,D)]\in U_{w+\epsilon}^{\mathrm{K}}\setminus U_{w+\epsilon}^{\mathrm{GIT}}$.
We note that by Proposition \ref{prop:K-stackinU} and Lemma \ref{lem:VGITbasics} there are open immersions $U_{w+\epsilon}^\mathrm{K} \hookrightarrow U_w^{\mathrm{K}}$ and $U_{w+\epsilon}^{\mathrm{GIT}} \hookrightarrow U_w^{\mathrm{GIT}}$.
By assumption we have $[(X,D)]\in U_{w+\epsilon}^\mathrm{K}\subset U_w^{\mathrm{K}}=U_w^{\mathrm{GIT}}$, hence $[(X,D)]$ is $w$-GIT semistable but $(w+\epsilon)$-GIT unstable. Thus by Lemma \ref{lem:VGITbasics}
there exists a 1-PS $\sigma: \mathbb{G}_m \to \mathrm{SL}(4)$ such that
\begin{equation}\label{eq:induction2}
\mu^{N_{t(w)}}([(X,D)], \sigma)=0, \qquad \mu^{N_{t(w+\epsilon)}}([(X,D)], \sigma)<0.
\end{equation}
Denote by $\zeta_0:=\lim_{t\to 0}\sigma(t)\cdot [(X,D)] \in \mathscr{P}$. Since $[(X,D)]$ is $w$-GIT semistable, by Lemma \ref{lem:zerofut}(1) and \eqref{eq:induction2} we know that $\zeta_0$ is also $w$-GIT semistable, in particular $\zeta_0=[(X_0,D_0)]\in U$. Denote by $(\mathcal X,w\mathcal D;\mathcal L)/\mathbb{A}^1$ the test configuration of $(X,wD;\mathcal{O}_X(1))$ induced by $\sigma$. Hence by \eqref{eq:induction2} and Proposition \ref{prop:proportional}, we have $\mathrm{Fut}(\mathcal X,(w+\epsilon)\mathcal D)<0$. This implies that $(X,(w+\epsilon)D)$ is K-unstable which contradicts the assumption that $[(X,D)]\in U_{w+\epsilon}^{\mathrm{K}}$. Thus we conclude that $U_{w+\epsilon}^{\mathrm{K}}\subset U_{w+\epsilon}^{\mathrm{GIT}}$.
Next, if $[(X,D)] \in U_{w+\epsilon}^\mathrm{K}$ is $(w+\epsilon)$-K-polystable, then we claim that $[(X,D)]$ is $(w+\epsilon)$-GIT polystable. We have already shown that $[(X,D)]$ is $(w+\epsilon)$-GIT semistable. Let us take a 1-PS $\sigma'$ of $\mathrm{SL}(4)$ degenerating $[(X,D)]$ to a $(w+\epsilon)$-GIT polystable point $[(X',D')]$. Hence we have $\mu^{N_{t(w+\epsilon)}}([(X,D)],\sigma')=0$.
By Proposition \ref{prop:proportional}, we have $\mathrm{Fut}(\mathcal X',(w+\epsilon)\mathcal D';\mathcal L')=0$ where $(\mathcal X',(w+\epsilon)\mathcal D';\mathcal L')$ is the test configuration of $(X,(w+\epsilon)D;\mathcal{O}_X(1))$ induced by $\sigma'$. Since $[(X',D')]\in U_{w+\epsilon}^{\mathrm{GIT}}\subset U_w^{\mathrm{GIT}}=U_w^{\mathrm{K}}$ by assumption, we know that $(X',wD')$ is K-semistable hence klt. Thus $(\mathcal X',(w+\epsilon)\mathcal D')$ is a special test configuration with vanishing generalized Futaki invariant. Since $(X,(w+\epsilon)D)$ is K-polystable, we know that $(X,D)\cong (X',D')$ which implies that $[(X,D)]$ and $[(X',D')]$ belong to the same $\mathrm{SL}(4)$-orbit in $U$. Hence $[(X,D)]$ is $(w+\epsilon)$-GIT polystable.
Finally we show that $U_{w+\epsilon}^{\mathrm{K}}= U_{w+\epsilon}^{\mathrm{GIT}}$. Consider the following commutative diagram
\begin{center}
\begin{tikzcd}
U_{w+\epsilon}^{\mathrm{K}} \arrow[d, hook, "f"] \arrow [r] & {[U_{w+\epsilon}^{\mathrm{K}}/\mathrm{PGL}(4)]} \arrow[d, hook, "g"]\arrow [r] & U_{w+\epsilon}^{\mathrm{K}}\mathbin{/\mkern-6mu/} \mathrm{PGL}(4) \arrow[d, "h"]\\
U_{w+\epsilon}^{\mathrm{GIT}} \arrow [r] & {[U_{w+\epsilon}^{\mathrm{GIT}}/\mathrm{PGL}(4)]} \arrow[r]& U_{w+\epsilon}^{\mathrm{GIT}}\mathbin{/\mkern-6mu/} \mathrm{PGL}(4)
\end{tikzcd}
\end{center}
Since $f$ is an open immersion between smooth varieties, its descent $g$ is separated and representable. By Proposition \ref{prop:K-stackinU} we know $[U_{w+\epsilon}^{\mathrm{K}}/\mathrm{PGL}(4)]\cong \overline{\mathcal{K}}_{w+\epsilon}$, hence $g$ maps closed points to closed points as shown in the previous paragraph, and $h$ is quasi-finite. Since the GIT quotients in the third column are isomorphic to the K-moduli space $\overline{K}_{w+\epsilon}$ and the VGIT moduli space $\mathfrak{M}(t(w+\epsilon))$ respectively, they are both proper. Thus $h$ is a finite morphism. Then we apply \cite[Proposition 6.4]{alper} to conclude that $g$ is a finite morphism as well. In particular, this implies that $f$ is finite, hence surjective. The proof is finished.
\end{proof}
\begin{proof} [Proof of Theorem \ref{thm:wallscoincide}]
By Propositions \ref{prop:induction0}, \ref{prop:induction1}, and \ref{prop:induction2} on induction of the walls $\{w_i\}_{i=0}^{\ell}$, we conclude that $U_c^{\mathrm{K}}=U_c^{\mathrm{GIT}}$ for any $c\in (0,\frac{1}{2})$. Hence the theorem follows from Proposition \ref{prop:K-stackinU} and the definition $\mathscr{M}(t(c))=[U_c^{\mathrm{GIT}}/\mathrm{PGL}(4)]$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mthm:thmintro}]
Part (1) follows from Theorem \ref{thm:firstwall}. Part (2) is precisely Theorem \ref{thm:wallscoincide}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mthm:spaceiso}]
The first isomorphism follows from Theorem \ref{mthm:thmintro}. The second isomorphism follows from Theorem \ref{thm:LOwallcrossings}. For the proportionality statements, the first one between CM $\mathbb{Q}$-line bundle and VGIT polarization follows from Proposition \ref{prop:proportional}, while the second one between VGIT polarization and push forward of $\lambda+\beta\Delta$ follows from \cite[Proposition 7.6]{LO}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mthm:slcK3}]
Since there are finitely many K-moduli (resp. GIT) walls for $c\in (0,\frac{1}{2})$ (resp. $t\in (0,\frac{1}{2})$), we may assume that $\epsilon$ and $\epsilon'$ satisfy the relation $\epsilon=\frac{3\epsilon'}{2\epsilon'+2}$, i.e. $\frac{1}{2}-\epsilon' = t(\frac{1}{2}-\epsilon)$.
By Theorem \ref{mthm:thmintro}, we have $\mathfrak{M}(\frac{1}{2}-\epsilon')\cong \overline{K}_{\frac{1}{2}-\epsilon}$. The isomorphism $\mathfrak{M}(\frac{1}{2}-\epsilon')\cong\widehat{\mathscr{F}}$ follows from \cite[Theorem 1.1]{LO}.
For part (1),
from the above isomorphisms we know that $\mathfrak{M}(\frac{1}{2}-\epsilon')$ parametrizes K-polystable klt log Fano pairs $(X,(\frac{1}{2}-\epsilon')D)$. By ACC of log canonical thresholds \cite{HMX14}, we know that $(X,\frac{1}{2}D)$ is log canonical. Hence taking double cover of $X$ branched along $D$ we obtain a hyperelliptic K3 surface $S$ with only slc singularities. The proof is finished.
For part (2), notice that by taking fiberwise double covers of the universal log Fano family over $\overline{\mathcal{K}}_{\frac{1}{2}-\epsilon}$, we obtain a universal family of slc K3 surfaces $\mathcal{S}\to \mathcal T$ where $\mathcal T\to \overline{\mathcal{K}}_{\frac{1}{2}-\epsilon}$ is a $\bm{\mu}_2$-gerbe. In particular, the Hodge line bundle $\lambda_{\mathrm{Hodge},\mathcal T}$ of the K3 family $\mathcal{S}/\mathcal T$ is the pull-back of the Hodge line bundle $\lambda_{\mathrm{Hodge}, \frac{1}{2}-\epsilon}$ over $\overline{\mathcal{K}}_{\frac{1}{2}-\epsilon}$. Taking good moduli spaces of $\mathcal T\to T$ and $\overline{\mathcal{K}}_{\frac{1}{2}-\epsilon}\to \overline{K}_{\frac{1}{2}-\epsilon}$ gives an isomorphism $T\xrightarrow{\cong}\overline{K}_{\frac{1}{2}-\epsilon}$. Since both spaces are isomorphic to $\widehat{\mathscr{F}}$, we know that $\mathscr{F}$ admits an open immersion into $T$ whose complement has codimension at least $2$. In particular, we know that $\lambda_{\mathrm{Hodge}, T}|_{\mathscr{F}}=\lambda_{\mathrm{Hodge}, \mathscr{F}}$, and the conclusion follows from $\mathscr{F}^*=\mathrm{Proj} R(\mathscr{F}, \lambda_{\mathrm{Hodge}, \mathscr{F}})$.
\end{proof}
\begin{rem}\label{rem:walls-value}
According to \cite{LO}, the $t$-walls for VGIT quotients $\mathfrak{M}(t)$ and $\beta$-walls for the Hassett-Keel-Looijenga program for $\mathscr{F}(\beta)=\mathrm{Proj} R(\mathscr{F},\lambda+\beta\Delta)$ with $N=18$ (under the transformation rule $t=\frac{1}{4\beta+2}$) are given by
\[
t\in \left\{\frac{1}{6}, \frac{1}{4}, \frac{3}{10}, \frac{1}{3}, \frac{5}{14}, \frac{3}{8}, \frac{2}{5}, \frac{1}{2} \right\},\qquad
\beta\in \left\{1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \frac{1}{6}, \frac{1}{8}, 0 \right\}.
\]
By the transformation rule $t=\frac{3c}{2c+2}$, we obtain the $c$-walls for K-moduli stacks $\overline{\mathcal{K}}_c$ are
\[
c\in \left\{\frac{1}{8}, \frac{1}{5}, \frac{1}{4}, \frac{2}{7}, \frac{5}{16}, \frac{1}{3}, \frac{4}{11}, \frac{1}{2} \right\}.
\]
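Equivalently, $c=\frac{2t}{3-2t}$; for instance, the first VGIT wall $t=\frac{1}{6}$ gives $c=\frac{2\cdot\frac{1}{6}}{3-\frac{1}{3}}=\frac{1}{8}$, recovering the first K-moduli wall from Proposition \ref{prop:firstwallreplace}.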
Note that $c=\frac{1}{2}$ corresponds to the log Calabi-Yau wall crossing $\overline{K}_{\frac{1}{2}-\epsilon}\to \mathscr{F}^*$, while the remaining walls lie in the log Fano region.
\end{rem}
\begin{rem}\label{rem:walls-detail}(cf. \cite[Section 6]{LO}) Let $i\in \{1,2,\cdots, 7\}$ be an index. For the $i$-th K-moduli wall $c_i$, we have K-moduli wall crossing morphisms
\[
\overline{K}_{c_i-\epsilon}\xrightarrow{\phi_i^{-}}\overline{K}_{c_i}\xleftarrow{\phi_i^{+}}\overline{K}_{c_i+\epsilon}.
\]
Denote by $\Sigma_i^{\pm}$ the closed subset of $\overline{K}_{c_i\pm\epsilon}$ parametrizing pairs that are $(c_i\pm \epsilon)$-K-polystable but not $c_i$-K-polystable. As observed in \cite[Section 6]{LO}, we know that a general point $[(X,D)]$ in $\Sigma_i^{-}$ (resp. $\Sigma_i^{+}$) parametrizes a curve $D$ on $X\cong\mathbb{P}^1\times\mathbb{P}^1$ (resp. $X\cong\mathbb{P}(1,1,2)$).
In Table \ref{table:singularities}, we rephrase results from \cite{LO}, especially \cite[Table 2]{LO}, to describe the generic singularities (in local analytic form) appearing on the curves $D$.
Note that a general curve $D$ in $\Sigma_i^+$ is smooth when $i=1$, and singular only at the cone vertex $v=[0,0,1]$ of $\mathbb{P}(1,1,2)$ when $2\leq i\leq 7$.
\begin{table}[htbp!]\renewcommand{\arraystretch}{1.5}
\caption{Singularities along the K-moduli walls}\label{table:singularities}
\begin{tabular}{|c|c|l|l|}
\hline
$i$ & $c_i$ & \textbf{Sing. of $D$ in $\Sigma_i^-$} & \textbf{Sing. of $D$ in $\Sigma_i^+$}\\ \hline \hline
1 & $\frac{1}{8}$ & quadruple conic & $v\not\in D$ \\
2 & $\frac{1}{5}$ & triple conic + transverse conic & $A_1$ \\
3 & $\frac{1}{4}$ & $J_{4,\infty}: ~x^3+x^2y^4=0$ & $A_2$ \\
4 & $\frac{2}{7}$ & $J_{3,0}:~ x^3 + b_1 x^2y^3 + y^9 + b_2 xy^7=0$ & $A_3$ \\
5 & $\frac{5}{16}$ & $E_{14}:~ x^3 + y^8 + axy^6 = 0$ & $A_4$ \\
6 & $\frac{1}{3}$ & $E_{13}:~ x^3 + xy^5 + ay^8 = 0$ & $A_5$ \\
7 & $\frac{4}{11}$ & $E_{12}:~ x^3 + y^7 + axy^5 = 0$ & $A_7$ \\ \hline
\end{tabular}
\end{table}
\end{rem}
\section{Some results for $(d,d)$ curves}\label{sec:generaldegree}
In this section we discuss some generalizations of our results to $(d,d)$-curves on $\mathbb{P}^1\times\mathbb{P}^1$ including the proof of Theorem \ref{mthm:alldeg}. We assume $d\geq 3$ throughout this section.
\subsection{VGIT for $(2,d)$ complete intersections in $\mathbb{P}^3$}
Let $\mathbf{P}_{(d,d)}:=\mathbb{P}(H^0(\mathbb{P}^1\times\mathbb{P}^1,\mathcal{O}(d,d)))$. We say a $(d,d)$-curve $C$ on $\mathbb{P}^1\times\mathbb{P}^1$ is \emph{GIT (poly/semi)stable} if $[C]$ is GIT (poly/semi)stable with respect to the natural $\mathrm{Aut}(\mathbb{P}^1\times\mathbb{P}^1)$-action on $(\mathbf{P}_{(d,d)},\mathcal{O}(2))$. We define the GIT moduli stack $\mathscr{M}_d$ and the GIT moduli space $\mathfrak{M}_d$ of degree $(d,d)$ curves as
\[
\mathscr{M}_d:= [\mathbf{P}_{(d,d)}^{\rm ss}/\mathrm{Aut}(\mathbb{P}^1\times\mathbb{P}^1)],\qquad
\mathfrak{M}_d:=\mathbf{P}_{(d,d)}^{\rm ss}\mathbin{/\mkern-6mu/}\mathrm{Aut}(\mathbb{P}^1\times\mathbb{P}^1).
\]
Next, we describe the VGIT of $(2,d)$ complete intersection curves in $\mathbb{P}^3$ based on \cite{benoist, CMJL14, LO}. Our set-up is a direct generalization of Section \ref{sec:LOG-VGIT}. Let
\[
\pi:\mathbb{P}(E_d)\to \mathbb{P}(H^0(\mathbb{P}^3,\mathcal{O}(2)))=\mathbb{P}^9
\]
be the projective space bundle with fiber $\mathbb{P}(H^0(Q,\mathcal{O}_Q(d)))$ over a quadric surface $[Q]\in\mathbb{P}^9$. Let $f: (\mathscr{X},\mathscr{D})\to \mathbb{P}(E_d)$ be the universal family of quadric surfaces with $(2,d)$ intersections over $\mathbb{P}(E_d)$. Denote by $\eta:=\pi^*\mathcal{O}_{\mathbb{P}^9}(1)$ and $\xi:=\mathcal{O}_{\mathbb{P}(E_d)}(1)$. Then we have the following result of Benoist, whose special case $d=4$ is stated in Proposition \ref{prop:benoist2,4}.
\begin{prop}\label{prop:benoist-alldeg}\cite[Theorem 2.7]{benoist}
If $t \in \mathbb{Q}$, then the $\mathbb{Q}$-Cartier class $\overline{N}_t:=\eta + t\xi$ on $\mathbb{P}(E_d)$ is ample if and only if $t \in (0, \frac{1}{d-1}) \cap \mathbb{Q}$.
\end{prop}
Let $U_{(2,d)}\subset\mathbb{P}(E_d)$ be the open locus of complete intersections. Then we know that $\operatorname{codim}_{\mathbb{P}(E_d)}\big(\mathbb{P}(E_d)\setminus U_{(2,d)}\big)\geq 2$. There is a birational morphism $\mathrm{chow}: U_{(2,d)}\to \operatorname{Chow}_{(2,d)}$ obtained as a restriction of the Hilbert-Chow morphism. Hence the graph of $\mathrm{chow}$ gives a locally closed embedding
\[
U_{(2,d)}\hookrightarrow \mathbb{P}(E_d)\times \operatorname{Chow}_{(2,d)}.
\]
Denote by $\mathscr{P}_d$ the closure of $U_{(2,d)}$ in $\mathbb{P}(E_d)\times \operatorname{Chow}_{(2,d)}$. Let $p_1$ and $p_2$ be the first and second projections from $\mathscr{P}_d$ to $\mathbb{P}(E_d)$ and $\operatorname{Chow}_{(2,d)}$, respectively. The action of $\mathrm{SL}(4)$ on $\mathbb{P}^3$ extends naturally to actions on $U_{(2,d)}$, $\mathbb{P}(E_d)$, $\operatorname{Chow}_{(2,d)}$, and $\mathscr{P}_d$. Similar to Section \ref{sec:LOG-VGIT}, we will specify a family of $\mathrm{SL}(4)$-linearized ample $\mathbb{Q}$-line bundles on $\mathscr{P}_d$.
Fix a rational number $0 < \delta < \frac{2}{3d}$. For $t \in (\delta, \frac{2}{d}] \cap \mathbb{Q}$, consider the $\mathbb{Q}$-line bundle
\[N_t := \frac{2 - dt}{2-d\delta} p_1^*(\eta + \delta \xi) + \frac{t - \delta}{2-d\delta} p_2^*L_{\infty} ,\]
where $L_{\infty}$ is the restriction of the natural polarization of the Chow variety to $\operatorname{Chow}_{(2,d)}$. Since $\frac{2}{3d}<\frac{1}{d-1}$, Proposition \ref{prop:benoist-alldeg} implies that $\eta+\delta\xi$ is ample on $\mathbb{P}(E_d)$. It is clear that $L_\infty$ is ample on $\operatorname{Chow}_{(2,d)}$. Hence $N_t$ is ample for $\delta<t<\frac{2}{d}$ and semiample for $t=\frac{2}{d}$.
\begin{definition}\label{def:VGIT-alldeg}
Let $\delta \in \mathbb{Q}$ satisfy $0 < \delta < \frac{2}{3d}$. For each $t \in (\delta, \frac{2}{d}) \cap \mathbb{Q}$, we define the VGIT quotient stack $\mathscr{M}_d(t)$ and the VGIT quotient space $\mathfrak{M}_d(t)$ of slope $t$ to be
\[ \mathscr{M}_d(t) := [\mathscr{P}_d^{\rm ss}(N_t)/\mathrm{PGL}(4)], \quad \mathfrak{M}_d(t):=\mathscr{P}_d\mathbin{/\mkern-6mu/}_{N_t} \mathrm{SL}(4).\]
\end{definition}
The above definition a priori depends on the choice of $\delta\in (0,\frac{2}{3d})$. Nevertheless, similar to \cite{LO} we will show in Theorem \ref{thm:LOmain-alldeg}(1) that both $\mathscr{M}_d(t)$ and $\mathfrak{M}_d(t)$ do not depend on the choice of $\delta$, hence are well-defined for all $t\in (0,\frac{2}{d})$. Before stating the main VGIT result Theorem \ref{thm:LOmain-alldeg}, we need some preparation.
\begin{lem}\label{lem:proportional-alldeg}
With notation as above, we have $N_t|_{U_{(2,d)}}= \overline{N}_t|_{U_{(2,d)}}$ for any $t\in (\delta,\frac{2}{d}]\cap \mathbb{Q}$.
\end{lem}
\begin{proof}
Denote by $\overline{L}_\infty$ the unique extension of $L_\infty|_{U_{(2,d)}}$ to $\mathbb{P}(E_d)$. By the same argument as \cite[Proposition 5.4]{LO}, we get that $\overline{L}_\infty=d\eta+2\xi$. Hence we have
\begin{align*}
N_t|_{U_{(2,d)}}& =\frac{2 - dt}{2-d\delta} (\eta + \delta \xi)|_{U_{(2,d)}} + \frac{t - \delta}{2-d\delta} \overline{L}_{\infty}|_{U_{(2,d)}}\\
& = \frac{2 - dt}{2-d\delta} (\eta + \delta \xi)|_{U_{(2,d)}} + \frac{t - \delta}{2-d\delta} (d\eta+2\xi)|_{U_{(2,d)}} = (\eta+t\xi)|_{U_{(2,d)}}.
\end{align*}
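Here the last equality follows from the elementary identities
\[
\frac{(2-dt)+d(t-\delta)}{2-d\delta}=1,
\qquad
\frac{\delta(2-dt)+2(t-\delta)}{2-d\delta}=\frac{t(2-d\delta)}{2-d\delta}=t.
\]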
The proof is finished.
\end{proof}
The following lemma is very useful (see \cite[Propositions 4.6 and 6.2]{CMJL14} and Lemma \ref{lem:GITssU} for $d=3,4$).
\begin{lem}\label{lem:GITssU-alldeg}
For each $t\in (\delta,\frac{2}{d})\cap\mathbb{Q}$ (resp. $t\in (0,\frac{1}{d-1})\cap\mathbb{Q})$, the VGIT semistable locus $\mathscr{P}_d^{\rm ss}(N_t)$ (resp. $\mathbb{P}(E_d)^{\rm ss}(\overline{N}_t)$) of slope $t$ is a Zariski open subset of $U_{(2,d)}$.
\end{lem}
\begin{proof}
We first consider the VGIT semistable locus of $\mathbb{P}(E_d)$.
Let $([Q], [s])$ be a point in $\mathbb{P}(E_d)\setminus U_{(2,d)}$ where $Q=(q=0)$ is a non-normal quadric surface in $\mathbb{P}^3$ and $0\neq s\in H^0(Q,\mathcal{O}_Q(d))$. Let $g\in H^0(\mathbb{P}^3, \mathcal{O}_{\mathbb{P}^3}(d))$ be a lifting of $s$. We choose suitable projective coordinates $[x_0,x_1,x_2,x_3]$ of $\mathbb{P}^3$ such that one of the following holds:
\begin{enumerate}[label=(\alph*)]
\item $q=x_0 x_1$, and $g=x_0 h$ where $h\in \mathbb{C}[x_0,\cdots, x_3]_{d-1}$, and $x_1\nmid h$.
\item $q=x_0^2$, and $g=x_0 h$ where $h\in \mathbb{C}[x_0,\cdots, x_3]_{d-1}$, and $x_0\nmid h$.
\end{enumerate}
Let $\sigma$ be the $1$-PS in $\mathrm{SL}(4)$ of weights $(-3,1,1,1)$ with respect to the chosen coordinates. By \cite[Proposition 2.15]{benoist}, for any $t\in (0,\frac{2}{d}]$ we have
\[
\mu^{\overline{N}_t}(([Q], [s]), \sigma)\leq \mu(q,\sigma)+t\mu(g,\sigma)\leq -2+t(d-4)<0.
\]
Hence $([Q], [s])$ is VGIT unstable of slope $t$ by the Hilbert-Mumford numerical criterion.
Next, we consider the VGIT semistable locus of $\mathscr{P}_d$. It is clear that any point $z$ in $\mathscr{P}_d\setminus U_{(2,d)}$ has the form $z=(([Q],[s]), \mathrm{chow}(\mathscr{C}))$ where $([Q],[s])\in \mathbb{P}(E_d)\setminus U_{(2,d)}$, $\mathscr{C}\in \mathrm{Hilb}_{(2,d)}\setminus U_{(2,d)}$, and $\mathrm{chow}: \mathrm{Hilb}_{(2,d)}\to \operatorname{Chow}_{(2,d)}$ is the Hilbert-Chow morphism. We choose $[x_0,\cdots,x_3]$ and $\sigma$ as above. Then
\[
\mu^{N_t}(z, \sigma)=\frac{2-dt}{2-d\delta} \mu^{\overline{N}_{\delta}}(([Q],[s]),\sigma)+\frac{t-\delta}{2-d\delta}\mu^{L_\infty}(\mathrm{chow}(\mathscr{C}),\sigma).
\]
From the above argument we get $\mu^{\overline{N}_{\delta}}(([Q],[s]),\sigma)<0$. By \cite[Proposition 5.8]{LO} we know that $\mu^{L_\infty}(\mathrm{chow}(\mathscr{C}),\sigma)<0$. Hence $\mu^{N_t}(z,\sigma)<0$ for any $t\in (\delta,\frac{2}{d})\cap\mathbb{Q}$, and the proof is finished.
\end{proof}
Indeed, we have a stronger result on VGIT semistable loci (see \cite[Lemma 6.8]{LO} for $d=4$).
\begin{lem}\label{lem:GITssnormal}
For each $t\in (\delta,\frac{2}{d})\cap\mathbb{Q}$ (resp. $t\in (0,\frac{1}{d-1})\cap\mathbb{Q})$, any VGIT semistable point in $\mathscr{P}_d^{\rm ss}(N_t)$ (resp. $\mathbb{P}(E_d)^{\rm ss}(\overline{N}_t)$) of slope $t$ has the form $([Q],[s])$ where $\mathrm{rank}(Q)\geq 3$.
\end{lem}
\begin{proof}
Let $z=([Q],[s])$ be a point in $U_{(2,d)}$ where $\mathrm{rank}(Q)\leq 2$. Hence by Lemma \ref{lem:GITssU-alldeg} it suffices to show instability of $z$ in $\mathbb{P}(E_d)$ and $\mathscr{P}_d$ respectively. We will assume $t\in (0,\frac{2}{d})\cap \mathbb{Q}$ throughout the proof. Choose a projective coordinate $[x_0,\cdots,x_3]$ such that $Q=(q=0)$ is defined by $q=x_0^2$ or $x_0x_1$. Let $g\in H^0(\mathbb{P}^3, \mathcal{O}_{\mathbb{P}^3}(d))$ be a lifting of $s$. Let $\sigma$ be the $1$-PS in $\mathrm{SL}(4)$ of weights $(-1,-1,1,1)$ with respect to the chosen coordinates. Then by \cite[Proposition 2.15]{benoist}
\[
\mu^{\overline{N}_t}(z, \sigma)\leq \mu(q,\sigma)+t\mu(g,\sigma)\leq -2+td<0.
\]
Hence $z$ is $\overline{N}_t$-unstable in $\mathbb{P}(E_d)$. It is clear that $\lim_{r\to 0}\sigma(r)\cdot ([Q],[s])=([Q], [g(0,0,x_2,x_3)])$ in $\mathbb{P}(E_d)$. Hence for general $s$ we see that $\lim_{r\to 0}\sigma(r)\cdot ([Q],[s])$ belongs to $U_{(2,d)}$. In particular, Lemma \ref{lem:proportional-alldeg} implies that $\mu^{N_t}(z,\sigma)=\mu^{\overline{N}_t}(z,\sigma)<0$, so $z$ is $N_t$-unstable in $\mathscr{P}_d$ when $s$ is general. Since the GIT unstable locus is closed, we conclude that $z$ is $N_t$-unstable for any choice of $s$.
\end{proof}
The following theorem is a generalization of \cite[Theorem 5.6]{LO}.
\begin{theorem}\label{thm:LOmain-alldeg}
Let $\delta$ be as above. The following hold:
\begin{enumerate}
\item The VGIT semistable locus $\mathscr{P}_d^{\rm ss}(N_t)$ is independent of the choice of $\delta$.
\item For $t \in (\delta, \frac{1}{d-1})$, we have $\mathscr{M}_d(t)\cong [\mathbb{P}(E_d)^{\rm ss}(\overline{N}_t)/\mathrm{PGL}(4)]$ and $\mathfrak{M}_d(t) \cong \mathbb{P}(E_d) \mathbin{/\mkern-6mu/}_{\overline{N}_t} \mathrm{SL}(4)$.
\item For $t \in (\delta, \frac{2}{3d})$, we have $\mathscr{M}_d(t)\cong \mathscr{M}_d$ and $\mathfrak{M}_d(t) \cong \mathfrak{M}_d$.
\end{enumerate}
\end{theorem}
\begin{proof}
(1) Let $\delta$ and $\delta'$ be two rational numbers in $(0, \frac{2}{3d})$. Denote by $G:=\mathrm{SL}(4)$. Denote the corresponding polarizations on $\mathscr{P}_d$ by $N_t$ and $N_t'$. Since both GIT semistable loci of $\mathscr{P}_d$ with respect to $N_t$ and $N_t'$ are contained in $U_{(2,d)}$, where the two polarizations have the same restriction by Lemmas \ref{lem:proportional-alldeg} and \ref{lem:GITssU-alldeg}, \cite[Lemma 4.17]{CMJL14} implies that for $m\in\mathbb{N}$ sufficiently divisible we have
\[
H^0(\mathscr{P}_d, N_t^{\otimes m})^G\xrightarrow{\cong } H^0(U_{(2,d)}, N_t|_{U_{(2,d)}}^{\otimes m})^G=H^0(U_{(2,d)}, N_t'|_{U_{(2,d)}}^{\otimes m})^G\xleftarrow[]{\cong} H^0(\mathscr{P}_d, N_t'^{\otimes m})^G.
\]
Since both $\mathscr{P}_d^{\rm ss}(N_t)$ and $\mathscr{P}_d^{\rm ss}(N_t')$ are the union of non-vanishing loci of $G$-invariant sections in the first and last terms of the above diagram, we know that they are equal. Hence $\mathscr{P}_d^{\rm ss}(N_t)$ is independent of the choice of $\delta$.
(2) The proof is similar to (1) using Lemmas \ref{lem:proportional-alldeg}, \ref{lem:GITssU-alldeg}, and \cite[Lemma 4.17]{CMJL14}.
(3) By (2) it suffices to show that $[\mathbb{P}(E_d)^{\rm ss}(\overline{N}_t)/\mathrm{PGL}(4)]\cong \mathscr{M}_d$ for $t\in (0, \frac{2}{3d})$. By Lemma \ref{lem:GITssnormal}, we know that any GIT semistable point $z\in \mathbb{P}(E_d)$ with respect to $\overline{N}_t$ has the form $z=([Q],[s])$ where $\mathrm{rank}(Q)\geq 3$. We will show that under the assumption $t<\frac{2}{3d}$ the quadric surface $Q$ must be smooth. Assume to the contrary that $Q=(q=0)$ is singular. Then we may choose a projective coordinate $[x_0,\cdots,x_3]$ of $\mathbb{P}^3$ such that $q\in \mathbb{C}[x_1,x_2,x_3]_2$. Let $\sigma$ be the $1$-PS in $\mathrm{SL}(4)$ with weights $(3,-1,-1,-1)$. Let $g\in H^0(\mathbb{P}^3,\mathcal{O}_{\mathbb{P}^3}(d))$ be a lifting of $s$. Then by \cite[Proposition 2.15]{benoist} we have
\[
\mu^{\overline{N}_t}(z, \sigma)\leq \mu(q, \sigma)+t\mu(g, \sigma)\leq -2+t\cdot 3d<0.
\]
Hence $z$ is $\overline{N}_t$-unstable on $\mathbb{P}(E_d)$. Since $\sigma$ fixes $Q$, we know that $\lim_{r\to 0}\sigma(r)\cdot z$ belongs to $U_{(2,d)}$. Hence $\mu^{N_t}(z, \sigma)=\mu^{\overline{N}_t}(z, \sigma)<0$ by Lemma \ref{lem:proportional-alldeg} which implies that $z$ is $N_t$-unstable on $\mathscr{P}_d$. The rest of the proof is similar to \cite[Lemma 4.18]{CMJL14}.
\end{proof}
\begin{rem}
When $t=\frac{2}{d}$, we can define the VGIT quotient stack and space by
\[
\mathscr{M}_d(\tfrac{2}{d}):=[\operatorname{Chow}_{(2,d)}^{\rm ss}/\mathrm{PGL}(4)],\qquad \mathfrak{M}_d(\tfrac{2}{d}):=\operatorname{Chow}_{(2,d)}\mathbin{/\mkern-6mu/} \mathrm{SL}(4).
\]
As in \cite{LO}, one can show that there are natural wall crossing morphisms $\mathscr{M}_d(\frac{2}{d}-\epsilon)\to \mathscr{M}_d(\frac{2}{d})$ and $\mathfrak{M}_d(\frac{2}{d}-\epsilon)\to \mathfrak{M}_d(\frac{2}{d})$ for $0<\epsilon\ll 1$. We omit further discussion on the Chow quotient since it is not directly related to our K-moduli spaces when $d\neq 4$ (see e.g. Remark \ref{rem:OSS}).
\end{rem}
\subsection{Proofs}
In this section we prove Theorem \ref{mthm:alldeg}.
We first prove part (1) of Theorem \ref{mthm:alldeg}.
\begin{proof}[Proof of Theorem \ref{mthm:alldeg}(1)]
The proof is similar to Theorem \ref{thm:firstwall}. Consider the universal family $\pi_d: (\mathbb{P}^1 \times \mathbb{P}^1 \times \mathbf{P}_{(d,d)}, c\mathcal C) \to \mathbf{P}_{(d,d)}$ over the parameter space of $(d,d)$-curves on $\mathbb{P}^1\times\mathbb{P}^1$. It is clear that $\mathcal C \in |\mathcal{O}(d,d,1)|$. Hence by Proposition \ref{prop:logCM2} we know that the CM $\mathbb{Q}$-line bundle $\lambda_{\mathrm{CM}, \pi_d, c\mathcal C}$ is equal to $\mathcal{O}_{\mathbf{P}_{(d,d)}}(3(2-dc)^2 c)$ which is ample for $c\in (0,\frac{2}{d})$. Hence K-(poly/semi)stability of $(\mathbb{P}^1\times\mathbb{P}^1, cC)$ implies GIT (poly/semi)stability of $C$. For the other direction, let $(X,cD)$ be a K-semistable pair parametrized by $\overline{\mathcal{K}}_{d,c}$ with $c\in (0, \frac{1}{2d})$. By \cite{LL16}, for any point $x\in X$ we have
\[
\widehat{\mathrm{vol}}(x,X)\geq \widehat{\mathrm{vol}}(x,X,cD)\geq \frac{4}{9}(-K_X-cD)^2=\frac{32}{9}\left(1-\frac{dc}{2}\right)^2>2.
\]
This implies that every point $x\in X$ is a smooth point, hence $X\cong\mathbb{P}^1\times\mathbb{P}^1$. The rest of the proof is exactly the same as that of Theorem \ref{thm:firstwall}.
\end{proof}
\begin{rem}
Similar to Proposition \ref{prop:firstwallreplace}, we have that $c_1=\frac{1}{2d}$ is the first K-moduli wall for $(d,d)$-curves on $\mathbb{P}^1\times\mathbb{P}^1$ which replaces $(\mathbb{P}^1\times\mathbb{P}^1, dH)$ by $(\mathbb{P}(1,1,2), D)$ where $H$ is a smooth $(1,1)$-curve.
\end{rem}
Next, we prove part (2) of Theorem \ref{mthm:alldeg}. Before starting the proof, we need some preparation on CM line bundles as a generalization of Propositions \ref{prop:CM-U} and \ref{prop:proportional}.
\begin{prop}\label{prop:CM-U-alldeg}
For simplicity, denote by $U:=U_{(2,d)}$.
Let $f_U:(\mathscr{X}_U,\mathscr{D}_U)\to U$ be the restriction of $f:(\mathscr{X},\mathscr{D})\to \mathbb{P}(E_d)$ over $U\subset \mathbb{P}(E_d)$.
We denote the CM $\mathbb{Q}$-line bundle of $f_U$ with coefficient $c$ by $\lambda_{U,c}:=\lambda_{\mathrm{CM}, f_U, c\mathscr{D}_U}$.
Then $\lambda_{U,c}$ and $N_t|_U$ are proportional up to a positive constant where $t=t(c):=\frac{6c}{dc+4}$ and $c\in (0,\frac{2}{d})$.
\end{prop}
\begin{proof}
By the same computations as Section \ref{sec:CM}, we get $\lambda_{U,c}=(2-dc)^2(dc+4)(\eta+\frac{6c}{dc+4}\xi)|_U$.
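In particular, the proportionality constant $(2-dc)^2(dc+4)$ is positive for every $c\in(0,\frac{2}{d})$, and $t(c)=\frac{6c}{dc+4}$ indeed lies in $(0,\frac{2}{d})$ for such $c$, since
\[
\frac{6c}{dc+4}<\frac{2}{d}
\iff 6cd<2dc+8
\iff dc<2.
\]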
\end{proof}
\begin{proof}[Proof of Theorem \ref{mthm:alldeg}(2)]
We first fix some notation. Let $U_c^{\mathrm{K}}$ be the open subset of $U=U_{(2,d)}$ parametrizing $c$-K-semistable log Fano pairs. Let $U_c^{\mathrm{GIT}}:=\mathscr{P}_d^{\rm ss}(N_t)$ be the open subset of $U$ parametrizing VGIT semistable points of slope $t=t(c)=\frac{6c}{dc+4}$. Similar to Proposition \ref{prop:K-stackinU}, by Theorem \ref{thm:surfacesalld} we know that $[U_c^{\mathrm{K}}/\mathrm{PGL}(4)]\cong \overline{\mathcal{K}}_{d,c}$ as long as $c\in (0, \frac{4-\sqrt{2}}{2d})$. Hence it suffices to show $U_c^{\mathrm{K}}=U_c^{\mathrm{GIT}}$ for $c\in (0,\frac{4-\sqrt{2}}{2d})$.
We follow the strategy in the proof of Theorem \ref{thm:wallscoincide}, that is, by induction on the walls for K-moduli and VGIT. It suffices to generalize Propositions \ref{prop:induction0}, \ref{prop:induction1}, and \ref{prop:induction2} to $(2,d)$ complete intersections under the assumption $c<\frac{4-\sqrt{2}}{2d}$. The generalization of Proposition \ref{prop:induction0} follows from Theorems \ref{mthm:alldeg}(1) and \ref{thm:LOmain-alldeg}(3). For Propositions \ref{prop:induction1} and \ref{prop:induction2}, we can generalize them using $[U_c^{\mathrm{K}}/\mathrm{PGL}(4)]\cong \overline{\mathcal{K}}_{d,c}$, Proposition \ref{prop:CM-U-alldeg}, and Theorem \ref{thm:generalwall}.
\end{proof}
\begin{rem}\label{rem:OSS}
If $d\neq 4$ then the isomorphism $\overline{K}_{d,c}\cong \mathfrak{M}_d(t)$ can fail for $c>\frac{4-\sqrt{2}}{2d}$. For instance, it was observed in \cite[Example 5.8]{OSS16} that $\mathbb{P}(1,2,9)$ appears in the K-moduli space $\overline{K}_{3,\frac{1}{2}}$. We will further investigate the case $d=3$ in a forthcoming work.
\end{rem}
\bibliographystyle{alpha}
\section{Introduction}
Human spatial action localization and classification in videos are
challenging tasks that are key to better video understanding. Action
detection is especially challenging, as it requires localizing the actor in the
scene, as well as classifying the action. This is done for every frame in a
video with little or no context. In contrast, a related task is action
recognition, which uses signals from all video frames to predict the action. Action
detection has important applications, such as surveillance and human-robot
interaction. However, most current approaches are computationally
expensive and are far from real-time performance, which limits their usage in
real life applications.
Understanding actions in videos has been an active area of research in recent
years. Following the success of deep convolutional neural networks (CNNs) on
the task of image classification, researchers have used CNNs for the tasks of action
recognition and localization. For image classification, appearance is typically the only
cue available, represented by RGB pixel values. Videos provide an extra
signal: motion. Researchers have worked on many different ways to
model motion cues, including 3D CNNs and recurrent neural networks.
One of the most successful approaches are two-stream networks
\cite{DBLP:journals/corr/SimonyanZ14}, which usually consist of a
spatial network that models appearance, whose input is RGB frames, and a
temporal network that models motion. Optical flow is often chosen as input to
this network; however, other inputs can be used, such as dense trajectories. While
adding the temporal stream often improves the model, it adds complexity, as optical
flow is usually computed using a third party algorithm, which works
separately from the RGB stream. This limits the ability for parallelization and
full utilization of compute resources like GPUs, in addition to memory
overhead. Also, using a third-party algorithm prevents the model from being
trainable end-to-end, so the visual and motion pathways cannot learn to
coordinate. Finally, as shown in \cite{1712.08416}, optical
flow algorithms optimize the end-point-error (EPE), which does not necessarily
align with the objective for action detection.
One of the challenges of the action detection task is the absence of large-scale
annotated datasets. This problem forces researchers to work with
relatively shallow architectures or use an architecture that is pre-trained on
the image classification task. Only recently have large-scale datasets
for action recognition emerged, such as \textit{Kinetics}
\cite{DBLP:journals/corr/KayCSZHVVGBNSZ17}. Pre-training on a large-scale
dataset for action recognition should transfer well to the task of action
localization.
To the best of our knowledge, all past efforts that used two-stream networks
for action detection trained the two streams separately. The predictions from
both streams were then fused using a fusion algorithm. Training the two streams
separately prevents the model from exploiting dependencies between the
appearance and motion cues. As a downside, training the two networks jointly on
the small action localization dataset might lead to overfitting, as the
model will have a very high capacity when compared to amount of labeled data. However,
pre-training on \textit{Kinetics} should solve this overfitting problem.
In this work, we propose an end-to-end trainable framework for real-time spatial
action detection. Following the advances in real-time object detection, we
build our framework with motivation from \textit{YOLOv2}
\cite{DBLP:conf/cvpr/RedmonF17}, the state-of-the-art real-time object
detector. We generalize its architecture to a two-stream network architecture for
action detection. Instead of training each stream separately, we train both
streams jointly by fusing the final activations from each stream and applying a
convolutional layer to produce the final prediction.
We replace the usual third party algorithms used for computing optical flow
\cite{Farneback:2003:TME:1763974.1764031,Zach:2007:DBA:1771530.1771554,5551149}
with a trainable neural network. We use \textit{Flownet2} \cite{IMKDB17} for optical flow
computation and integrate it in our architecture at the beginning of the
temporal stream. Using \textit{Flownet2} has two advantages: first, the
framework becomes end-to-end trainable. While \textit{Flownet2} is trained to
optimize the EPE, the computed optical flow might not be optimal for the
objective of action detection. Fine-tuning \textit{Flownet2} for the task of
action detection should result in better optical flow for our objective
\cite{1712.08416}. Secondly, while other efforts on action detection usually
use implementations of optical flow algorithms that are totally separate
from the model, integrating the optical flow computation in the network
improves the computational speed of the framework, as it makes better use of
parallelization and reduces the data transfer overhead.
Finally, to address the overfitting problem that may be caused by the use of
small-scale datasets or by training the two streams jointly, we pre-train our model
for the task of action recognition on \textit{Kinetics}. The
pre-trained model is then trained on the task of action detection with a weak
learning rate to preserve a relatively generic feature initialization and to
prevent overfitting.
We test our framework using \textit{UCF-101-24}
\cite{DBLP:journals/corr/abs-1212-0402}, a realistic and challenging dataset for
action localization. We use temporally trimmed videos as our framework does
not yet include temporal localization.
\section{Related Work}
In recent years, deep CNNs have been very successful for computer vision tasks.
Specifically, they have shown great improvements for the tasks of image
classification
\cite{DBLP:journals/corr/SzegedyLJSRAEVR14,DBLP:journals/corr/HeZRS15} and
object detection
\cite{DBLP:journals/corr/GirshickDDM13,DBLP:conf/cvpr/RedmonF17} when compared to
traditional hand-crafted methods. Studying actions in videos has been an
active area of research. Videos provide two types of information: appearance,
which is what exists in static images or individual frames of video, and motion.
Researchers have used different approaches for modeling motion, including
two-stream networks and 3D-CNNs.
Two-stream networks \cite{DBLP:journals/corr/SimonyanZ14} have been one of the
most successful approaches for modeling motion for the tasks of action
recognition and detection. In this approach, the network is designed as two
feed-forward pathways:
a spatial stream for modeling appearance and a temporal stream for
modeling motion. While RGB images are a good representation of appearance
information, optical flow is a good representation for motion. The
spatial and temporal streams take RGB frames and optical flow as inputs,
respectively. Many efforts for solving the action detection problem have followed
this approach. \citet{DBLP:journals/corr/GkioxariM14},
motivated by R-CNNs \cite{DBLP:journals/corr/GirshickDDM13}, use selective
search to find region proposals. They use two separate CNNs (appearance and
motion) for feature extraction. These features are fed to a Support Vector Machine (SVM) to predict
action classes. Region proposals are linked using the Viterbi algorithm.
\citet{DBLP:journals/corr/WeinzaepfelHS15} obtain
frame-level region proposals using EdgeBox
\cite{edge-boxes-locating-object-proposals-from-edges}. The frames are then
linked by tracking high-scoring proposals using a tracking-by-detection approach,
which uses two separate CNNs for modeling appearance and motion, and an SVM
classifier similar to \cite{DBLP:journals/corr/GirshickDDM13}. \citet{Peng2016}, motivated by faster-R-CNN \cite{NIPS2015_5638}, use region
proposal networks (RPNs) to find frame-level region proposals. They use a
motion RPN to obtain high quality proposals and show that it is complementary
to an appearance RPN. Multiple frame optical flows are stacked together and demonstrate
improvement in the motion R-CNN. Region proposals are then linked using the Viterbi
algorithm. Both appearance and motion streams are trained separately. \citet{DBLP:journals/corr/SinghSC16} were the first deep learning-based
approach to address real-time performance for action detection. They
proposed using the single shot detector (SSD) \cite{DBLP:journals/corr/LiuAESR15}, which is a
real-time object detector. They also employ a real-time, but less
accurate, optical flow computation \cite{DBLP:journals/corr/KroegerTDG16}.
Combining these two components, they managed to achieve a rate of \SI{28}{fps}.
They propose a novel greedy algorithm for online incremental action linking
across the temporal dimension. While this work is significantly faster than
previous efforts,
they sacrificed accuracy for speed by using a less accurate flow computation.
\citet{DBLP:journals/corr/KalogeitonWFS17} propose
generalizing the anchor box regression method used by faster R-CNN
\cite{NIPS2015_5638} and SSD \cite{DBLP:journals/corr/LiuAESR15} to anchor
cuboids, which consist of a sequence of bounding boxes over time. They take a
fixed number of frames as input. Then, feature maps from all frames in this sequence
are used to regress and find scores for anchor cuboids. At test time, the anchor
cuboids are linked to create tubelets, which do not have a fixed temporal
extent. While most methods for solving the action detection problem followed the
two-stream approach, \citet{DBLP:journals/corr/HouCS17} use
3D-CNNs. They suggest generalizing R-CNN to videos by designing a tube CNN (T-CNN). Instead of obtaining frame-level action
proposals and using a post-processing algorithm to link actions temporally to
form action tubes, T-CNN learns the action tubes directly from RGB frames.
Optical flow estimation has been dominated by variational approaches that follow
\cite{Horn81determiningoptical}. Though recently, approaches that use deep CNNs for optical flow estimation
\cite{DBLP:journals/corr/RanjanB16,7410673,IMKDB17} have shown promise.
\textit{Flownet} \cite{7410673} is the first end-to-end trainable deep CNN for
optical flow estimation. It is trained using synthetic data to optimize EPE.
The authors provide two architectures to estimate optical flow. The first is a standard CNN that
takes the concatenated channels from two consequent frames and predicts the
flow directly. The second is a two-stream architecture that attempts to find
a good representation for each image before they are combined by a correlation
layer. However, \textit{Flownet} falls behind other top methods due to
inaccuracies with small displacements present in realistic data.
\textit{Flownet2} \cite{IMKDB17} addresses this problem by introducing a
stacked architecture, which includes a subnetwork that is specialized to small
displacements. It achieves more than 50\% improvement in EPE compared to
\textit{Flownet}. Having a trainable network for estimating optical flow can be
very useful, especially when integrated with other tasks. \citet{1712.08416} studied the integration of trainable optical
flow networks \textit{Flownet} \cite{7410673} and \textit{Spynet}
\cite{DBLP:journals/corr/RanjanB16} on the task of action recognition. They
found that fine-tuning optical flow networks for the objective of action
recognition consistently improves performance.
\section{Methodology}
\begin{figure*}[!h]
\centering
\includegraphics[scale=0.5]{arch.pdf}
\caption{Our framework takes a sequence of video frames as input. \textbf{(a)}
\textit{Flownet2} is used to estimate optical flow, which is input to the
motion stream. \textbf{(b)} The two streams follow the \textit{YOLOv2}
architecture. \textbf{(c)} We apply early fusion by concatenating the
activations from both streams channel-wise and then applying a 1x1
convolutional kernel on the fused activations. \textbf{(d)} Finally, similar
to \textit{YOLOv2}, the final feature maps are used to regress bounding boxes,
class scores, and overlap estimates.}
\label{arch}
\end{figure*}
We propose a framework for efficient and accurate action detection, as outlined
in Figure~\ref{arch}. We follow the two-stream network architecture
\cite{DBLP:journals/corr/SimonyanZ14} and integrate optical flow computation in
our framework by using \textit{Flownet2} as input to the motion stream.
We build each stream on \textit{YOLOv2} \cite{DBLP:conf/cvpr/RedmonF17}. In
contrast to previous methods, instead of training each stream separately, we
apply early fusion and train both streams jointly. Finally, the fused feature
maps are used to regress bounding boxes, class scores, and overlap estimates,
similar to \textit{YOLOv2}.
\subsection{Two-Stream YOLOv2 with Early Fusion}
\textit{YOLO} \cite{DBLP:journals/corr/RedmonDGF15} is a real-time object
detector. While there have been many successful object detection methods, such as
R-CNN \cite{DBLP:journals/corr/GirshickDDM13}, these methods rely on
extracting region proposals for candidate objects, either by an external
algorithm like Selective Search or EdgeBox, or by an RPN. These proposals are then fed to a CNN to extract features and predict
object classes. In contrast, \textit{YOLO} defines object detection as a
regression problem. A single network predicts both the spatial bounding boxes
and their associated object classes. This design enables end-to-end training and
optimization which allows \textit{YOLO} to run in real-time (\SI{45}{fps}).
Compared to R-CNN \cite{DBLP:journals/corr/GirshickDDM13}, \textit{YOLO} uses
the entire image to predict objects and their locations, meaning that it
encodes appearance as well as contextual information about object classes. This
is critical for the task of action detection, as context is an important cue
for determining which action class is present in the scene (e.g., surfing is
associated with the sea, skiing with snow). \textit{YOLOv2} is an
improved version of \textit{YOLO}, which adopts the anchor box idea that is
used by R-CNN and SSD. A pass-through layer is added, which brings high
resolution features from early layers on the network to the final low
resolution layers. This layer improves the performance with small-scale objects
that the previous version struggled with. Moreover, \textit{YOLOv2} is even
faster, as it maintains high accuracy with small-scale images. The
fully-connected layer was removed, which makes the network completely
convolutional, reducing the number of parameters. We built our framework on
\textit{YOLOv2}, as it is the best fit for our objective, running at above
real-time speeds while maintaining state-of-the-art accuracy. Moreover, it
encodes better contextual information, which is critical for the task of action
detection. We use the open-source implementation and the pre-trained models
provided by \url{https://github.com/longcw/yolo2-pytorch}.
In contrast to previous efforts, we train both input streams jointly.
Training the two streams independently prevents the networks from
learning complementary features. Associating appearance and motion cues can be
very useful for identifying the action in
the scene. We apply early fusion by concatenating the final activations of
both streams channel-wise. We apply a 1x1 convolutional kernel on top of the
fused activations. By applying this convolution, we combine the features from
both streams across each spatial location where there is high correspondence.
The final activations are used to regress bounding boxes, class scores, and
overlap estimates, similar to \textit{YOLOv2}.
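For concreteness, a minimal PyTorch sketch of this fusion head is given below; the channel sizes and number of anchors are illustrative assumptions, and the two YOLOv2 backbones producing \texttt{rgb\_features} and \texttt{flow\_features} are omitted.
\begin{verbatim}
import torch
import torch.nn as nn

class EarlyFusionHead(nn.Module):
    # Channel-wise concatenation of the two streams' final activations,
    # followed by a 1x1 convolution that produces the per-cell outputs.
    def __init__(self, channels_per_stream=1024, num_anchors=5, num_classes=24):
        super().__init__()
        out_channels = num_anchors * (5 + num_classes)  # 4 box coords + 1 score + classes
        self.fuse = nn.Conv2d(2 * channels_per_stream, out_channels, kernel_size=1)

    def forward(self, rgb_features, flow_features):
        fused = torch.cat([rgb_features, flow_features], dim=1)
        return self.fuse(fused)  # regressed boxes, class scores, overlap estimates
\end{verbatim}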
\subsection{Integrating Flownet}
Previous two-stream approaches for solving action detection use non-trainable
optical flow algorithms
\cite{Farneback:2003:TME:1763974.1764031,Zach:2007:DBA:1771530.1771554,5551149}
that are completely separate from their detection model. In contrast, we
integrate optical flow computation in our pipeline. This provides two
advantages. Firstly, our framework becomes fully trainable end-to-end.
Fine-tuning optical flow for the task in hand can be very useful. \citet{1712.08416} observe that a CNN trained to optimize the
EPE might not be the best representative of motion for the task of action
recognition. They propose fine-tuning the optical flow network for action
recognition with a weak learning rate and they observe consistent
improvements. Motivated by this work, we fine-tune \textit{Flownet2} for the
task of action detection. Secondly, integrating \textit{Flownet2} in our
pipeline leverages the computational power of GPUs, as all we need is a forward
pass starting from the video frames to the final detections. Other methods
usually use publicly available CPU implementations of variational optical
flow algorithms, which are significantly slower, in addition to data
transfer overhead. While \cite{DBLP:journals/corr/SinghSC16} uses a less
accurate, faster optical flow algorithm called DIS-Fast
\cite{DBLP:journals/corr/KroegerTDG16}, the \textit{Flownet2} family offers
architectures that are faster at comparable quality, or of similar speed with
significantly higher quality, as shown in Table~\ref{flownet_compare}. We chose
to test our model with three variations of \textit{Flownet2}. The full-stack
architecture \textit{Flownet2} is the most accurate but slowest architecture.
\textit{Flownet2-CSS} is a less accurate but faster version. Finally, we test with
\textit{Flownet2-SD}, a relatively small network that is specialized toward small
displacements. This model is relatively less accurate than the first two;
however, it is significantly faster. We use the open-source implementation and
pre-trained models provided by
\url{https://github.com/NVIDIA/flownet2-pytorch}.
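Schematically, and under the same illustrative assumptions as the fusion sketch above, a single end-to-end forward pass of the pipeline can be written as follows (all module names are placeholders):
\begin{verbatim}
def detect(frame_prev, frame_curr, flownet, spatial_net, temporal_net, fusion_head):
    # Trainable optical flow replaces the usual third-party flow computation.
    flow = flownet(frame_prev, frame_curr)
    rgb_features = spatial_net(frame_curr)    # appearance (YOLOv2-style) stream
    flow_features = temporal_net(flow)        # motion (YOLOv2-style) stream
    # Early fusion and per-cell regression of boxes, class scores, overlaps.
    return fusion_head(rgb_features, flow_features)
\end{verbatim}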
\newcolumntype{Y}{>{\centering\arraybackslash}X}
\newcolumntype{s}{>{\hsize=.5\hsize}Y}
\def\tabularxcolumn#1{m{#1}}
\begin{table}[!h]
\caption{Average Endpoint Error (AEE) and runtime comparison of different
variations of \textit{Flownet} and DIS-Fast, as reported in \cite{IMKDB17}.}
\begin{tabularx}{\columnwidth}{|Y|s|s|}
\hline
Method & \thead{Sintel Final \\ AEE (Train)} & \thead{Runtime \\ (ms per frame)} \\
\hline
DIS-Fast \cite{DBLP:journals/corr/KroegerTDG16} (CPU) & 6.31 & 70 \\
\hline
FlownetS \cite{7410673} (GPU) & 5.45 & \textbf{18} \\
Flownet2-CSS \cite{IMKDB17} (GPU) & 3.55 & 69 \\
Flownet2 \cite{IMKDB17} (GPU) & \textbf{3.14} & 123 \\
\hline
\end{tabularx}
\label{flownet_compare}
\end{table}
\subsection{Pre-Training Using Kinetics}
One of the challenges that researchers face when working on the task of action
detection is the absence of large-scale annotated datasets. Providing
bounding boxes for every frame in every video for a large-scale dataset is an
extremely difficult task. One of the most successful ways to deal with this
kind of problem is through transfer learning. Deep CNN architectures trained on
large-scale image classification datasets like ImageNet \cite{imagenet_cvpr09} have
shown that they can learn features generic enough such that they can be used for
other vision tasks. This suggests that features learned from one task can be
transferred to another. It was also observed that the more similar the two tasks are, the
better the performance after transfer.
After the release of \textit{Kinetics}
\cite{DBLP:journals/corr/KayCSZHVVGBNSZ17}, \citet{DBLP:journals/corr/CarreiraZ17} studied the effect of pre-training
different architectures with \textit{Kinetics} and then used the pre-trained
model to train smaller datasets (e.g., UCF-101, HMDB) for the same task of
action recognition. They report a consistent boost in performance after
pre-training; however the extent of the improvement varies with different
architectures. In this study, the transfer should be optimal, as the target and
source tasks are the same. Previous efforts for solving action detection
usually use network architectures pre-trained on image classification using
ImageNet networks or are pre-trained on the task of object detection using Pascal
VOC \cite{Everingham15}. However, T-CNN \cite{DBLP:journals/corr/HouCS17} uses a
pre-trained C3D model \cite{Tran_2015_ICCV} that is trained using the
\textit{UCF-101} action recognition dataset, which is considerably smaller than
\textit{Kinetics}.
The tasks of action recognition and detection are very similar. In fact, action
recognition can be considered a subtask of action detection. Similarly, action detection and object detection are also related, mainly through the
localization subtask. In order to gain benefit from both tasks and make use of
the large-scale \textit{Kinetics} dataset, we start with \textit{YOLOv2}
architectures for both streams that are pre-trained on object detection using
Pascal VOC. We then train our framework using \textit{Kinetics} with a weak
learning rate in order to preserve some of the features that can help with
localization, while fine-tuning for a different classification task.
\section{Experiments}
We evaluate different variations of our architecture with respect to detection
performance and runtime:
\begin{itemize}
\item \textit{Flownet2} provides improvement in both speed and accuracy.
Therefore, to test the quality of \textit{Flownet2} compared to other accurate
optical flow algorithms, we also train with the method of \citet{5551149}, an accurate but slow optical flow algorithm, in place of \textit{Flownet2}.
\item Fine-tuning \textit{Flownet2} for the task of action detection produces
optical flow that is a better representation of the action-related motion in
the scene. To validate this idea, we train models with frozen and fine-tuned
\textit{Flownet2} parameters.
\item To investigate transfer learning from the task of activity recognition, we
train models with and without \textit{Kinetics} pre-training. For the models
that were not pre-trained, we use the parameters
trained on object detection using PASCAL VOC.
\item Finally, to have the ability to choose between accuracy and speed, we
substitute \textit{Flownet2} with either
\textit{Flownet2-SD} or \textit{Flownet2-CSS}, observing how they compare
in terms of accuracy and speed to the full-stack estimator.
\end{itemize}
\subsection{Dataset} We use \textit{UCF-101} to test our framework. This is a
dataset that consists of videos for 101 actions in realistic environments
collected from YouTube. This dataset is mainly used for the task of action
recognition. For the action detection task, a subset of 24 actions have been
annotated with bounding boxes, consisting of 3,207 videos.
This is currently the largest dataset available for the task of action
detection. While this dataset includes untrimmed videos, we use the trimmed ones,
as our framework does not include a temporal localization component. We use
split 1 for splitting training and testing data.
\subsection{Evaluation Metric} We use frame mean average precision (f-mAP) to
evaluate our methods. This computes the area under the precision recall curve for
the frame-level detections. A true positive is a detection that has an
intersection over union (IoU) more than a threshold \(\alpha\) with the ground truth,
and the action class is predicted correctly.
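In symbols, a predicted box $\hat{B}$ with predicted class $\hat{y}$ counts as a true positive for a ground truth annotation $(B,y)$ when
\[
\mathrm{IoU}(\hat{B},B)=\frac{\mathrm{area}(\hat{B}\cap B)}{\mathrm{area}(\hat{B}\cup B)}>\alpha
\quad\text{and}\quad \hat{y}=y,
\]
and the frame mAP is the mean over the action classes of the area under the resulting precision-recall curves.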
\subsection{Implementation Details}
We use PyTorch \cite{paszke2017automatic} for all experimentation. For
\textit{Kinetics} pre-training, we initialize both streams using parameters
trained on PASCAL VOC. We use the SGD
optimizer with a learning rate of 0.0008. We pre-train
\textit{Kinetics} with optical flow from \textit{Flownet2}. We
trained \textit{UCF-101} using the Adam optimizer with a learning rate of
\(5\times10^{-5}\) and batch size of 32. We observed that the Adam optimizer added more stability
when training a multi-task objective. We apply random cropping, HSV
distortion, and horizontal flipping for data augmentation. During training, we
sample two consecutive frames randomly from each sequence. We scale the images
and optical flow to \(320\times320\). For fine-tuning all the \textit{Flownet2}
architectures, we used a learning rate of \(10^{-7}\).
We used the pre-computed Brox \textit{et al.} optical flow provided by
\url{https://github.com/gurkirt/realtime-action-detection}. For testing, we
select the detection box with the highest score in the current frame. We do not
apply any post-processing action linking algorithm.
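As a rough illustration of these settings (the \texttt{model} and the YOLOv2-style multi-task \texttt{criterion} are placeholders, not our exact implementation), one training step on \textit{UCF-101} looks as follows:
\begin{verbatim}
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

def train_step(frame_prev, frame_curr, targets):
    # Two consecutive frames (with the flow computed from them inside the
    # model) are used to predict boxes, class scores and overlap estimates.
    optimizer.zero_grad()
    predictions = model(frame_prev, frame_curr)
    loss = criterion(predictions, targets)  # YOLOv2-style multi-task loss
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}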
\section{Results}
\subsection{Ablation Study}
We experiment with different variations of our architecture to show the value
of our proposals. We report the frame mAP at different IoU thresholds for 8
different models in Table.~\ref{abalation}. First, to study the impact of
pre-training using \textit{Kinetics}, we compare it against models
pre-trained using Pascal VOC. We can observe a consistent improvement when
pre-training with \textit{Kinetics}, for both networks trained with Brox,
optical flow, where we notice a 2.5\% gain in frame mAP (0.5 threshold)
or using \textit{Flownet2} where the gain is 4.5\%. The difference
in the gain can be explained by the fact that we pre-trained \textit{Kinetics} using
\textit{Flownet2}. Second, we study the value of fine-tuning \textit{Flownet2}
for the task of action detection. We compare models with frozen and fine-tuned
\textit{Flownet2} parameters. We observe an improvement of 2\% for models
pre-trained with Pascal VOC and 2.5\% for models pre-trained using
\textit{Kinetics}. Combining pre-training with fine-tuning \textit{Flownet2},
we see a gain of \(7\%\). We notice that a model pre-trained with
\textit{Kinetics} and fine-tuned for action detection outperforms all
other variations for all different IoU thresholds.
Finally, we test with \textit{Flownet2-CSS} and \textit{Flownet2-SD} which are
faster, less accurate variations of \textit{Flownet2}. We observe that with
pre-training and fine-tuning, these models outperform the Brox optical flow-trained
model (Brox + VOC), while being significantly faster. We show the AUC curves
for all 8 models we tested in Figure~\ref{auc_curve}.
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
{\renewcommand{\arraystretch}{1.2}
\begin{table}[!h]
\caption{Comparison of variants of our architecture using f-mAP. We test with
different IoU thresholds \(\alpha\).}
\begin{tabular}{|m{1.5in}|P{0.4in}|P{0.4in}|P{0.41in}|}
\hline
Model & \thead{\(\alpha\) = 0.2} & \thead{\(\alpha\) = 0.5} &
\thead{\(\alpha\) = 0.75} \\
\hline
Brox + VOC & 77.93 & 70.64 & 32.73 \\
Brox + Kinetics & 80.24 & 73.18 & 33.81 \\
Flownet2 + VOC & 75.43& 66.97 & 28.57 \\
Flownet2 + Kinetics & 79.41 & 71.51 & 32.83 \\
Tuned Flownet2 + VOC & 76.69 & 69.03 & 31.88\\
Tuned Flownet2 + Kinetics & \textbf{81.31} & \textbf{74.07} & \textbf{34.41}\\
Tuned Flownet2-CSS + Kinetics & 79.90 & 72.13 & 32.24 \\
Tuned Flownet2-SD + Kinetics & 78.86 & 71.67 & 33.39 \\
\hline
\end{tabular}
\label{abalation}
\end{table}
\begin{figure}
\includegraphics[trim={1cm, 0cm, 1.5cm, 1.5cm},
clip,width=\columnwidth]{auc.eps}
\vspace{-2.5em}
\caption{AUC plot for \textit{UCF-101-24} dataset using variations of
our architecture. }
\label{auc_curve}
\end{figure}
\newcolumntype{C}{>{\centering\arraybackslash}m{3.5cm}}
\newcolumntype{k}{>{\hsize=.2\hsize}C}
\newcolumntype{K}{>{\raggedleft\arraybackslash}k}
\begin{figure*}[t]
\begin{tabularx}{\textwidth}{KCCCC}
\rot{\scriptsize{Horse Riding}} &
\includegraphics[scale=0.15]{hr_1.jpg}
&
\includegraphics[scale=0.15]{hr_2.jpg} &
\includegraphics[scale=0.15]{hr_3.jpg} &
\includegraphics[scale=0.15]{hr_4.jpg} \\
\rot{\scriptsize{Pole Vaulting}} &
\includegraphics[scale=0.15]{pv_1.jpg}
&
\includegraphics[scale=0.15]{pv_2.jpg} &
\includegraphics[scale=0.15]{pv_3.jpg} &
\includegraphics[scale=0.15]{pv_4.jpg} \\
\rot{\scriptsize{Skiing}} & \includegraphics[scale=0.15]{sk_1.jpg} &
\includegraphics[scale=0.15]{sk_2.jpg} &
\includegraphics[scale=0.15]{sk_3.jpg} &
\includegraphics[scale=0.15]{sk_4.jpg} \\
\rot{\scriptsize{Cliff Diving}} &
\includegraphics[scale=0.15]{cd_1.jpg}
&
\includegraphics[scale=0.15]{cd_2.jpg} &
\includegraphics[scale=0.15]{cd_3.jpg} &
\includegraphics[scale=0.15]{cd_4.jpg} \\
\end{tabularx}
\caption{Action detection results for four action classes from the
\textit{UCF-101} dataset using a model pre-trained using \textit{Kinetics},
and using tuned \textit{Flownet2} optical flow as input.}
\label{samples}
\end{figure*}
\subsection{Comparison with Top Performers}
We compare our results with other top performers on the \textit{UCF-101-24}
dataset, as shown in Table~\ref{top_performers}. It should be noted that out
of all reported results, only one variation of the Singh \textit{et al.}
framework runs in real-time (\SI{28}{fps}).
We observe that all of our models that use \textit{Kinetics} pre-training and
fine-tuning for \textit{Flownet2} variants outperform the other top performers.
However, we can only fairly compare our results to \citet{DBLP:journals/corr/HouCS17}, as both our tests use temporally trimmed
videos from the \textit{UCF-101} dataset. The other methods
\cite{DBLP:journals/corr/KalogeitonWFS17, DBLP:journals/corr/SinghSC16,
DBLP:journals/corr/WeinzaepfelHS15, Peng2016} test on untrimmed videos, as
they perform both spatial and temporal detections. While they have an advantage
over our framework as linking actions temporally can improve the spatial
detections, they also suffer from a disadvantage as they have
a greater chance of getting a false positive if they detect an action in a frame
where there is no action being performed.
\footnotetext[1]{As reported in \url{
https://github.com/gurkirt/realtime-action-detection }}
\begin{table}[!h]
\caption{Comparison of the f-mAP with other top performers using IoU
threshold of \(\alpha\).}
\begin{tabularx}{\columnwidth}{|X|P{0.4in}|}
\hline
Model & \thead{\(\alpha\) = 0.5} \\
\hline
Weinzaepfel \textit{et al.} \cite{DBLP:journals/corr/WeinzaepfelHS15}
\(\dagger\) & 35.84 \\
Hou \textit{et al.} \cite{DBLP:journals/corr/HouCS17} \(\star\) & 41.37 \\
Peng \textit{et al.} \cite{Peng2016} \(\dagger\) & 65.37 \\
Singh \textit{et al.} \cite{DBLP:journals/corr/SinghSC16} RGB + DIS-Fast
\(\dagger\) \(\psi\) & 65.66\footnotemark[1] \\
Singh \textit{et al.} \cite{DBLP:journals/corr/SinghSC16} RGB + Brox
\(\dagger\) & \textbf{68.31}\footnotemark[1]\\
Kalogeiton \textit{et al.}\cite{DBLP:journals/corr/KalogeitonWFS17}
\(\dagger\) & 67.1 \\
\hline
Brox + Kinetics \(\star\) & 73.18 \\
Tuned Flownet2 + Kinetics \(\star\) & \textbf{74.07} \\
Tuned Flownet2-CSS + Kinetics \(\star\) \(\psi\) & 72.13\\
Tuned Flownet2-SD + Kinetics \(\star\) \(\psi\) & 71.67 \\
\hline
\end{tabularx}
\raggedright \(\dagger\): untrimmed videos. \(\star\): trimmed videos.
\(\psi\): real-time.
\label{top_performers}
\end{table}
\subsection{Detection Runtime}
We propose an end-to-end trainable pipeline. Integrating the flow computation
in our framework using \textit{Flownet2} improves the compute resources
utilization. We can make the best use of GPU parallelization in addition to
reducing the overhead caused by memory transfer if the framework is separated
into two parts. The frames per second (fps) rates for our architectures are shown in
Table~\ref{runtime}. We used an NVIDIA GTX Titan X GPU for testing the runtime
speed which is the same card used for previously proposed work on real-time
action detection \cite{DBLP:journals/corr/SinghSC16}. We test using batch
sizes of 1 and 4. With a batch size of 1 (online), the system
will have no latency. If a small latency is acceptable, we can buffer the input
frames to use a batch size of 4, which improves the frames per second
rate. We compare our results to \citet{DBLP:journals/corr/SinghSC16}, the
only real-time method for action detection. However, in their reported runtime,
they do not account for the overhead caused by transferring the optical
flow computed using DIS-Fast to their two-stream SSD networks. Nevertheless,
our
model using
\textit{Flownet2-SD} is the fastest, achieving \SI{25}{fps} with no latency
or \SI{31}{fps} with minimal latency.
\begin{table}[!h]
\caption{Frames per second rate of our models compared to the other reported
real-time method.}
\begin{tabularx}{\columnwidth}{|X|P{0.7in}|P{0.7in}|}
\hline
Model & batch size = 1 & batch size = 4 \\
\hline
\citet{DBLP:journals/corr/SinghSC16} RGB+DIS-Fast & - & 28\\
\hline
Tuned Flownet2 + Kinetics & 12 & 15 \\
Tuned Flownet2-CSS + Kinetics & 17 & 21 \\
Tuned Flownet2-SD + Kinetics & \textbf{25} & \textbf{31} \\
\hline
\end{tabularx}
\label{runtime}
\end{table}
\section{Conclusion}
In this work, we propose a real-time, end-to-end trainable two-stream network
for action detection by generalizing the \textit{YOLOv2} network architecture.
We train two-stream \textit{YOLOv2} networks jointly to learn
complementary features between the appearance and motion streams. We show that
transfer learning from the task of action recognition to action detection introduces
a boost in performance. Additionally, fine-tuning a trainable optical flow
estimator for the task of action detection results in a better representation
for the action-related motion in the scene, improving our model's performance.
Finally, we show that by integrating the optical flow computation and training
end-to-end, our framework runs in real-time (\SI{31}{fps}), faster than all
previous methods.
\section*{Acknowledgement}
We would like to thank Brendan Duke of the Machine Learning Research Group at
the University of Guelph for his help with training the \textit{Kinetics}
dataset and helpful suggestions toward improving the manuscript.
\bibliographystyle{unsrtnat}
\section{Introduction}\label{S72}
\subsection{Description of the problem investigated}
In recent years there has been a rapid development in the field of self-similar Iterated Function Systems (IFS) with overlapping construction. Most importantly, Hochman \cite{Hochman} proved for any self-similar measure $\nu$ that we can have dimension drop (that is, $\dim_{\rm H} \nu<\min\left\{1,\dim_{\rm S}\nu \right\}$)
only if there is a superexponential concentration of cylinders (see Section \ref{S118} for the definitions of the various dimensions used in the paper). Consequently,
for a one-parameter family of self-similar measures $\left\{\nu_\alpha\right\}_\alpha$ on $\mathbb{R}$, satisfying a certain non-degeneracy condition (Definition \ref{S80})
the Hausdorff dimension of the measure $\nu_\alpha$ is equal to the minimum of its similarity dimension and $1$ for all parameters $\alpha$ except for a small exceptional set of parameters $E$. This exceptional set $E$ is so small that its packing dimension (and consequently its Hausdorff dimension) is zero.
The corresponding problem for the singularity versus absolute continuity of self-similar measures
was treated by Shmerkin and Solomyak \cite{shmerkin2014absolute}. They considered one-parameter families of self-similar measures constructed by one-parameter families of homogeneous self-similar IFS, also satisfying the non-degeneracy condition of Hochman's Theorem. It was proved in
\cite[Theorem A]{shmerkin2014absolute}
that for such families $\left\{\nu_\alpha\right\}$ of self-similar measures if the similarity dimension of the measures in the family is greater than $1$
then for all but a set of parameters $\alpha$ of Hausdorff dimension zero, the measure $\nu_\alpha$ is absolutely continuous with respect to the Lebesgue measure. The results presented in this note imply that in this case the set of exceptional parameters can have packing dimension $1$, in contrast with Hochman's Theorem, where the packing dimension of the set of exceptional parameters is equal to $0$.
Still, we do not know what causes the drop of dimension, or the singularity of a self-similar measure on the line whose similarity dimension is greater than $1$. In particular, it is a natural question whether the only reason for the drop of dimension or for singularity of self-similar measures having similarity dimension larger than $1$ is an ``exact overlap''. More precisely, let $\left\{\varphi_i\right\}_{i=1}^{m}$ be a self-similar IFS and $\nu$ be a corresponding self-similar measure. We say that there is an exact overlap if we can find two distinct finite words $\mathbf{i}=(i_1, \dots ,i_k)$ and $\mathbf{j}=(j_1, \dots ,j_\ell )$ such that
\begin{equation}\label{S54}
\varphi_{i_1}\circ \cdots \circ \varphi_{i_k}
=
\varphi_{j_1}\circ \cdots \circ \varphi_{j_\ell }.
\end{equation}
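For instance, in the homogeneous IFS $\left\{\varphi_1(x)=\frac{x}{2},\ \varphi_2(x)=\frac{x}{2}+\frac{1}{2},\ \varphi_3(x)=\frac{x}{2}+1\right\}$ there is an exact overlap, since
\[
\varphi_1\circ\varphi_3(x)=\frac{x}{4}+\frac{1}{2}=\varphi_2\circ\varphi_1(x).
\]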
The following two questions have naturally arisen for a long time (e.g. Question 1 below appeared as \cite[Question 2.6]{peres2000problems}):
\begin{description}
\item[Question 1] Is it true that a self-similar measure has Hausdorff dimension strictly smaller than the minimum of $1$ and its similarity dimension only if we have exact overlap?
\item[Question 2] Is it true for a self-similar measure $\nu$ having similarity dimension greater than one, that $\nu$ is singular only if there is exact overlap?
\end{description}
Most of the experts believe that the answer to Question 1 is positive and it has been confirmed in some special cases \cite{Hochman}. On the other hand, a result of
Nazarov, Peres and Shmerkin
indicated that the answer to Question 2 should be negative.
Namely, they
constructed in \cite{nazarov2012convolutions} a planar self-affine set having dimension greater than one, such that the angle-$\alpha$ projection of its natural measure
was singular for a dense $G_\delta$ set of parameters $\alpha$. However, this was not a family of self-similar measures. To the best of our knowledge,
before this note Question 2 had not been answered.
\bigskip
\subsection{New results}
We consider one-parameter families of homogeneous self-similar measures on the line, having similarity dimension greater than $1$. We call the set of those parameters for which the measure is singular the set of parameters of singularity.
\begin{description}
\item[(a)] We point out that the answer to Question 2 above is negative. (Theorem \ref{S65}).
\item[(b)] We consider one-parameter families of self-similar measures for which the set of parameters of singularity is big in the sense that it is a dense $G_\delta$ set, but at the same time is small in the sense that it has Hausdorff dimension zero.
We call such families antagonistic. We point out that there are many antagonistic families; in fact, we show that antagonistic families are dense in a natural collection of one-parameter families (Proposition \ref{S102}).
\item[(c)] As a corollary, we obtain that it happens quite frequently that in Shmerkin-Solomyak Theorem (Theorem \ref{S85}) the exceptional set has packing dimension $1$. (Corollary \ref{S103}.)
\item[(d)] We extend the scope of \cite[Proposition 8.1]{peres2000sixty} from infinite Bernoulli convolution measures to
very general one-parameter families of (not necessarily self-similar, or self-affine) IFS, and state that the parameter set of singularity is a $G_\delta$ set (Theorems \ref{S128}, \ref{S11}).
\end{description}
\subsection{Comments}
\bigskip
The main goal of this note is to make the observation that the combination of an already existing method of Peres, Schlag and Solomyak \cite{peres2000sixty} and a result due to Manning and the first author of this note \cite{manning2013dimension} yields that the answer to Question 2 is negative.
There are two ingredients of our argument:
\begin{description}
\item[(i)] The fact that the set of parameters of singularity is a $G_\delta$ set in any reasonable one-parameter family of self-similar measures on the line.
\item[(ii)] The existence of a one-parameter family of self-similar measures having similarity dimension greater than one (for all parameters) with a dense set of parameters of singularity.
\end{description}
It turned out that both of these ingredients have been available for a while in the literature. Although in an earlier version of this note the authors had their longer proof for \textbf{(i)}, we learned from
B. Solomyak that \textbf{(i)} has already been proved in
\cite[Proposition 8.1]{peres2000sixty}
in the
special case of infinite Bernoulli convolutions.
Actually, the authors of \cite{peres2000sixty} acknowledged that the short and elegant proof of
\cite[Proposition 8.1]{peres2000sixty} is due to
Elon Lindenstrauss.
We extend the scope of
\cite[Proposition 8.1]{peres2000sixty} to a more general setting. Then, following the suggestion of the anonymous referee, we arrived at a considerably more general statement.
So, to prove \textbf{(i)}, we will present here a more detailed and very general extension of the proof of \cite[Proposition 8.1]{peres2000sixty}.
On the other hand \textbf{(ii)} was proved in \cite{manning2013dimension}.
\subsection{Notation}\label{S99}
First we introduce the Hausdorff and similarity dimensions of a measure and then we present some definitions related to the singularity and absolute continuity of the family of measures considered in the paper.
\subsubsection{The different notions of dimensions used in the paper }\label{S118}
\begin{itemize}
\item The notion of the \emph{Hausdorff and box dimension of a set } is well known (see e.g. \cite{FalconerTechniques}).
\item \emph{Hausdorff dimension of a measure}: Let $\mathfrak{m}$ be a measure on $\mathbb{R}^d$. The Hausdorff dimension of $\mathfrak{m}$ is defined by
\begin{equation}\label{S117}
\dim_{\rm H} \mathfrak{m}:=\inf\left\{
\dim_{\rm H} A:\mathfrak{m}(A)>0, \mbox{ and $A$ is a Borel set}
\right\},
\end{equation}
see \cite[p. 170]{FalconerTechniques} for an equivalent definition.
\item
We will use the following definition of the Packing dimension of a set $H\subset\mathbb{R}^d$ \cite[p. 23.]{FalconerTechniques}:
\begin{equation}\label{S200}
\dim_{\rm P} H = \inf\{\sup_i \overline{\dim_{\rm B}} E_i\ :\ H\subset \bigcup_{i=1}^\infty E_i\},
\end{equation}
where $\overline{\dim_{\rm B}}$ stands for the upper box dimension.
The most important properties of the packing dimension can be found in
\cite{FalconerTechniques}.
\item \emph{Similarity dimension of a self-similar measure}: Consider the self-similar IFS on the line: $\mathcal{F}:=\left\{\varphi_i(x):=r_i \cdot x+t_i\right\}_{i=1}^{m}$, where $r_i\in(-1,1)\setminus \left\{0\right\}$. Further we are given the probability vector $\mathbf{w}:=(w_1, \dots ,w_m)$. Then there exists a unique probability measure $\nu$ satisfying $\nu(H)=\sum\limits_{i=1}^{m}w_i \cdot \nu\left(\varphi_{i}^{-1}
\left(H\right)\right)$. (See \cite{FalconerTechniques}.) We call $\nu=\nu_{\mathcal{F},\mathbf{w}}$ the self-similar measure corresponding to $\mathcal{F}$ and $\mathbf{w}$. The similarity dimension of $\nu$ is defined by
\begin{equation}\label{S71}
\dim_{\rm S}(\nu_{\mathcal{F},\mathbf{w}}):= \frac{\sum\limits_{i=1}^{m}w_i\log w_i}{\sum\limits_{i=1}^{m}w_i\log |r_{i}|}.
\end{equation}
(A worked instance of this formula is given right after this list.)
\end{itemize}
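For orientation we include a worked instance of \eqref{S71}; the computation is ours and only illustrates the definition, using the data of the motivating example of this note (the angle-$\alpha$ projections of the Sierpi\'nski carpet, see Example \ref{S1} below), where $m=8$, $w_i\equiv \frac{1}{8}$ and $r_i\equiv \frac{1}{3}$:
\begin{equation*}
\dim_{\rm S}(\nu_{\mathcal{F},\mathbf{w}})
= \frac{\sum_{i=1}^{8}\frac{1}{8}\log \frac{1}{8}}{\sum_{i=1}^{8}\frac{1}{8}\log \frac{1}{3}}
= \frac{\log 8}{\log 3}\approx 1.893>1 .
\end{equation*}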
\subsubsection{The projected families of a self-similar measure}\label{S119}
Let
\begin{equation}\label{S2}
\mathcal{F}_\alpha:=\left\{
\varphi_{i}^{\alpha}(x):=r_{\alpha,i} \cdot x+t_{i}^{(\alpha)}
\right\}_{i=1}^{m},\quad \alpha\in A,
\end{equation}
be a one-parameter family of self-similar IFS on $\mathbb{R}$ and let $\mu$ be a measure on the symbolic space
$\Sigma:=\left\{1, \dots ,m\right\}^\mathbb{N}.
$
We write
$$
\varphi_{i_1 \dots i_n}^{\alpha}:=\varphi_{i_1}^{\alpha}\circ \cdots\circ\varphi_{i_n}^{\alpha} \mbox{ and }
r_{\alpha,i_1 \dots i_n}:=r_{\alpha,i_1} \cdots r_{\alpha,i_n}.
$$
The natural projection $\Pi_\alpha:\Sigma\to\mathbb{R}$
is defined by
\begin{equation}\label{S31}
\Pi_\alpha(\mathbf{i}):=\lim\limits_{n\to\infty} \varphi_{i_1 \dots i_n}^{\alpha}(0)=
\sum\limits_{k=1}^{\infty }t_{i_k}^{(\alpha)}r_{\alpha,i_1 \dots i_{k-1}},
\end{equation}
where $r_{\alpha,i_1 \dots i_{k-1}}:=1$ when $k=1$.
Let $\mu$ be a probability measure on $\Sigma$. We study the family of its push forward measures $\left\{\nu_\alpha\right\}_{\alpha\in A}$:
\begin{equation}\label{S57}
\nu_\alpha(H):=
(\Pi_\alpha)_*\mu(H):=\mu(\Pi_\alpha^{-1}(H)),
\end{equation}
where $H$ is a Borel subset of $\mathbb{R}$.
The elements of the symbolic space $\Sigma:=\left\{1, \dots ,m\right\}^\mathbb{N}$ are denoted by $\mathbf{i}=(i_1,i_2, \dots )$.
If $\mathbf{w}:=(w_1, \dots ,w_m)$ is a probability vector and $\mu$ is the infinite product of $\mathbf{w}$, that is
$\mu=\left\{w_1, \dots ,w_m\right\}^\mathbb{N}$ then the corresponding one-parameter family of self-similar measures defined in \eqref{S57} is denoted by $\left\{\nu_{\alpha,\mathbf{w}}\right\}_{\alpha\in A}$.
The set of parameters of singularity and the set of parameters of absolute continuity with $L^q$-density
are denoted by
\begin{equation}\label{S55}
\mathfrak{Sing}(\mathcal{F}_\alpha,\mu) :=\left\{\alpha\in A:\nu_\alpha\bot\mathcal{L}\mathrm{eb}\right\} .
\end{equation}
and
\begin{equation}\label{S56}
\mathfrak{Cont}_Q(\mathcal{F}_\alpha,\mu) :=
\left\{\alpha:\nu_\alpha\ll\mathcal{L}\mathrm{eb}
\mbox{ with $L^q$ density for a $q>1$}
\right\}.
\end{equation}
\begin{definition}\label{S58}
Using the notation introduced in \eqref{S2}-\eqref{S56} we say that the family $\left\{\nu_\alpha\right\}_{\alpha\in A}$ is \textbf{antagonistic} if both of the two conditions below hold:
\begin{equation}\label{S59}
\dim_{\rm H}\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)= \dim_{\rm H} \left(\mathfrak{Cont}_Q(\mathcal{F}_\alpha,\mu)\right)^c=0
\end{equation}
and
\begin{equation}\label{S60}
\mathfrak{Sing}(\mathcal{F}_\alpha,\mu) \mbox{ is a dense $G_\delta$ subset of $A$}.
\end{equation}
\end{definition}
Clearly, $\mathfrak{Sing}\subset \left(\mathfrak{Cont}_Q\right)^c$. Our aim is to prove that the family of angle-$\alpha$ projections of the natural measure of the Sierpi\'nski carpet is antagonistic. This implies that in the Shmerkin-Solomyak Theorem, \cite[Theorem A]{shmerkin2014absolute} (this is Theorem \ref{S85} below), the exceptional set has packing dimension $1$.
\subsection{Regularity properties of $\mathcal{F}_\alpha$}
Whenever we say that $\left\{\nu_\alpha\right\}_{\alpha\in A}$ is a one-parameter family of self-similar measures we always mean that $\left\{\nu_\alpha\right\}_{\alpha\in A}$ is constructed from a pair $(\mathcal{F}_\alpha,\mu)$ as in \eqref{S57}, for a $\mu=\mathbf{w}^{\mathbb{N}}$, where $\mathbf{w}=(w_1, \dots ,w_m)$ is a probability vector.
\begin{PA}\label{S114}
Throughout this note, we always assume that the one-parameter family of self-similar IFS $\left\{\mathcal{F}_\alpha\right\}_{\alpha\in A}$ satisfies properties \textbf{P1}-\textbf{P4} below:
\begin{description}
\item[P1] The parameter domain is a non-empty, proper open interval $A$.
\item[P2] $0<r_{\min}:=\inf\limits_{\alpha\in A,i \leq m}|r_{\alpha,i}| \leq \sup\limits_{\alpha\in A,i \leq m}|r_{\alpha,i}|=:r_{\max}<1$.
\item[P3] $t_{\max}^*:=\sup\limits_{\alpha\in A,i \leq m}|t_{i}^{(\alpha)}|<\infty $.
\item[P4] Both of the functions $\alpha\mapsto t_{i}^{(\alpha)}$ and $\alpha\mapsto r_{\alpha,i}$, $\alpha\in A$, can be extended to $\overline{A}$ (the closure of $A$) such that these extensions are both continuous.
\end{description}
\end{PA}
Note that \textbf{P4} implies \textbf{P3}.
It follows from properties \textbf{P2} and \textbf{P3} that there exists a sufficiently large $\xi\in\mathbb{R}^+$ such that
\begin{equation}\label{S30}
\mathrm{spt}(\nu_\alpha)\subset (-\xi,\xi),\quad \forall \alpha\in A.
\end{equation}
We always confine ourselves to this interval $(-\xi,\xi)$. In particular, whenever we write $H^c$ for a set $H\subset \mathbb{R}$ we mean $(-\xi,\xi)\setminus H$.
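For completeness we sketch one admissible choice of $\xi$ (a routine bound, included only for the reader's convenience): for every $\alpha\in A$ and $\mathbf{i}\in\Sigma$,
\begin{equation*}
|\Pi_\alpha(\mathbf{i})|
\leq \sum\limits_{k=1}^{\infty }\left|t_{i_k}^{(\alpha)}\right|\cdot \left|r_{\alpha,i_1 \dots i_{k-1}}\right|
\leq t_{\max}^{*}\sum\limits_{k=1}^{\infty }r_{\max}^{k-1}
=\frac{t_{\max}^{*}}{1-r_{\max}},
\end{equation*}
so any $\xi>\frac{t_{\max}^{*}}{1-r_{\max}}$ satisfies \eqref{S30}.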
It will be our goal to prove that additionally the following properties also hold for some of the families under consideration:
\begin{description}
\item[P5A] $\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)$ is dense in $A$.
\item[P5B] $\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)$ is a $G_\delta$ dense subset of $A$.
\end{description}
We will prove below that Properties \textbf{P5A} and \textbf{P5B} are equivalent.
Our motivating example, where all of these properties hold is as follows.
\subsection{Motivating example}
Our most important example is the family of angle-$\alpha$ projection of the natural measure of the usual Sierpi\'nski carpet. We will see that the set of angles of singularity is a dense $G_\delta$ set which has Hausdorff dimension zero and packing dimension $1$. First we define the Sierpi\'nski carpet.
\begin{definition}
Let $\mathbf{t}_1, \dots ,\mathbf{t}_8\in\mathbb{R}^2$ be the $8$ elements of the set
$\left(\left\{0,1,2\right\}\times\left\{0,1,2\right\}\right)\setminus\left\{(1,1)\right\}$ in any particular order. The Sierpi\'nski carpet is the attractor of the IFS
\begin{equation}\label{S126}
\mathcal{S}:=\left\{ \varphi_i(x,y):= \frac{1}{3}(x,y)+\frac{1}{3}\mathbf{t}_i\right\}_{i=1}^{8}.
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=4cm]{sk_1}
\includegraphics[width=4cm]{sk_2}
\includegraphics[width=4cm]{sk_3}
\caption{The first three approximations of the Sierpi\'nski carpet}\label{S127a}
\end{figure}
\end{definition}
\begin{example}[Motivating example]\label{S1}
Let $\mathcal{S}$ be the IFS given in \eqref{S126}. Let $\mu:=\left(\frac{1}{8}, \dots ,\frac{1}{8}\right)^{\mathbb{N}}$ be the uniform distribution measure on the symbolic space $\Sigma:=\left\{1, \dots ,8\right\}^{\mathbb{N}}$. Further we write $\Pi$ for the natural projection from $\Sigma$ to the attractor $\Lambda$. Let $\nu:=\Pi_*\mu$.
Let $\ell _\alpha\subset \mathbb{R}^2$ be the line having angle $\alpha$ with the positive half of the $x$-axis (see Figure \ref{S127}).
Let $\mathrm{proj}_\alpha$ be the
angle-$\alpha$ projection from $\mathbb{R}^2$ to the line $\ell _\alpha$. For each $\alpha$, identifying $\ell_\alpha$ with the $x$-axis, $\mathrm{proj}_\alpha$
defines a one parameter family of self-similar IFS
on the $x$-axis:
$$\mathcal{S}_\alpha:=\left\{\varphi_{i}^{(\alpha)}\right\}_{i=1}^{8},$$
where $\alpha\in A:= (0, \pi)$ and $\varphi_{i}^{(\alpha)}(x) = r_{\alpha,i}x + t_{i}^{(\alpha)}$ with $r_{\alpha,i}\equiv 1/3$ and $t_{i}^{(\alpha)} = \frac{1}{3}\,\mathbf{t}_i\cdot (\cos(\alpha), \sin(\alpha))$. For an $\mathbf{i}\in\Sigma$ we define the natural projection
$\Pi_\alpha(\mathbf{i})$ as in \eqref{S31}.
Clearly, $\Pi_\alpha:=\mathrm{proj}_\alpha\circ\Pi$.
The natural invariant measure for $\mathcal{S}_\alpha$ is $\nu_\alpha:=(\Pi_\alpha)_*\mu$. Obviously, $\nu_\alpha=(\mathrm{proj}_\alpha)_*\nu$.
\begin{figure}[H]
\centering
\includegraphics[width=12cm]{proj_meas}
\caption{The projected system}\label{S127}
\end{figure}
\end{example}
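The following minimal numerical sketch (ours, not part of the original argument) samples the projected measure $\nu_\alpha$ of Example \ref{S1} by random iteration (the ``chaos game''); it assumes Python with \texttt{numpy}, and the variable names, the sample size and the histogram step are our own illustrative choices.
\begin{verbatim}
import numpy as np

def sample_projected_carpet_measure(alpha, n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # the 8 digits of the Sierpinski carpet: {0,1,2}^2 without the centre (1,1)
    digits = np.array([(a, b) for a in range(3) for b in range(3)
                       if (a, b) != (1, 1)], dtype=float)
    # translations of the projected IFS: t_i^(alpha) = (1/3) * t_i . (cos a, sin a)
    t_alpha = (digits @ np.array([np.cos(alpha), np.sin(alpha)])) / 3.0
    x = 0.0
    xs = np.empty(n_samples)
    for n in range(n_samples):
        i = rng.integers(0, 8)       # choose a map uniformly (weights 1/8)
        x = x / 3.0 + t_alpha[i]     # apply phi_i^(alpha)
        xs[n] = x
    return xs                        # approximately distributed as nu_alpha

if __name__ == "__main__":
    pts = sample_projected_carpet_measure(alpha=np.pi / 5)
    hist, _ = np.histogram(pts, bins=200, density=True)
    print("maximum of the empirical density:", hist.max())
\end{verbatim}
Such a simulation only visualises $\nu_\alpha$; it cannot, of course, distinguish singular from absolutely continuous measures.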
The fact that Property \textbf{P5A} holds for the special case in the example was proved in \cite[p.216]{manning2013dimension}. It follows from the proof of B\'ar\'any and Rams
\cite[Theorem 1.2]{barany2014dimension} that Property \textbf{P5A} also holds for
the projected family of the natural measure of most self-similar carpets having dimension greater than one.
\begin{remark}[The cardinality of parameters of exact overlaps]\label{S116}
It is obvious that in the case of the angle-$\alpha$ projections of a general self-similar carpet, exact overlap can happen only for countably many parameters. However, this is not true in general. To see this, we follow the ideas in the paper of Cs. S\'andor \cite{sandor2004family} and construct
the one parameter family of self-similar IFS $\left\{S_{i}^{(u)}\right\}_{i=1}^{3}$, $u\in U$, where $S_{i}^{(u)}:=\lambda_{i}^{(u)}(x+1)$ and
$(\lambda_{1}^{(u)},\lambda_{2}^{(u)},\lambda_{3}^{(u)})=\left(\frac{u}{1+\varepsilon},
u,u+\varepsilon
\right)$, further $U:=\left[\frac{1}{3}+\frac{\varepsilon}{3},
\frac{1}{3}+\eta-\varepsilon\right]$ for sufficiently small $\eta>0$
and $0<\varepsilon<\frac{3}{4}\eta$. Then for all $u\in U$ we have:
\begin{description}
\item[(a)] there is an exact overlap, namely: $S_{132}^{(u)}\equiv S_{213}^{(u)}$,
\item[(b)] the similarity dimension of the attractor is greater than $1$,
\item[(c)] the Hausdorff dimension of the attractor is smaller than $1$.
\end{description}
\end{remark}
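For the reader's convenience we verify claim \textbf{(a)} of the remark above (the computation is ours): writing $S^{(u)}_{i_1\dots i_n}:=S^{(u)}_{i_1}\circ\cdots\circ S^{(u)}_{i_n}$ and $\lambda_i:=\lambda^{(u)}_i$, a direct calculation gives
\begin{equation*}
S_{132}^{(u)}(x)=\lambda_1\lambda_3\lambda_2(x+1)+\lambda_1\lambda_3+\lambda_1,
\qquad
S_{213}^{(u)}(x)=\lambda_2\lambda_1\lambda_3(x+1)+\lambda_2\lambda_1+\lambda_2,
\end{equation*}
so $S_{132}^{(u)}\equiv S_{213}^{(u)}$ if and only if $\lambda_1(1+\lambda_3)=\lambda_2(1+\lambda_1)$, which holds here since
$\frac{u}{1+\varepsilon}\,(1+u+\varepsilon)=u\,\Big(1+\frac{u}{1+\varepsilon}\Big)$.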
\section{Theorems we use from the literature}
For the convenience of the reader,
here we collect those theorems we refer to in this note. We always use the notation of Section \ref{S72}. The theorems below are stated in the original papers in greater generality; we confine ourselves to the generality that matters for us.
\subsection{Hochman Theorems}\label{S86}
\begin{theorem}\cite[Theorems 1.7 and 1.8]{Hochman}\label{S78}
Given the one-parameter family $\left\{\mathcal{F}_\alpha\right\}_{\alpha\in A}$ in the form as in \eqref{S2}.
For $\mathbf{i},\mathbf{j}\in\Sigma^n:=\left\{1, \dots ,m\right\}^n$ we define
\begin{equation}\label{S92}
\Delta_{\mathbf{i},\mathbf{j}}(\alpha):=
\varphi_{\mathbf{i}}^{\alpha}(0)-\varphi_{\mathbf{j}}^{\alpha}(0)
\mbox{ and }\Delta_n(\alpha):=\min\limits_{\mathbf{i},\mathbf{j}\in\Sigma^n,\ \mathbf{i}\ne\mathbf{j}}
\left|\Delta_{\mathbf{i},\mathbf{j}}(\alpha)\right|.
\end{equation}
Moreover, we define the exceptional set of parameters $E\subset A$
\begin{equation}\label{S76}
E:=
\bigcap\limits_{\varepsilon>0}
\bigcup\limits_{N=1}^{\infty }
\bigcap\limits_{n>N}
\Delta_{n}^{-1}\left(-\varepsilon^n,\varepsilon^n\right).
\end{equation}
Then for every $\alpha\in E^c$ and for every probability vector $\mathbf{w}$
the Hausdorff dimension of the corresponding self-similar measure $\nu_{\alpha,\mathbf{w}}$ is
\begin{equation}\label{S73}
\dim_{\rm H} (\nu_{\alpha,\mathbf{w}})=\min\left\{1,\dim_{\rm S}(\nu_{\alpha,\mathbf{w}})\right\}.
\end{equation}
\end{theorem}
The following Condition will also be important:
\begin{definition}\label{S74}
We say that for an $\alpha\in A$, $\mathcal{F}_\alpha$ satisfies
\textbf{Condition H} if
\begin{equation}\label{S75}
\exists \rho=\rho(\alpha)>0, \ \exists n_k=n_k(\alpha)\uparrow\infty,\quad
\Delta_{n_k}(\alpha)>\rho^{n_k}.
\end{equation}
\end{definition}
Observe that $\alpha\in E^c$ if and only if $\mathcal{F}_\alpha$ satisfies Condition H.
\begin{definition}\label{S80}
We say that the \textbf{Non-Degeneracy Condition} holds if
\begin{equation}\label{S81}
\forall \mathbf{i},\mathbf{j}\in \Sigma,\ \mathbf{i}\ne\mathbf{j},\
\exists \alpha\in A\mbox{ s.t. }
\Pi_\alpha(\mathbf{i})\ne\Pi_\alpha(\mathbf{j}).
\end{equation}
\end{definition}
\begin{theorem}\cite[Theorems 1.7 and 1.8]{Hochman}\label{S77}
Assume that the Non-Degeneracy Condition holds and the following functions are real analytic:
\begin{equation}\label{S79}
\alpha\mapsto r_{\alpha,i},\ i=1, \dots ,m\mbox{ and }
\alpha\mapsto t_{i}^{(\alpha)}.
\end{equation}
Then
\begin{equation}\label{S82}
\dim_{\rm H} E=\dim_{\rm P}E=0.
\end{equation}
\end{theorem}
\subsection{Shmerkin-Solomyak Theorem}
\begin{theorem}\cite[Theorem A]{shmerkin2014absolute}\label{S85}
We assume that the conditions of Theorem \ref{S77} hold.
Here we confine ourselves to homogeneous self-similar IFS on the line of the form
\begin{equation}\label{S83}
\mathcal{F}_\alpha:=\left\{
\varphi_{i}^{\alpha}(x):=r_{\alpha} \cdot x+t_{i}^{(\alpha)}
\right\}_{i=1}^{m},\quad \alpha\in A.
\end{equation}
Then there exists an exceptional set $E\subset A$ with $\dim_{\rm H} E=0$
such that for any $\alpha\in E^c$ and for any probability vector $\mathbf{w}=(w_1, \dots ,w_m)$ with $\dim_{\rm S} (\nu_{\alpha,\mathbf{w}})>1$ we have
$$
\nu_{\alpha,\mathbf{w}}\ll\mathcal{L}\mathrm{eb}\mbox{ with }
L^q\mbox{ density, for some }q>1.
$$
\end{theorem}
\subsection{An extension of B\'ar\'any-Rams Theorem}
L\' {i}dia Torma realized in her Master's Thesis \cite{Lidia} that the proof of
B\'ar\'any and Rams \cite[Theorem 1.2]{barany2014dimension}, related to the projections of general self-similar carpets, works in a much more general setup, without any essential change.
\begin{theorem}[Extended version of B\'ar\'any-Rams Theorem]\label{S32}
Given an $a\in\mathbb{R}\setminus\left\{0\right\}$. Let $\mathcal{T}=\left\{n \cdot a\right\}_{n\in \mathbb{Z}}$ be the corresponding lattice on $\mathbb{R}$.
Moreover, given the self-similar IFS on the line of the form:
\begin{equation}\label{S33}
\mathcal{S}:=\left\{S_i(x):=\frac{1}{L} \cdot x+t_i\right\}_{i=1}^{m},
\end{equation}
where $L\in\mathbb{N}$, $L \geq 2$ and $t_i\in \mathcal{T}$ for all $i\in\left\{1, \dots ,m\right\}$.
We are also given a probability vector $\mathbf{w}=(w_1, \dots ,w_m)$ with rational weights $w_i=p_i/q_i$, $p_i,q_i\in \mathbb{N}\setminus\{0\}$ satisfying
\begin{equation}\label{S87}
L\nmid Q:=\mathrm{lcm}\left\{q_1, \dots ,q_m\right\},\quad s:=\dim_{\rm S}\nu =\frac{-\sum\limits_{i=1}^{m}w_i\log w_i}{\log L}>1,
\end{equation}
where $\nu$ is the self-similar measure corresponding to the weights $\mathbf{w}$. That is
$
\nu=\sum\limits_{i=1}^{m} w_i \cdot \nu\circ S_{i}^{-1}
$.
Then we have
\begin{equation}\label{S35}
\dim_{\rm H} \nu<1.
\end{equation}
\end{theorem}
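For orientation, here is a concrete instance of the hypotheses (a sanity check of ours, not a new statement): take $L=3$, $m=8$, $w_i\equiv\frac{1}{8}$ and $t_i\in\frac{1}{q}\mathbb{Z}$ for some $q\in\mathbb{N}\setminus\{0\}$ (so that $a=\frac{1}{q}$). Then $q_1=\cdots=q_8=8$, hence $Q=8$ and $3\nmid 8$, and $\dim_{\rm S}\nu=\frac{\log 8}{\log 3}\approx 1.893>1$, so the theorem yields $\dim_{\rm H}\nu<1$; in particular $\nu$ is not absolutely continuous and, being self-similar (self-similar measures are of pure type), it is in fact singular with respect to $\mathcal{L}\mathrm{eb}$.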
\section{$\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)$ is a $G_\delta$ set}
As we have already mentioned the following result appeared as \cite[Proposition 8.1]{peres2000sixty} in the special case when the family of self-similar measures
is the Bernoulli convolution measures. We extend the original proof of \cite[Proposition 8.1]{peres2000sixty} to the following much more general situation.
\begin{theorem}\label{S128}
Let $R\subset \mathbb{R}^d$ be a non-empty bounded open set. Let $U$ be a metric space (the parameter domain). Let $\lambda$ be a finite Radon measure with $\mathrm{spt}(\lambda)\subset R$ (the reference measure). For every $\alpha\in U$ we are given a probability Radon measure $\nu_\alpha$ such that $\mathrm{spt}(\nu_\alpha)\subset R$.
Let
\begin{equation}\label{R99}
\mathcal{C}_R:=\left\{f:R\to[0,1]: f \mbox{ is continuous }\right\}.
\end{equation}
For every $f\in\mathcal{C}_R$ we define $\Phi_f:U\to[0,1]$ by
\begin{equation}\label{R98}
\Phi_f(\alpha):=\int_R f(x)d\nu_\alpha(x).
\end{equation}
Finally, we define
\begin{equation}\label{R97}
\mathfrak{Sing}_\lambda\left(\left\{\nu_\alpha\right\}_{\alpha\in U}\right):=
\left\{\alpha\in U: \nu_\alpha\perp \lambda\right\}.
\end{equation}
If $\alpha\mapsto \Phi_f(\alpha)$ is lower semi-continuous for every $f\in\mathcal{C}_R$, then $\mathfrak{Sing}_\lambda\left(\left\{\nu_\alpha\right\}_{\alpha\in U}\right)$ is a $G_\delta$ set.
\end{theorem}
\begin{proof}
Recall that $\nu_\alpha$ is a probability measure for all $\alpha$. Note that
without loss of generality we may assume that $\lambda$ is also a probability measure on $R$.
For every $\varepsilon>0$ we define
$$
\mathcal{A}_\varepsilon:=
\left\{f\in\mathcal{C}_R:\ \int f(x)d\lambda(x)<\varepsilon
\right\}.
$$
We follow the proof of \cite[Proposition 8.1]{peres2000sixty} and a suggestion of the anonymous referee. First we fix an arbitrary sequence $\varepsilon_n\downarrow 0$
and then define
$$
S_\bot:=
\bigcap\limits_{n=1}^{\infty }
\bigcup\limits_{{f\in\mathcal{A}_{\varepsilon_n}}}
\left\{\alpha\in U: \Phi_f(\alpha)>1-\varepsilon_n\right\}.
$$
Since we assumed that $\alpha\mapsto\Phi_f(\alpha)$ is lower semi-continuous, each set
$\left\{\alpha\in U: \Phi_f(\alpha)>1-\varepsilon_n\right\}$ is open, and so is the union of these sets over $f\in\mathcal{A}_{\varepsilon_n}$. Hence
$S_\bot$ is a $G_\delta$ set.
Hence it is enough to prove that
\begin{equation}\label{S120}
\mathfrak{Sing}_\lambda\left(\left\{\nu_\alpha\right\}_{\alpha\in U}\right)
=S_\bot.
\end{equation}
First we prove that $ \mathfrak{Sing}_\lambda\left(\left\{\nu_\alpha\right\}_{\alpha\in U}\right)\subseteq S_\bot.$
Let $\beta\in \mathfrak{Sing}_\lambda\left(\left\{\nu_\alpha\right\}_{\alpha\in U}\right)$. Fix an arbitrary $\varepsilon>0$.
Then by definition we can find a Borel set $T\subset R$ such that
\begin{equation}\label{S121}
\nu_\beta(T)=1,\qquad \lambda(T)=0.
\end{equation}
Recall that both $\lambda$ and $\nu_\beta$ are Radon probability measures. So we can choose a compact $C_\varepsilon\subset T$ such that
\begin{equation}\label{S122}
\nu_\beta(C_\varepsilon)>1-\varepsilon,\
\lambda(C_\varepsilon)=0.
\end{equation}
Using that $\lambda$ is a Radon measure, we can choose an open set $V_\varepsilon\subset R$ such that $C_\varepsilon\subset V_\varepsilon$ and
$\lambda(V_\varepsilon)<\varepsilon$.
We can choose an $f_\varepsilon\in\mathcal{C}_R$
such that $\mathrm{spt}(f_\varepsilon)\subset V_\varepsilon$ and $f_\varepsilon|_{C_\varepsilon}\equiv 1$ (see \cite[p. 39]{rudin1986real}).
Then $\int f_\varepsilon d\lambda(x) \leq \lambda(V_\varepsilon)<\varepsilon$ (that is $f_\varepsilon\in \mathcal{A}_\varepsilon$) and
$\int f_\varepsilon(x)d\nu_\beta(x) \geq \nu_\beta(C_\varepsilon)>1-\varepsilon$. Since $\varepsilon>0$ was arbitrary we obtain that $\beta\in S_\bot$.
Now we prove that $ S_\bot\subseteq\mathfrak{Sing}_\lambda\left(\left\{\nu_\alpha\right\}_{\alpha\in U}\right).$
Let $\beta\in S_\bot$. Then for every $n$ there exists an $f_n\in\mathcal{C}_R$ such that
\begin{equation}\label{S123}
\int f_{n}(x)d\nu_\beta(x)>1-\varepsilon_n \mbox{ and }
\int f_n d\lambda(x)<\varepsilon_n.
\end{equation}
Let $C_\beta:=\mathrm{spt}(\nu_\beta)$. Clearly, $C_\beta$ is compact and $C_\beta\subset R$. We define
$$
g_n:=f_n\mathds{1}_{C_\beta}, \mbox{ and }
g:=\mathds{1}_{C_\beta}.
$$
Clearly, $0 \leq g_n(x) \leq g(x)$ for all $x\in C_\beta$ and
$$
\int g(x) d\nu_\beta(x)=1,\
\int g_{n}(x)d\nu_\beta(x)>1-\varepsilon_n \mbox{ and }
\int g_n d\lambda(x)<\varepsilon_n.
$$
Hence,
$$
g_n\stackrel{L^1(\nu_\beta)}{\longrightarrow}g.
$$
Thus, we can select a subsequence $g_{n_k}$ such that $g_{n_k}(x)\to g(x)$ for $\nu_\beta$-almost all $x\in C_\beta$. Let
$$
D_\beta:=
\left\{x\in C_\beta:
g_{n_k}(x)\to g(x)
\right\}.
$$
Then on the one hand we have
\begin{equation}\label{S124}
\nu_\beta( D_\beta)=1.
\end{equation}
On the other hand using the Lebesgue Dominated Convergence Theorem:
\begin{multline}\label{S125}
\lambda(D_\beta)=\int_{D_\beta} g(x)d\lambda(x)=
\int_{D_\beta} \lim\limits_{k\to\infty} g_{n_k}(x) d\lambda(x)
\\
=\lim\limits_{k\to\infty} \int_{D_\beta} g_{n_k}(x)d\lambda(x) \leq \lim\limits_{k\to\infty} \varepsilon_{n_k}=0.
\end{multline}
Putting together \eqref{S124} and \eqref{S125} we obtain that
$\beta\in\mathfrak{Sing}_\lambda\left(\left\{\nu_\alpha\right\}_{\alpha\in U}\right)$.
\end{proof}
\begin{theorem}\label{S11}
We consider one-parameter families of measures $\nu_\alpha$ on $\mathbb{R}^d$ for some $d \geq 1$, which are constructed as follows: The
parameter space $U$ is a non-empty compact metric space.
We are given a continuous mapping
\begin{equation}\label{R92}
\Pi:U\times\Omega\to R\subset\mathbb{R}^d,
\end{equation}
where $R$ is an open ball in $\mathbb{R}^d$ and
$\Omega$ is a compact metric space
(in our applications $U$ is a compact interval, $\Omega=\Sigma$ and $\Pi_\alpha$ is the natural projection corresponding to the parameter $\alpha$).
Moreover let $\mu$ be a probability
Radon measure on $\Omega$.
(In our applications $\mu$ is Bernoulli measure on $\Sigma$.)
For every $\alpha\in U$ we write $\Pi_\alpha(\omega):=\Pi(\alpha,\omega)$, $\omega\in\Omega$, and define
\begin{equation}\label{S61}
\nu_\alpha:=(\Pi_\alpha)_*\mu.
\end{equation}
Clearly, $\nu_\alpha$ is a Radon measure whose support is contained in $R$.
Finally let $\lambda$ be a Radon (reference) measure whose support is also contained in $R$. (In our applications $\lambda$ is the Lebesgue measure $\mathcal{L}\mathrm{eb}_d$ restricted to $R$.)
Then the set of parameters of singularity
\begin{equation}\label{S63}
\mathfrak{Sing}_\lambda(\Pi_\alpha,\mu):=
\left\{\alpha\in U:
\nu_\alpha\bot \lambda
\right\}
\end{equation}
is a $G_\delta$ set.
\end{theorem}
\begin{proof}
This theorem immediately follows from Theorem \ref{S128}\ if we prove that for every $f\in\mathcal{C}_R$ the function $\Phi_f(\cdot)$ is continuous.
To see this we
set $\psi:U\times \Omega\to \mathbb{R}$,
$$
\psi(\alpha,\omega)=f(\Pi_\alpha(\omega)),\mbox{ then }
\Phi_f(\alpha):=\int f(x)d\nu_\alpha(x)=\int \psi(\alpha,\omega)d\mu(\omega),
$$
where the last equality follows from the change of variables formula.
By compactness, $\psi$ is uniformly continuous. Hence for every $\varepsilon>0$ we can choose $\delta>0$ such that whenever $\mathrm{dist}\left((\alpha_1,\omega_1),(\alpha_2,\omega_2)\right)<\delta$ then
$|\psi\left(\alpha_1,\omega_1\right)-\psi\left(\alpha_2,\omega_2\right)|<\varepsilon$, where $\mathrm{dist}((\alpha_1,\omega_1),(\alpha_2,\omega_2)):=
\max\left\{\mathrm{dist}_U(\alpha_1,\alpha_2),
\mathrm{dist}_\Omega(\omega_1,\omega_2)
\right\}$. In particular, $|\psi(\alpha_1,\omega)-\psi(\alpha_2,\omega)|<\varepsilon$ for every $\omega\in\Omega$ whenever $\mathrm{dist}_U(\alpha_1,\alpha_2)<\delta$. Using that $\mu$ is a probability measure, we obtain that $|\Phi_f(\alpha_1)-\Phi_f(\alpha_2)|<\varepsilon$ whenever $\mathrm{dist}_U(\alpha_1,\alpha_2)<\delta$.
\end{proof}
\begin{corollary}\label{S64}Using the notation of Section \ref{S99} and assuming our Principal Assumption (defined on page \pageref{S114}) we obtain that the set of parameters of singularity $\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)$
is a $G_\delta$ set.
\end{corollary}
The proof is obvious since our Principal Assumptions imply that the conditions of Theorem \ref{S11} hold.
To derive another corollary we need the following fact. It is well known, but we could not find it in the literature; therefore we include its proof here.
\begin{fact}
Let $H\subset \mathbb{R}^d$ be a $G_\delta$ set which is not a nowhere dense set. Then $\dim_{\rm P} H = d$.
\end{fact}
\begin{proof}
Since $H$ is not a nowhere dense set, there exists a ball $B$ such that $B\subset \overline{H}$. Thus $V:=B\cap H$ is a dense $G_\delta$ set in $B$, and hence, by the Baire category theorem, $V$ is not a set of first category. So, if
$V\subset \cup_{i=1}^{\infty }E_i$ then there exists an $i$ such that $E_i$ is not nowhere dense in $B$. That is, there exists a ball $B'\subset B$ such that $B'\subset \overline{E}_i$.
Then $\overline{\dim_{\rm B}}\, E_i=d$. Hence by \eqref{S200} we have $\dim_{\rm P}H \geq \dim_{\rm P}V=d$. On the other hand, $\dim_{\rm P}H \leq d$ always holds.
\end{proof}
Applying this to $\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)$ we obtain the following corollary.
\begin{corollary}\label{S115}
Under the conditions of Theorem \ref{S11}, and assuming in addition that the parameter space $U$ is a subset of $\mathbb{R}^d$, the following holds for the set of parameters of singularity $\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)$:
\begin{description}
\item[(i)] Either $\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)$ is nowhere dense or
\item[(ii)] $\dim_{\rm P} \left(\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)\right)=d$.
\end{description}
\end{corollary}
Henna Koivusalo drew the authors' attention to the following immediate corollary of
Theorem \ref{S11}:
\begin{remark}\label{R91}
Let $\mu$ be a compactly supported Borel measure on $\mathbb{R}^2$ with $\dim_{\rm H} \mu>1$. Let $\nu_\alpha:=(\mathrm{proj}_\alpha)_*\mu$. Then
Theorem \ref{S11} immediately implies that either the singularity set
$$
\mathfrak{Sing}_{\mathcal{L}\mathrm{eb}}\left(\left\{\nu_\alpha\right\}_{\alpha\in [0,\pi)}\right)=\left\{\alpha\in[0,\pi):\nu_\alpha\perp\mathcal{L}\mathrm{eb}_1 \right\}
$$ or its complement is big in the topological sense. More precisely,
\begin{description}
\item[(a)] Either $\mathfrak{Sing}_{\mathcal{L}\mathrm{eb}}\left(\left\{\nu_\alpha\right\}_{\alpha\in [0,\pi)}\right)$ is a residual subset of $[0,\pi)$ or
\item[(b)] $\left(\mathfrak{Sing}_{\mathcal{L}\mathrm{eb}}\left(\left\{\nu_\alpha\right\}_{\alpha\in [0,\pi)}\right)\right)^c$ contains an interval.
\end{description}
We remind the reader that a set is called residual if its complement is a set of first category; residual sets are considered ``big'' in the topological sense.
In contrast we recall that by Kaufman's Theorem (see e.g. \cite[Theorem 9.7]{mattila1999geometry})
we have
\begin{equation}\label{R90}
\nu_\alpha\ll \mathcal{L}\mathrm{eb}_1 \mbox{ for $\mathcal{L}\mathrm{eb}_1$
almost all } \alpha\in[0,\pi).
\end{equation}
\end{remark}
The following theorem shows that there are reasons other than exact overlaps for the singularity of self-similar measures having similarity dimension greater than one.
\begin{theorem}\label{S65}
Using the notation of our Example \ref{S1} (angle-$\alpha$ projections of the Sierpi\'nski carpet), we obtain that
\begin{equation}\label{S66}
\mathfrak{Sing}(\mathcal{S}_\alpha,\mu)=\left\{\alpha\in A:\nu_\alpha\bot\mathcal{L}\mathrm{eb}\right\} \mbox{ is a dense $G_\delta$ set}
\end{equation}
and
\begin{equation}\label{S67}
\dim_{\rm H} \left(\mathfrak{Cont}_Q(\mathcal{S}_\alpha,\mu)^c\right)=0.
\end{equation}
That is $(\mathcal{S}_\alpha,\mu)$ is antagonistic in the sense of Definition \ref{S58}.
\end{theorem}
\begin{proof}
The first part follows from Corollary \ref{S64} and from the fact that
property \textbf{P5A } holds for the projections of the Sierpi\'nski-carpet. This was proved in \cite{manning2013dimension}.
Now we turn to the proof of the second part of the Theorem. This assertion would immediately follow from Shmerkin and Solomyak \cite[Theorem A]{shmerkin2014absolute} if we could guarantee that the Non-Degeneracy Condition holds. Unfortunately, in this case it does not hold. Still, it is possible to obtain the same conclusion not from the assertion of \cite[Theorem A]{shmerkin2014absolute} but from its proof, combined with \cite[Lemma 5.4]{shmerkin2014absolute}, as was explained by P. Shmerkin to the authors \cite{test1}. For completeness we point out the only two steps of the original proof of \cite[Theorem A]{shmerkin2014absolute} where we have to make slight modifications.
Let $\mathcal{P}$ be the set of probability Borel measures on the line. We write
\begin{equation}\label{R96}
\mathcal{D}:=
\left\{
\mu\in\mathcal{P}:
|\widehat{\mu}(\xi)|=\Ordo_\mu\left(|\xi|^{-\sigma}\right)
\mbox{ for some } \sigma>0
\right\}.
\end{equation}
The elements of $\mathcal{D}$ are the probability measures on the line with power Fourier-decay.
Let $\left\{\varphi^{(\alpha)}_i\right\}_{i=1}^{8}$ be the IFS defined in Example \ref{S1}.
Now we write the projected self-similar natural measure $\nu_\alpha$ of the Sierpi\'nski carpet in the infinite convolution form. That is we consider $\nu_\alpha$ as the distribution of the following infinite random sum:
\[
\nu_\alpha\sim\sum_{n=1}^\infty (1/3)^{n-1}A_n,
\]
where the $A_n$ are independent, identically distributed random variables with $\mathbb{P}\left(A_n=\varphi^{(\alpha)}_i(0)\right)=1/8$ for $i=1,\dots,8$. For an integer $k\geq 2$ we decompose the random sum on the right hand side as
\[
\nu_\alpha\sim\sum_{\substack{n=1\\ k\nmid n}}^\infty (1/3)^{n-1}A_n + \sum_{\substack{n=1\\ k\mid n}}^\infty (1/3)^{n-1}A_n.
\]
Writing $\eta'_{\alpha,k}$ and $\eta''_{\alpha,k}$ for the distribution of the first and the second random sum, respectively, we get $\nu_\alpha=\eta'_{\alpha,k}*\eta''_{\alpha,k}$. Our goal is to show that with appropriately chosen $k$ we can apply \cite[Corollary 5.5]{shmerkin2014absolute} to $\eta'_{\alpha,k}$ and $\eta''_{\alpha,k}$ which would conclude the proof. To this end it is enough to show that on the one hand
\begin{equation}\label{R95}
\dim_{\rm H} \eta'_{\alpha,k}=1\quad \mbox{ for every $k$ large enough }
\end{equation}
and on the other hand we have
\begin{equation}\label{R94}
\eta''_{\alpha,k}\in\mathcal{D}, \quad \forall k \geq 2.
\end{equation}
This is the first place where we depart from the proof of
\cite[Theorem A]{shmerkin2014absolute}. According to \cite[Theorem 5.3]{shmerkin2015projections}, if $\dim_{\rm S} \eta'_{\alpha,k}>1$ (which holds if $k$ is big enough), then there exists a countable set $E'_k$ such that $\dim_{\rm H} \eta'_{\alpha,k}=1$ for all $\alpha \notin E'_k$. Note that the original proof at this point relies on the non-degeneracy condition, which we do not use here.
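For orientation we include a short computation (ours) behind the phrase ``$k$ big enough'': grouping the digits into consecutive blocks of length $k$, the measure $\eta'_{\alpha,k}$ is a homogeneous self-similar measure with contraction ratio $3^{-k}$ and $8^{k-1}$ branches carrying the uniform weights $8^{-(k-1)}$, hence
$\dim_{\rm S} \eta'_{\alpha,k}=\frac{(k-1)\log 8}{k\log 3}$, which is greater than $1$ as soon as $k\geq 3$.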
To get the Fourier decay of $\eta''_{\alpha,k}$ we follow the proof of \cite[Theorem A]{shmerkin2014absolute}.
In our special case, we may choose the function $f$ in the middle of
page 5147 in \cite{shmerkin2014absolute} as
\[
f(\alpha)=\frac{\mathrm{proj}_\alpha\left(\frac{2}{3},0\right)
-\mathrm{proj}_\alpha\left(\frac{1}{3},\frac{2}{3}\right)}{\mathrm{proj}_\alpha\left(0,\frac{2}{3}\right)
-\mathrm{proj}_\alpha\left(\frac{1}{3},\frac{2}{3}\right)}
=2\tan(\alpha)-1.
\]
Clearly $f$ is non-constant and $f^{-1}$ preserves the Hausdorff dimension. Hence by \cite[Lemma 6.2 and Proposition 3.1]{shmerkin2014absolute} there is a set $E''_k$ of Hausdorff dimension $0$ such that $\eta''_{\alpha,k}$ has power Fourier-decay for all $\alpha \notin E''_k$. Altogether, setting the $0$-dimensional exceptional set of parameters $E=\bigcup_{k=2}^{\infty}\left(E'_k\cup E''_k\right)$, by \cite[Corollary 5.5]{shmerkin2014absolute} we have that $\nu_\alpha$ is absolutely continuous with an $L^q$ density for some $q>1$ for all $\alpha \notin E$, exactly as in the proof of \cite[Theorem A]{shmerkin2014absolute} with no further modifications.
\end{proof}
In Theorem \ref{S65} we have proved that the family of the angle-$\alpha$
projections of the Sierpi\'nski carpet is antagonistic in the sense of Definition \ref{S58}. In the rest of this note we prove that there are many antagonistic families.
\section{An equi-homogeneous family for which the Non-Degeneracy Condition holds}
First of all we remark that the Non-Degeneracy Condition does not hold for all families.
For example let
\begin{equation}\label{S89a}
\mathcal{F}_\alpha:=\left\{\frac{1}{2} \cdot x+t_{i}^{(\alpha)}\right\}_{i=1}^{m},\quad m \geq 2.
\end{equation}
Then for every $\alpha$, $\Pi_\alpha(\mathbf{i})=\Pi_\alpha(\mathbf{j})$ for
$\mathbf{i}=(1,2, \dots ,2, \dots )$ and $\mathbf{j}=(2,1, \dots ,1, \dots )$.
So, the non-degeneracy condition does not hold.
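Indeed (a one-line check, included for the reader's convenience): by \eqref{S31},
\begin{equation*}
\Pi_\alpha(1,2,2,\dots)=t_{1}^{(\alpha)}+t_{2}^{(\alpha)}\left(\tfrac{1}{2}+\tfrac{1}{4}+\cdots\right)
=t_{1}^{(\alpha)}+t_{2}^{(\alpha)}
=\Pi_\alpha(2,1,1,\dots)
\end{equation*}
for every $\alpha\in A$.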
However, if the contraction ratio is the same $\lambda\in\left(0,\frac{1}{2}\right)$ for all maps of all IFS in the family (the family is equi-homogeneous) and the translations are independent real-analytic functions then the Non-Degeneracy Condition holds:
\begin{proposition}\label{S88}
Given
\begin{equation}\label{S89}
\mathcal{F}_\alpha:=\left\{\lambda \cdot x+t_{i}^{(\alpha)}\right\}_{i=1}^{m},\quad m \geq 2, \quad \alpha\in A,
\end{equation}
where
\begin{description}
\item[(a)] $\lambda\in \left(0, \frac{1}{2}\right)$ and
\item[(b)] For $\ell =1, \dots ,m$, the functions $\alpha\mapsto t_{\ell }^{(\alpha)}=\sum\limits_{k=0}^{\infty }a_{\ell ,k} \cdot \alpha^k$
are independent
real-analytic functions:
\begin{equation}\label{S90}
\forall \alpha\in A,\ \sum\limits_{i=1}^{m}\gamma_i \cdot t_i^{(\alpha)}\equiv 0 \mbox{ iff }
\gamma_1= \cdots =\gamma_m=0.
\end{equation}
\end{description}
Then $\left\{\mathcal{F}_\alpha\right\}_{\alpha\in A}$ satisfies the Non-Degeneracy Condition.
\end{proposition}
\begin{proof}
Fix two distinct $\mathbf{i},\mathbf{j}\in \Sigma$. For every $\ell =1, \dots ,m$,
define $q_\ell :=q_\ell (\mathbf{i},\mathbf{j})$ by
\begin{equation}\label{S91}
q_{\ell }:=\sum\limits_{\left\{k:i_k=\ell \right\}} \lambda^{k-1}
-
\sum\limits_{\left\{k:j_k=\ell \right\}} \lambda^{k-1}.
\end{equation}
Then
\begin{equation}\label{S93}
\Pi_\alpha(\mathbf{i})-\Pi_\alpha(\mathbf{j})
=
\sum\limits_{k=0}^{\infty }
\alpha^k \cdot b_{k},
\end{equation}
where
\begin{equation}\label{S96}
b_k:=\sum\limits_{\ell =1}^{m}
a_{\ell ,k}\cdot q_\ell
\end{equation}
for all $k\in\mathbb{N}$ (here $\mathbb{N}$ includes $0$ and we write $\mathbb{N}^+:=\mathbb{N}\setminus \left\{0\right\}$). Observe that, writing $\mathbf{b}:=(b_0,b_1, \dots )$ and, for $\ell=1,\dots,m$,
$\mathbf{a}_\ell :=(a_{\ell ,0},a_{\ell ,1},a_{\ell ,2}, \dots, a_{\ell ,k}, \dots )$, equation \eqref{S96} can be written as
\begin{equation}\label{S97}
\sum\limits_{\ell =1}^{m}q_\ell \cdot \mathbf{a}_\ell =\mathbf{b}.
\end{equation}
Assume that
\begin{equation}\label{S94}
\forall \alpha\in A,\quad \Pi_\alpha(\mathbf{i})-\Pi_\alpha(\mathbf{j})\equiv 0.
\end{equation}
To complete the proof it is enough to verify that
$
\mathbf{i}=\mathbf{j}.
$
Using \eqref{S93}, we obtain from \eqref{S94} that
$b_k=0$ for all $k\in\mathbb{N}$. Note that \eqref{S90} states that the vectors $\left\{\mathbf{a}_\ell \right\}_{\ell =1}^{m}$ are independent.
So, from $\mathbf{b}=\mathbf{0}$ and from \eqref{S97}
we get that $q_1= \cdots =q_m=0$. Since $\lambda\in\left(0, \frac{1}{2}\right)$, this implies that $\mathbf{i}=\mathbf{j}$: if $\mathbf{i}$ and $\mathbf{j}$ first differed at the index $k_0$ and $\ell :=i_{k_0}$, then $q_\ell \geq \lambda^{k_0-1}-\sum_{k>k_0}\lambda^{k-1}=\lambda^{k_0-1}\,\frac{1-2\lambda}{1-\lambda}>0$, a contradiction.
\end{proof}
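A minimal concrete instance of the independence condition \eqref{S90} (for illustration only): take $t_{\ell }^{(\alpha)}:=\alpha^{\ell -1}$ for $\ell =1,\dots,m$, so that $a_{\ell ,k}=1$ if $k=\ell -1$ and $a_{\ell ,k}=0$ otherwise. Then $\sum_{\ell =1}^{m}\gamma_\ell \, t_{\ell }^{(\alpha)}=\sum_{\ell =1}^{m}\gamma_\ell \,\alpha^{\ell -1}$ is a polynomial, which vanishes identically on the interval $A$ only if $\gamma_1=\cdots=\gamma_m=0$.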
\section{Antagonistic families of Self-similar IFS}
Here we prove the following assertion:
antagonistic one-parameter families are dense in a natural collection of one-parameter families of equi-homogeneous self-similar IFS having contraction ratio $1/L$ ($L\in \mathbb{N}^+$), equipped with self-similar measures of similarity dimension greater than one. To state this precisely, we need some definitions:
\begin{definition}\label{S98}\ First we consider collections of equi-homogeneous self-similar IFS having at least $4$ functions.
\begin{description}
\item[(i)] Let $\pmb{\mathfrak{F}_{L}}$ be the collection of all pairs $(\mathcal{F}_\alpha,\mu)$ satisfying the conditions below:
\begin{itemize}
\item $\left\{\mathcal{F}_\alpha\right\}_{\alpha\in A}$ is of the form:\begin{equation}\label{S40}
\mathcal{F}_\alpha:=\left\{\varphi_{i}^{(\alpha)}(x):=\frac{1}{L} \cdot x +t_{i}^{(\alpha)}\right\}_{i=1}^{m},\quad \alpha\in A,
\end{equation}
where $m \geq 4$, $A\subset \mathbb{R}$ is a proper interval ($\overline{A}$ is compact) and
\begin{equation}\label{S100}
L\in \mathbb{N},\qquad 3 \leq L\leq m-1.
\end{equation}
Moreover, the functions $\alpha\mapsto t_{\ell }^{(\alpha)}$ are continuous on $\overline{A}$ for all $\ell =1, \dots ,m$.
\item Let $\mu$ be an infinite product measure $\mu:=(w_1, \dots ,w_m)^\mathbb{N}$ on $\Sigma:=\left\{1, \dots ,m\right\}^\mathbb{N}$ satisfying:
\begin{equation}\label{S101}
s:=\frac{-\sum\limits_{i=1}^{m}w_i\log w_i}{\log L}>1,
\end{equation}
\end{itemize}
\item[(ii)] Now we define a rational coefficient sub-collection $\pmb{\mathfrak{F}_{L,\mathrm{rac}}}\subset \pmb{\mathfrak{F}_{L}}$
satisfying a non-resonance like condition \eqref{S34} below:
\begin{itemize}
\item
$\alpha\mapsto t_{i}^{(\alpha)}$ are polynomials of rational coefficients. We assume that $\left\{t_{i}^{(\alpha)}\right\}_{i=1}^{m}$ are independent, that is \eqref{S90} holds. Moreover,
\item The weights $w_i$ are rational: $w_i=p_i/q_i$, with $p_i,q_i\in \mathbb{N}\setminus \left\{0\right\}$, satisfying:
\begin{equation}\label{S34}
L\nmid \mathrm{lcm}\left\{q_1, \dots ,q_m\right\},
\end{equation}
where lcm is the least common multiple.
Let
$\nu_\alpha:=(\Pi_\alpha)_*\mu.$
\end{itemize}
\end{description}
\end{definition}
\begin{proposition}\label{S102}\
\begin{description}
\item[(a)] All elements $\{\nu_\alpha\}$ of $\pmb{\mathfrak{F}_{L,\mathrm{rac}}}$ are antagonistic.
\item[(b)] $\pmb{\mathfrak{F}_{L,\mathrm{rac}}}$ is dense in $ \pmb{\mathfrak{F}_{L}}$ in the $\sup$ norm.
\end{description}
\end{proposition}
\begin{proof}
\textbf{(a)} It follows from Proposition \ref{S88} that
we can apply the Shmerkin-Solomyak Theorem (Theorem \ref{S85}). This yields that $\mathfrak{Cont}_Q$ (defined in \eqref{S56}) satisfies
$\dim_{\rm H} (\mathfrak{Cont}_Q(\mathcal{F}_\alpha,\mu))^c=0$. On the other hand,
for every rational parameter $\alpha$, $(\mathcal{F}_\alpha,\mu)$ satisfies the conditions of Theorem \ref{S32} (indeed, for rational $\alpha$ the translations $t_{i}^{(\alpha)}$ are rational numbers, so they all belong to a common lattice $\frac{1}{N}\mathbb{Z}$). So, for every $\alpha\in\mathbb{Q}\cap A$ we have $\dim_{\rm H} \nu_\alpha<1$, and hence $\nu_\alpha$ is singular with respect to $\mathcal{L}\mathrm{eb}$. Using this and Corollary \ref{S64} we get that
$\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)$ is a dense $G_\delta$ set.
So, $\left\{\nu_\alpha\right\}_{\alpha\in A}$ is antagonistic.
\textbf{(b)} Let $(\widetilde{\mathcal{F}}_\alpha,\widetilde{\mu})\in \pmb{\mathfrak{F}_{L}}$, with
$\widetilde{\mathcal{F}}_\alpha:=\left\{\varphi_{i}^{(\alpha)}(x):=\frac{1}{L} \cdot x +\widetilde{t}_{i}^{(\alpha)}\right\}_{i=1}^{m}$ and
$\widetilde{\mu}=(\widetilde{w}_1, \dots ,\widetilde{w}_m)^\mathbb{N}$. Fix an $\varepsilon>0$.
We can find independent polynomials $\alpha\mapsto t_{i}^{(\alpha)}$, $i=1, \dots ,m$,
with rational coefficients such that $|\widetilde{t}_{i}^{(\alpha)}-t_{i}^{(\alpha)}|<\varepsilon$ for all $\alpha\in \overline{A}$ and $i=1, \dots ,m$ (for example, by the Weierstrass approximation theorem combined with a small perturbation of the coefficients to guarantee independence).
Moreover,
we can find a product measure $\mu=(w_1, \dots ,w_m)^\mathbb{N}$ such that for $\mathbf{w}=(w_1, \dots ,w_m)$ we have $\|\mathbf{w}-\widetilde{\mathbf{w}}\|<\varepsilon$
and $\mathbf{w}$
has rational coefficients $w_i=p_i/q_i$ satisfying \eqref{S34}.
\end{proof}
\begin{corollary}\label{S103}
Let $(\mathcal{F}_\alpha,\mu)\in\pmb{\mathfrak{F}_{L,\mathrm{rac}}}$.
Then
\begin{equation}\label{S104}
\dim_{\rm P} \left( \mathfrak{Sing}(\mathcal{F}_\alpha,\mu)\right)=1.
\end{equation}
\end{corollary}
\begin{proof}
By Proposition \ref{S102} (a), $\mathfrak{Sing}(\mathcal{F}_\alpha,\mu)$ is a dense $G_\delta$ set; in particular it is not nowhere dense. Then the assertion follows from Corollary \ref{S115}.
\end{proof}
\begin{acknowledgement}
The authors would like to thank Bal\'azs B\'ar\'any, Henna Koivusalo, Micha\l\ Rams, Pablo Shmerkin and Boris Solomyak for their very useful comments and suggestions.
Moreover, we are grateful to the anonymous referee for a suggestion which made it possible to soften the conditions of Theorem \ref{S128}.
\end{acknowledgement}
\bibliographystyle{plain}
|
1,108,101,564,256 | arxiv | \section*{Executive summary}
Magnetic fields are involved in every astrophysical process on every scale: from planetary and stellar
interiors to neutron stars, stellar wind bubbles and supernova remnants; from the interstellar medium in
galactic disks, nuclei, spiral arms and halos to the intracluster and intergalactic media. They are
involved in essentially every particle acceleration process and are thus fundamental to non-thermal
physics in the Universe. Key questions include the origin of magnetic fields, their evolution over
cosmic time, the amplification and decay processes that modify their strength, and their impact on
other processes such as star formation and galaxy evolution. Astrophysical plasmas provide a unique
laboratory for testing magnetic dynamo theory. The study of magnetic fields requires observations that
span the wavelength range from radio through infrared, optical, UV, X-ray, and gamma-ray.
Canada has an extremely strong record of research in cosmic magnetism, and has a significant
leadership role in several ongoing and upcoming global programs. This white paper will review the
science questions to be addressed in the study of cosmic magnetic fields and will describe the
observational and theoretical opportunities and challenges afforded by the telescopes and modelling
capabilities of today and tomorrow.
\newpage
\section{Introduction}
Magnetic fields are ubiquitous in space, playing what must be crucial, but often poorly understood, roles in many astrophysical processes. Magnetic fields span many orders of magnitude in both physical scale and field strength, ranging from as high as $10^{15}$~Gauss in magnetars to as low as $10^{-9}$~Gauss in intergalactic regions. Since they do not radiate and cannot be observed directly, their study is challenging. Canadians have a long history of leading studies to answer many important questions about magnetism in the cosmos. In the last decade, more than 30\% of all refereed astronomy papers with Canadian contributions refer to ``magnetism'' or ``magnetic fields'' (ADS).
Canadian contributions include the first detections of magnetic fields in white dwarfs \citep{1970ApJ...161L..77K}, pre-main sequence Herbig Ae/Be stars \citep{2005A&A...442L..31W}, and evolved post-AGB stars \citep{2015MNRAS.446.1988S}, the first identification of the magnetic field reversal between the local arm and the Sagittarius arm \citep{1979Natur.279..115S}, the first detection of magnetic fields in high-redshift objects \citep{1982ApJ...263..518K}, the first detection of a magnetic field within a cluster of galaxies \citep{1986A&A...156..386V}, the largest ever catalogue of extragalactic rotation measures \citep{2009ApJ...702.1230T}, the best map of Galactic Faraday rotation \citep{2012A&A...542A..93O}, the best model of the large-scale structure of the magnetic field in the disk of the Milky Way \citep{2011ApJ...728...97V}, the best detailed maps of polarized dust by Planck \citep[e.g.,][]{collaboration2018planck}, the first broadband all-sky survey of radio polarization \citep{2019AJ....158...44W}, and fundamental new processing algorithms such as the polarisation gradient \citep{2011Natur.478..214G,2017MNRAS.466.2272H}, polarisation stacking \citep{2015ASPC..495..241S} and real-time ionospheric Faraday correction. Canadians are also world leaders in the development of techniques and technology related to radio magnetism studies, such as telescope dishes, receivers and correlators.
\section{\label{sec:obs}Observational techniques}
Radiation of relativistic particles in the presence of magnetic fields produces intrinsically polarized synchrotron radiation, making radio observations, and in particular radio polarization, especially useful for probing magnetic fields. Polarized starlight at optical wavelengths and polarized emission from dust grains at mm wavelengths provide useful magnetic field tracers in other wavebands. High-energy observations provide additional, complementary information about cosmic ray populations that can inform magnetic field studies. Some of these topics are covered in other white papers including: E025 (star formation), E076 (dust), and E081 (interstellar medium, ISM).
Synchrotron emission and Faraday rotation occur along virtually all sight lines through the Galaxy. The parameter that characterizes the medium to a distance $d$ is the Faraday depth
$\phi(d) = 0.812 \int_d^{\rm{telescope}} n_e(l)\, B_{||}(l)\, {\rm{d}}l$
where $n_e$[cm$^{-3}$] is the electron density, $B_{||}$[$\mu$G] is the
line-of-sight component of the magnetic field, and ${\rm{d}}l$[pc] is the distance along the line of sight.
In the simple case of a polarized extragalactic source seen through the Galactic disk, or a pulsar seen through part of the disk, the Faraday depth becomes the Rotation Measure (RM). In this case the
polarization angle, ${\theta}$, is a linear function of ${\lambda}^2$ and RM is relatively simple to measure; extensive surveys of point-source RMs have been used to great effect to map the large-scale structure of the Galactic magnetic field. When emission and rotation are mixed, the polarization angle is no longer a linear function of ${\lambda}^2$ and interpretation of polarization data on the extended emission becomes more complicated. Different Faraday depths can occur at different distances along the line of sight, and the true situation is portrayed by the Faraday depth spectrum, produced by applying Rotation Measure Synthesis. The resolution in Faraday depth depends mostly on the longest wavelength of the data. The maximum width of the Faraday depth structures that can be successfully mapped depends mostly on the shortest wavelength. In other words, combining low frequencies with wide bandwidths is the key to successful investigation of the magneto-ionic medium.
This is key when choosing an observing band or instrument for Faraday rotation studies.
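The following schematic numerical sketch illustrates the idea of Rotation Measure Synthesis; it is our own toy illustration (not taken from any survey pipeline), it assumes Python with \texttt{numpy}, and the observing band, the source Faraday depth and the grid of trial depths are arbitrary illustrative choices. A Faraday-thin source at $\phi=+50$~rad~m$^{-2}$ is ``observed'' across an assumed 800--1800~MHz band and recovered as the peak of the Faraday depth spectrum.
\begin{verbatim}
import numpy as np

c = 299_792_458.0                         # speed of light [m/s]
freqs = np.linspace(0.8e9, 1.8e9, 512)    # assumed observing band [Hz]
lam2 = (c / freqs) ** 2                   # lambda^2 [m^2]
lam2_0 = lam2.mean()                      # reference lambda^2

phi_true = 50.0                           # Faraday depth of the toy source [rad/m^2]
P = np.exp(2j * phi_true * lam2)          # complex polarization Q + iU (unit amplitude)

phi_axis = np.arange(-500.0, 500.0, 1.0)  # trial Faraday depths [rad/m^2]
# F(phi) ~ (1/N) * sum_j P(lam2_j) * exp(-2i * phi * (lam2_j - lam2_0))
F = np.array([np.mean(P * np.exp(-2j * phi * (lam2 - lam2_0)))
              for phi in phi_axis])

phi_peak = phi_axis[np.argmax(np.abs(F))]
print("recovered Faraday depth:", phi_peak, "rad/m^2  (input:", phi_true, ")")
\end{verbatim}
The width of the recovered peak reflects the $\lambda^2$ coverage of the assumed band, which is the point made above: longer wavelengths sharpen the Faraday depth resolution, while shorter wavelengths set the broadest Faraday depth structures that remain detectable.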
\section{\label{sec:science}Key Science Questions}
With significant involvement and leadership in many current and upcoming international magnetism related projects and telescopes, Canadians are in a position to make significant contributions to many fundamental questions in magnetism science. We present a summary of some of the questions for which significant Canadian contributions for advancement can be expected over the next decade.
\textbf{How do magnetic fields influence stellar evolution?}
Magnetic fields are a natural consequence of the dynamic plasmas that comprise a star. They directly and indirectly impact stellar lives through modification of convective and circulatory interior flows, redistribution of angular momentum and nucleosynthetic chemicals, channeling and modification of mass loss and accretion, and shedding of rotational angular momentum through magnetic braking. Ultimately, these effects lead to important modification of stellar evolutionary pathways \citep[e.g.,][]{2018CoSka..48..124K} and stellar feedback effects \citep[e.g.,][]{2005ApJ...626..350H}, such as mechanical energy deposition in the ISM and supernova explosions, and hence the properties of stellar remnants and the structure and chemistry of the local Galactic environment.
\textbf{What is the magnetic field in ISM drivers such as supernova remnants and molecular clouds?} Magnetic fields pervade the interstellar medium and are believed to shape the process of star formation, yet probing them is challenging. Zeeman splitting can be used to measure the total magnetic field, and Faraday rotation measurements of background sources can be used to find the direction and magnitude of the component of the magnetic field along the line of sight to star forming regions \citep{2018A&A...614A.100T}.
The blast wave from supernova explosions expands to large scales, sweeping up and compressing the ambient magnetic field making supernova remnants (SNR) excellent probes for local structures in the Galactic mean field \citep{1998ApJ...493..781G,2016A&A...587A.148W}. However, broadband fits, X-ray observations and 3D simulations of SNR including efficient particle acceleration show evidence for additional magnetic field amplification at SN shocks
\citep[e.g.,][]{2007Natur.449..576U, 2014ApJ...789...49F}. The exact 3D structure and strength of SNR magnetic fields, particularly in the early phases of their evolution, remains unclear.
\textbf{What is the small scale structure of the Galactic magnetic field?} Turbulent magnetic fields are thought to be a significant component of the Galaxy with a magnitude equal to or greater than the mean field component \citep{2015ASSL..407..483H}. Recent studies have shown correlations between neutral hydrogen filaments and the magnetic field alignment \citep{2018ApJ...857L..10C} as observed with diffuse dust emission and starlight polarization. There is much still unknown about the turbulent properties of the Galactic magnetic field including the scales, ratio of random to regular components, the nature of turbulence (i.e., isotropic vs anisotropic random components), and whether the field has helicity \citep{2011A&A...530A..89O}.
\textbf{What is the large scale 3D magnetic field structure of the Milky Way Galaxy?} Observations of nearby spiral galaxies have revealed a regular large scale pattern that follows the spiral arms. However these observations are 2D projections of a 3D field, of which the exact topology remains a mystery \citep{Collaboration:2016eh}. Understanding the origins and evolution of galactic magnetic fields in general require this understanding. Our Milky Way provides us with a unique perspective to probe the large scale field from the inside. Studies to date have revealed probable field reversals in the arms but provide an incomplete picture \citep{2007ApJ...663..258B,2011ApJ...728...97V,2017A&A...603A..15O}. With new and better data, the next decade should see significant advancement in the development of a trustworthy model of the Galactic magnetic field \citep{2018JCAP...08..049B}, which will in turn allow us to properly subtract it to reveal the extra-galactic sky in more detail.
\textbf{What is the 3D magnetic field structure of nearby Galaxies?}
An important outstanding scientific issue regarding galactic halos and their magnetic fields relates to {\it lagging halos}, i.e. the fact that the rotation of the halo lags behind the rotation of the disk. Just why this occurs and how the lag may 'connect' to the IGM are not yet known although magnetic fields could be the missing link \citep[e.g.][]{2016MNRAS.458.4210H}. Another is, 'where do magnetic fields close?' One would expect that the field lines close at some point, but current observations are not able to detect magnetic fields beyond a few kpc from the disk. Others are: to what extent are magnetic fields affecting local or global dynamics? and ultimately, how are the fields generated?
Galactic magnetic fields should not be treated as minor perturbations, but rather as key ingredients in a rich dynamically active environment. Probing deeper into their structure and physical state will provide answers to some of the key questions facing galaxy formation and evolution today. Future instruments should improve on sensitivity by at least a factor of 10, while ensuring that a variety of spatial scales can be detected.
\textbf{What role do magnetic fields play in cosmic ray acceleration and propagation?} Galactic magnetic fields are thought to be responsible for the acceleration of electrons, positrons and ions to cosmic rays. Within the Milky Way, this acceleration is thought to occur primarily in supernova remnants \citep[e.g.,][]{2017arXiv170608275M}. Cosmic rays propagate through the Galaxy mostly along field lines, but also by diffusion and advection \citep{2019arXiv190703789S}. Additionally, ultra-high energy cosmic rays (UHECRs) are extragalactic cosmic rays with energies exceeding $10^{18}$~eV. Acceleration to these extreme energies could be due to transients such as massive supernovae or compact object mergers, active-galactic nuclei, or galaxy clusters, though the exact mechanism is unclear. Understanding the creation and propagation of cosmic rays will be advanced through accurate knowledge and diagnosis of the magnetic fields in supernova remnants, and also through understanding of Galactic, extra-galactic, and intergalactic magnetic field strength and structure.
\textbf{How do magnetic fields affect AGN feedback?} Active Galactic Nuclei, powered by accretion on a supermassive black hole, eject relativistic jets that interact with the interstellar and intergalactic medium on sub-pc to Mpc scales. Magnetic fields in AGN are observed from pc scales to Mpc scales, e.g. from the non-thermal filaments in the Galactic centre, VLBI polarimetry of radio galaxy cores, to polarization of jets, and radio lobes. They affect the accretion disk, jet collimation and particle acceleration, and contribute significantly to the pressure inside radio lobes. Expanding radio lobes inject large amounts of energy in the intergalactic medium \citep[e.g.,][]{2007ARA&A..45..117M} that affects its dynamics and indirectly accretion of gas on galaxies. Polarized radio emission and Faraday rotation reveal the magnetic field structure in AGN and their interaction with the intergalactic gas.
\textbf{How have magnetic fields evolved over cosmic time?} Much remains unknown about the evolution of magnetic fields and the state of fields in the early Universe. Did the fields grow steadily over time? Or is there a phase of rapid field amplification? Recent studies show little to no change in the rotation measures between high and low redshift galaxies \citep{2007MNRAS.375.1059B,2018MNRAS.475.1736V}. However, sample sizes of high redshift polarized sources are low. Currently there are only approximately 20 radio galaxies with polarization properties and redshifts $z \ge 3.5$ that are known. New surveys such as the Polarisation Sky Survey of the Universe’s Magnetism (POSSUM) and the Very Large Array Sky Survey (VLASS) will probe deeper than ever to sample the largest number of high-redshift galaxies with radio polarization observations to date. This will in turn allow us to investigate the evolution of their Faraday rotation measures, which probes their magnetic fields and the electron densities of their local environments, over cosmic time.
\textbf{What is the role of cosmic magnetic fields in large-scale structure formation and evolution?} We know that on the largest scales there is structure to the Universe: voids, filaments, and clusters (e.g. the cosmic web). Theory tells us that magnetic fields pervade all of this intergalactic space, but as yet there have been no measurements of intergalactic magnetic fields. The origin of cosmic fields and their evolution and role in structure formation is unknown. From simulations the strength of these fields can vary from nG to $\mu$G levels \citep[e.g.][]{2008Sci...320..909R,2014MNRAS.439.2662V,2016MNRAS.459...70V}, depending on the location of the field (e.g. voids or clusters) but also on the assumed strength of any primordial magnetic field and on the interactions and injections from galaxies and galaxy evolution. Recent statistical studies \citep{2017MNRAS.467.4914V,2017MNRAS.468.4246B} obtained upper limits on the field strength from new radio data, and the first direct detection of an intergalactic cosmic filament by the Low-Frequency Array (LOFAR) telescope was made earlier this year \citep{2019Sci...364..981G}. Upcoming surveys with existing and new instruments will provide unparalleled opportunities for advancement in this field.
\section{Current Canadian Leadership in the International Magnetism Community}
Canadians currently have leadership roles in nearly all of the major ongoing international radio polarization surveys, addressing a wide cross-section of significant science questions. Canadian-led projects include next-generation Faraday rotation measure grid experiments through POSSUM and VLASS, studies of the diffuse Galactic polarized emission in the Global Magneto-Ionic Medium Survey (GMIMS), and detailed studies of Galactic halo magnetic fields with CHANG-ES: Continuum Halos in Nearby Galaxies - an EVLA Survey. A recent \$10-million grant from the Canada Foundation for Innovation (CFI) and provincial partners for the Canadian Initiative for Radio Astronomy Data Analysis (CIRADA) will enable Canadians to develop the infrastructure and expertise needed to convert the enormous raw data streams from next-generation telescopes into enhanced data products that astronomers can directly use to make new discoveries. A significant component of this project is devoted to cosmic-magnetism-related science with POSSUM and VLASS. Canadians also participate in projects on other instruments such as LOFAR, the Murchison Widefield Array (MWA), the Australian Square Kilometer Array Pathfinder (ASKAP), and the MeerKAT telescope, which probe deeper than ever before at a range of frequencies.
At optical wavelengths, Canadian leadership in the Magnetism in Massive Stars (MiMeS), Binarity and Magnetic Interactions in Stars (BinaMIcS), and related projects continues to exploit the international suite of precision optical polarimeters on 4-metre to 8-metre class telescopes to drive forward our understanding of the evolutionary impact of magnetic fields in non-degenerate stars and white dwarfs.
High-energy studies provide insights into particle acceleration operating in SNRs, pulsar wind nebulae and active galactic nuclei and also help address the bigger questions of cosmic magnetism and the origin of high-energy cosmic rays, in synergy with studies at lower energies. These questions in turn are driving future telescopes in the radio (Square Kilometer Array, SKA), submillimetre (next generation JCMT camera), X-ray (ATHENA and new X-ray polarimeters such as eXTP and IXPE) and gamma-ray bands (Cherenkov Telescope Array).
\subsection{CIRADA: Canadian Initiative for Radio Astronomy Data Analysis}
Through CIRADA, Canadians will develop expertise necessary for management of large scale radio surveys. Current cutting-edge telescopes, like ASKAP, produce high resolution and multi-frequency data with volumes that are now at a point where it is often impossible for an individual astronomer to use a desktop computer to process and analyze these data on their own. Instead, supercomputers are required to make images and transform these into scientifically useful catalogues and advanced image products such as Faraday cubes. CIRADA is developing the pipeline for the creation of all of the advanced polarization catalogues and data products for VLASS and POSSUM. Together these will make a polarized radio map of the entire sky in unprecedented detail.
\subsubsection{POSSUM: Polarisation Sky Survey of the Universe’s Magnetism}
POSSUM is one of ten major Survey Science Projects to be undertaken on ASKAP \citep{2010AAS...21547013G}. ASKAP is a radio telescope array located in Western Australia, which uses 36 antennas equipped with advanced receivers known as phased array feeds. It is an ideal survey instrument due to its wide field of view and fast mapping speed. POSSUM will team up with other major science projects, the Evolutionary Map of the Universe (EMU) and Widefield ASKAP L-Band Legacy All-Sky Blind Survey (WALLABY), with commensal observations to maximize scientific output and to obtain wide frequency coverage in the range $\sim$800-1800~MHz. It will survey the entire sky (south of $\delta=+30^\circ$), to an RMS sensitivity of 10~$\mu$Jy/beam at 10$''$ resolution.
The main science result will be a catalogue of Faraday rotation measures (RMs) for around a million extragalactic radio sources at an unprecedented density of approximately 25-30 RMs per deg$^2$. We will also produce advanced products like catalogues of Faraday components, descriptions of Faraday complexity, and Faraday cubes to provide additional spatially resolved information. Such a dense RM-grid will allow us to probe magnetic features in the Galaxy, to better determine the 3D geometry of the Milky Way's magnetic field, to test dynamo theory and other models that describe the generation of large-scale magnetic fields, and to understand how magnetic fields have evolved as a function of redshift in galaxies, clusters and intergalactic medium (see~Sec.~\ref{sec:science} for more details).
A number of test fields have been observed and are already showing very exciting and promising results, including the densest RM-grid ever produced (see Fig.~\ref{fig:rmgrid}), which allows us to tease out details of the Galactic magnetic field geometry on $\sim$pc scales. A full-scale pilot covering $\sim300$~deg$^2$ is currently underway with the full survey expected over the next five years. Ten Canadian astronomers are members of the POSSUM team, including four in leadership roles.
\subsubsection{VLASS: Very Large Array Sky Survey}
VLASS is a radio sky survey offering a unique combination of high angular resolution ($\approx2.5''$), sensitivity (a $1\sigma$ goal of 70~$\mu$Jy/beam in the coadded data), full linear Stokes
polarimetry, time domain coverage, and wide bandwidth (2–4 GHz) \citep{2019arXiv190701981L}. Observations will take place over three epochs to allow the discovery of variable and transient radio sources, for a total of 5500 hours to be observed on the Karl G. Jansky Very Large Array (VLA). Observations began in September 2017 and will continue until 2024 with the first epoch of observing now complete. VLASS covers the whole sky visible to the VLA (declination $> -40^\circ$), a total of 33 885 deg$^2$.
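As a simple consistency check of the survey footprints and source counts quoted above, the area of the spherical cap above a limiting declination can be evaluated directly. The short Python sketch below is purely illustrative and is not part of any survey pipeline; the RM surface density is simply taken from the 25--30 sources per deg$^2$ quoted for POSSUM.
\begin{verbatim}
import numpy as np

def cap_area_deg2(delta_min_deg):
    """Sky area (deg^2) with declination above delta_min_deg."""
    sr = 2.0 * np.pi * (1.0 - np.sin(np.radians(delta_min_deg)))
    return sr * (180.0 / np.pi) ** 2

full_sky = 4.0 * np.pi * (180.0 / np.pi) ** 2     # ~41,253 deg^2
vlass  = cap_area_deg2(-40.0)                     # VLASS: dec > -40 deg
possum = full_sky - cap_area_deg2(+30.0)          # POSSUM: dec < +30 deg

print(f"VLASS footprint  ~ {vlass:,.0f} deg^2")   # ~33,885 deg^2
print(f"POSSUM footprint ~ {possum:,.0f} deg^2")  # ~30,940 deg^2
# 25-30 RMs per deg^2 over the POSSUM area indeed approaches one million:
print(f"POSSUM RM count  ~ {25*possum:,.0f} to {30*possum:,.0f}")
\end{verbatim}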
Faraday Tomography of The Magnetic Sky is one of four key science themes addressed by the survey. It is estimated that $200,000$ sources with Faraday rotation measures will be found (almost 6 times that of the current largest known catalogue of RMs). The sky coverage will overlap with a section of the POSSUM survey, which, with the different frequency coverage and improved spatial resolution of VLASS, will improve the ${{\lambda}^2}$ coverage and thus provide better RM measurements (see Sec.~\ref{sec:obs}).
\begin{figure*}[!ht]
\centering
\begin{minipage}{8.5cm}
\includegraphics[width=7.3cm]{SB8280_RMgrid.png}
\begin{scriptsize}\caption{ \label{fig:rmgrid}Preliminary RM-grid for a test POSSUM observation
(Vanderwoude/POSSUM collaboration, in prep.) with 1040 polarized sources and a density of $\approx$~29 sources/deg$^2$. The previous best RM-grid in this region \citep{2019MNRAS.485.1293S} has only 12 sources.}
\end{scriptsize}
\end{minipage}
\hfill
\begin{minipage}{8.5cm}
\includegraphics[width=7.5cm]{finalHSTdiskMedianhalo.jpeg}
\caption{ \label{fig:median}The median synchrotron halo of an edge-on spiral galaxy (in blue-grey) in L-band made from stacking 30 of the CHANG-ES galaxies superimposed on an optical Hubble Space Telescope image of NGC~5775. From \cite{2019AJ....158...21I}.
}
\end{minipage}
\end{figure*}
\subsection{GMIMS: The Global Magneto-Ionic Medium Survey}
GMIMS is an international project with 30 members in 9 countries, 11 of whom are in Canada. The goal of
GMIMS is to improve our understanding of the Galactic magnetic field by mapping polarized radio emission over the entire sky, in the Northern and Southern hemispheres, using large single-antenna radio telescopes around the world.
Existing all-sky surveys cover only narrow, widely spaced frequency bands. These data are inadequate for the characterization of Faraday depth, the main determinant of the appearance of the polarized radio sky at long wavelengths. GMIMS plans for complete coverage of the frequency range 300 to 1800 MHz with thousands of frequency channels. This is a crucial frequency range in terms of depolarization and Faraday depth coverage: at lower frequencies Faraday rotation so dominates that only quite local phenomena can be probed. At higher frequencies Faraday rotation is so weak that huge bandwidths are required. Rotation Measure Synthesis and other RM estimation techniques (e.g. QU fitting) are being used to analyze the data. GMIMS is the first project to apply Rotation Measure Synthesis to single-antenna data.
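For illustration only, a deliberately simplified sketch of Rotation Measure Synthesis applied to a single line of sight is given below. The channelization, the injected Faraday depth, and the noise level are invented for the example and do not correspond to actual GMIMS data; real reductions also apply channel weights and deconvolution (RM-CLEAN), which are omitted here.
\begin{verbatim}
import numpy as np

c = 2.998e8
freq = np.linspace(300e6, 1800e6, 2048)     # Hz, GMIMS-like band coverage
lam2 = (c / freq) ** 2                      # wavelength squared (m^2)
lam2_0 = lam2.mean()

# Mock observation: one Faraday-thin component at phi = +15 rad/m^2, plus noise
phi_true = 15.0
P = np.exp(2j * phi_true * lam2)
P = P + 0.05 * (np.random.randn(lam2.size) + 1j * np.random.randn(lam2.size))

# "Dirty" Faraday dispersion function on a grid of trial Faraday depths
phi = np.linspace(-200.0, 200.0, 4001)
F = np.array([np.mean(P * np.exp(-2j * p * (lam2 - lam2_0))) for p in phi])

print("recovered Faraday depth:", phi[np.argmax(np.abs(F))], "rad/m^2")
\end{verbatim}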
For technical reasons the band has been divided into three segments, 300--800\,MHz, 800--1300\,MHz, and 1300--1800\,MHz, the Low, Mid, and High bands. The sky naturally divides into North and South, so a total of six all-sky surveys are required to complete the dataset. Observations are complete for three surveys, one in the North and two in the South, and data reduction is mostly complete.
\setlist{nosep}
\begin{itemize}
\item{ High band north: The DRAO Galt Telescope (26-m) 1270-1750 MHz, all RA, declination range $-30^\circ < \delta < +87^\circ$. Data reduction is virtually complete and four science papers have been published.}
\item{ Low band south: Parkes 64-m Telescope, 300-480 and 660-870 MHz, all RA, $-90^\circ < \delta < +20^\circ$. The 300--480~MHz data are published \citep{2019AJ....158...44W} and available at the CADC. The upper part of the band ($> 480$~MHz) was heavily affected by radio-frequency interference (RFI). Two science papers on this survey have been published.}
\item{High band south: Parkes, 1300-1800 MHz, all RA, $-90^\circ < \delta < 0^\circ$.
Data reduction is 90\% complete.}
\end{itemize}
To date, five science papers have been published from GMIMS data \citep{2010ApJ...724L..48W, 2015ApJ...811...40S, 2017MNRAS.467.4631H, 2019ApJ...871..106D, 2019MNRAS.487.4751T}. The GMIMS all-sky Faraday cubes are without precedent. It is evident for the first time that there is significant emission at non-zero Faraday depths. The published GMIMS papers explore fundamentally new analysis approaches to these data. The papers demonstrate the richness of the Faraday sky, and have provided clues to the structure of the magneto-ionic medium and the magnetic field configuration within the Galaxy. However, the real promise of GMIMS will be realized by combining the full range of frequencies to enable simultaneous resolution of small ($\sim 1 \textrm{ rad m}^{-2}$) features and sensitivity to large ($\sim 100 \textrm{ rad m}^{-2}$) features. Additional surveys are planned to achieve this goal.
An all-sky survey for Low-band North will be made with the 15-m DVA telescope at DRAO in 2020. This will subsequently be combined with CHIME data from 400--800\,MHz, and even later with CHORD data, to achieve sub-degree angular resolution. A proposal for Mid-band South data using the Parkes Telescope is being prepared, in collaboration with ASKAP projects POSSUM and EMU. A new survey with the DRAO Galt Telescope covering 900--1700 MHz is planned.
\subsection{CHANG-ES: Continuum Halos in Nearby Galaxies - an EVLA Survey}
CHANG-ES has 45 members in 8 countries, 9 of whom are in Canada. With over 400 hours of VLA time in 3 different array configurations (plus 200 hours of GBT time), and all 4 Stokes parameters, the CHANG-ES project observed 35 edge-on galaxies at two frequencies (1.6 GHz = L-band and 6 GHz = C-band) in order to probe the faint gaseous halo regions of spiral galaxies. It was important to use more than one configuration since disk-halo structures are seen over many spatial scales. A summary of some selected results can be found in \cite{2019AJ....158...21I} and further information and downloadable images are at {\tt queensu.ca/changes}. Fig.~\ref{fig:median} reveals the extent and significance of gaseous halos as revealed by CHANG-ES. Since the emission is non-thermal, magnetic fields {\it must} extend out into the entire halo region shown. It is clear that, if halos are included, spiral galaxies would look nothing like the thin flat disks that are normally depicted in standard images.
Gaseous halos are important because they provide a crucial interface between galaxy disks and the intergalactic medium (IGM). Just as the Sun shows an abundance of activity at its surface and transitions to a solar wind that permeates interplanetary space, so too do galaxies experience much activity in their halos and reveal winds that can exceed the escape speed, extending into the IGM \citep{2018A&A...611A..72K}. And just like the Sun, the key and arguably most crucial ingredient is the magnetic field, its strength and topology.
CHANG-ES has made the structure of magnetic fields a priority. Already, new results are emerging that have never before been seen. An example is reversing rotation measures in halos \citep{Mora2019}. A model for such reversals has been developed from dynamo theory in which magnetic spiral arms are not restricted to the disk but rise into the halo regions \citep{2019MNRAS.487.1498W}.
Galaxy halos have weak radio emission compared to disks and emission related to magnetic fields (Stokes Q and U) is weaker still. {\it Sensitivity} is therefore the primary requirement. CHANG-ES Q and U sensitivities range from about 4 to 10 $\mu$Jy/beam. A factor of 10 in sensitivity would go a long way in answering some of the above questions. From a technical standpoint, an easy seamless interface between single-dish (for zero-spacing flux) and interferometers would help in ensuring that all relevant spatial scales are integrated correctly into maps. Software to combine wide-band GBT and VLA data is only now being developed, and future instruments should build on these developments so that future users do not have to re-invent this wheel.
\section{\label{sec:future}Future Canadian Leadership in Magnetism Research}
Given the significant roles that Canadians have played, and continue to play, in advancing the understanding of cosmic magnetism, there is great potential to continue this leadership in the next decade and beyond. There are several upcoming projects where Canadian participation will allow us to continue in leading roles.
\subsection{Square Kilometre Array}
The Square Kilometer Array (SKA) is an international effort to build the world’s largest radio telescope, with eventually over a square kilometre of collecting area (see also WP E043). The telescope will be built in several parts, with the low-frequency array located in Australia and the mid-frequency array in South Africa. The SKA has identified five key science drivers intended to ``solve some of the biggest questions in the field of astronomy.'' One of these is \textit{The origin and evolution of cosmic magnetism} \citep[see][]{2005AAS...20713703G}. Several Canadians are members of this key SKA-Scientific Working Group.
The SKA is anticipated to be an excellent polarization instrument, with SKA-LOW being $\sim8$ times more sensitive than LOFAR and SKA-MID $\sim5$ times more sensitive than the VLA. SKA should increase the number of RM sources by a factor of $\sim200$ from the current best available catalogue \citep{2018arXiv181003619M}.
With CIRADA, the radio community is beginning to rethink the current data processing and visualization methods and come up with new and better ways to manage extremely large data sets. The SKA Organisation has adopted a model that relies on so-called SKA Regional Centres for advanced data processing and science extraction. CIRADA will help build the Canadian capacity needed to participate in projects like the SKA Regional Centres.
Thirteen countries are at the core of the SKA (Canada currently being one), and 100 organisations in $\sim$20 countries have been participating in the design and development of the SKA and are now engaged in the detailed design. With commissioning expected to start in the mid 2020s, there is still much work to be done in order to reach the full science of the project. Continued investment by Canada and Canadian astronomers is key to ensuring the scientific returns of this instrument, which for the field of magnetism will be unmatched by anything previous.
\subsection{CHORD: the Canadian HI Observatory and Radio transient Detector}
CHORD (see also white paper E029) will solidify Canada’s leadership in cosmic magnetism.
The long wavelengths and broad bandwidth
of CHORD will deliver exquisite resolution in Faraday depth, yielding unprecedented views of magnetic field structures. Previous work at low frequencies has either used single dishes, with all-sky coverage but poor angular detail, or aperture synthesis, which can only focus on tiny details. CHORD will bridge the gap, revealing the role of large-scale magnetic fields in small-scale phenomena.
CHORD will make repeated measurements (over days, weeks, or months), to enable searches for Faraday depth variability. Such variations may be inherent to the extragalactic sources, but could also be a new tool for the study of interstellar turbulence in a way that has never been done before.
With $\approx 500$ antennas packed into a $(200 \textrm{ m})^2$ area and $300-1800$~MHz frequency coverage, CHORD will complement GMIMS data (from large single antennas), and data obtained from CHIME, boosting angular resolution by factors of 3 to 5, promising breakthroughs in mapping magnetic field configuration in the Milky Way. We advocate for a future expansion of CHORD to include $\approx 80$ additional antennas spread around the DRAO site to maximum baselines of $1-2$~km and arranged to achieve good instantaneous $uv$ coverage. Such a telescope would enable an all-northern sky GMIMS survey at a resolution of a few arcminutes at the lowest frequency, enabling investigation of magnetized turbulence down to small scales.
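As a rough indication of what the 300--1800\,MHz coverage implies, the standard RM-synthesis figures of merit (the resolution in Faraday depth and the largest detectable Faraday-thick scale, following Brentjens \& de Bruyn 2005) can be estimated as in the sketch below; channel weighting and sensitivity are ignored, so the numbers are indicative only.
\begin{verbatim}
import numpy as np

c = 2.998e8
lam2_min = (c / 1800e6) ** 2     # m^2, at the highest frequency
lam2_max = (c / 300e6) ** 2      # m^2, at the lowest frequency

dphi      = 2.0 * np.sqrt(3.0) / (lam2_max - lam2_min)  # Faraday-depth resolution
max_scale = np.pi / lam2_min                            # largest Faraday-thick scale

print(f"resolution    ~ {dphi:.1f} rad/m^2")       # ~3.6 rad/m^2
print(f"largest scale ~ {max_scale:.0f} rad/m^2")  # ~110 rad/m^2
\end{verbatim}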
\subsection{DRAO Synthesis telescope upgrade}
The DRAO Synthesis telescope (ST) was originally built in the 1970s to observe atomic hydrogen (at 1420 MHz).
Later upgrades, including a correlator and spectrometer, formed the basis of the Canadian Galactic Plane Survey \citep[CGPS;][]{2003AJ....125.3145T}, which ultimately revolutionized magnetic field studies by simultaneously observing polarisation angles at multiple wavelengths. This allowed for the first unambiguous determination of rotation measures for extragalactic compact sources within the Galactic disk \citep{2003ApJ...592L..29B}.
It has been two decades since the last major upgrades to this pioneering facility. With the support of an NSERC Collaborative Research Grant, a new correlator was built in 2018 and is currently being tested for the ST using technology developed for CHIME (MSc thesis, P.\ Freeman). Additionally, a new multi-wavelength feed is being developed for the antennas (PhD thesis, X.\ Du). Future plans are detailed in the ST white paper (E080). The goal of these upgrades is to increase the bandwidth to cover 400-1800 MHz, making additional spectral lines observable, with broader continuum, and increased sensitivity. This will open up opportunities including RM synthesis with the interferometer, complementing the RM synthesis capabilities of the Galt Telescope.
\subsection{DRAO John A.~Galt Telescope Upgrade}
The DRAO John A.~Galt 26-m telescope is undergoing a complete upgrade of its signal path and control system. A MeerKAT $L$-band receiver has been purchased and fitted with ultra-low-noise cryogenically cooled amplifiers designed at the NRC. A full-Stokes spectral line and continuum backend has been assembled using GPUs and FPGA-based CHIME IceBoards. The telescope will now be capable of producing channels of 3 Hz bandwidth across 900-1800 MHz allowing for observations of Zeeman splitting in the 21-cm hydrogen emission line, the 18-cm OH transitions, and dozens of radio recombination lines from diffuse Galactic hydrogen, helium, and carbon. Zeeman detections of 21-cm emission are notoriously difficult because instrumental polarization conversion can contribute Zeeman-like features to circular polarization spectra; this is unfortunate since 21-cm emission can allow us to probe $B$ fields in a vast volume of the Galaxy. DRAO expertise in antenna modeling \citep{Du:Landecker:2016,Robishaw:Heiles:2018} paired with the very simple geometry and optics of the Galt 26-m telescope will allow for a thorough accounting of these instrumental contributions to any detected Galactic $B$ fields.
\begin{lrptextbox}[How does the proposed initiative result in fundamental or transformational advances in our understanding of the Universe?]
Magnetism contributes to every astrophysical process on every scale and its study offers considerable opportunity for transformational understanding. These include such fundamental questions as where and how did magnetic fields originate and what is their role in the evolution of the Universe. See Sec.~\ref{sec:science} for more details.
\end{lrptextbox}
\begin{lrptextbox}[What are the main scientific risks and how will they be mitigated?]
Obtaining RMs of point sources (i.e., RM-grids) has proven to be extremely useful, and poses less scientific risk than diffuse emission studies. Understanding Faraday depth in extended emission and its connection to physical structures holds enormous potential as a tremendous amount of information must be encoded in the data, yet its interpretation remains a very challenging task. This can be mitigated by prioritizing experiments whose goal is to measure point sources, and for both cases, ensuring broad bandwidth to sample a wide range of Faraday scales and avoid regimes of depolarization.
\end{lrptextbox}
\begin{lrptextbox}[Is there the expectation of and capacity for Canadian scientific, technical or strategic leadership?]
With an extremely strong record of leadership in this field, there is every expectation that Canada has the capacity to lead projects in all three areas. In terms of large-scale projects, the SKA offers the greatest potential to revolutionize the study of cosmic magnetism, as well as offering a significant opportunity for Canadians to play a strategic and scientific leadership role on an international stage. Smaller-scale projects serve a role in providing training opportunities for students and in maintaining Canada's leadership in engineering and technology development. See Sec.~\ref{sec:future} for further discussion of future opportunities.
\end{lrptextbox}
\begin{lrptextbox}[Is there support from, involvement from, and coordination within the relevant Canadian community and more broadly?]
The Canadian radio magnetism community is very collaborative and there are strong ties to the international community. There are also ties to and research synergies with the multi-wavelength community in Canada.
\end{lrptextbox}
\begin{lrptextbox}[Will this program position Canadian astronomy for future opportunities and returns in 2020-2030 or beyond 2030?]
With several new telescopes currently ramping up to full capacity (e.g., ASKAP, LOFAR, MWA, MeerKAT), new projects, like CIRADA, and the SKA on the horizon, the next decade (and beyond) looks very promising for cosmic magnetism science in Canada.
\end{lrptextbox}
\begin{lrptextbox}[In what ways is the cost-benefit ratio, including existing investments and future operating costs, favourable?]
Given the international leadership potential that Canadians have demonstrated in this research area, the benefit to Canada is expected to be high. See the SKA WP (E043) for discussion in that context. Other projects are relatively low cost compared to the opportunities for training and technology development.
\end{lrptextbox}
\begin{lrptextbox}[What are the main programmatic risks
and how will they be mitigated?]
The most significant programmatic risk to magnetism science is the availability of adequate computational facilities. Both computational power and data storage are concerns as the data volumes continue to increase. Expertise in the use of these facilities is also essential to assist astronomers, whose primary role should be to investigate the science. Investment in these areas is essential to ensure continued success.
Other concerns include the technical readiness of facilities, including but not limited to their ability to accurately calibrate the data, which is particularly challenging for polarization. This can be mitigated by developing expertise to assist in assessing data quality and help provide innovative solutions. The quality of the radio-frequency interference environment at each facility is an additional ongoing concern.
\end{lrptextbox}
\begin{lrptextbox}[Does the proposed initiative offer specific tangible benefits to Canadians, including but not limited to interdisciplinary research, industry opportunities, HQP training,
EDI,
outreach or education?]
Not only are there many opportunities for training young astronomers in this exciting research area, but there are also ample opportunities for students to gain other transferable skills. The development of new telescope technology offers opportunities for training in engineering, and the extremely large data sets provide the chance to learn ``big data'' strategies and gain familiarity with high-performance computing environments.
\end{lrptextbox}
\section{Introduction}
Over the past few decades, scattering polarization and its
modification in the presence of a magnetic field
have become fundamental diagnostics of many physical properties
of astrophysical plasmas \citep{review-1,review-2}. In particular,
spectrally resolved observations of the polarized radiation from the
solar disk near the limb, using high sensitivity ($\mathrm{S/N}\gtrsim 10^3$)
instrumentation, have produced an extremely rich
body of data (the so-called ``Second Solar Spectrum''
\citep{SS2-1,SS2-2}) of great diagnostic value \citep{diagnostic-1,diagnostic-2,diagnostic-3,diagnostic-4}.
However, the interpretation of these observations has often proven to be difficult, and continues to challenge our understanding of how polarized radiation is produced and transported in the solar atmosphere.
One notable example is the linear polarization of the D$_1$ resonance line of neutral sodium at 589.6\,nm, which has been the target of many observations \citep{D1obs0-1,D1obs0-2,D1obs1-1,D1obs1-2}. In the optically thin limit, this $J\,{=}\,1/2\leftrightarrow J'\,{=}\,1/2$ transition cannot produce broadband linear polarization, despite the polarizability of its hyperfine-structure (HFS) levels \citep{expectedD1-1,expectedD1-2,expectedD1-3}. This is because the spectral shape of its emissivity turns out to be anti-symmetric, and so it averages out to zero
when the transition is spectrally unresolved. However, observations by \cite{D1obs0-1} and \cite{D1obs0-2} had surprisingly shown the presence of a strong linear polarization signal in the line core, raising many questions about its origin, and even about the reliability of those observations \citep{D1obs1-1,D1obs1-2}. While the complexity
of the line-formation problem in the optically thick and magnetized
atmosphere of the Sun is expected to play a role in determining the
spectral shape of this line, the ``enigma'' posed by those observations has even brought some authors \citep{doubts-1,doubts-2} to questioning the adequacy of the quantum-electrodynamic formalism on which many of our interpretation tools for solar polarimetric observations are based \citep{LL04}.
This impasse convinced us of the need to put this theoretical framework to the test with a specifically designed laboratory experiment.
\begin{figure}
\centering
\includegraphics[width=.49\textwidth]{lab_setup.pdf}
\caption{Top-view diagram of the experimental setup. The four ``legs''
of the experiment are (clockwise from the bottom): input beam, scattered
light analysis, light-level monitor, and calibration. The inset shows the elements of the polarimeter and D$_1$/D$_2$ selector (L\,=\,LCVR, P\,=\,polarizer, Q\,=\,quartz plate, $\lambda/2$\,=\,half-waveplate, B\,=\,9.5\,nm blocker). }
\label{fig:setup}
\end{figure}
\section{Experiment}
\subsection{Experimental Setup}
We built a scattering experiment where a vapor of neutral sodium under controlled conditions of temperature and magnetic field is illuminated by a light beam. The scattered radiation is analyzed polarimetrically, separately for the D$_1$ (3p $^2$P$_{1/2}$ $\rightarrow$ 3s $^2$S$_{1/2}$, 589.6\,nm) and D$_2$ (3p $^2$P$_{3/2}$ $\rightarrow$ 3s $^2$S$_{1/2}$, 589.0\,nm) transitions.
A top-view schematics of the experiment is shown in Figure~\ref{fig:setup}.
This consists of a \ion{Na}{1} vapor cell surrounded by two
air-cooled Helmholtz-coil pairs, and flanked by four ``legs'' with different functions.
Light enters the apparatus from the bottom leg, is focused at the center of the vapor cell, and the light scattered from
the vapor at $90^\circ$ is analyzed in the left leg. The top leg uses a photodiode to monitor
the light level of the source, and the right leg is used to input specific
polarization states for the purpose of polarimetric calibration.
The center of the sodium cell is located at the intersection of the four legs
of the apparatus.
The sodium is evaporated into the cell from a reservoir
which is temperature controlled at a typical value of 205 $^\circ$C. Along with the sodium vapor, the cell also contains $17\,$mmHg of Ar buffer gas.
The two Helmholtz-coil pairs allow the generation of a magnetic field
between 0 and 150\,G with any desired direction in the scattering plane
(plane of Figure~\ref{fig:setup}).
To ensure the condition of \emph{complete frequency redistribution} (CRD; see Modeling section) of the scattered
radiation, we employed a 50\,W halogen bulb with stabilized output,
which provides a largely flat and structureless spectrum over the frequency
range of the D lines.
An input polarization selector, consisting of a linear polarizer mounted
in a precision rotation stage and a fixed $\lambda/4$ retarder, can be
placed in the beam following the light source, allowing an arbitrary
polarization state to be input to the vapor. In this letter, we
only present data and modeling for the case of unpolarized
input.
The analysis leg consists of a Stokes polarimeter, a filter that
selects the D line to be observed, and a photomultiplier tube (PMT) with a gain of approximately $2\times10^6$.
Details of the polarimeter and D$_1$/D$_2$ selector are shown in the inset diagram of Figure~\ref{fig:setup}.
The polarimeter consists of two Nematic Liquid Crystal Variable Retarders
(LCVRs) followed by a linear polarizer. The LCVRs are
oriented with their fast axes at $0^\circ$ and $45^\circ$, with the
linear polarizer also oriented at $0^\circ$.
The orientation of this polarizer sets the reference direction of
positive Stokes $Q$, which is approximately normal to the scattering plane.
This system allows the analysis of the complete polarization
state of the scattered light by measuring its intensity at
selected retardations of the two LCVRs.
The D$_1$/D$_2$ line selector consists of a birefringent crystal between
polarizers \citep{Ma74}, producing a channel spectrum with a free
spectral range equal to twice the separation of the D doublet ($1.195\,$nm).
In order to minimize the shift of the bandpass with inclination angle
through the selector, we have used quartz crystals in a wide-field
configuration \citep{Ly44-1,Ly44-2}.
The channel spectrum is shifted by a third Nematic LCVR with its
fast axis aligned with that of the first crystal, which allows the
electro-optical selection of either of the D lines. For simplicity,
the analyzing polarizer of the polarimeter serves also as the entrance
linear polarizer of the D$_1$/D$_2$ selector.
To limit the number of unwanted orders of the D$_1$/D$_2$ selector we additionally
employ a 9.5\,nm wide interference filter centered
at 590.5\,nm (blocker), and a Schott KG3 filter. To compensate for thermal shifts of the D$_1$/D$_2$
selector, we monitor its temperature and adjust the
LCVR voltage to achieve the proper tuning.
The calibration leg contains a light source and
input polarization selector identical to those in the input-beam leg. For the purpose of polarimetric calibration, light is
input from the calibration leg into the analysis leg in the absence
of sodium vapor (cold cell) and magnetic field. By measuring the output signal for
known input polarization states, we can compute the response matrix
of the polarimeter,
which maps the measured Stokes vectors to the true ones.
\subsection{Measurements}
We measured the scattering polarization of the D lines
in the presence of a magnetic field in the scattering plane, with strength between 0 and 150\,G
in steps of 10\,G, and inclination from the direction of the incident
radiation between $0^\circ$ and $90^\circ$ (respectively, $\bm{B}_1$
and $\bm{B}_2$ in Figure~\ref{fig:setup}) in steps of $30^\circ$. The
calibration data were obtained before and after the scattering measurements.
A measurement of the background signal was taken at the beginning of the experiment with the cold cell and no magnetic field. This background is
a combination of Rayleigh scattering of the incident radiation by the
Ar buffer gas, and parasitic reflections off the cell walls that make it into the analysis leg.
A computer running LabVIEW performs all experiment controls and
data logging functions. The voltage output of the PMT is digitized
with 16-bit precision. Each measurement consists of an average of
$10^4$ samples taken over $250\,$ms followed by a $125\,$ms delay to
allow for LCVR relaxation. A Stokes vector measurement is obtained
by measuring the intensity nominally corresponding to the six
modulated states $I\pm(Q,U,V)$, and making the proper combinations
and polarization cross-talk corrections to obtain $I,Q,U,V$. This is
accomplished by multiplying the measured Stokes vector by the
polarimeter response matrix to obtain the true Stokes vector.
The elements of the resulting Stokes vector have a typical uncertainty of $\sim10^{-3}$.
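The sketch below illustrates this reduction step for a single measurement. The six intensity readings and the response-matrix entries are invented numbers used only to show the arithmetic; the actual response matrix is the one derived from the calibration-leg data.
\begin{verbatim}
import numpy as np

# Six readings nominally corresponding to I+Q, I-Q, I+U, I-U, I+V, I-V
counts = {'+Q': 1.063, '-Q': 0.937, '+U': 1.004, '-U': 0.996,
          '+V': 0.982, '-V': 1.018}

I = np.mean(list(counts.values()))
Q = 0.5 * (counts['+Q'] - counts['-Q'])
U = 0.5 * (counts['+U'] - counts['-U'])
V = 0.5 * (counts['+V'] - counts['-V'])
S_meas = np.array([I, Q, U, V])

# Response matrix (from calibration): maps measured to true Stokes vectors,
# correcting residual cross-talk between the Stokes parameters.
R = np.array([[1.00,  0.00,  0.00,  0.00],
              [0.01,  0.98, -0.03,  0.02],
              [0.00,  0.04,  0.97,  0.01],
              [0.00, -0.02,  0.01,  0.99]])

S_true = R @ S_meas
print("fractional polarization (Q,U,V)/I:", S_true[1:] / S_true[0])
\end{verbatim}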
\section{Modeling}
To model the scattering polarization from the \ion{Na}{1}
vapor we rely on several physical assumptions:
1) The flat spectrum of the light source implies
that radiation scattering can be described as the incoherent
succession of single-photon absorption and re-emission \cite[CRD hypothesis;][]{CRD-1,CRD-2,LL04}.
2) Isotropic elastic collisions with the Ar buffer gas
contribute to the statistical equilibrium of the \ion{Na}{1} atoms,
leading to a partial depolarization of the atomic levels. The
corresponding two depolarizing rates (respectively, for the
orientation and the alignment of the atomic levels) are
free parameters of the model. For simplicity, we adopt the same rates
for the ground and excited states. However, the ensuing
depolarization is nearly total for the ground state because of
its much longer lifetime.
3) In order to fit the data, we found it necessary to add a small collisional de-excitation rate to the statistical equilibrium of the \ion{Na}{1} atoms.
A possible explanation is that the collisions with the Ar buffer gas may not be perfectly elastic. However, the low temperature of the sodium vapor implies that collisional excitation from the ground state is negligible.
Thus the observed line intensity is dominated by the resonance scattering of the incident radiation, without any measurable contribution from Planckian
radiation at the vapor temperature.
Additionally, collisional transfer between the P$_{1/2}$ and P$_{3/2}$ levels can be important, as the energy separation is about $10^3$ times smaller than the excitation potential of the D-doublet.
These transfer collisions predominantly produce a depolarization of the levels, adding to the effect of elastic collisions already considered. Since the relative contribution between transfer and elastic collisions to this depolarization is not constrained by our data, we chose to simply ignore transfer collisions in our model.
4) The gas cell is operated in a regime of approximately unit optical depth.
Hence, differential saturation of the line components
must be taken into account \citep{satur-1,satur-2}. Additionally, polarization effects due to quantum
interference between the fine-structure levels of
the atom cannot in principle be ruled out under our experimental conditions.
All these effects can confidently be modeled assuming that the fraction of the vapor contributing to the scattered radiation has spatially homogeneous thermodynamic and magnetic properties.
The differential saturation of the magnetic components of the lines \citep{satur-1,satur-2} turns out to be essential for the
interpretation of the experimental results. In contrast, for the
particular thermal and magnetic regimes of the experiment, our
modeling shows that quantum interference between the
P$_{1/2}$ and P$_{3/2}$ levels brings a much smaller, yet measurable, correction to the polarization.
\begin{figure*}
\includegraphics[width=1.0\textwidth]{D1D2_figure.pdf}
\caption{Broadband fractional polarization $Q/I$, $U/I$, and $V/I$ (left to right)
of the D$_1$ (top) and D$_2$ (bottom) lines as a function of magnetic field strength,
for various geometries of the applied magnetic field. The measurements are
represented by different symbols (with error bars) and colors, for different values
of $\vartheta_B$.
The continuous curves of matching color represent the model.}
\label{fig:Stokes}
\end{figure*}
5) Magnetically induced dichroism affects the transfer of both the sodium emission and the
background radiation through the optically thick vapor. Hence, the background measurements cannot simply be subtracted from the experimental data, in order
to isolate the contribution of the D lines to the observed polarization.
Instead, we must treat the background radiation as a boundary term in the solution of the radiative transfer equation for the Stokes vector $\bm{S}\equiv(I,Q,U,V)$,
\begin{equation} \label{eq:PRTE}
\frac{\mathrm{d}}{\mathrm{d} s}\,\bm{S} = -\mathbf{K}\,\bm{S}+\bm{\varepsilon}\;,
\end{equation}
where $s$ is the coordinate along the optical path, $\bm{\varepsilon}$ is the
polarized emissivity vector (source term), and $\mathbf{K}$ is the
$4\times 4$ absorption matrix, which also accounts for dichroism and magneto-optical
effects \citep{LL04}.
The experimental data must then be compared with the numerical solution of
eq.~(\ref{eq:PRTE}) in a spatially homogeneous medium \cite[][\S8.3]{LL04}, taking into account the background term, and after convolution with the transmission profile of the D$_1$/D$_2$ selector.
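For a homogeneous slab with constant $\mathbf{K}$ and $\bm{\varepsilon}$, eq.~(\ref{eq:PRTE}) has the formal solution $\bm{S}(s)={\rm e}^{-\mathbf{K}s}\,\bm{S}_{\rm bkg}+\bigl(\mathbf{1}-{\rm e}^{-\mathbf{K}s}\bigr)\mathbf{K}^{-1}\bm{\varepsilon}$, which is what the sketch below evaluates. The matrix and vector entries are placeholders chosen only to make the example run; in the actual fit, $\mathbf{K}$ and $\bm{\varepsilon}$ are computed from the atomic model for each field configuration.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

tau = 1.3                                    # optical depth of the slab (s = 1)
K = tau * np.array([[1.00,  0.05,  0.00,  0.01],   # absorption matrix with
                    [0.05,  1.00,  0.02,  0.00],   # dichroism and
                    [0.00, -0.02,  1.00,  0.03],   # magneto-optical terms
                    [0.01,  0.00, -0.03,  1.00]])
eps = np.array([1.0, 0.02, 0.005, 0.001])          # polarized emissivity
S_bkg = 0.2 * np.array([1.0, 0.064, 0.004, -0.018])  # background boundary term

E = expm(-K)                                 # exp(-K s) with s = 1
S_out = E @ S_bkg + (np.eye(4) - E) @ np.linalg.solve(K, eps)
print("emergent (Q,U,V)/I:", S_out[1:] / S_out[0])
\end{verbatim}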
The above hypotheses suggest that we adopt the formalism of the \emph{multi-term} atom with HFS \citep{CM05-1,CM05-2} to model the scattering polarization by the magnetized sodium vapor, as this takes into account the effects of quantum interference between the P$_{1/2}$ and P$_{3/2}$ levels.
However, it is instructive to look at the algebraic formulation of the \emph{multi-level} atom with HFS given by \cite{LL04}, because it allows us to better grasp how the free parameters of the model and the magnetic field affect the scattering polarization in each of the two D lines.
The broadband polarized emissivity due to radiation scattering in a two-level atom
$(J_\ell,J_u)$ with HFS is \cite[cf.][\S10.22]{LL04}
\begin{equation} \label{eq:emiss}
\bar\varepsilon_i(\bm{\Omega})=k_{\rm L}^{\textrm{\tiny A}}
\oint\frac{\textrm{d}\bm{\Omega}'}{4\pi}
\sum_{j=0}^3 P_{ij}(\bm{\Omega},\bm{\Omega}';\bm{B})_{\rm hfs}\,
S_j(\bm{\Omega}')\;,
\end{equation}
where $P_{ij}(\bm{\Omega},\bm{\Omega}';\bm{B})_{\rm hfs}$ is the
\emph{Hanle phase matrix}, and $i,j=0,1,2,3$ enumerate the four
Stokes parameters $I,Q,U,V$. The interpretation of eq.~(\ref{eq:emiss}) is
straightforward: the Stokes parameter $S_j(\bm{\Omega}')$
of the incident radiation along the direction $\bm{\Omega}'$
is scattered into the direction $\bm{\Omega}$ and
polarization state $i$, with a frequency integrated cross-section given
by the line absorption coefficient, $k_{\rm L}^{\textrm{\tiny A}}$.
We recall that the incident radiation in our experiment has no spectral
structure around the transition frequency of the line.
Evidently, the assumption of a spectrally flat incident radiation is necessary in order to write
eq.~(\ref{eq:emiss}).
The Hanle phase matrix is given by
\begin{eqnarray} \label{eq:redistr}
P_{ij}(\bm{\Omega},\bm{\Omega}';\bm{B})_{\rm hfs}
&=&\sum_{KK'Q}
(-1)^Q\,{\cal T}^K_Q(i,\bm{\Omega})\,
{\cal T}^{K'}_{-Q}(j,\bm{\Omega}') \nonumber \\
&&\mathop{\times} W_{KK'Q}(J_\ell,J_u;\bm{B})_{\rm hfs}\;,
\end{eqnarray}
where the polarization tensors ${\cal T}^K_Q(i,\bm{\Omega})$, with $K=0,1,2$ and
$Q=-K,\ldots,K$, characterize the scattering geometry, and are tabulated in \cite{LL04}. The \emph{line polarizability factor}
$W_{KK'Q}(J_\ell,J_u;\bm{B})_{\rm hfs}$ describes the magnetic dependence of the Hanle phase matrix. When stimulated emission and the polarization of the $J_\ell$ level can both be neglected, like in the case of our experiment, the polarizability factor can be expressed in algebraic form \cite[cf.][ eq.~(10.167)]{LL04}:
\begin{widetext}
\begin{eqnarray} \label{eq:polar}
W_{KK'Q}(J_\ell,J_u; \bm{B})_{\rm hfs}
&=& \frac{3(2J_u+1)}{2I+1}
\sixj{1}{1}{K}{J_u}{J_u}{J_\ell}
\sixj{1}{1}{K'}{J_u}{J_u}{J_\ell} \\
&&\kern -1.6in
\times
\sum_{F_u F_u' F_u'' F_u'''}
\sqrt{(2K+1)(2K'+1)(2F_u+1)(2F_u'+1)(2F_u''+1)(2F_u'''+1)}\,
\sixj{J_u}{J_u}{K}{F_u}{F_u'}{I}
\sixj{J_u}{J_u}{K'}{F_u''}{F_u'''}{I}
\nonumber \\
\noalign{\vskip -6pt}
&&\kern -1.6in
\times\sum_{f_u f_u'} \sum_{i j}
C_{F_u}^i(J_u f_u)\,
C_{F_u''}^i(J_u f_u)\,
C_{F_u'}^j(J_u f_u')\,
C_{F_u'''}^j(J_u f_u')
\threej{F_u}{F_u'}{K}{-f_u}{f_u'}{-Q}
\threej{F_u''}{F_u'''}{K'}{-f_u}{f_u'}{-Q}
\nonumber \\
\noalign{\vskip -6pt}
&&\kern -1in
\times \left\{ 1+\delta\apx{$K$}_{J_u}+\epsilon_{J_uJ_\ell}
+\mathrm{i}[\omega_j(J_uf_u')-\omega_i(J_u f_u)]/
A_{J_uJ_\ell}\right\}^{-1}\;. \nonumber
\end{eqnarray}
\end{widetext}
The coefficients $C_{F_u}^i(J_u f_u)$, with $F_u=|J_u-I|,\ldots,J_u+I$, are the components of the $i^\mathrm{th}$ eigenvector of the HFS subspace of $J_u$ with magnetic quantum number $f_u$, and $\omega_{i}(J_u f_u)$ is the corresponding eigenvalue. They are determined
via diagonalization of the magnetic Hamiltonian, assuming the direction of $\bm{B}$ as the quantization axis.
In the denominator of eq.~(\ref{eq:polar}), the imaginary term accounts for polarization effects associated with the energy differences
between the atomic eigenstates (Hanle effect, HFS depolarization, level-crossing interference).
$\delta\apx{$K$}_{J_u}$ and $\epsilon_{J_uJ_\ell}$ are, respectively, the
depolarizing and inelastic collision rates, expressed in units of the
Einstein coefficient $A_{J_uJ_\ell} \approx 6.2{\times}10^7\,\rm s^{-1}$ \cite[cf.][eq.~(10.54)]{LL04}.
For the contribution of the level population to the emissivity ($K=0$), $\delta\apx{0}_{J_u}=0$, thus
the polarizability factor only contains the free parameters
$\delta\apx{1}_{J_u}$ (orientation relaxation), $\delta\apx{2}_{J_u}$
(alignment relaxation), and $\epsilon_{J_uJ_\ell}$ (collisional de-excitation).
For the multi-level atom, a distinct set of these three free parameters must
be specified for each of the two D lines.
In the multi-term formalism,
instead, we only need the three parameters $\delta\apx{1,2}\equiv\delta\apx{1,2}_{L_u}$
and $\epsilon\equiv\epsilon_{L_u L_\ell}$, expressed in units of the D-doublet spontaneous rate $A_{L_u L_\ell}\approx 6.2{\times}10^7\,\rm s^{-1}$, where $L_u=1$ and $L_\ell=0$.
On the other hand,
for the multi-term atom,
an algebraic expression of the broadband emissivity
analogous to eqs.~(\ref{eq:emiss})--(\ref{eq:polar}) cannot be attained \emph{separately} for
each line of the doublet.
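As a small numerical illustration of the angular factors entering eq.~(\ref{eq:polar}), the sketch below evaluates the fine-structure prefactor $3(2J_u+1)$ times the squared 6-$j$ symbol $\{1\,1\,K;J_u\,J_u\,J_\ell\}$ for the alignment ($K=2$) contribution of the two D lines, using the exact Wigner routines of the {\tt sympy} Python library. It is not a computation of the full polarizability factor, which also requires the HFS sums and magnetic eigenvectors of eq.~(\ref{eq:polar}); it merely recalls why, without HFS, D$_1$ carries no alignment signal while D$_2$ does.
\begin{verbatim}
from sympy import Rational
from sympy.physics.wigner import wigner_6j

def W_K(J_l, J_u, K):
    """3 (2 J_u + 1) {1 1 K; J_u J_u J_l}^2  (fine structure only, B = 0)."""
    if K > 2 * J_u:              # triangle rule: the 6-j symbol vanishes
        return 0
    sixj = wigner_6j(1, 1, K, J_u, J_u, J_l)
    return 3 * (2 * J_u + 1) * sixj**2

half, three_half = Rational(1, 2), Rational(3, 2)
print("D1 (J_u = 1/2): W_2 =", W_K(half, half, 2))        # 0
print("D2 (J_u = 3/2): W_2 =", W_K(half, three_half, 2))  # 1/2
\end{verbatim}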
\section{Results}
Figure~\ref{fig:Stokes} reports one set of measurements (resulting from the average of 12 different realizations of the experiment)
of the broadband fractional polarization of the two D lines (symbols with error bars). In the same figure, the continuous
curves represent the fit of the experimental data provided by the model described in the previous
section. It is important to remark that the zero-field values in all plots, except for the $Q/I$ polarization of D$_2$, are dominated by the transfer of the background polarization through the optically thick vapor. In the absence of background radiation, those values would be zero (within the polarimetric accuracy of the experiment). The intensity and polarization of the background are measured at the beginning of the experiment (cold cell). The ratio $I_\mathrm{bkg}/(I_\mathrm{line}+I_\mathrm{bkg})$ turns out to be about 17\% for D1 and
12\% for D2, while the polarization of the background is very consistent between the two spectral ranges, with
$(Q_\textrm{bkg},U_\textrm{bkg},V_\textrm{bkg})/I_\mathrm{bkg}\simeq(0.064,\,0.004,\,-0.018)$.
Numerical modeling based on eqs.~(\ref{eq:emiss})--(\ref{eq:polar}) predicts that all states of polarization of D$_1$, as well as the $V/I$ polarization of D$_2$, should remain largely insensitive to the magnetic field in an optically
thin vapor, well below the $10^{-3}$ sensitivity level of our experiment.
The large departures from
such ideal behavior observed in the experimental data, especially for the $V/I$ polarization, are mainly
due to the differential saturation of the magnetic components of the lines as they are transferred through
the optically thick vapor \citep{satur-1,satur-2}. In order to fit the measurements, we determined an optical depth
$\tau_\mathrm{D_2}\approx 1.3$. The non-flat behavior of the $U/I$ polarization of D$_2$
for $\vartheta_B=0^\circ$ is explained by a small error of the apparatus in setting the desired magnetic field
inclination, which we modeled with a $-2^\circ$ offset from the nominal values of $\vartheta_B$.
The remaining free parameters of the model are the depolarizing collision rates
$\delta\apx{1,2}$ and the de-excitation collision rate $\epsilon$. The value of $\delta\apx{2}$
strongly affects the linear polarization amplitudes of D$_2$, and characteristically the location
of the two crossing points among the $U/I$ polarization curves for
$\vartheta_B\ne 0$. We used these constraints to determine a value
$\delta\apx{2}\approx 19$.
The $\delta\apx{1}$ rate affects instead the
$V/I$ polarization caused by the presence of atomic orientation. In the case of unpolarized input, this contribution is rapidly suppressed by depolarizing
collisions. Thus, the value of $\delta\apx{1}$ is only weakly constrained by the data shown in Figure~\ref{fig:Stokes}. However, when the incident light is circularly polarized, the observed $V/I$ signals are much larger (by a factor ${\sim} 10$, in the case of D$_2$) than those shown in Figure~\ref{fig:Stokes}. Using such measurements (not reported here), we could determine $\delta\apx{1}\approx 13$.
Finally, by matching the zero-field value of the $Q/I$ polarization of D$_2$, after taking into account the
depolarization produced by $\delta\apx{2}$, we estimated $\epsilon\approx 0.44$.
\section{Conclusions}
The agreement between theory and experiment shown in Figure~\ref{fig:Stokes} is remarkable, considering that the fitting of the reported data (384 independent polarization measurements) practically relies on only three model parameters,
$\tau$, $\delta\apx{2}$, and $\epsilon$. This demonstrates that the quantum-electrodynamic formalism on which our model of scattering polarization in the CRD limit is based \citep{LL04} is completely adequate when the incident radiation is spectrally flat over the wavelength range of the atomic transition.
The \ion{Na}{1} D lines, however, are among the strongest absorption features of the solar spectrum, and the flat-spectrum approximation breaks down in the solar atmosphere, especially with regard to the treatment of the quantum interference between the P$_{1/2}$ and P$_{3/2}$ levels. Therefore, new polarization effects due to the partial redistribution of the radiation frequency (PRD) can be expected for these lines \citep{modeling-1,modeling-2}.
Recent work \citep{PRD-5,PRD-1,PRD-2,PRD-4,PRD-3} has formally extended the theory of \cite{LL04} beyond the CRD limit, in order to model PRD effects in radiation scattering. Indeed, when these effects are taken into account in the modeling of the polarized D$_1$ line \citep{modeling-1,modeling-2}, even its finer spectral details as observed on the Sun \citep{D1obs1-1,D1obs1-2} can be reproduced.
The successful interpretation of our experiment provides compelling evidence of the fundamental validity of the quantum-electrodynamic formalism used to interpret the many polarization phenomena routinely observed on the Sun and in other astrophysical objects. At the same time, together with the recent modeling by \cite{modeling-1} and \cite{modeling-2}, our results strongly support the conclusion that the peculiarities of the observed polarization of the D$_1$ line \citep{D1obs0-1,D1obs0-2,D1obs1-1,D1obs1-2} must be traced back to the complexity of the line formation problem in realistic atmospheric scenarios, or in extreme cases to possible instrumental effects that must be identified and corrected for.
\section{Acknowledgments}
Financial support for this experiment was provided by the
National Center for Atmospheric Research through the Director's
Opportunity Funds. We thank G. Card for his contribution to the design and construction of the experiment. The authors have benefited from many discussions with
several colleagues, who at times have also assisted in various aspects of the experiment.
In particular, we thank A.~de Wijn, R.~Manso Sainz, A.~L\'opez Ariste, and J.~O.~Stenflo. We thank J.~Trujillo Bueno for helpful comments and suggestions on the final version of the manuscript.
\section{Introduction}
Fractal structures have been observed in a large variety of experimental
systems in physics, chemistry and
biology.\cite{Mandelbrot,Feder,Avnir:book,Bunde,Stanley,Schroeder} Unlike
exact (mathematical) fractals which are constructed to maintain scale
invariance over many orders of magnitude, and most existing physical models
displaying fractal behavior,\cite{many-decades} for {\em empirical} fractals
the range over which they obey a scaling law is necessarily restricted by
upper and lower cutoffs. In most experimental situations this range may be
quite small, namely not more than one or two orders of magnitude
(Fig.\ref{fig:fewdecades}). Nevertheless, even in these cases the fractal
analysis condenses data into useful relations between different quantities and
often provides valuable insight.
\begin{figure}
\psfig{figure=fewdecades.ps,width=12cm,angle=270}
\vskip -1.8cm
\caption{
To obtain a general idea about the experimental status of fractal dimension
measurements we collected all such measurements presented in Ref.[2] and
measured the width, in decades, of the linear range in the log-log plots over
which the FD was determined. This histogram shows the number of
plots as a function of the number of decades of the linear range. One can see
that most experimental measurements of fractal dimensions are based on data
that extends between one and two decades. Note further that all the data with
three and four decades, which come from a single paper, are determinations
of the Hurst exponent for temporal rather than structural data.\hfill
\hspace{1.0cm}}
\label{fig:fewdecades}
\end{figure}
Motivated by the yet inexplicable abundance of reported fractals, we consider
here the apparent fractal properties of systems which are governed by
uniformly random distributions. The reasons for this choice are several.
First, randomness is abundant in nature. Second, although a uniformly random
system cannot be fully scale invariant, it may, as we show below, display
apparent fractality over a limited range, perhaps in better agreement with the
actual ranges observed than a model which is inherently scale free. Third, a
model of uniform randomness is a convenient limit, on top of which
correlations can be introduced as perturbations.
\section{The Basic Model}
\label{model}
To illustrate our ideas we use a model that consists of a random distribution
of spheres of diameter $d$, in the limit of low volume fraction occupied by
the spheres. The positions of the centers of these spheres are determined by
a uniform random distribution and the spheres are allowed to overlap. This
model may approximately describe the spatial distribution of objects such as
pores in porous media, craters on the moon, droplets in a cloud and adsorbates
on a substrate as well as some energy spectra and random temporal
signals.
\begin{figure}
\psfig{figure=Nresults.ps,width=12cm,angle=270}
\vskip -1.8cm
\caption{
Comparison of simulation results (circles) to the theoretical prediction of
Eq.(\protect\ref{eq:<N>}) (solid line) for the number of intersected boxes as
a function of their size, for one dimensional penetrable rods. The coverage is
$\eta=0.01$ and the rod length is $d/L=10^\protect{-6\protect}$. The cutoffs
are manifested as the two knees in the graph. The lower bound $r_0$ is seen to
be located at $r=d/L$. The upper bound $r_1$ is at $r=(1/\eta-1)d/L$, also
conforming with the prediction in the text. Also indicated is the estimated
middle point $r_e$.\hfill \hspace{1.0cm}}
\label{fig:Nresults}
\end{figure}
To simplify the analysis we consider here (without loss of generality) the one
dimensional case, where the spheres are $M$ rods of length $d$ which are
placed on a line of length $L \gg d$. The positions of the rod centers are
determined by a uniform random distribution. The rods are allowed to overlap
and are positioned with no correlations. An information-theory argument can be
used to show that this distribution is generic, or ``minimal'', in the sense
that it is characteristic of physical processes in which only the first moment
(such as the density) is determined from
outside.\cite{me:random-model,D-comment5} Below we calculate the fractal
dimension (FD) of the resulting set using the {\em box-counting} (BC)
procedure, which is a common technique for the determination of FD in empirical
data.\cite{higherorder} In the BC technique one divides the embedding space
into boxes of linear size $l$. It is convenient to work with the dimensionless
quantity $r \equiv l/L$ for the box size. The number of boxes that intersect
the measured object, $N(r)$, is then plotted vs. $r$ on a
log-log scale. The range of $r$ is limited from below by the finest feature in
the object and from above by the entire object size. Apparent fractal behavior
is commonly declared in a range bound between physical cutoffs if the log-log
plot of $N(r)$ vs. $r$ is linear over one or more
decades\cite{Pfeifer-in-Avnir:book} in that range. The dimension is given by:
\begin{equation}
D = - {\rm slope}\:\{\log(r), \log[N(r)] \}.
\label{eq:D}
\end{equation}
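A minimal simulation of this procedure for the one-dimensional rod model defined above (with $L=1$) might look as follows; the parameters correspond to the coverage $\eta=0.1$ of Fig.~\ref{fig:Nlinear}, and the snippet is only a sketch of the analysis, not the code actually used.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, d = 10_000, 1e-5                  # coverage eta = M d / L = 0.1  (L = 1)
centers = rng.random(M)              # uniformly random rod centers

def boxes_hit(r):
    """Number of boxes of size r intersected by at least one rod."""
    n_boxes = int(np.ceil(1.0 / r))
    occupied = np.zeros(n_boxes, dtype=bool)
    lo = np.clip(centers - d / 2, 0.0, 1.0 - 1e-12)
    hi = np.clip(centers + d / 2, 0.0, 1.0 - 1e-12)
    for a, b in zip((lo / r).astype(int), (hi / r).astype(int)):
        occupied[a:b + 1] = True
    return occupied.sum()

r_values = np.logspace(-6, -1, 26)
N_values = np.array([boxes_hit(r) for r in r_values])
# the pairs (r, N) plotted on a log-log scale give the box-counting curve
\end{verbatim}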
\noindent We will now show that our model generates approximate linearity over
a range which would conventionally be accepted to indicate fractality. The
lower cutoff is given by the rod length,
\begin{equation}
r_0 = d/L ,
\label{eq:r0}
\end{equation}
\noindent since below this scale no new information is obtained by decreasing
the box size. The upper cutoff is determined by the average distance
between adjacent rod edges,
\begin{equation}
r_1 = 1/M-d/L ,
\label{eq:r1}
\end{equation}
\noindent because above this scale (on average) all boxes are occupied. This
allows us to define an {\em estimated} scaling range as:\cite{D-comment3}
\begin{equation}
\Delta_e = \log_{10}(r_1)-\log_{10}(r_0).
\label{eq:Delta}
\end{equation}
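\noindent For instance, with $d/L=10^{-6}$ and $M=10^4$ (the parameters of
Fig.\ref{fig:Nresults}), one obtains $r_0=10^{-6}$ and $r_1\approx 10^{-4}$, so that
$\Delta_e\approx 2$ decades.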
\noindent Since the actual value of $N(r)$ depends on the particular set of
random numbers drawn, one can only obtain the expectation value $\langle
N(r)\rangle$. However, the law of large numbers ensures that for a large
enough number of rods, the deviations from this value will be insignificant.
Following probabilistic arguments of the type used by
Weissberg\cite{Weissberg} and Torquato and Stell,\cite{Torquato:3} one obtains
that out of the total of $1/r$ boxes the number of boxes that intersect the set
is:\cite{sketch}
\begin{equation}
\langle N(r)\rangle = {1\over r} \left\{ 1-[1-(r+d/L)]^{M} \right\}.
\label{eq:<N>}
\end{equation}
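\noindent The reasoning behind Eq.(\ref{eq:<N>}) can be sketched as follows (edge effects
are neglected). A given box of dimensionless size $r$ is intersected by a single rod
exactly when the rod center falls inside a window of length $rL+d$ around the box, so a
single rod misses the box with probability $1-(r+d/L)$. Since the $M$ rod positions are
independent, the box remains empty with probability $[1-(r+d/L)]^{M}$, and summing the
complementary probability over the $1/r$ boxes gives Eq.(\ref{eq:<N>}).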
\begin{figure}
\psfig{figure=Nlinear.ps,width=12cm,angle=270}
\vskip -1.6cm
\caption{
Simulation results (circles) for the number of intersected boxes $N(r)$
vs. $r$ in the experimentally relevant range (between cutoffs), along with a
linear regression fit for coverage $\eta=0.1$ ($d/L=10^{-5},\,
M=10^4$).\hfill \hspace{1.0cm}}
\label{fig:Nlinear}
\end{figure}
\noindent Simulation results in terms of the {\em coverage} $\eta \equiv M d/L$
are shown in Fig.\ref{fig:Nresults}, along with the theoretical prediction of
Eq.(\ref{eq:<N>}). An excellent agreement is evident.\cite{goodfit}
Next, we examine the apparent FD [Eq.(\ref{eq:D})], {\it by mimicking the
standard experimental procedure} of using linear regression analysis between
the cutoffs. The simulation results and the linear fit for $\eta = 0.1$ are
shown in Fig.\ref{fig:Nlinear} for the range which is used to determine
empirical FDs. More than a decade of linearity is observed for this high
coverage. The slight inflexion of the simulation results may be smeared out by
noise in a real experiment. We next evaluate the slopes and {\em actual}
ranges of linearity $\Delta$ (generally $\neq \Delta_e$), under varying
degrees of strictness of linearity, as measured by the coefficient of
determination $R^2$. Typical results are shown in Fig.\ref{fig:range}, where,
e.g. for $\eta=0.01$, more than two decades of linear behavior are exhibited
for a required value of $R^2$ below 0.975. This is well within the
experimental norm as most experimental measurements of fractal objects do not
extend for more than two orders of magnitude
(Fig.\ref{fig:fewdecades}). Moreover, this agreement with experimental data is
in contrast to that of most other physical models of fractality, which predict
much larger ranges.\cite{many-decades} Increasing $\eta$ beyond 0.1 results in
a decline of both $\Delta$ and $\Delta_e$ to below one decade and hence the
apparent fractality is {\em restricted to $\eta\leq 0.1$}.
The results of the regression analysis for the apparent FD as a function of
$\eta$ are shown in Fig.\ref{fig:D} and are further compared to an analytical
expression, obtained by calculating the logarithmic derivative of $N(r)$ at
the {\em estimated} middle point $r_e = \sqrt{r_0 r_1}$, in the $M
\rightarrow \infty$, constant coverage limit:\cite{me:random-model}
\begin{equation}
D = 1-{\sqrt{\eta(1-\eta)} \over {\exp \left( \eta+\sqrt{\eta(1-\eta)}
\right)-1}}.
\label{eq:Dresult}
\end{equation}
\noindent As seen in Fig.\ref{fig:D}, the FD predicted by Eq.(\ref{eq:Dresult})
is somewhat lower than the regression result and can serve as a lower bound.
In the limit of small $\eta$, one can further simplify Eq.(\ref{eq:Dresult})
and obtain
\begin{equation}
D \approx \left(\eta \over {1-\eta}\right)^{1/2}, \ \ \ \ \ \ \eta \ll 1.
\end{equation}
\begin{figure}
\psfig{figure=range.ps,width=12cm,angle=270}
\vskip -1.8cm
\caption{
The range of linearity, $\Delta$, as a function of imposed coefficient of
determination, $R^2$, in a linear regression analysis.\hfill \hspace{1.0cm}}
\label{fig:range}
\end{figure}
\section{Impenetrable Rods}
\begin{figure}
\psfig{figure=D.ps,width=12cm,angle=270}
\vskip -1.8cm
\caption{
Apparent fractality (FD) as computed by linear regression with $R^2 = 0.995$
(upper curve). The predictions of the analytical equations,
Eqs.(\protect\ref{eq:Dresult}) and (\protect\ref{eq:Dhard}) (for the
penetrable and impenetrable rods) are accurate lower bounds and differ only
marginally (two overlapping lower curves). This indicates the dominance of
randomness over correlations.\hfill \hspace{1.0cm}}
\label{fig:D}
\end{figure}
To examine the effect of correlations on the apparent FD we next consider a
model in which rods are randomly located as before but with the restriction
that the rods cannot overlap. The system is assumed to be at equilibrium. The
excluded volume effect clearly creates correlations in the positions of the
rods. This example is also fully solvable\cite{me:random-model} and represents
an important class of systems with correlations such as models of hard-sphere
liquids and energy spectra with level repulsion. We will now show that the
correlation introduced by the non-overlap restriction merely {\em modifies}
the apparent fractal character of the system. For this case, the expected
number of intersected boxes is:\cite{me:random-model}
\begin{equation}
\langle N(r)\rangle = {1 \over r} \left(1-(1-\eta) \left( 1- {r \over
{1-\eta}} \right)^{M} \right).
\label{eq:Nhard}
\end{equation}
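\noindent The two predictions are easily compared numerically; a minimal Python sketch
(with illustrative parameter values and $L$ set to $1$) is:
\begin{verbatim}
import numpy as np

d, eta = 1e-6, 0.01                    # illustrative values (L = 1)
M = int(round(eta/d))                  # number of rods
r = np.logspace(-6, -3, 200)

N_pen  = (1.0 - (1.0 - (r + d))**M) / r                      # penetrable rods
N_hard = (1.0 - (1.0 - eta)*(1.0 - r/(1.0 - eta))**M) / r    # impenetrable rods

print(np.max(np.abs(N_pen - N_hard)/N_pen))   # stays below about 1% at this coverage
\end{verbatim}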
\noindent Fig.\ref{fig:impenetrable} shows the number of intersected boxes
$\langle N(r)\rangle$ vs. $r$ both with [Eq.(\ref{eq:<N>})] and without
[Eq.(\ref{eq:Nhard})] overlap. The behavior in the two cases is qualitatively
similar and virtually indistinguishable for low
coverages. Fig.\ref{fig:impenetrable} thus demonstrates that the apparent
fractal behavior due to randomness is only slightly modified by moderate
correlations. As in the overlapping rods case, we can now use Eq.(\ref{eq:D})
(with the slope calculated at $r=r_e$) to calculate a lower bound for the
apparent FD. The result (for large $M$),
\begin{figure}
\psfig{figure=impenetrable.ps,width=12cm,angle=270}
\vskip -1.6cm
\caption{
Comparison of box-counting predictions in penetrable and impenetrable rods
cases. The results for penetrable [Eq.(\protect\ref{eq:<N>})] and impenetrable
rods [Eq.(\protect\ref{eq:Nhard})] virtually coincide for $\eta \leq
10^{-2}$ (lower two curves). For $\eta=0.1$ a barely noticeable difference develops (upper two curves). In both
cases $d/L=10^{-6}$.\hfill \hspace{1.0cm}}
\label{fig:impenetrable}
\end{figure}
\begin{equation}
D = 1- { {\eta \sqrt{{1/\eta} -1}}
\over {\exp \left(\sqrt{\eta/(1-\eta)} \right) - (1-\eta) } }
\label{eq:Dhard}
\end{equation}
\noindent is shown in Fig.\ref{fig:D}. The important observation is that for a
broad range of low coverages the apparent FD's of penetrable and impenetrable
rods nearly overlap. This is the relevant range for fractal measurements and
therefore we find that correlations of the type considered here have little
effect on the apparent fractal nature of the system.
\section{Fat-Fractal Analysis}
\label{fat-fractal}
In this section we treat the penetrable spheres model for the case of
two-dimensional (2D) disks, from the point of view of fat-fractal analysis. A
fat fractal is defined as ``A set with a fractal boundary and finite Lebesgue
measure''.\cite{Umberger:2} The fat-fractal approach is natural for our model,
since the set of disks clearly has non-zero measure. Fat-fractal analysis can
be performed on experimental data (but rarely is) in those cases where the
resolution of the measurement device is finer than the lower cut-off, which is
required for a knowledge of the measure of the studied set. An example is
helium scattering.\cite{me:fractals} In the present case we show that the
measure of the set of disks can be found analytically. In order to measure
the fat-fractal scaling exponent $\gamma$, one performs, as in the standard
fractal analysis, a box-counting procedure:
\begin{equation}
\gamma = \lim_{r \rightarrow r^*} {{\log[A(r)]} \over \log(r)}\:;\:\:\:\: A(r)
\equiv r^2 N(r) - \mu_0,
\label{eq:gamma}
\end{equation}
\noindent where $\mu_0$ is the normalized Lebesgue measure of the set. The
fractal dimension itself is given by
\begin{equation}
D_{ff} = 2-\gamma .
\label{eq:Dff}
\end{equation}
\noindent In the nonlinear dynamical systems literature, where fat fractals
were first introduced,\cite{Umberger} $r^*=0$. In the context of real-space
sets, there exists a lower cutoff $r_0 > 0$, and hence also $r^* > 0$. One
should bear this in mind whenever fractal theory is applied to real-space
systems with an inherent non-vanishing smallest scale.
Consider then again a system of $M$ uniformly randomly positioned disks of
equal radius $R=d/2$, located at low 2D coverage $\eta_2$ given by:
\begin{figure}
\psfig{figure=fat-fractal.ps,width=12cm,angle=270}
\vskip -1.6cm
\caption{
Fat fractal analysis of random disks. Analytical results of
Eq.(\protect\ref{eq:gamma}) for $A(r)$ are shown at three combinations of
coverages $\eta_1$ and disk numbers $M$. Circles indicate the positions of the
cutoffs according to Eqs.(\protect\ref{eq:r0}),(\protect\ref{eq:r1}). Inset:
Linear regression coefficient ${\cal R}^2$ for regression in-between the
cutoffs.\hfill \hspace{1.0cm}}
\label{fig:fat-fractal}
\end{figure}
\begin{equation}
\eta_2 = M \pi R^2/L^2 = (\pi/4)\eta_1^2 ,
\label{eq:eta}
\end{equation}
\noindent on a surface of area $L^2$. The effective ``1D coverage''
\begin{equation}
\eta_1=\sqrt{M}2R/L,
\label{eq:eta1}
\end{equation}
\noindent is defined for convenience of comparison with results in
1D and 3D. In order to find $\mu_0$, imagine that the surface is initially
empty, and randomly choose a point on it. Next locate a disk of radius $R$ at
a random position on the surface. The probability that it does not include the
chosen point is given by the free area fraction, namely $q_1 = (L^2-\pi
R^2)/L^2$. The next disk is also positioned completely randomly, so that the
probability for the point to be outside of both disks is just
$q_1^2$. Clearly, after random placement of $M$ disks, the point will lie in
the uncovered region with probability $q_1^M$, and therefore will be in the
disk-covered region with probability $p_M = 1-(1-\pi R^2/L^2)^M$. On the other
hand, this probability is just the expectation value of the normalized disk
union area, $\mu_0/L^2$. Thus for large enough $M$:\cite{Weissberg,Torquato:3}
\begin{equation}
\mu_0 = \left[ 1-(1-\pi R^2/L^2)^M \right] L^2 .
\label{eq:mu0}
\end{equation}
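\noindent In the $M\rightarrow\infty$, constant-coverage limit this becomes
$\mu_0/L^2 \rightarrow 1-e^{-\eta_2}$, which at low coverage reduces to
$\mu_0 \approx M\pi R^2$, as expected when overlaps are negligible.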
\noindent A modified argument can be used to evaluate the BC function for our
basic model.\cite{me:random-model} The result for the expected number of
occupied boxes is:
\begin{equation}
N(r) = {L^2 \over r^2} \left[ 1-\left(1 - r^2 - 4r\,R/L - \pi (R/L)^2
\right)^M \right] .
\label{eq:N}
\end{equation}
\noindent Simulations\cite{me:random-model} (not shown here) confirm the 1D
version of this result to excellent accuracy for $M$ as small as 100. Taken
together, Eqs.(\ref{eq:mu0}),(\ref{eq:N}) determine the fat-fractal exponent
$\gamma$, using Eq.(\ref{eq:gamma}). Analytical results are shown in
Fig.\ref{fig:fat-fractal} for three $\eta_1 / M$ pairs. The effect of changing
$M$ at constant coverage (solid and short-dashed lines) is a rigid translation
of the curve in the plane. This implies that the coverage is the important
parameter in determining the slope i.e., the FD. Circles indicate the
positions of the cutoffs according to
Eqs.(\protect\ref{eq:r0}),(\protect\ref{eq:r1}). Beyond the lower cutoff the
slope tends to 1, beyond the upper cutoff -- to 0. In-between the cutoffs, a
nearly straight line is observed, in agreement with apparent fractal
behavior. In order to find $\gamma$ it remains to determine the point
$r^*$. For disks, in analogy to the discussion for rods, the cutoffs are given
by:
\begin{figure}
\psfig{figure=fat-fractal-dims.ps,width=12cm,angle=270}
\vskip -1.8cm
\caption{
Analytical and regression slope between the cutoffs (dashed and long dashed)
of fat fractal analysis, and $2\!-\!D$ of ``thin'' fractal
analysis.\hfill \hspace{1.0cm}}
\label{fig:fat-fractal-dims}
\end{figure}
\begin{eqnarray}
r_0 &=& 2R/L \nonumber \\
r_1 &=& 1/\sqrt{M}-2R/L .
\label{eq:r0+r1}
\end{eqnarray}
\noindent As in Sec.\ref{model} we choose $r^*$ as the estimated middle point
of the scaling range,
\begin{equation}
r^* = r_e = \sqrt{r_0\,r_1} ,
\label{eq:rm}
\end{equation}
\noindent and find $\gamma$ by evaluating the logarithmic derivative of
$A(r)$ there. The result is:
\begin{equation}
\gamma = \left. {{d \log[A(r)]} \over {d \log(r)}} \right|_{r_e} =
{{2\eta_1(1-\eta_1+\sqrt{\eta_1-\eta_1^2})} \over
{\exp[\eta_1(1-\eta_1+2\sqrt{\eta_1-\eta_1^2})]-1}} .
\label{eq:gamma-analytical}
\end{equation}
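\noindent As a numerical sanity check, the logarithmic derivative of $A(r)$ at $r_e$ can
be evaluated directly from Eqs.(\ref{eq:mu0}),(\ref{eq:N}) and compared with
Eq.(\ref{eq:gamma-analytical}); a minimal Python sketch (illustrative parameters, $L$ set
to $1$) is:
\begin{verbatim}
import numpy as np

M, R = 10**4, 0.5e-4                     # illustrative values; L = 1
eta1 = 2.0*R*np.sqrt(M)                  # effective 1D coverage

def A(r):
    # A(r) = r^2 N(r) - mu_0, with L = 1
    r2N = 1.0 - (1.0 - r**2 - 4.0*r*R - np.pi*R**2)**M
    mu0 = 1.0 - (1.0 - np.pi*R**2)**M
    return r2N - mu0

r0, r1 = 2.0*R, 1.0/np.sqrt(M) - 2.0*R   # cutoffs for disks
re = np.sqrt(r0*r1)

h = 1e-3                                 # centered difference in log(r)
gamma_num = (np.log(A(re*np.exp(h))) - np.log(A(re*np.exp(-h)))) / (2.0*h)

gamma_th = (2.0*eta1*(1.0 - eta1 + np.sqrt(eta1 - eta1**2))
            / (np.exp(eta1*(1.0 - eta1 + 2.0*np.sqrt(eta1 - eta1**2))) - 1.0))
print(gamma_num, gamma_th)               # the two values agree closely
\end{verbatim}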
\begin{figure}
\psfig{figure=artifact.ps,width=12cm,angle=270}
\vskip -1.8cm
\caption{
Log-log plot of $A(r) = r^n N(r) - \mu_0$ (solid line)
and just $r^n N(r)$ (dashed line). Inside the physical range, the measure
$\mu_0$ has little effect. Circles indicate the cutoffs.\hfill \hspace{1.0cm}}
\label{fig:artifact}
\end{figure}
\noindent This result is compared in Fig.\ref{fig:fat-fractal-dims} to the
result of the regular (``thin'') fractal analysis, for which we
found:\cite{me:random-model}
\begin{equation}
2 - D =
{
{ {1 \over 2}[ \eta_1 \sqrt{2\eta_1-\eta_1^2} + 2\eta_1-\eta_1^2 ] }
\over
{ \exp\{ \eta_2 + {1 \over 2} [ \eta_1 \sqrt{2\eta_1-\eta_1^2} + {1 \over 2} (2\eta_1-\eta_1^2) ] \} -1 }
} .
\label{eq:D-thin}
\end{equation}
\noindent The two curves differ only slightly. The analytical fat-fractal
result is also compared in Fig.\ref{fig:fat-fractal-dims} to the procedure
followed in typical experimental analysis of fractal scaling data: a linear
regression between the physical cutoffs ($r_0$ and $r_1$ in our case). The
trend is similar, and the agreement is quite good for the higher coverages. In
any case the analytical Eq.(\ref{eq:gamma-analytical}) serves as an accurate
upper bound to the expected regression result for $\gamma$. The corresponding
regression coefficient ${\cal R}^2$ (inset of Fig.\ref{fig:fat-fractal}) does
not fall below 0.9985 which indicates a very high quality regression,
certainly by experimental standards. Note that ${\cal R}^2$ remains very high
even for $\eta_1 < 10^{-2}$ (i.e., a scaling range $\Delta_e > 2$). This is a
wider range than found for the 1D version of ``thin'' fractal analysis. There
the apparent fractality was observed in a range of 1-2 decades if $\eta <
10^{-1}$ and ${\cal R}^2>0.97$ are required (Fig.\ref{fig:range}). This range
improvement is {\em not} a direct outcome of the inherent fractality we found
between the cutoffs, but is due to the differences in the response of the linear
regression procedure to Eq.(\ref{eq:gamma}), in comparison to Eq.(\ref{eq:D}): It is
the multiplication of the BC function by $r^2$ in the former which is
responsible for the improved range effect, through the increase in the slope of the log-log
plot. One should therefore be careful to eliminate slope biases in analyses
of scaling properties on log-log plots. Furthermore, as clearly seen in
Fig.\ref{fig:artifact}, the essence of the fat fractal analysis, namely the
subtraction of the measure $\mu_0$ of the set, has essentially no effect on
the slope and on the location of the cutoffs and thus does not provide us in
this case with added information. Being left then with the choice between
$N(r)$ and $r^2 N(r)$, there does not seem to be a clear reason to opt for the
latter. We conclude that for low density systems, such as in this report, fat
fractal analysis is not necessary.
\section{Conclusions}
In summary, we have shown that random structures, which are generic in
experimental situations where only the first moment of a distribution is
determined, give rise to apparent fractal behavior within physically relevant
cutoffs, with a non-universal FD. Although this is not a mathematically
rigorous fractality, in the sense that the scaling is not strictly a power
law, it is a {\em physical} fractality: It satisfies the conditions of
high-quality linear regression in the physically relevant range of
observation. Since experiments rarely observe a perfect power law, we believe
that the possibility of {\em approximate} scaling should be considered in
theoretical models, if a more complete understanding of the experimental
fractal data is to be achieved. It is likely that some of this data does in
fact not reflect the existence of an exact power law, but rather an
approximate power law between cutoffs with a weak inflexion point in the
log-log plot. The present model and its approximate scaling properties hint
that this may be the case, e.g., for porous media. Moderate correlations have
little effect on the apparent fractal properties and even in their presence it
is still the underlying randomness that is the main contributor to the
apparent power-law scaling relation. Elsewhere we showed that these results
remain practically unchanged for higher dimensions and for a variety of size
distribution profiles of the elementary building blocks.\cite{me:random-model}
We thus propose to consider randomness as a possible common source for
apparent fractality.
\section*{Acknowledgments}
We would like to thank R.B. Gerber, D. Mukamel and G. Shinar for very helpful
discussions. D.A. is a member of the Fritz Haber Research Center for Molecular
Dynamics and of the Farkas Center for Light Energy Conversion.
\section*{References}
\section{Introduction}\label{intro}
Let $\mca E$ be a locally free sheaf on a projective line $\bb P^1$ over a field $k$.
As was proven by Grothendieck \cite{Gr}, the sheaf $\mca E$ decomposes into a direct sum of line bundles on $\bb P^1$ and the decomposition is unique up to isomorphisms.
Hence we have a complete classification not only of locally free sheaves but also of indecomposable sheaves on $\bb P^1$.
It may be natural to study an analogue of Grothendieck's theorem for $\bb P^n$, but it seems difficult.
In fact, if $n >1$ then there exists an indecomposable locally free sheaf on $\bb P^n$ whose rank is greater than $1$.
A simple example of such a sheaf is the tangent sheaf on $\bb P^n$.
Moreover the classification of indecomposable locally free sheaves is more difficult in the case of lower rank (cf. \cite{Har79}).
Though a higher dimensional analogue of Grothendieck's theorem is difficult, Ishii and Uehara prove a beautiful analogue for the fundamental cycle $Z_{A_n}$ of the Kleinian singularity $A_n$.
To recall their result, a non-zero sheaf $\mca F$ on a scheme $Y$ is said to be \textit{pure} if the support of any non-trivial subsheaf of $\mca F$ has the same dimension as $Y$.
We note that if $Y$ is smooth and $1$-dimensional then a pure sheaf on $Y$ is equivalent to a locally free sheaf on $Y$.
Thus a pure sheaf is a natural generalization of locally free sheaves for reducible schemes such as $Z_{A_n}$.
Ishii and Uehara prove the following:
\begin{theorem}[{\cite[Lemma 6.1]{IU}}]\label{IUthm}
Let $\mca E$ be a pure sheaf on $Z_{A_n}$.
Then $\mca E$ decomposes into a direct sum of invertible sheaves on connected subtrees of $Z_{A_n}$.
Moreover, the decomposition is unique up to isomorphisms.
\end{theorem}
We first study an analogue of Theorem \ref{IUthm}.
\begin{theorem}[=Corollary \ref{maincor}]\label{main1}
Let $Z$ be the fundamental cycle of a Kleinian singularity except for $A_n$.
Then
\[
\max \{ \mathop{\mathrm{rank}}\nolimits_Z \mca E \mid \mca E\mbox{ is an indecomposable pure sheaf on }Z \} = \infty.
\]
\end{theorem}
We remark that the usual rank of sheaves is not appropriate since our scheme is reducible.
Thus we introduce a more suitable notion of ``rank'' of sheaves in our setting (see Definition \ref{rank}).
By using it, the first half of Theorem \ref{IUthm} can be restated as saying that the maximum of the ranks of indecomposable pure sheaves is $1$.
It may be natural to expect that the maximum of the ranks of indecomposable pure sheaves is also bounded for the other Kleinian singularities.
Our theorem gives a counter-example to this expectation.
The second aim of this note is to study the classification of ``$\mca O_X$-rigid" pure sheaves on $Z$.
The classification is related to the classification of spherical objects in a certain category (for the definition of spherical objects, see also \cite{Huy06} or \cite{ST}).
To explain the relation,
let $ X $ be the minimal resolution of a Kleinian singularity.
It is well-known that the fundamental cycle $Z$ of the singularity is the scheme-theoretic fiber of the singular point under the resolution.
Since $Z$ is a subscheme of $X$, we have a natural embedding $\iota \colon Z \to X$.
We denote by $D_Z(X)$ the bounded derived category of coherent sheaves on $X$ supported on $Z$.
A coherent sheaf $\mca E$ on $Z$ is said to be \textit{$\mca O_X$-rigid} if the push forward $\iota _* \mca E$ by $\iota $ is rigid, that is, $\mathop{\mathrm{Ext}}\nolimits_X^1 (\iota _* \mca E, \iota _* \mca E) =0$.
Ishii and Uehara show that each cohomology (with respect to the standard $t$-structure) of spherical objects in $D_Z(X)$ is the push forward $\iota _* \mca E$ of a pure sheaf $\mca E $ on $Z$ which is $\mca O_X$-rigid.
If the singularity is $A_n$, then the classification of $\mca O_X$-rigid sheaves is a direct consequence of Theorem \ref{IUthm}.
By using the classification, Ishii and Uehara classify spherical objects in $D_Z(X)$ for the Kleinian singularity $A_n$ (the details are in \cite[Proposition 1.6]{IU}).
One might hope to classify spherical objects for the other Kleinian singularities following Ishii and Uehara's approach, by first classifying indecomposable pure $\mca O_X$-rigid sheaves.
Theorem \ref{main1} is evidence that this is likely to be a rather difficult problem and we do not solve it in this paper.
However we do prove the following result, which leaves hope that such a classification might be achieved in the future.
\begin{theorem}\label{mainthm2}
Let $\mca E$ be an indecomposable pure sheaf on the reduced scheme $Z_{r}$ of the fundamental cycle of a Kleinian singularity except for $A_n$.
If $\mca E$ is $\mca O_X$-rigid, then $\mathop{\mathrm{rank}}\nolimits_{Z_r} \mca E \leq 3$ and the inequality is best possible.
\end{theorem}
The proof of Theorem \ref{mainthm2} will be postponed till the end of Section \ref{4}.
The essential part is in the proof of Propositions \ref{mainD} and \ref{bestD}.
\subsection*{Acknowledgement}
The author thanks the referee for valuable comments which simplify the proof of Theorem \ref{mainthm1} and improve readability.
He is supported by JSPS KAKENHI Grant Number JP 16H06337.
\section{Notations and Conventions}
Throughout this note, our field $k$ is algebraically closed and Kleinian singularities are given by $\mr{Spec}\, k\llbracket x,y,z \rrbracket/ f(x,y,z)$ where $f(x,y,z)$ is one of the following:
\begin{center}
\begin{tabular}{cll}
$A_n$ & $x^2 + y ^2 +z^{n+1}$ & for $n \geq 1$\\
$D_n$ & $x^2 + y^2 z + z^{n-1}$ & for $n \geq 4$\\
$E_6$ & $x^2 + y^3 + z^4$\\
$E_7$ & $x^2 + y^3 + yz ^3$\\
$E_8$ & $x^2 + y^3 + z^5$.
\end{tabular}
\end{center}
Let $Z$ be the fundamental cycle of the singularity $D_{n}$.
The $i$-th irreducible component $C_i$ of $Z$ is denoted as in Figure \ref{tree}.
Then it is well-known that $Z$ is $C_1 +C_2 + \sum_{i=3}^{n-1}2 C_i +C_{n}$.
Similarly the $j$-th irreducible component $C_j$ of the fundamental cycle of the singularities $E_{6}$, $E_7$ or $E_8$ is denoted as in Figure \ref{treeE}.
\begin{figure}[htb]
\begin{minipage}{0.49\hsize}
\begin{center}
\includegraphics[clip, width=67mm]{1.eps}
\end{center}
\caption{}
\label{tree}
\end{minipage}
\begin{minipage}{0.49\hsize}
\begin{center}
\includegraphics[clip, width=74mm]{2.eps}
\end{center}
\caption{}
\label{treeE}
\end{minipage}
\end{figure}
\begin{remark}\label{change index}
We note that the chain $\sum_{i=2}^5 C_i$ in Figure \ref{treeE} gives the reduced scheme of the fundamental cycle of the singularity $D_4$. We use this identification in the proof of Proposition \ref{propE}.
\end{remark}
Let $\mca D$ be a $k$-linear triangulated category.
We denote by $\mathop{\mathrm{hom}}\nolimits^p(E,F)$ the dimension of the vector space $\mathop{\mathrm{Hom}}\nolimits^p_{\mca D}(E,F) = \mathop{\mathrm{Hom}}\nolimits_\mca D(E,F[p])$ for $E$ and $F \in \mca D$.
The category $\mca D$ is said to be of finite type if the sum $\sum_{p \in \bb Z} \mathop{\mathrm{hom}}\nolimits^p(E,F)$ is finite for any $E,F \in \mca D$.
If $\mca D$ is of finite type
then the Euler characteristic
\[
\chi(E,F)= \sum_{p \in \bb Z} (-1)^p \mathop{\mathrm{hom}}\nolimits^p(E, F)
\]
is well-defined.
If the Serre functor of $\mca D$ is isomorphic to the double shift $[2]$, then $\mca D$ is said to be \textit{$2$-dimensional Calabi-Yau} (for simplicity CY2).
If $\mca D$ is CY2, then we have $\mathop{\mathrm{hom}}\nolimits^p(E,F) = \mathop{\mathrm{hom}}\nolimits^{2-p}(F,E)$.
One of the best examples of CY2 categories is $D_Z(X)$.
\section{Indecomposable pure sheaves}\label{2}
\begin{definition}\label{rank}
Let $Z'$ be a $1$-dimensional closed subscheme of the fundamental cycle $Z$ of a Kleinian singularity and let $\iota' \colon Z' \to X$ be the embedding into the minimal resolution $X$ of the singularity.
We define the rank of a sheaf $\mca E$ on $Z'$ as follows:
\[
\mathop{\mathrm{rank}}\nolimits_{Z'} \mca E := \min \{ a \in \bb Z_{\geq 0} \mid c_1(\iota' _* \mca E) \leq a \cdot Z' \},
\]
where $c_1$ is the first Chern class.
\end{definition}
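As a simple illustration, take $Z'$ to be the reduced scheme $Z_{r}=C_1+C_2+C_3+C_4$ of
the fundamental cycle of $D_4$.
Then $c_1(\iota'_* \mca O_{Z_{r}}) = Z_{r}$ and hence $\mathop{\mathrm{rank}}\nolimits_{Z_{r}} \mca O_{Z_{r}} = 1$,
while $\mathop{\mathrm{rank}}\nolimits_{Z_{r}} \mca O_{Z_{r}}^{\oplus a} = a$; similarly $\mathop{\mathrm{rank}}\nolimits_{Z_{r}} \mca O_{C_3} = 1$.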
\begin{remark}
We would like to define a rank so that the structure sheaf of any $Z'$ has rank $1$.
One naive generalization of the usual rank is the following:
the rank of a sheaf $\mca E$ on $Z'$ is the maximum of the ranks of $\mca E$ on the irreducible components of $Z'$.
Such a generalization does not satisfy our requirement if $Z'$ is the fundamental cycle $Z$ of a Kleinian singularity except for $A_n$.
\end{remark}
By using Definition \ref{rank} the first half of Theorem \ref{IUthm} can be restated as follows
\[
\max \{ \mathop{\mathrm{rank}}\nolimits_{Z_{A_n}} \mca E \mid \mca E\mbox{ is an indecomposable pure sheaf on } Z_{A_n} \} = 1.
\]
Contrary to the singularity $A_n$, we prove the following for the singularity $D_4$:
\begin{theorem}\label{mainthm1}
Let $Z_{r}$ be the reduced scheme of the fundamental cycle $Z$ of the singularity $D_4$.
Then for any $r \in \bb N$ there exists an indecomposable pure sheaf $\mca E$ on $Z_{r}$ with $\mathop{\mathrm{rank}}\nolimits_{Z_{r}} \mca E=r$.
\end{theorem}
Before the proof we denote by $\mca O_{C_1+C_2+C_3}(a_1,a_2,a_3)$ an invertible sheaf on the chain $C_1 + C_2 +C_3$ such that the degree on each irreducible component $C_i$ is $a_i$.
\begin{remark}
A key ingredient of the proof of Theorem \ref{mainthm1} is a choice of particular sheaves $\mca L_n$, where $\mca L_n = \mca O_{C_1 + C_2 + C_3}(n,-n,0)$ for $n \in \bb Z$.
Any pair $(\mca L_n, \mca L_m)$ (for $n \neq m$) has the following property:
For any morphism $ f \colon \mca L_n \to \mca L_m$, the induced morphism $ f_* \colon \mathop{\mathrm{Ext}}\nolimits^1_{Z_r} (\mca O_{C_4} , \mca L_n) \to \mathop{\mathrm{Ext}}\nolimits^1_{Z_r}(\mca O_{C_4}, \mca L_m)$ is zero (the details are in Lemma \ref{ferox}).
If the singularity is $A_n$, such a pair does not exist.
\end{remark}
\begin{proof}
Take an invertible sheaf $\mca L_n = \mca O_{C_ 1 + C_2 + C_3}(n,-n,0)$ on $C_1+C_2+C_3$ for an integer $n \in \bb Z$.
It is easy to see $\mathop{\mathrm{Ext}}\nolimits_{Z_{r}}^1(\mca O_{C_4}, \mca L_n) \cong \mathop{\mathrm{Ext}}\nolimits_{Z_{r}}^1 (\mca O_{C_4},
\mca O_{C_3}) $.
We wish to describe $\mathop{\mathrm{Ext}}\nolimits^1_{Z_{r}} (\mca O_{C_4}, \mca O_{C_3}) $ in an explicit way.
By the locally free resolution of $\mca O_{C_4}$ as $\mca O_{Z_{r}}$-module,
we see $\mca Hom^0_{Z_{r}} (\mca O_{C_4}, \mca O_{C_3} ) =0$ and
$\mca Ext^1_{Z_{r}}(\mca O_{C_4}, \mca O_{C_3}) \cong k(x)$ where $x \in C_3 \cap C_4$.
Thus we have $\mathop{\mathrm{Ext}}\nolimits_{Z_{r}}^1(\mca O_{C_4}, \mca L_n) \cong \mathop{\mathrm{Ext}}\nolimits^1_{Z_{r}} (\mca O_{C_4}, \mca O_{C_3}) \cong H^0\big(k(x)\big)
$ by the local-to-global spectral sequence.
Let $I$ be an arbitrary finite subset of $\bb Z$.
The vector space $\bigoplus _{n \in I} \mathop{\mathrm{Ext}}\nolimits_{Z_{r}}^1 (\mca O_{C_4}, \mca L_n)$ is denoted by $V_I$.
Any extension class $[\mca E] \in V_I$ can be identified with a column vector with respect to a natural basis of $V_I$.
Take the universal extension $[\mca U_I]$, that is, $[\mca U_I]$ is a vector whose components are all $1$.
We wish to prove that $\mca U_{I} $ is indecomposable.
Suppose to the contrary that $\mca U_{I}$ decomposes into $\mca F \oplus \mca G$.
We can assume $\mr{Supp}\ \mca G \supset C_4$ without loss of generality.
Then we have $\mathop{\mathrm{Hom}}\nolimits_{Z_r}(\mca F, \mca O_{C_4})=0$ since $\mr{Supp}\ \mca F \subset C_1 + C_2 +C_3$.
Hence the natural morphism $ f \colon \mca F \oplus \mca G \to \mca O_{C_4}$ splits into $0 \oplus \tilde f$ where $0$ is the zero morphism from $\mca F$ and $\tilde f \colon \mca G \to \mca O_{C_4}$.
Let $K$ be the kernel of the morphism $\tilde f$.
Then we have
\[
\mca F \oplus K \cong \bigoplus_{n \in I}\mca L_n.
\]
In particular, $\mca F \oplus K $ is a pure $\mca O_{C_1 +C_2 + C_3}$-module.
Thus, by Theorem \ref{IUthm}, we see
$\mca F \cong \bigoplus _{n_i \in I'}\mca L_{n_i}$ and $K \cong \bigoplus_{n_j \in I''} \mca L_{n_j}$ where $I' \coprod I'' = I$.
In particular we have the following diagram of distinguished triangles:
\[
\xymatrix{
\mca F \oplus \mca G \ar[d]\ar[r]^f &\mca O_{C_4} \ar[d]\ar[r]^-{u} &\bigoplus_{n \in I} \mca L_n [1]\ar[d]^{\pi[1]} \\
\mca F \ar[r] &0 \ar[r] &\bigoplus _{ n_i \in I'}\mca
L_{n_i}[1]. \\
}
\]
Hence the composite $\pi[1] \circ u$ is zero in the derived category $D(Z_r)$ on $Z_r$.
Thus the component of $[\mca U_{I}] \in \mathop{\mathrm{Ext}}\nolimits^1_{Z_{r}}(\mca O_{C_4}, \mca F)
\oplus \mathop{\mathrm{Ext}}\nolimits^1_{Z_{r}}(\mca O_{C_4}, \mca G)$ lying in $\mathop{\mathrm{Ext}}\nolimits^1_{Z_{r}}(\mca O_{C_4}, \mca F)$ should be $0$.
Moreover we see that the natural representation of the automorphism group $\mathop{\mathrm{Aut}}\nolimits (\bigoplus \mca L_n)$ on $V_I$ is contained in diagonal matrices by Lemma \ref{ferox} (below).
This contradicts the definition of $\mca U_I$, and hence $\mca U_{I}$ is indecomposable.
Finally, since $c_1$ is additive in short exact sequences, we have $c_1(\iota_* \mca U_{I}) = \# I\,(C_1+C_2+C_3) + C_4$, where $\iota \colon Z_{r} \to X$ denotes the embedding into the minimal resolution, so that $\mathop{\mathrm{rank}}\nolimits_{Z_{r}} \mca U_{I} = \# I$.
Taking $\# I = r$ completes the proof.
\end{proof}
\begin{lemma}\label{ferox}
We denote by $\mca L_{i}$ a pure sheaf $\mca O_{C_1 +C_2 + C_3 } (i,-i,0)$ for an integer $i$.
For any finite subset $I \subset \bb Z$, the vector space $\bigoplus _{i \in I} \mathop{\mathrm{Ext}}\nolimits^1_{Z_{r}}(\mca O_{C_4}, \mca L_{i})$ is denoted by $V_I$.
Then the image of a natural representation
\[
\rho \colon \mr{End}_{Z_{r}} \big(\bigoplus _{i \in I} \mca L_i \big) \to \mr{End} _{k}(V_I )
\]
is contained in diagonal matrices with respect to a natural basis of $V_I$.
\end{lemma}
\begin{proof}
Let $\mca D$ be the derived category on $Z_r$.
The vector space $\mr{End}_{Z_{r}} \big(\bigoplus _{i \in I} \mca L_i \big) $ decomposes as follows:
\[
\mr{End}_{Z_{r}} \big(\bigoplus _{i \in I} \mca L_i \big)
\cong \bigoplus _{i,j \in I} \mathop{\mathrm{Hom}}\nolimits_{Z_r} ( \mca O_{C_1+C_2+C_3}, \mca L_{j-i} ).
\]
By the symmetry for $C_1$ and $C_2$ we can assume $ \ell = i - j \geq 0$.
If $\ell > 0$ then $H^0(\mca O_{C_2+ C_3}(-\ell,0))$ is zero.
Hence the natural inclusion $\mca O_{C_1}(\ell-1) \to \mca L_{\ell}$ induces an isomorphism
\begin{equation}
\mathop{\mathrm{Hom}}\nolimits(\mca O_{C_1 + C_2 +C_3}, \mca L_{\ell})
\cong \mathop{\mathrm{Hom}}\nolimits(\mca O_{C_1 + C_2 +C_3}, \mca O_{C_1}(\ell-1)) . \label{hung}
\end{equation}
Thus any morphism $ \varphi \in \mathop{\mathrm{Hom}}\nolimits(\mca O_{C_1 + C_2 +C_3}, \mca L_{\ell}) $ factors through $\mca O_{C_1}(\ell-1)$ and
hence the induced morphism in $\mca D$
\[
\varphi _* \colon \mathop{\mathrm{Hom}}\nolimits_{\mca D}^0(\mca O_{C_4}[-1], \mca O_{C_1 +C_2 + C_3}) \to \mathop{\mathrm{Hom}}\nolimits_{\mca D}^0(\mca O_{C_4}[-1], \mca L_{\ell})
\]
factors through $\mca O_{C_1}(\ell-1)$.
Thus the morphism $\varphi_* $ should be zero since the intersection $C_1 \cap C_4$ is empty.
Hence the action of $\mr{End}_{Z_{r}} (\bigoplus_{i \in I} \mca L_i)$ is contained in the diagonal component of $\mr{End}_{k}(V_I)$.
\end{proof}
\begin{corollary}\label{maincor}
Let $Z$ be the fundamental cycle of a Kleinian singularity except for $A_n$.
Then there is an indecomposable pure sheaf on $Z$ of rank $r$ for any $r \in \bb N$.
In particular the following holds:
\[
\max \{ \mathop{\mathrm{rank}}\nolimits_Z \mca E \mid \mca E \mbox{ is an indecomposable pure sheaf on }Z \} = \infty.
\]
\end{corollary}
\begin{proof}
Let $Z_{4,r}$ be the reduced scheme of the fundamental cycle of the singularity $D_4$.
Then $Z_{4,r}$ is a closed subscheme of $Z$.
As in the proof of Theorem \ref{mainthm1}, the universal extension $\mca U_I$ of $\mathop{\mathrm{Ext}}\nolimits^1_{Z_{4,r}} (\mca O_{C_4}, \bigoplus_{n \in I} \mca L_n)$ is an indecomposable pure sheaf on $Z_{4,r}$.
The push forward $\iota _* \mca U_I$ by the closed embedding $\iota \colon Z_{4,r} \to Z$ is also a pure sheaf on $Z$.
Moreover the push forward $\iota _*$ is a fully faithful functor from $\mr{Coh} (Z_{4,r})$ to $\mr{Coh}(Z)$ and the full subcategory $\iota_*\mr{Coh} (Z_{4,r})$ is closed under direct summands.
Thus the assertion holds.
\end{proof}
\section{$\mca O_X$-rigid pure sheaves on $D_n$}\label{3}
For any closed embedding $f \colon Z \to X$, the push forward $f_* \colon \mr{Coh}(Z) \to \mr{Coh}(X)$ is fully faithful, but the derived push forward $f_* \colon D(Z) \to D(X)$ is not.
To analyze the difference the following lemma is necessary.
\begin{lemma}\label{extcomp}
Let $f \colon Z \to X$ be a closed embedding of $Z$ to an algebraic variety $X$.
Let $\mca F$ and $\mca E$ be sheaves on $Z$.
The canonical map
$\mathop{\mathrm{Ext}}\nolimits^1_Z(\mca F , \mca E) \rightarrow\mathop{\mathrm{Ext}}\nolimits^1_X(f_* \mca F, f_* \mca E)$ is injective.
\end{lemma}
\begin{proof}
By the adjunction we have $\mathop{\mathrm{Ext}}\nolimits^p_X(f_* \mca F, f_* \mca E) \cong \mathop{\mathrm{Ext}}\nolimits^p_Z\big(\bb L f^* (f_* \mca F), \mca E
\big)$ (note that $f$ is an affine morphism).
By the canonical morphism
$\bb L f^* f_* \mca F \to \mca F$ we have the following distinguished triangle in the derived category $\mca D$ of coherent sheaves on $Z$:
\[
\begin{CD}
\mf F @>>> \bb Lf^* f_* \mca F @>>> \mca F @>>> \mf F[1].
\end{CD}
\]
Since $\bb L^0 f^* f_* \mca F = f^* f_*\mca F\cong \mca F$, we see that the $p$-th cohomology (with respect to the standard $t$-structure) of the complex $\mf F $ vanishes for $p\in \bb Z_{\geq 0}$.
Hence we have $\mathop{\mathrm{Hom}}\nolimits_Z^q(\mf F, \mca E)=0$ for $q \in \bb Z_{\leq 0}$.
By taking $\bb R \mathop{\mathrm{Hom}}\nolimits_{\mca D} (-, \mca E)$ to the above sequence we have the following exact sequence:
\[
\begin{CD}
\mathop{\mathrm{Hom}}\nolimits^0_{\mca D}(\mf F, \mca E) @>>> \mathop{\mathrm{Hom}}\nolimits^1_{\mca D}(\mca F, \mca E) @>\kappa>> \mathop{\mathrm{Hom}}\nolimits^1_{\mca D}(\bb Lf^*f_* \mca F, \mca
E ) @>>> \mathop{\mathrm{Hom}}\nolimits^1_{\mca D}(\mf F, \mca E)
\end{CD}.
\]
Note that the canonical morphism $\mathop{\mathrm{Ext}}\nolimits^1_Z(\mca F, \mca E) \to \mathop{\mathrm{Ext}}\nolimits^1_X(f_* \mca F, f_* \mca E)$ is given by $\kappa$.
Since $\mathop{\mathrm{Hom}}\nolimits^0_Z(\mf F, \mca E)=0$ the morphism $\kappa$ is injective.
\end{proof}
\begin{corollary}\label{isom}
Let $Z$ be a chain of rational curves in $X$ and let $\{ C_i \}_{i=1}^{n}$ be a set of irreducible components of $Z$.
Then for $i \neq j$, we have
\[
\mathop{\mathrm{Ext}}\nolimits^1_Z(\mca O_{C_i}(d_i), \mca O_{C_j}(d_j)) \cong \mathop{\mathrm{Ext}}\nolimits^1_X(f_* \mca O_{C_i}(d_i), f_* \mca O_{C_j}(d_j)).
\]
\end{corollary}
\begin{proof}
A locally free resolution of $f_*\mca O_{C_i}(d_i)$ is given by
\[
\begin{CD}
0 @>>> \mca O_X(D-C_i) @>>> \mca O_X(D) @>>> 0
\end{CD},
\]
where $D$ is a divisor on $X$ such that $D.C_i =d_i$.
Thus $\bb Lf^* f_* \mca O_{C_i}(d_i)$ is given by the following:
\[
\begin{CD}
0 @>>> \mca O_Z(D-C_i) @>\delta>> \mca O_Z(D) @>>> 0
\end{CD}.
\]
In particular $\bb L^{-1} f^{*}f_* \mca O_{C_i}(d_i)$ is the kernel of $\delta$ which is isomorphic to
$\mca O_{C_i}(D-Z)$.
Moreover $\mf F$ is isomorphic to $\mca O_{C_i}(D- Z)[1]$.
Since $C_i \neq C_j$ we have
\[
\mathop{\mathrm{Hom}}\nolimits^1_Z(\mf F, \mca O_{C_j}(d_j)) = \mathop{\mathrm{Hom}}\nolimits^0_Z(\mca O_{C_i}(D- Z), \mca O_{C_j}(d_j)) =0.
\]
This is the desired conclusion.
\end{proof}
We first introduce a relation on a collection of sheaves and secondly prove that the relation defines an order:
\begin{definition}\label{Order}
Let $\mf N = \{ \mca N_i \}_{i \in I}$ and $\mf L = \{ \mca L_j \}_{j \in J}$ be finite collections of isomorphism classes of sheaves on a scheme $Y$.
Suppose that $\mf N$ and $\mf L$ satisfy the following:
\begin{enumerate}
\item[$(a)$] Endomorphism rings $\mr{End}_Y(\mca L_j)$ and $\mr{End}_Y(\mca N_i)$ are generated by the identity for each $i$ and $j$.
\item[$(b)$] Each pair $(\mca L_j, \mca N_i)$ satisfies $ \mathop{\mathrm{dim}}\nolimits \mathop{\mathrm{Ext}}\nolimits^1_Y(\mca L_j, \mca N_i)=1$.
\end{enumerate}
(1) If there is a morphism $f\colon \mca L_{j_2} \to \mca L_{j_1}$ such that $f^*\colon \mathop{\mathrm{Ext}}\nolimits^1_Y(\mca L_{j_1}, \mca N_i) \to \mathop{\mathrm{Ext}}\nolimits^1_Y(\mca L_{j_2}, \mca N_i)$ is nonzero for all $i \in I$ then we define a relation $\mca L_{j _1} \leq \mca L_{j_2} $ on $\mf L$.
(2) Dually if there is a morphism $g\colon \mca N_{i_1} \to \mca N_{i_2}$ such that the induced morphism $g_* \colon \mathop{\mathrm{Ext}}\nolimits^1_Y(\mca L_j, \mca N_{i_1}) \to \mathop{\mathrm{Ext}}\nolimits^1_Y(\mca L_j, \mca N_{i_2}) $ is nonzero for all $ j \in J$ then we define a relation $\mca N_{i_1} \leq \mca N_{i_2}$ on $\mf N$.
\end{definition}
\begin{proposition}\label{Order2}
The relations on $\{ \mca N_i \}_{i \in I}$ and $\{ \mca L_j \}_{j \in J}$ in Definition \ref{Order} respectively define orders.
In particular both are posets.
\end{proposition}
\begin{proof}
Since the proof is similar, it is enough to show the claim for $\{ \mca {L}_j \}_{j \in J}$.
The reflexivity is obvious since the identity gives the identity on $\mathop{\mathrm{Ext}}\nolimits^1_{Y} (\mca L_j, \mca N_i)$.
Suppose $\mca L_{j_1} \leq \mca L_{j_2}$ and $\mca L_{j_2} \leq \mca L_{j_1}$.
Then there exist morphisms $f_1: \mca L_{j_2} \to \mca L_{j_1}$ and $f_2: \mca L_{j_1} \to \mca L_{j_2}$.
By the condition $(b)$ in Definition \ref{Order}, both $f_1^*$ and $f_2^*$ are isomorphisms.
In particular the compositions $(f_1 \circ f_2)^*$ and $(f_2 \circ f_1)^*$ are nonzero morphisms.
Thus two morphisms $f_1 \circ f_2 \in \mr{End}(\mca L_{j_1})$ and $f_2 \circ f_1 \in \mr{End}(\mca L_{j_2})$ are not zero.
By the condition $(a)$ in Definition \ref{Order}, we see that $f_1 \circ f_2$ and $f_2 \circ f_1$ are identities up to scalar and hence $\mca L_{j_1} \cong \mca L_{j_2}$.
For the transitivity let us suppose $\mca L_{j_1} \leq \mca L_{j_2}$ and $\mca L_{j_2} \leq \mca L_{j_3}$.
Similarly as above, the composition $f_1 \circ f_2$ of two morphisms $f_1: \mca L_{j_2} \to \mca L_{j_1} $ and $f_2 : \mca L_{j_3} \to \mca L_{j_2}$ induces a non-zero morphism $(f_1 \circ f_2 )^* \colon \mathop{\mathrm{Ext}}\nolimits^1_Y (\mca L_{j_1}, \mca N_i) \to \mathop{\mathrm{Ext}}\nolimits^1_Y(\mca L_{j_3}, \mca N_i)$.
Thus we obtain $\mca L_{j_1 } \leq \mca L_{j_3}$.
\end{proof}
\begin{remark}
We are interested in the classification of $\mca O_X$-rigid pure sheaves and study the classification in Lemma \ref{minimal} and Proposition \ref{propE}.
Any pure sheaf $\mca E$ on $Z$ of a Kleinian singularity is given by a successive extension of pure sheaves on subtrees (see the filtration (\ref{horoyoi}) below).
The poset structure defined in Proposition \ref{Order2} is convenient to analyze the successive extension.
\end{remark}
We are ready to prove our main proposition in this section.
\begin{proposition}\label{mainD}
Let $X$ be the minimal resolution of the singularity $D_n$ and $Z_r$ be the reduced scheme of the fundamental cycle $Z$ of $D_n$.
Suppose that $\mca E$ is an indecomposable pure sheaf on $Z_r$.
If $\mca E$ is $\mca O_X$-rigid then we have $\mathop{\mathrm{rank}}\nolimits_{Z_r} \mca E \leq 3$.
\end{proposition}
\begin{proof}
We have $Z_{r} =\sum_{i=1}^{n} C_i $ by the definition.
Take a pure sheaf $\mca E$ on $Z_r$ which is not necessarily indecomposable but $\mca O_X$-rigid.
We shall show that the rank of each direct summand of $\mca E$ is at most $3$.
Let $\mca F$ be the kernel of the restriction $\mca E \to \mca E \otimes \mca O_{C_4+ \cdots +C_n}$.
By taking saturation if necessary, we can assume that the sheaf $\mca E$ fits into the short exact sequence
\[
\begin{CD}
0 @>>> \mca F @>>> \mca E @>>> \mca G @>>> 0
\end{CD}
\]
where $\mca F$ is a pure sheaf on $C_1 + C_ 2 + C_3$ and $\mca G$ is a pure sheaf on $C_4+\cdots + C_n$.
Both $\mca F$ and $\mca G$ are direct sums of invertible sheaves on subtrees by Theorem \ref{IUthm}, since the trees $C_1 +C_2 + C_3$ and $C_4 + \cdots +C_n$ are the fundamental cycles of $A_3$ and $A_{n-3}$ respectively:
\[
\mca F = \bigoplus_{i \in I} \mca N_{i } \mbox{ and }\mca G = \bigoplus _{j \in J} \mca L_j.
\]
Without loss of generality we can assume the following
\begin{itemize}
\item The support of each $\mca N_i$ contains $C_3$.
\item The support of each $\mca L_j$ contains $C_4$ and is connected.
\end{itemize}
Now we claim the following:
\begin{lemma}\label{minimal}
Let $\mf N$ be the collection of direct summands $\{ \mca N_{i} \}_{i \in I}$ of $\mca F$ and $\mf L$ the collection of direct summands $ \{ \mca L_{j} \} _{j \in J}$ of $\mca G$.
\begin{enumerate}
\item[$(1)$] Both $\mf N$ and $ \mf L$ are posets with respect to the relation in Definition \ref{Order}.
\item[$(2)$] There exist at most $3$ minimal elements in any subposet of $\mf N$ and the poset $\mf L$ is totally ordered.
\end{enumerate}
\end{lemma}
Before giving the proof of Lemma \ref{minimal}, we finish the proof of Proposition \ref{mainD}.
Note that $\mca E$ defines a class $[\mca E ]$ in $\bigoplus_{i \in I, j \in J} \mathop{\mathrm{Ext}}\nolimits^1_{Z_{r}}(\mca L_j, \mca N_i)$ denoted by $V_{IJ}$.
Put $m=\# I$ and $n = \# J$.
Clearly $V_{IJ}$ can be identified with the set of $m\times n$ matrices and we can write $[\mca E]$ by a matrix
\[
[\mca E] =\begin{pmatrix}
e_{11} & e_{12} & \cdots & e_{1n} \\
e_{21} & e_{22} & \cdots & e_{2n} \\
\vdots & \vdots & & \vdots \\
e_{m1} & e_{m2} & \cdots & e_{mn}
\end{pmatrix}.
\]
If $\mca N_{i_1} \leq \mca N_{i_2}$ and $e_{i_1, j} \neq 0$ then
we can assume $e_{i_2,j}=0$ by row fundamental transformations induced by a morphism $\mca N_{ i_1} \to \mca N_{i_2}$.
Since $\mf {N}$ has at most $3$ minimal elements by Lemma \ref{minimal}, we may assume that each column contains at most $3$ non-zero components $e_{ij}$.
Similarly if $\mca L _{j_1} \geq \mca L_{j_2}$ and $e_{i, j_1} \neq 0$
then we can assume $e_{i, j_2}=0$ by column fundamental transformations.
Since $\mf L$ is totally ordered by Lemma \ref{minimal}, we may assume that each row contains at most one non-zero component $e_{i'j'}$.
This means that the rank of each direct summand of $\mca E$ is at most $3$ since $Z_r$ is reduced.
\end{proof}
\renewcommand{\proofname}{\textit{Proof} of Lemma \ref{minimal}}
\begin{proof}
Let $\iota \colon Z_{r} \to X$ be the embedding to the minimal resolution of the singularity.
Since $\mathop{\mathrm{Hom}}\nolimits_{Z_r}( \mca F, \mca G)$ is zero, the push forwards $\iota _* \mca F$ and $ \iota _* \mca G$ are rigid by \cite[Lemma 2.5]{BB13}.
Each direct summand of $\mca G$ is an invertible sheaf on a connected subtree of $C_4 + \cdots + C_n$.
Then the order introduced in \cite[Section 6.1]{IU} gives the order in Definition \ref{Order}.
In particular $\mf {L}$ is totally ordered.
To determine $\mf N$, similarly as before, take a filtration of $\mca F$
\begin{equation}
0 \subset \mca F_1 \subset \mca F _3 \subset \mca F_2 = \mca F \label{horoyoi}
\end{equation}
such that
\begin{itemize}
\item $\mca F_1$ and $\mca F_3$ are pure on respectively $C_1$ and $C_1 + C_3$,
\item $\mca F_3/ \mca F_1$ is pure on $C_3$ and
\item $\mca F_2/ \mca F_3$ is a pure sheaf on $C_2$.
\end{itemize}
Similarly, $\mca F_3/ \mca F_1$ and $\mca F_2/ \mca F_3$ are rigid by Lemma \ref{extcomp} and \cite[Lemma 2.5]{BB13}.
Since $\mca F_1$ is rigid and pure, there exists an integer $a_1$ such that $\mca F_1= \mca O_{C_1}(a_1)^{\oplus m_1} \oplus \mca O_{C_1}(a_1+1)^{\oplus n_1}$.
A similar statement applies to $\mca F_3 /\mca F_1$ and $\mca F_2 / \mca F_3$.
So there are three integers $\{ a_1, a_2 , a_3 \}$ such that every summand of $\mca F$ is one of the $18$ possibilities listed in Table \ref{Summand}.
\begin{table}[htbp]
\begin{center}
\resizebox{1\hsize}{!}{
\begin{tabular}{|c|c|c|}
\hline
$\mca O_{C_1+C_3}(a_1+1,a_3)$ &$\mca O_{C_1+C_2+C_3}(a_1+1, a_2, a_3+1)$ &$\mca O_{C_1+C_2+C_3}(a_1+1, a_2+1, a_3+1)$ \\
\hline
$ \mca O_{C_1+C_3}(a_1+2 , a_3)$ &$\mca O_{C_1+C_2+C_3}(a_1+2, a_2, a_3 +1)$ &$\mca O_{C_1+C_2+C_3}(a_1+2, a_2+1, a_3+1)$ \\
\hline
$\mca O_{C_3}(a_3)$ &$\mca O_{C_2+C_3}(a_2, a_3+1) $ &$\mca O_{C_2 +C_3}(a_2+1, a_3+1)$ \\
\hline
$ \mca O_{C_1+C_3}(a_1 +1, a_3+1)$ &$\mca O_{C_1+C_2+C_3}(a_1+1, a_2, a_3+2)$ &$\mca O_{C_1+C_2+C_3} (a_1 +1, a_2 +1, a_3+2)$ \\
\hline
$\mca O_{C_1+C_3}(a_1 +2, a_3+1)$ &$\mca O_{C_1+C_2+C_3}(a_1+2, a_2,a_3+2)$ &$\mca O_{C_1+C_2+C_3}(a_1+2, a_2+1, a_3+2)$ \\
\hline
$\mca O_{C_3}(a_3 +1)$ &$\mca O_{C_2+C_3}(a_2, a_3+2)$ &$\mca O_{C_2+C_3}(a_2+1, a_3+2)$\\
\hline
\end{tabular}
}
\end{center}
\caption{We denote by $\mca N_{ij}$ the $i$-th row and $j$-th column component in the table. For instance, $\mca N_{31}=\mca O_{C_3}(a_3)$. }\label{Summand}
\end{table}%
We prove the first assertion.
Recall that $\mathop{\mathrm{Ext}}\nolimits^1_{Z_{r}}(\mca L_j, \mca N_i) $ is isomorphic to $H^0(\mca O_{x})$ where $x \in C_3 \cap C_4$.
Since the support of each $\mca N_i$ contains $C_3$, both $\mf N$ and $\mf L$ satisfy the conditions $(a)$ and $(b)$ in Definition \ref{Order}.
If $\mathop{\mathrm{Hom}}\nolimits(\mca O_{C_4}(d), \mca O_{C_4}(d'))$ is not zero, where $d$ and $d' \in \{ a_4, a_4+1 \}$, then there exists a morphism $\psi \colon \mca O_{C_4}(d) \to \mca O_{C_4}(d')$ which induces a non-zero morphism on $H^0(\mca O_x)$.
Similarly, if $\mathop{\mathrm{Hom}}\nolimits(\mca N_{i_1}, \mca N_{i_2})$ is not zero for $\mca N_{i_1}$ and $\mca N_{i_2}$ in $\mf N$, then there exists a morphism $\varphi \colon \mca N_{i_1} \to \mca N_{i_2}$ which induces a non-zero morphism on $H^0(\mca O_x)$ since the point $x$ is not in $(C_1 \cup C_2 )\cap C_3$.
Thus $\mf N$ and $\mf L$ are posets and this gives the proof of the first assertion.
To finish the proof of the second assertion $(2)$,
let us denote by $\mca N_{ij}$ the $i$-th row and the $j$-th column component of Table \ref{Summand} and put $\mf T = \{ \mca N_{ij} \}_{1 \leq i \leq 6,\, 1 \leq j \leq 3}$.
Clearly $\mf N$ is a subposet of $\mf T$.
We see that each column subposet $\{ \mca N_{ij} \}_{i=1}^6$ is totally ordered $\{ \mca N_{1j}\leq \cdots \leq \mca N_{6j} \}$ $(j \in \{ 1,2,3 \})$ and each row subposet is also totally ordered $\{ \mca N_{i1} \leq \mca N_{i2} \leq \mca N_{i3} \}$ $(i \in \{ 1,\dots,6 \})$.
However the poset $\mf T$ is not totally ordered.
For instance the pair $(\mca N_{31}, \mca N_{22})$ satisfies $\mca N_{31} \not\leq \mca N_{22}$ and $\mca N_{22} \not\leq \mca N_{31}$ since $\mathop{\mathrm{Hom}}\nolimits_{Z_r}(\mca N_{31}, \mca N_{22}) = \mathop{\mathrm{Hom}}\nolimits_{Z_r}(\mca N_{22}, \mca N_{31})=0$.
Thus, there are at most three minimal elements in any subposet of $\mf T$.
In particular $\mf N$ has also at most three minimal elements.
\end{proof}
\renewcommand{\proofname}{\textit{Proof}}
\begin{remark}
Let $Z$ be the fundamental cycle of the singularity $A_n$.
Similarly as in the proof of Lemma \ref{minimal}, a pure sheaf $\mca E$ on $Z$ is obtained from an extension of pure sheaves $\mca F$ on $C_1 + \cdots + C_{n-1}$ and $\mca G$ on $C_n$.
The sets of direct summands of $\mca F$ and $\mca G$ are not only posets but also totally ordered sets (see also \cite[Section 6.1]{IU}).
This is the essential difference between the singularity $A_n$ and the other Kleinian singularities.
\end{remark}
In the rest of this note we show that the inequality in Proposition \ref{mainD} is best possible by constructing an $\mca O_X$-rigid sheaf.
The following lemma is necessary for the construction.
\begin{lemma}\label{rigidCY}
Let $\mca D$ be a $k$-linear triangulated category.
Suppose that $\mca D$ is CY2.
Let $\mca F$ and $\mca G$ be in the heart $\mca A$ of a $t$-structure on $\mca D$.
Consider an extension class $[\mca E] \in \mathop{\mathrm{Hom}}\nolimits^{1}_{\mca D}(\mca G, \mca F)$
\begin{equation}
\begin{CD}
0 @>>> \mca F @>>> \mca E @>>> \mca G @>>> 0
\end{CD} \label{nemui}
\end{equation}
such that
\[
\mathop{\mathrm{Hom}}\nolimits_{\mca D}^0(\mca F, \mca G)=\mathop{\mathrm{Hom}}\nolimits^{1}_{\mca D}(\mca F, \mca F) = \mathop{\mathrm{Hom}}\nolimits_{\mca D}^{1}(\mca G, \mca G)=0.
\]
Then the following are equivalent.
\begin{enumerate}
\item[$(a)$] $\mca E$ is rigid.
\item[$(b)$] the vector space $\mathop{\mathrm{Hom}}\nolimits^{1}_{\mca D}(\mca G, \mca F)$ is generated by $\epsilon ^{\mr{L}}\big(\mr{End}_{\mca D}(\mca G)\big)$ and $\epsilon^{\mr{R}}\big(\mr{End}_{\mca D}(\mca F)\big)$ where $\epsilon = [\mca E] \in \mathop{\mathrm{Hom}}\nolimits^{1}_{\mca D}(\mca G, \mca F)$ and $\epsilon^{\mr L}$ (resp. $\epsilon^{\mr R}$) is the left (resp. right) composition:
\begin{align*}
\epsilon^{\mr{R}} \colon \mr{End}_{\mca D}(\mca F) \to \mathop{\mathrm{Hom}}\nolimits^{1}_{\mca D}(\mca G, \mca F) &,\ \epsilon^{\mr{R}} (f)= f \circ \epsilon \mbox{ and } \\
\epsilon^{\mr{L}} \colon \mr{End}_{\mca D}(\mca G) \to \mathop{\mathrm{Hom}}\nolimits^{1}_{\mca D}(\mca G, \mca F) &,\ \epsilon^{\mr{L}} (g)= \epsilon \circ g
\end{align*}
\end{enumerate}
\end{lemma}
\begin{proof}
Since $\mca F$ and $\mca G$ are rigid by the assumption, we have
\[
\mathop{\mathrm{hom}}\nolimits_{\mca D}^{0}(\mca F, \mca F) = \frac{1}{2}\chi(\mca F, \mca F) \mbox{ and }\mathop{\mathrm{hom}}\nolimits_{\mca D}^{0}(\mca G, \mca G) = \frac{1}{2}\chi(\mca G, \mca G).
\]
In particular, the following is obvious since $\mca E$ is in the heart $\mca A$:
\begin{equation}
\mca E\mbox{ is rigid} \iff \mathop{\mathrm{hom}}\nolimits_{\mca D}^{0}(\mca E, \mca E) = \frac{1}{2} \chi(\mca E, \mca E). \label{katakori}
\end{equation}
By taking $\mathop{\mathrm{Hom}}\nolimits_{\mca D}(-, \mca G)$ to the sequence (\ref{nemui}), we have $\mathop{\mathrm{Hom}}\nolimits_{\mca D}(\mca G, \mca G) \stackrel{\sim}{\to} \mathop{\mathrm{Hom}}\nolimits_{\mca D}(\mca E, \mca G)$ and hence
\begin{equation}
\mathop{\mathrm{hom}}\nolimits_{\mca D}^0(\mca E, \mca G) = \frac{1}{2}\chi(\mca G, \mca G). \label{ccd}
\end{equation}
Similarly we have the following exact sequence:
\[
\xymatrix{
0\ar[r] & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^{0}(\mca G, \mca F)\ar[r] & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^{0}(\mca E, \mca F)\ar[r] & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^{0}(\mca F, \mca F)\ar[lld]\\
& \mathop{\mathrm{Hom}}\nolimits_{\mca D}^{1}(\mca G, \mca F)\ar[r] & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^{1}(\mca E, \mca F)\ar[r] & 0. \\
}
\]
We remark that $\mathop{\mathrm{hom}}\nolimits_{\mca D}^{2}(\mca G, \mca F)=\mathop{\mathrm{hom}}\nolimits_{\mca D}^0(\mca F, \mca G)=0$ since $\mca D$ is CY2.
Hence the exact sequence gives the following equation
\begin{equation}
d_0-d_1= \frac{1}{2}\chi (\mca F, \mca F) + \chi(\mca G, \mca F), \label{noro}
\end{equation}
where $d_i=\mathop{\mathrm{hom}}\nolimits_{\mca D}^{i}(\mca E, \mca F)$ for $i \in \{ 0,1\}$.
By taking $\bb R\mathop{\mathrm{Hom}}\nolimits_{\mca D}^{i}(\mca E, -)$, we have the following exact sequence:
\[
\xymatrix{
0\ar[r] & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^{0}(\mca E, \mca F)\ar[r] &\mathop{\mathrm{Hom}}\nolimits_{\mca D}^{0}(\mca E, \mca E)\ar[r] & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^{0}(\mca E, \mca G) \ar[r]^{\delta} & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^{1}(\mca E, \mca F). \\
}
\]
By computation of dimensions and (\ref{ccd}), we see that the surjectivity of $\delta $ is equivalent to the following
\begin{equation}
\mathop{\mathrm{hom}}\nolimits_{\mca D}^0(\mca E, \mca E)= d_0-d_1 + \frac{1}{2}\chi(\mca G, \mca G). \label{casio}
\end{equation}
By (\ref{noro}) the equation (\ref{casio}) is equivalent to the following:
\begin{eqnarray*}
\mathop{\mathrm{hom}}\nolimits_{\mca D}^0(\mca E, \mca E) &=& \frac{1}{2}\chi(\mca F, \mca F) + \chi(\mca G, \mca F)+\frac{1}{2}\chi(\mca G, \mca G) \\
&=& \frac{1}{2} \chi(\mca E, \mca E).
\end{eqnarray*}
Hence $\mca E$ is rigid if and only if the morphism $\delta $ is surjective by (\ref{katakori}).
Furthermore, the surjectivity of $\delta$ can be understood by the following diagram of exact sequences:
\[
\xymatrix{
0 \ar[r] & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^0(\mca G, \mca G) \ar[r]^{\cong}\ar[d]^{\epsilon^{\mr L}} & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^0(\mca E , \mca G) \ar[r]\ar[d]^{\delta} & 0 \\
\mathop{\mathrm{Hom}}\nolimits_{\mca D}^0(\mca F, \mca F)\ar[r]^{\epsilon^{\mr R}} & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^1(\mca G, \mca F) \ar[r]^{\pi} & \mathop{\mathrm{Hom}}\nolimits_{\mca D}^1(\mca E, \mca F) \ar[r]& 0
}
\]
Since $\pi$ is surjective,
$\delta$ is surjective if and only if $\epsilon^{ \mr{L}}$ is surjective up to the image of $\epsilon^{\mr{R}}$.
\end{proof}
\begin{proposition}\label{bestD}
Let $Z_{4,r}$ be the reduced scheme of the fundamental cycle of $D_4$ and let $Z_r$ be the reduced scheme of the fundamental cycle of the singularity $D_n$.
\begin{enumerate}
\item[$(1)$] There exists a rank $3$ indecomposable pure sheaf on $Z_{4,r}$ which is $\mca O_X$-rigid.
\item[$(2)$] The inequality in Proposition \ref{mainD} is best possible. Namely there exists an $\mca O_X$-rigid pure sheaf $\mca E$ on $Z_r$ with $\mathop{\mathrm{rank}}\nolimits_{Z_r}\mca E=3$.
\end{enumerate}
\end{proposition}
\begin{proof}
We first prove the assertion (1).
Let $\iota \colon Z_{4,r} \to X$ be the embedding to the minimal resolution of the singularity.
Take three pure sheaves $\mca N_{41}=\mca O_{C_1+C_3}(a_1+1, a_3+1), \mca N_{32}=\mca O_{C_2+C_3}(a_2, a_3+1)$ and $\mca N_{23} =\mca O_{C_1+C_2+C_3}(a_1+2, a_2+1, a_3 +1)$ from Table \ref{Summand} and put $\mca N =\mca N_{41}\+ \mca N_{32}\+\mca N_{23}$.
Consider the universal extension $[\mca U] \in \mathop{\mathrm{Ext}}\nolimits^1_{Z_{4,r}}(\mca O_{C_4}, \mca N)$:
\[
\begin{CD}
0 @>>> \mca N @>>> \mca U @>>> \mca O_{C_4} @>>> 0
\end{CD}.
\]
We wish to prove that $\mca U$ is indecomposable and $\mca O_X$-rigid.
The proof of indecomposability is essentially the same as in the proof of Theorem \ref{mainthm1}.
It is easy to see that
\begin{align}
\mathop{\mathrm{Hom}}\nolimits_{Z_{4,r}}(\mca N_{41}, \mca N_{32}) &= \mathop{\mathrm{Hom}}\nolimits _{Z_{4,r}}(\mca N_{32}, \mca N_{41})=0, \label{ortho}\\
\mathop{\mathrm{Hom}}\nolimits_{Z_{4,r}}(\mca N_{41}, \mca N_{23}) &\cong H^0(\mca O_{C_1+C_3} (1,-1)) \cong k \mbox{ and } \label{right41} \\
\mathop{\mathrm{Hom}}\nolimits_{Z_{4,r}}(\mca N_{32}, \mca N_{23}) &\cong H^0(\mca O_{C_2+C_3} (1,-1)) \cong k. \label{right32}
\end{align}
Take non-zero morphisms $\varphi $ and $\varphi'$ respectively in $\mathop{\mathrm{Hom}}\nolimits_{Z_{4,r}}(\mca N_{41}, \mca N_{23})$ and $\mathop{\mathrm{Hom}}\nolimits_{Z_{4,r}}(\mca N_{32}, \mca N_{23})$.
Both sections $\varphi$ and $\varphi'$ are zero on $C_3$ by (\ref{right41}) and (\ref{right32}).
Moreover, since $\mca N_{41}$ is left and right orthogonal to $\mca N_{32}$ by (\ref{ortho}), the same argument in the proof of Theorem \ref{mainthm1} shows that $\mca U$ is indecomposable.
The rigidity of $\iota _*\mca U$ is a consequence of Lemma \ref{rigidCY}.
We first show that $\mca N$ and $\mca O_{C_4}$ satisfy the assumption in Lemma \ref{rigidCY}.
It is enough to show that $\iota _*\mca N$ is rigid.
Let us denote by $\mca D$ the derived category $D_Z(X)$.
The rigidity of $\iota_* \mca N$ essentially follows from the Riemann-Roch theorem.
In fact, by the Riemann-Roch theorem, we easily see
\begin{align}
\chi( \iota _* \mca N_{41}, \iota _* \mca N_{32}) & =0 \label{zero} \\
\chi( \iota _* \mca N_{41}, \iota _* \mca N_{23}) & =\chi( \iota _* \mca N_{32}, \iota _* \mca N_{23})=1. \label{one}
\end{align}
By (\ref{ortho}) and (\ref{zero}) we have $\mathop{\mathrm{Hom}}\nolimits_{\mca D}^1(\iota_* \mca N_{41}, \iota _* \mca N_{32})=0$.
It is easy to see
\begin{align}
\mathop{\mathrm{Hom}}\nolimits_{Z_{4,r}}(\mca N_{23}, \mca N_{41}) &\cong H^0(\mca O_{C_1+C_3} (-1,0))=0 \mbox{ and } \label{lorth41} \\
\mathop{\mathrm{Hom}}\nolimits_{Z_{4,r}}(\mca N_{23}, \mca N_{32}) &\cong H^0(\mca O_{C_2+C_3} (-1,0))=0. \label{lorth32}
\end{align}
Hence we have $\mathop{\mathrm{Hom}}\nolimits_{\mca D}^1(\iota_* \mca N_{23}, \iota _* \mca N_{41})=\mathop{\mathrm{Hom}}\nolimits_{\mca D}^1(\iota_* \mca N_{23}, \iota _* \mca N_{32})=0$ by (\ref{right41}), (\ref{right32}), (\ref{one}), (\ref{lorth41}) and (\ref{lorth32}).
Thus we see that $\iota _*\mca N$ is rigid.
By Corollary \ref{isom} the push forward $\iota _* \mca U$ is also a universal extension.
Since the projections $p_{ij} \colon \iota _* \mca N \to \iota _* \mca N_{ij}$ to the direct summand of $\iota _* \mca N$ give a basis $\{ [ \iota _* \mca U]^{\mr{L}} (p_{ij}) \}$ of $\mathop{\mathrm{Hom}}\nolimits^1_X(\iota _* \mca O_{C_4}, \iota _* \mca N)$, the push forward $\iota _*\mca U$ is rigid by Lemma \ref{rigidCY}.
For the second assertion $(2)$, let us denote by $j \colon Z_{4,r} \to Z_{r}$ the closed embedding.
Then the push forward $j_* \mca U$ is a pure sheaf on $Z_r$ and is $\mca O_X$-rigid by Lemma \ref{extcomp}.
Thus the second assertion holds.
\end{proof}
\section{$\mca O_X$-rigid pure sheaves on $E_{6,7,8}$}\label{4}
\begin{proposition}\label{propE}
Let $Z$ be the fundamental cycle of the singularity $E_{n}$ for $n \in \{6,7,8 \}$ and let $Z_{r}$ be the reduced scheme of $Z$.
Then the maximal rank of $\mca O_X$-rigid indecomposable pure sheaves is $3$:
\[
\max \{ \mathop{\mathrm{rank}}\nolimits_{Z_r} \mca E \mid \mca E \mbox{ is an $\mca O_X$-rigid indecomposable pure sheaf } \} =3.
\]
\end{proposition}
\begin{proof}
The proof is essentially the same as in Proposition \ref{mainD}.
Let $\mca F$ be an $\mca O_X$-rigid pure sheaf on $Z_r$.
By the same argument as in the proof of Proposition \ref{mainD}, there is a filtration of $\mca F$
\[
0 =\mca F_{0} \subset \mca F_1 \subset \mca F_2 \subset \mca F_3 \subset \mca F_4 \subset \mca F_{5} = \mca F
\]
such that
\begin{itemize}
\item $\mca F_i/\mca F_{i-1}$ is a pure sheaf on $C_i$ for $i \in \{ 1,2,3,4 \}$ and
\item $\mca F_5/ \mca F_4$ is a pure sheaf on $C_5 + \cdots + C_n$
\end{itemize}
The quotient $\mca F_i / \mca F_{i-1}$ is also $\mca O_X$-rigid since $\mca F$ is $\mca O_X$-rigid.
Hence for each $i \in \{ 1,2,3,4 \}$ we have
\[
\mca F_i/ \mca F_{i-1} \cong \mca O_{C_i}(a_i)^{\+m_i} \+ \mca O_{C_i}(a_i+1)^{\+n_i}.
\]
Moreover we can assume that the support of each $\mca F_i$ contains $C_i$.
Hence each direct summand of $\mca F_3$ is one of the sheaves in Table \ref{F_3} and each direct summand of $\mca F_4$ supported on $C_4$ is one of the sheaves in Table \ref{F_4}.
\begin{table}[hbtp]
\resizebox{1\hsize}{!}{
\begin{tabular}{|c|c|c|}
\hline
$\mca O_{C_1+C_2}(a_1-1, a_2)$ &$\mca O_{C_1+C_2+C_3}(a_1-1, a_2+1, a_3)$ &$\mca O_{C_1+C_2+C_3}(a_1-1, a_2+1, a_3+1)$ \\
\hline
$\mca O_{C_1+C_2}(a_1, a_2)$ &$\mca O_{C_1+C_2+C_3}(a_1, a_2+1, a_3)$ &$\mca O_{C_1+C_2+C_3}(a_1, a_2+1, a_3+1)$\\
\hline
$\mca O_{C_2}(a_2)$ &$\mca O_{C_2 +C_3}(a_2+1, a_3)$ &$\mca O_{C_2+C_3}(a_2+1, a_3+1) $\\
\hline
$\mca O_{C_1+C_2}(a_1-1, a_2+1)$ &$\mca O_{C_1+C_2+C_3} (a_1 -1, a_2 +2, a_3)$&$\mca O_{C_1+C_2+C_3}(a_1 -1, a_2 +2, a_3+1)$ \\
\hline
$\mca O_{C_1+C_2}(a_1, a_2+1)$ &$\mca O_{C_1+C_2+C_3}(a_1, a_2+2, a_3)$&$\mca O_{C_1+C_2+C_3}(a_1, a_2+2, a_3+1)$ \\
\hline
$\mca O_{C_2}(a_2+1)$ &$\mca O_{C_2+C_3}(a_2+2, a_3)$ &$\mca O_{C_2+C_3}(a_2+2, a_3+1)$ \\
\hline
\end{tabular}
}
\vspace{1mm}
\caption{}\label{F_3}
\resizebox{1\hsize}{!}{
\begin{tabular}{|c|c|c|}
\hline
$\mca O(a_1-1, a_2+1, a_3)$ & $\mca O(a_1-1, a_2+1, a_3+1, a_4)$ &$\mca O(a_1-1, a_2+1, a_3+1, a_4+1)$ \\
\hline
$\mca O(a_1, a_2+1, a_3)$ & $\mca O(a_1, a_2+1, a_3+1, a_4)$& $\mca O(a_1, a_2+1, a_3+1, a_4+1)$\\
\hline
$\mca O(a_2+1, a_3)$ & $\mca O(a_2+1, a_3+1,a_4) $& $\mca O(a_2+1, a_3+1,a_4+1)$\\
\hline
$\mca O (a_1 -1, a_2 +2, a_3)$ & $\mca O (a_1 -1, a_2 +2, a_3+1,a_4)$& $\mca O (a_1 -1, a_2 +2, a_3+1,a_4+1)$ \\
\hline
$\mca O(a_1, a_2+2, a_3)$ & $\mca O(a_1, a_2+2, a_3+1,a_4)$ & $\mca O(a_1, a_2+2, a_3+1,a_4+1)$ \\
\hline
$\mca O(a_2+2, a_3)$ & $\mca O(a_2+2, a_3+1,a_4)$& $\mca O(a_2+2, a_3+1,a_4+1)$ \\
\hline
$\mca O(a_3)$ & $\mca O(a_3+1,a_4)$ & $\mca O(a_3+1,a_4+1)$\\
\hline
$\mca O(a_1-1, a_2+1, a_3+1)$ & $\mca O(a_1-1, a_2+1, a_3+2,a_4)$& $\mca O(a_1-1, a_2+1, a_3+2,a_4+1)$\\
\hline
$\mca O(a_1, a_2+1, a_3+1)$ & $\mca O(a_1, a_2+1, a_3+2,a_4)$& $\mca O(a_1, a_2+1, a_3+2,a_4+1)$\\
\hline
$\mca O(a_2+1, a_3+1)$ & $\mca O(a_2+1, a_3+2,a_4)$ & $\mca O(a_2+1, a_3+2,a_4+1)$\\
\hline
$\mca O(a_1 -1, a_2 +2, a_3+1)$ & $\mca O(a_1 -1, a_2 +2, a_3+2, a_4)$& $\mca O(a_1 -1, a_2 +2, a_3+2, a_4+1)$\\
\hline
$\mca O(a_1, a_2+2, a_3+1)$ & $\mca O(a_1, a_2+2, a_3+2,a_4)$ & $\mca O(a_1, a_2+2, a_3+2,a_4+1)$\\
\hline
$\mca O(a_2+2, a_3+1)$ & $\mca O(a_2+2, a_3+2,a_4)$ & $\mca O(a_2+2, a_3+2,a_4+1)$\\
\hline
$\mca O(a_3+1)$ & $\mca O(a_3+2, a_4)$& $\mca O(a_3+2, a_4+1)$\\
\hline
\end{tabular}
}
\vspace{1mm}
\caption{For simplicity we denote by $\mca O(b_1, b_2, b_3, b_4)$ an invertible sheaf on $C_1+ C_2 +C_3 +C_4$ whose degrees are respectively $b_i$ on $C_i$. }\label{F_4}
\end{table}
Since $\mca F_5/\mca F_4$ is a pure sheaf on a tree which is isomorphic to the fundamental cycle of $A_{n-4}$, the set of direct summands of $\mca F_5/\mca F_4$ is totally ordered with respect to Definition \ref{Order}.
Let $\mf T$ be the set of sheaves in Table \ref{F_4}.
As in Proposition \ref{mainD}, the set $\mf T$ is a poset.
Furthermore each row set $\{ \mca N_{ij} \}_{j=1}^3$ and each column set $\{ \mca N_{ij} \}_{i=1}^{14}$ are totally ordered.
In particular $\mf T$ is covered by three chains, and since the minimal elements of a subposet form an antichain, any subposet of $\mf T$ has at most $3$ minimal elements.
Thus the maximal rank of an $\mca O_X$-rigid indecomposable pure sheaf on $Z_r$ is at most $3$.
Let $\mca U$ be the pure sheaf constructed in the proof of Proposition \ref{bestD} and let $\iota \colon Z' \to Z_r$ be the closed embedding, where $Z' = \sum_{j=2}^{5}C_j$.
Then the push forward $\iota _* \mca U$ gives an indecomposable pure sheaf on $Z_r$ after the change of indices $(1,2,3,4) \mapsto (2,4,3,5)$.
The sheaf $\iota _* \mca U$ is $\mca O_X$-rigid by Lemma \ref{extcomp}.
Thus the opposite inequality holds.
\end{proof}
\renewcommand{\proofname}{\textit{Proof of Theorem \ref{mainthm2}}}
\begin{proof}
If the singularity is $D_n$ then the inequality holds and is best possible by Propositions \ref{mainD} and \ref{bestD}.
The case of $E_{6},E_{7}$ or $E_{8}$ follows from Proposition \ref{propE}.
\end{proof}
\renewcommand{\proofname}{\textit{Proof}}
\section{Introduction}
The use of integrable techniques in the context of the gauge-string correspondence \cite{M} provided us with an unprecedented analytic insight
into the problem of higher orders of perturbation theory in the planar maximally supersymmetric ${\cal N}=4$ gauge theory, as well as with important results for finite values of the
coupling and even for non-perturbative ones, see {\it e.g.} \cite{Arutyunov:2009ga, Beisert:2010jr} for the reviews. Most of our progress and understanding came through investigation of the light-cone string sigma model
on ${\rm AdS}_5\times {\rm S}^5$ by means of the Factorised Scattering Theory, Thermodynamic Bethe Ansatz (TBA) \cite{Arutyunov:2009ur}-\cite{Gromov:2009bc} and the quintessence of the latter realised in the form of the quantum spectral curve
construction \cite{Gromov:2013pga}. In many cases this progress was possible due to ingenious guesswork, an intuition developed in studying a number of simpler examples, through comparisons
with different limiting cases where a solution was possible by other means and also by trial and error. To better understand the nature of the proposed constructions, their analytic properties and the role of symmetries, it is essential to study other solvable examples of stringy type sigma models and their gauge theory duals. One such interesting example is offered by integrable deformations of the ${\rm AdS}_5\times {\rm S}^5$ string sigma model \cite{Delduc:2013qra,Delduc:2014kha} based on the earlier constructions by Klimcik \cite{Klimcik:2002zj, Klimcik:2008eq}.
In modern language these deformations can be classified as $\eta$-deformations \cite{Delduc:2013qra}, $\lambda$-deformations \cite{Sfetsos:2013wia,Hollowood:2014qma} and deformations related to solutions of the classical Yang-Baxter equation
\cite{Kawaguchi:2014qwa}-\cite{vanTongeren:2015soa}. In some cases these deformations are not totally unrelated but can be connected through the contraction limits \cite{Hoare:2016hwh,Hoare:2016ibq}.
\smallskip
In the present paper we restrict our attention to certain aspects of the $\eta$-deformed sigma model on ${\rm AdS}_5\times {\rm S}^5$. We recall that our current knowledge about this model includes the perturbative S-matrix \cite{Arutyunov:2013ega,Arutyunov:2015qva}, which agrees with the semi-classical
limit of the exact S-matrix based on the quantum group symmetries \cite{BK}. Assuming that the exact S-matrix \cite{BK} drives the scattering in the quantum version of the $\eta$-deformed model, in a series of works \cite{Arutynov:2014ota}-\cite{vanTongeren:2013gva} the mirror TBA construction\footnote{For the construction of the $q$-deformed dressing phase see \cite{Hoare:2011wr}.} for the corresponding sigma model spectrum has been developed. Importantly, as was recently shown \cite{Arutyunov:2015mqj}, the target-space bosonic fields (NSNS and RR in the string theory language) of the two-dimensional $\eta$-deformed model
do not satisfy the standard type IIB supergravity equations but rather obey their specific generalisation, an observation also confirmed by considerations of classical $\kappa$-symmetry \cite{Wulff:2016tju}. A similar phenomenon has been also observed for other deformations
\cite{Hoare:2016hwh,Orlando:2016qqu,Kyono:2016jqy}.
\smallskip
As is well known, under various consistent reductions an integrable two-dimensional sigma model can produce one-dimensional integrable models which might have an important physical meaning on their own.
For instance, for the ${\rm AdS}_5\times {\rm S}^5$ sigma model the spinning string ansatz produces rigid string solutions \cite{Frolov:2002av,Frolov:2003qc}, which are nicely described in terms of the Neumann or Neumann-Rosochatius models \cite{Arutyunov:2003uj,Arutyunov:2003za}.
The integrable models of Neumann or Neumann-Rosochatius type are historically among the first examples of integrable systems and they show up in various problems of mathematical physics including the problem of geodesics on ellipsoid
or equivariant harmonic maps into spheres \cite{Neumann}-\cite{JMoser}. In the context of strings on ${\rm AdS}_5\times {\rm S}^5$ they were useful to explicitly construct the corresponding spinning string solutions and compute their energy as a function of spins.
The associated conserved charges governing the corresponding string profiles were compared to that of the spin chain solutions describing certain operators in ${\cal N}=4$ theory \cite{Arutyunov:2003rg}.
\smallskip
In the previous work by two of us \cite{Arutyunov:2014cda}, the deformed Neumann model which follows from the reduction of the $\eta$-deformed sigma model on ${\rm AdS}_5\times {\rm S}^5$ was studied.
There, without loss of generality we restricted ourselves to the case of the deformed sphere and found the corresponding Lax connection and the deformed analogues of Uhlenbeck integrals. The aim of the present paper
is to extend our previous analysis to the deformation of the Neumann-Rosochatius model, which also naturally comes from the $\eta$-deformed sigma model. The Neumann-Rosochatius model
is richer in the sense that its reduction to the ``Rosochatius part'' describes the geodesic problem on a (deformed) sphere. Again, under a certain limiting procedure we will extract the integrals of motion
of the deformed Neumann-Rosochatius system from the Lax matrix of the $\eta$-deformed sigma model. We then explicitly demonstrate that these integrals are in involution with respect
to the Dirac bracket. The three conserved angular momenta corresponding to the isometry directions together with two Neumann-Rosochatius integrals are enough to
declare that the deformed model is integrable in the Liouville sense. We hope that our present findings on the integrability of the deformed Neumann-Rosochatius system will be further used in explicit constructions
of corresponding solutions, see {\it e.g.} \cite{Kameyama:2014vma}-\cite{Banerjee:2016xbb}, which are necessary to be able to compare with the results based on the exact TBA approach.
\smallskip
The paper is organised as follows. In the next section we recall the basic facts about the usual Neumann-Rosochatius system. In Section 3 we briefly describe the $\eta$-deformed sigma model and the spinning ansatz for the corresponding solutions.
Section 4 is devoted to the Lax representation for the $\eta$-deformed Neumann-Rosochatius model and integrals of motion. In Section 5 we continue the study of the integrals of motion, establish connection to
the previously found integrability of the $\eta$-deformed Neumann and Rosochatius model, and discuss consistent truncations to lower-dimensional models. In the Conclusions we discuss the results obtained and formulate some open problems. Some technical details are collected in 3 appendices.
\section{The Neumann-Rosochatius model}\label{UndefSystems}
We will briefly review the undeformed integrable system, making special emphasis on its integrals of motion and the properties thereof. We start with presenting the main features of the Neumann-Rosochatius model, which describes, in particular, generalised spinning strings in ${\rm AdS}_5\times {\rm S}^5$. Later, we will describe the relation between the Neumann-Rosochatius system and the Neumann and Rosochatius integrable models.
In the present section, all expressions will be given in the $x_{i}$ coordinates commonly used in the literature. The equivalent expressions in terms of the unconstrained coordinates $(r,\xi)$ are given in appendix \ref{undefinrxi}.
\subsection{Brief overview}\label{briefoverviewundef}
The Lagrangian for this system is given by
\begin{equation}\label{undefLagNr}
L_{NR} = \frac{1}{2}\sum\limits_{i = 1}^3 {\left( {{x'_i}^2 + x_i^2{\alpha'_{i}}^2 - \omega _i^2x_i^2} \right) + \frac{\Lambda }{2}\left( {\sum\limits_{i = 1}^3 {x_i^2} - 1} \right)} ,
\end{equation}
where $\Lambda$ is a Lagrangian multiplier and $'$ denotes a derivative with respect to time. The corresponding Hamiltonian is given by
\begin{equation}\label{HNR}
{H_{NR}} = \frac{1}{2}\sum\limits_{i = 1}^3 {\left( {\pi _i^2 + \omega _i^2x_i^2 + \frac{{\pi _{{\alpha _i}}^2}}{{x_i^2}}} \right)}=\frac{1}{4}\sum\limits_{i \ne j}^3 {J_{ij}^2} + \frac{1}{2}\sum\limits_{i = 1}^3 {\left( {\omega _i^2x_i^2 + \frac{{\pi _{{\alpha _i}}^2}}{{x_i^2}}} \right)} \ ,
\end{equation}
where ${J_{ij}} = {x_i}{\pi _j} - {x_j}{\pi _i}$, with $\pi_{i}$ and $\pi _{{\alpha _i}}$ denoting the momenta canonically conjugate to $x_{i}$ and $\alpha_{i}$, respectively, and the Hamiltonian is subjected to the constraints
\begin{align}\label{undefconst}
\sum\limits_{i = 1}^3 {x_i^2 = 1}\ ,&& \sum\limits_{i = 1}^3 {{x_i}{\pi _i} = 0}\ .
\end{align}
Due to the second constraint, in these coordinates it is necessary to use the Dirac bracket formalism, which yields
\begin{align*}
{\left\{ {{\pi _i},{\pi _j}} \right\}_{D.B.}} = {x_i}{\pi _j} - {x_j}{\pi _i}\ ,&&{\left\{ {{\pi _i},{x_j}} \right\}_{D.B.}} = {\delta _{ij}} - {x_i}{x_j}\ ,&& {\left\{ {{x_i},{x_j}} \right\}_{D.B.}} = 0\ .
\end{align*}
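These Dirac brackets can be reproduced mechanically from the two second-class constraints \eqref{undefconst}. The following SymPy lines are an illustrative cross-check of ours (not part of the original derivation); they use the convention $\left\{ \pi_i, x_j \right\}=\delta_{ij}$ for the canonical bracket, in agreement with the formulas above:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x1:4', real=True)
p = sp.symbols('pi1:4', real=True)

# canonical bracket with the convention {pi_i, x_j} = delta_ij
def pb(f, g):
    return sum(sp.diff(f, p[k])*sp.diff(g, x[k])
               - sp.diff(f, x[k])*sp.diff(g, p[k]) for k in range(3))

phi = [sum(xi**2 for xi in x) - 1, sum(x[k]*p[k] for k in range(3))]
C = sp.Matrix(2, 2, lambda a, b: pb(phi[a], phi[b]))
Cinv = C.inv()

def db(f, g):  # Dirac bracket, simplified on the constraint surface
    expr = pb(f, g) - sum(pb(f, phi[a])*Cinv[a, b]*pb(phi[b], g)
                          for a in range(2) for b in range(2))
    return sp.simplify(expr.subs(x[2]**2, 1 - x[0]**2 - x[1]**2))

print(db(p[0], x[0]))   # 1 - x1**2
print(db(p[0], x[1]))   # -x1*x2
print(db(p[0], p[1]))   # x1*pi2 - x2*pi1
print(db(x[0], x[1]))   # 0
\end{verbatim}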
From the Hamiltonian of this system, we see that it describes a particle moving on a sphere under both a harmonic oscillator potential with frequencies $\omega_{i}$ (which in this context is also referred to as the Neumann potential) and a Coulomb potential (also referred to as the Rosochatius potential).
We have three coordinates $x_{i}$, three phases $\alpha_{i}$, their corresponding conjugate momenta and two constraints. Thus, Liouville integrability requires 5 integrals of motion in involution in order to solve this system. Since the phases $\alpha_{i}$ are cyclic coordinates, their canonically conjugate momenta $\pi_{\alpha_i}$ are integrals of motion. As the three $\pi_{\alpha_i}$ are pairwise in involution, we can fix them to be constants, while $x_i$ and their conjugate momenta will describe the dynamics of the system.
The other two integrals of motion needed for complete integrability of the system can be chosen from the generalisation of the Uhlenbeck integrals
\begin{equation}\label{IntNR}
{I_i} = x_i^2 + \sum\limits_{j \ne i}^3 {\frac{1}{{\omega _i^2 - \omega _j^2}}} \left[ {J_{ij}^2 + \frac{{\pi _{{\alpha _i}}^2x_j^2}}{{x_i^2}} + \frac{{\pi _{{\alpha _j}}^2x_i^2}}{{x_j^2}}} \right]\quad i\in\left\{ {1,2,3} \right\}\, .
\end{equation}
The integrals of motion satisfy the following properties
\begin{equation}\label{propertiesundefNR1}
\left\{ {{H_{NR}},{I_i}} \right\}_{D.B.} = 0\ ,\quad \quad\quad\left\{ {{I_i},{I_j}} \right\}_{D.B.} = 0\ ,\quad \quad \quad \left\{ {{I_i},{\pi _{{\alpha _j}}}} \right\}_{D.B.} = 0\ ,
\end{equation}
\begin{equation}\label{propertiesundefNR2}
\sum\limits_{i = 1}^3 {{I_i} = 1} .
\end{equation}
From the last expression it is clear that only two of the three integrals $I_{i}$ are independent.
Furthermore, the Hamiltonian can also be expressed as a linear combination of the integrals of motion introduced above
\begin{equation}\label{propertiesundefNR3}
{H_{NR}} = \frac{1}{2}\sum\limits_{i = 1}^3 {\left( {\omega _i^2{I_i} + \pi _{{\alpha _i}}^2} \right)} .
\end{equation}
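Both identities \eqref{propertiesundefNR2} and \eqref{propertiesundefNR3} rely only on the antisymmetry of $(\omega_i^2-\omega_j^2)^{-1}$ and on the constraints \eqref{undefconst}, and they are easy to confirm numerically. The short sketch below is an illustrative check of ours at a random point of the constrained phase space:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3); x /= np.linalg.norm(x)   # point on the unit sphere
p = rng.normal(size=3); p -= np.dot(p, x)*x      # momentum with x . p = 0
pa = rng.normal(size=3)                          # pi_alpha_i
w = np.array([1.0, 2.0, 3.0])                    # omega_i

J = np.outer(x, p) - np.outer(p, x)              # J_ij = x_i pi_j - x_j pi_i

def I(i):
    val = x[i]**2
    for j in range(3):
        if j != i:
            F_ij = (J[i, j]**2 + pa[i]**2*x[j]**2/x[i]**2
                    + pa[j]**2*x[i]**2/x[j]**2)
            val += F_ij/(w[i]**2 - w[j]**2)
    return val

H = 0.5*np.sum(p**2 + w**2*x**2 + pa**2/x**2)
print(np.isclose(sum(I(i) for i in range(3)), 1.0))        # sum of the I_i equals 1
print(np.isclose(H, 0.5*sum(w[i]**2*I(i) + pa[i]**2
                            for i in range(3))))           # H_NR as a combination of the I_i
\end{verbatim}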
\subsection{Connection with the Neumann and Rosochatius integrable models}\label{connectionundef}
As previously mentioned, the Neumann-Rosochatius system is of particular interest since it has both the potential terms of the Neumann and Rosochatius integrable models. We will now briefly explain how the Neumann and Rosochatius integrable models are recovered as limits of the more general Neumann-Rosochatius system.
First, we will show how the Neumann-Rosochatius system reduces to the Neumann model in the limit of $\pi_{\alpha_i}\rightarrow0$. From equation \eqref{HNR} we see that $H_{N}$, the Hamiltonian of the Neumann model, is given by
\begin{align}\label{counterpart1a}
{H_N} = \mathop {\lim }\limits_{{\pi _{{\alpha _i}}} \to 0} {H_{NR}} = \frac{1}{2}\sum\limits_{i = 1}^3 {\left( {\pi _i^2 + \omega _i^2x_i^2} \right)} \ ,
\end{align}
still subjected to the constraints of equation \eqref{undefconst}. Naturally, in this limit, the integrals of motion of the Neumann-Rosochatius system reduce to the Uhlenbeck integrals of the Neumann model
\begin{align}\label{counterpart1b}
{F_i} = \mathop {\lim }\limits_{{\pi _{{\alpha _j}}} \to 0} {I_i} = x_i^2 + \sum\limits_{j \ne i}^3 {\frac{{J_{ij}^2}}{{\omega _i^2 - \omega _j^2}}} \quad\quad i \in \left\{ {1,2,3} \right\}\ .
\end{align}
Reduction of the Neumann-Rosochatius model to the Rosochatius system happens in the limit $\omega_{i}\rightarrow0$, where the Hamiltonian of the Rosochatius system is given by
\begin{equation}\label{HNRtoHR}
{H_R} = \mathop {\lim }\limits_{{\omega _j} \to 0} {H_{NR}} = \frac{1}{2}\sum\limits_{i = 1}^3 {\left( {\pi _i^2 + \frac{{\pi _{{\alpha _i}}^2}}{{x_i^2}}} \right)} \ ,
\end{equation}
subjected to the standard constraints \eqref{undefconst}. The integrals of motion for this system are given by
\begin{align}\label{IntR}
{F_{ij}} = J_{ij}^2 + \frac{{\pi _{{\alpha _i}}^2x_j^2}}{{x_i^2}} + \frac{{\pi _{{\alpha _j}}^2x_i^2}}{{x_j^2}}\quad i\neq j\ .
\end{align}
The integrals of motion of the Neumann-Rosochatius and Rosochatius systems are intrinsically related, which can be seen from expressions \eqref{IntNR} and \eqref{IntR}
$${I_i} = x_i^2 + \sum\limits_{j \ne i}^3 {\frac{{{F_{ij}}}}{{\omega _i^2 - \omega _j^2}}}\ .$$
Namely, taking into account the fact that $0\le x_{i}^2\le1$, we see that as the differences $\omega_{i}^{2}-\omega_{j}^{2}$ approach zero, the leading contribution to the integrals $I_{i}$ comes from the Rosochatius integrals of motion $F_{ij}$,
\begin{equation}\label{NRtoNUndefLimit}
\mathop {\lim }\limits_{{\omega _{j}\to\omega_{i}}} \left( {\omega _i^2 - \omega _j^2} \right){I_i} = {F_{ij}}\ ,
\end{equation}
where $i\neq j$ and there is no summation over the $i$ index in the expression above.
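The limit \eqref{NRtoNUndefLimit} is elementary to verify with computer algebra. For instance, the following SymPy lines (an illustrative cross-check of ours, not taken from the literature) confirm it for $i=1$, $j=2$:
\begin{verbatim}
import sympy as sp

x1, x2, x3, pi1, pi2, pi3 = sp.symbols('x1 x2 x3 pi1 pi2 pi3')
pa1, pa2, pa3 = sp.symbols('pa1 pa2 pa3')
w1, w2, w3 = sp.symbols('w1 w2 w3', positive=True)

def J(xi, xj, pii, pij):            # J_ij = x_i pi_j - x_j pi_i
    return xi*pij - xj*pii

def F(xi, xj, pii, pij, pai, paj):  # Rosochatius integral F_ij
    return (J(xi, xj, pii, pij)**2
            + pai**2*xj**2/xi**2 + paj**2*xi**2/xj**2)

I1 = (x1**2 + F(x1, x2, pi1, pi2, pa1, pa2)/(w1**2 - w2**2)
            + F(x1, x3, pi1, pi3, pa1, pa3)/(w1**2 - w3**2))

lhs = sp.limit((w1**2 - w2**2)*I1, w2, w1)
print(sp.simplify(lhs - F(x1, x2, pi1, pi2, pa1, pa2)))   # prints 0
\end{verbatim}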
\section{Bosonic $\eta$-deformed sigma model and generalised spinning solutions}
\label{section3}
Our starting point to analyse the $\eta$-deformed Neumann-Rosochatius model is the Lagrangian of the bosonic $({\rm AdS}_5\times {\rm S}^5)_{\eta}$ sigma model \cite{Arutyunov:2013ega} restricted to the sphere
\begin{eqnarray}
\mathscr L &=&{1\ov2}\, \eta^{\alpha\beta}\Bigg(\frac{\partial_\alpha r\partial_\beta r
}{ \left(1-r^2\right) \left(1+\varkappa ^2 r^2\right)}
+\frac{r^2 \partial_\alpha \xi\partial_\beta \xi}{1+ \varkappa ^2 r^4 \sin ^2\xi}+\frac{r^2 \cos ^2\xi\ \partial_\alpha \phi_1\partial_\beta \phi_1}{1+ \varkappa ^2
r^4 \sin ^2\xi }\label{L} \nonumber \\
&&\qquad +r^2 \sin^2\xi\ \partial_\alpha \phi_2\partial_\beta \phi_2 + \frac{\left(1-r^2\right)\partial_\alpha \phi_{3}\partial_\beta \phi_{3}}{1+\varkappa ^2 r^2}\, \Bigg)\label{FulletaLag} \\
&&+{\varkappa\ov2}\epsilon^{\alpha\beta}\left(\frac{ r^4 \sin 2 \xi }{1+ \varkappa ^2 r^4 \sin^2\xi}\partial_\alpha\phi_1\partial_\beta\xi +\frac{2r \partial_{\alpha}r\partial_{\beta}\phi_{3}}{1+\varkappa^{2}r^{2}} \right)\ ,\nonumber
\end{eqnarray}
where $r\in\left[0,1\right]$, $\xi\in\left[0,\frac{\pi}{2}\right]$, $\phi_{i}\in\left[0,2\pi\right]$ and $\varkappa=\frac{2\eta}{1-\eta^2}\ge0$, where $\eta$ is the original deformation parameter of \cite{Delduc:2013qra}. For convenience, we choose the world-sheet metric $\eta^{\alpha\beta}$ to be Minkowski, while $\epsilon^{\alpha\beta}$ denotes the Levi-Civita symbol. Just as done in \cite{Arutyunov:2014cda}, we rescaled the Lagrangian by an overall constant and included a total derivative which appeared naturally in the calculation of the B-field contribution \cite{Arutyunov:2013ega}.
\begin{figure}[h]
\centering
\includegraphics [scale=0.60]{Slide1.jpg}
\caption{Classical solutions in $\rm{AdS}_{5}\times \rm{S}^{5}$ and $(\rm{AdS}_{5}\times \rm{S}^{5})_{\eta}$ with their corresponding integrable models and integrals of motion.}
\label{noche12}
\end{figure}
In general, we will consider closed solutions along the world-sheet spatial direction $\sigma$ and consequently, we assume $r$ and $\xi$ to be periodic in $\sigma$ with period $2\pi$. From equation \eqref{FulletaLag} we see that $\mathscr L$ has three isometries corresponding to translations along the angles $\phi_{i}$. Thus, we can consider classical solutions located at a point in the center of $\rm{AdS}$ and having the following form on the $\eta$-deformed $\rm{S}^5$:
\begin{equation}
\begin{aligned}
\label{solucion2}
r= r(\sigma)\ ,&& \xi =\xi(\sigma)\ , \\
\phi_{1}=\omega_{1}\tau+\alpha_{1}(\sigma)\ ,\quad\quad\quad&& \phi_{2}=\omega_{2}\tau+\alpha_{2}(\sigma)\ ,\quad\quad\quad && \phi_{3}=\omega_{3}\tau+\alpha_{3}(\sigma)\ ,
\end{aligned}
\end{equation}
where $\tau$ and $\sigma$ denote the world-sheet coordinates, $\omega_{i}$ are constant angular velocities and $\alpha_{i}(\sigma)$ are interpreted as real phases satisfying the periodicity condition $\alpha_{i}(\sigma+2\pi)=\alpha_{i}(\sigma)+2\pi m_{i}$ with $m_{i}\in\mathbb{Z}$.
This ansatz represents a generalisation of the spinning solutions studied in \cite{Arutyunov:2014cda} (for which $\alpha_{i}\rightarrow0$) and therefore, we will refer to solutions of the form \eqref{solucion2} as ``generalised spinning solutions''. For the case of undeformed ${\rm AdS}_5\times {\rm S}^5$, in \cite{Arutyunov:2003za} solutions of this type were shown to reduce to a 1-d integrable model: The well-known Neumann-Rosochatius system, with $\sigma$ playing the role of time parameter, while $\tau$ decouples from the equations of motion. In the case of $\eta$-deformed ${\rm AdS}_5\times {\rm S}^5$, this type of solution was first considered in \cite{Kameyama:2014vma}, where the Lagrangian was presented in a new set of coordinates.
For generalised spinning solutions, the reduction of the Lagrangian \eqref{FulletaLag} and its corresponding Hamiltonian are given by
\begin{align}\label{LagNR}
{\widetilde L_{NR}} =& \frac{1}{2}\left[ \frac{{{{r'}^2}}}{{\left( {1 - {r^2}} \right)\left( {1 + {\varkappa ^2}{r^2}} \right)}} + \frac{{{r^2}{{\xi '}^2} + \varkappa {\omega _1}{r^4}\xi '\sin 2\xi }}{{1 + {\varkappa ^2}{r^4}{{\sin }^2}\xi }} + \frac{{\left( {{\alpha'}_1^2 - \omega _1^2} \right){r^2}{{\cos }^2}\xi }}{{1 + {\varkappa ^2}{r^4}{{\sin }^2}\xi }}\right.\\
&\left. + \left( {{\alpha '}_2^2 - \omega _2^2} \right){r^2}{{\sin }^2}\xi + \frac{{\left( {{\alpha '}_3^2 - \omega _3^2} \right)\left( {1 - {r^2}} \right)}}{{1 + {\varkappa ^2}{r^2}}} - \frac{{2\varkappa {\omega _3}rr'}}{{1 + {\varkappa ^2}{r^2}}} \right]\nonumber\ ,\\
{\widetilde H_{NR}} = &\frac{1}{2}\left[ \left( {1 - {r^2}} \right)\left( {1 + {\varkappa ^2}{r^2}} \right)\pi _r^2 + \frac{{\pi _\xi ^2\left( {1 + {\varkappa ^2}{r^4}{{\sin }^2}\xi } \right)}}{{{r^2}}} - 2\varkappa {\omega _1}{\pi _\xi }{r^2}\sin \xi \cos \xi\right.\label{HamNrrr}\\
&\left. + 2\varkappa {\omega _3}{\pi _r}r\left( {1 - {r^2}} \right) + \omega _1^2{r^2}{{\cos }^2}\xi + \omega _2^2{r^2}{{\sin }^2}\xi + \omega _3^2\left( {1 - {r^2}} \right)\right.\nonumber\\
&\left. + \frac{{\pi _{{\alpha _1}}^2\left( {1 + {\varkappa ^2}{r^4}{{\sin }^2}\xi } \right)}}{{{r^2}{{\cos }^2}\xi }} + \frac{{\pi _{{\alpha _2}}^2}}{{{r^2}{{\sin }^2}\xi }} + \frac{{\pi _{{\alpha _3}}^2\left( {1 + {\varkappa ^2}{r^2}} \right)}}{{1 - {r^2}}} \right]\nonumber\ ,
\end{align}
where $\sigma$ plays the role of the time parameter and $'$ denotes $\partial_{\sigma}$. Again, by cyclicity of the $\alpha_i$, the angular momenta $\pi_{\alpha_{i}}$ conjugate to $\alpha_{i}$ are integrals of motion. Since these three momenta are pairwise in involution, to have Liouville integrability one needs two extra conserved quantities in involution. These will be constructed in Section \ref{construyendo} (in principle one of them can be chosen to be $\widetilde{H}_{NR}$).
To see that this system indeed corresponds to a one-parameter deformation of the Neumann-Rosochatius model, one can easily check that moving from unconstrained coordinates $(r,\xi)$ to constrained coordinates $x_{i}$ given by:
\begin{align}\label{cocord}
x_{1}=r \cos\xi\ ,&& x_{2}=r\sin\xi\ ,&& x_{3}=\sqrt{1-r^2}\ ,
\end{align}
the Lagrangian \eqref{LagNR} constitutes a deformation of \eqref{undefLagNr} with the constraint $\sum_{i}x_{i}^{2}=1$.
As previously mentioned, the generalised spinning solution \eqref{solucion2} reduces to the spinning solution considered in \cite{Arutyunov:2014cda} by sending $\alpha_{i}\rightarrow0$ and $\pi_{\alpha_{i}}\rightarrow0$. Taking this limit in equations \eqref{LagNR} and \eqref{HamNrrr}, one obtains the expressions for the $\eta$-deformed Neumann model studied in \cite{Arutyunov:2014cda}. Moreover, one can also consider the limit $\omega_{i}\rightarrow0$ in \eqref{solucion2}, which would correspond to solutions depending exclusively on the world-sheet coordinate $\sigma$. Then, by performing the double Wick rotation $\sigma\leftrightarrow\tau$ (which leaves the Lagrangian \eqref{FulletaLag} invariant up to an overall minus), we see that the corresponding classical solution describes geodesic motion on the $\eta$-deformed sphere. As was explained in \cite{Arutyunov:2014cda}, geodesics on this background are described by an integrable deformation of the Rosochatius system. Thus, by studying the $\eta$-deformed
Neumann-Rosochatius model and its relevant limits, one can also obtain integrals for geodesic motion on the deformed background.
For a diagram describing the relation between these three different classical solutions and their corresponding deformed and undeformed integrable models see Figure \ref{noche12}.
\section{Lax pair for the $\eta$-deformed Neumann-Rosochatius model}\label{construyendo}
We will proceed to construct a $4\times4$ Lax representation for the system presented in the previous section, and later on, we will use it to create the $\eta$-deformed analogues of the integrals of motion \eqref{IntNR}. The procedure is in spirit similar to the one used in \cite{Arutyunov:2014cda}, although expressions are considerably more complicated. Therefore, in our exposition we will briefly introduce the key points in this construction, omitting intermediate expressions in the derivations.
\subsection{Construction of a $4\times4$ Lax representation}
Our starting point is the zero-curvature representation for the bosonic sigma model of $({\rm AdS}_5\times {\rm S}^5)_{\eta}$, as proposed in \cite{Delduc:2013qra},
\begin{align}
\partial_{\alpha}\mathcal{L}_{\beta}-\partial_{\beta}\mathcal{L}_{\alpha}+\left[\mathcal{L}_{\alpha},\mathcal{L}_{\beta}\right]=0\ ,
\label{zerocurvatureetadeformations}
\end{align}
where the Lax connection $\mathcal{L}_{\alpha}$ consists of $8\times8$ matrices defined by
\begin{align*}
\mathcal{L}^{\alpha}&=\widetilde{J}_{+}^{\alpha (0)}+J_{-}^{\alpha (0)}+\lambda^{-1}\sqrt{1+\varkappa^{2}}\ \widetilde{J}_{+}^{\alpha (2)}+\lambda\sqrt{1+\varkappa^{2}}\ J_{-}^{\alpha (2)}\ .
\end{align*}
Here $\lambda$ is the spectral parameter, $J^{\alpha(i)}$ and $\widetilde{J}^{\alpha(i)}$ denote the respective components of $J^{\alpha}$ and $\widetilde{J}^{\alpha}$ along the $\mathbb{Z}_{4}$ graded subspaces $i=0,...,3$, while $J_ - ^\alpha$ and $\widetilde J_ + ^\alpha$ are projections $J_{-}^{\alpha}=P^{\alpha\beta}_{-}J_{\beta}$ and $\widetilde{J}_{+}^{\alpha}=P^{\alpha\beta}_{+}\widetilde{J}_{\beta}$, with $P_{\pm}^{\alpha\beta}=\frac{1}{2}(\eta^{\alpha\beta}\pm\epsilon^{\alpha\beta})$, of the deformed currents
\begin{align}
J_{\alpha } = -\frac{1}{1 - \varkappa R_{\mathfrak{g}} \circ P_{2}}\left(A_{\alpha} \right),\ \ \ \ \ \ \ \ \ \ \ \ \ \ \widetilde{J}_{\alpha } = -\frac{1}{1 + \varkappa R_{\mathfrak{g}} \circ P_{2}}\left(A_{\alpha} \right).
\label{definitionJs}
\end{align}
In the expressions above $P_{i}$ is the projector along the subspace with grade $i=0,..,3$ and we used the definition $A_{\alpha}=-\mathfrak{g}^{-1}\partial_{\alpha}\mathfrak{g}$, where $\mathfrak{g}=\mathfrak{g}(\tau,\sigma)$ denotes a bosonic coset representative of ${\rm SU}(2,2)\times {\rm SU}(4)/{\rm SO}(4,1) \times {\rm SO}(5)$. Additionally, for the operator $R_{\mathfrak{g}}$ on $M\in \mathfrak{psu}(2,2|4)$ we use the definition
$$R_{\mathfrak{g}}(M)=\mathfrak{g}^{-1}R(\mathfrak{g}M\mathfrak{g}^{-1})\mathfrak{g} ,$$
where $R$ is an operator on $\mathfrak{psu}(2,2|4)$ satisfying the modified Yang-Baxter equation \cite{Delduc:2013qra}. By choosing the same coset representative $\mathfrak{g}(\tau,\sigma)$ and $R$, as in \cite{Arutyunov:2013ega},
$$R(M)_{ij}=-i\epsilon_{ij} M_{ij}, \ \ \ \ \ \epsilon_{ij}= \left\{
\begin{array}{ll}
\ \ 1\ \ \text{if}\ i<j \\
\ \ 0\ \ \text{if} \ i=j\\
-1\ \ \text{if} \ i>j
\end{array}
\right. ,$$
and inverting the operators $1 \pm \varkappa R_{\mathfrak{g}}\circ P_{2}$ in \eqref{definitionJs}, one obtains an explicit expression for the $8\times8$ Lax connection $\mathcal{L}_{\alpha}$.
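For concreteness, the operator $R$ defined above acts entrywise on the matrix entries of $M$; a minimal illustration of this action (added here only for the reader's convenience, not part of the construction itself) is:
\begin{verbatim}
import numpy as np

def R(M):
    # Entrywise action R(M)_{ij} = -i * eps_{ij} * M_{ij},
    # with eps_{ij} = +1 for i < j, 0 for i = j, -1 for i > j.
    idx = np.arange(M.shape[0])
    eps = np.sign(idx[None, :] - idx[:, None])
    return -1j*eps*M

# R annihilates diagonal matrices, e.g. R(identity) = 0
assert np.allclose(R(np.eye(8)), 0)
\end{verbatim}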
Plugging in the generalised spinning solution \eqref{solucion2}, the coordinate $\tau$ decouples, as the coordinates $r$, $\xi$ and $\alpha_{i}$ depend exclusively on $\sigma$, and the zero-curvature condition \eqref{zerocurvatureetadeformations} reduces to
\begin{align}
\partial_{\sigma}\mathcal{L}_{\tau}=\left[\mathcal{L}_{\tau},\mathcal{L}_{\sigma}\right] .
\label{zerocurvaturespinning}
\end{align}
In the bosonic sigma model, the $8\times8$ Lax connection takes a block-diagonal form, with one $4\times4$ block corresponding to $\rm{AdS}_{5}$ and the other one corresponding to $\rm{S}^{5}$. As our classical solution lives on $\rm{S}^{5}$, we restrict ourselves to the latter.
By explicit calculation, it can be checked that equation \eqref{zerocurvaturespinning} is satisfied after imposing the equations of motion coming from \eqref{LagNR} and therefore, \eqref{zerocurvaturespinning} is a $4\times4$ Lax pair for our system. Naturally, this Lax pair reduces to the one constructed for spinning solutions in \cite{Arutyunov:2014cda} when taking the limit $\alpha_{i}\rightarrow0$ and $\pi_{\alpha_{i}}\rightarrow0$, and it is related to the Lax pair used to study geodesics in \cite{Arutyunov:2014cda} after taking the limit $\omega_{i}\rightarrow0$ and considering the time parameter of the system to be $\tau$ instead of $\sigma$.
The entries of this pair of $4\times4$ matrices constitute very large expressions which we present in Appendix \ref{laxapendex}. Since we are mainly interested in the deformed analogues of the integrals \eqref{IntNR}, we will now proceed to use $\mathcal{L}_{\tau}$ to generate a tower of integrals of motion.
\subsection{Generating deformed integrals of motion $\widetilde{I}_{i}$}\label{findingI}
Conserved quantities for this system can be generated by considering the trace of powers of $\mathcal{L}_{\tau}$, namely $\text{Tr}\left[ {\mathcal{L}_\tau ^n}(\lambda) \right]$, and then series expanding in the spectral parameter $\lambda$. Due to Newton's identities for the traces, it suffices to restrict our analysis to $n\leq4$.
By explicit calculation it can be checked that for $n\leq3$, all integrals of motion generated in this way can be written in terms of $\pi_{\alpha_{i}}$ and $\widetilde{H}_{NR}$. However, for the case of $n=4$, the integrals of motion generated by
$$K_{m}={\left. {\frac{1}{{m!}}\frac{{{d^m}\left( {{\lambda ^4}\mathcal{L}_\tau ^4} \right)}}{{d{\lambda ^m}}}} \right|_{\lambda = 0}}\quad\quad \text{with}\quad m \in \left\{ {2,4,6} \right\} ,$$
can be decomposed in terms of the previously known integrals and a new unknown integral of motion, which will be denoted by $\widetilde{K}$.
This new integral of motion $\widetilde{K}$ comes as a very large expression, having terms up to quadratic order in $\pi_{r}$ and quartic order in $\pi_{\xi}$. In principle, this integral, together with the momenta $\pi_{\alpha_{i}}$ and $\widetilde{H}_{NR}$, constitutes an involutive family, ensuring classical integrability in the Liouville sense. However, the undeformed limit of this quantity suggests that it can be split further into smaller conserved blocks, namely into integrals $\widetilde{I}_{i}$ deforming the ones in equation \eqref{IntNR}.
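In practice this bookkeeping is conveniently automated with computer algebra. The sketch below only illustrates the procedure; it assumes a hypothetical helper \texttt{lax\_tau(lam)} returning the $4\times4$ matrix $\mathcal{L}_{\tau}(\lambda)$ of Appendix \ref{laxapendex} as a SymPy matrix in the spectral parameter and the phase-space variables:
\begin{verbatim}
import sympy as sp

lam = sp.symbols('lambda')

def K(lax_tau, m):
    # K_m: the lambda^m Taylor coefficient of lambda^4 * Tr(L_tau(lambda)^4),
    # i.e. (1/m!) d^m/d lambda^m evaluated at lambda = 0.
    L = lax_tau(lam)                        # hypothetical 4x4 sympy.Matrix
    poly = sp.expand(lam**4*(L**4).trace())
    return sp.simplify(poly.coeff(lam, m))

# the charges used in the text correspond to m = 2, 4, 6:
# K2, K4, K6 = (K(lax_tau, m) for m in (2, 4, 6))
\end{verbatim}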
As in the undeformed case, it is expected that only two of the deformed integrals $\widetilde{I}_{i}$ are truly independent, see \eqref{propertiesundefNR2}, and that the Hamiltonian is given by a linear combination of the three $\widetilde{I}_{i}$, see \eqref{propertiesundefNR3}. Thus, we adopt the following ansatz
\begin{align*}
{{\widetilde H}_{NR}} &= {A_1}{{\widetilde I}_1} + {A_2}{{\widetilde I}_2} + {A_3}\ ,\\
\widetilde K &= {B_1}{{\widetilde I}_1} + {B_2}{{\widetilde I}_2} + {B_3}\ ,
\end{align*}
where $A_{i}$ and $B_{i}$ are constants independent of $\sigma$, which in principle can depend on $\pi_{\alpha_{i}}$, $\omega_{i}$ and $\varkappa$. By using the explicit expressions of $\widetilde{K}$ and $\widetilde{H}_{NR}$, one can solve for $\widetilde{I}_{1}$ and $\widetilde{I}_{2}$ in terms of the $A_{i}$ and $B_{i}$.
From the expressions \eqref{HNR} and \eqref{IntNR}, we see that the undeformed system satisfies
$$\frac{{{\partial ^2}{H_{NR}}}}{{\partial \omega _i^2}} = {\left. {{{I}_i}} \right|_{{\pi _r}=\pi_{\xi} = {\pi _{{\alpha _j}}} = 0}}\quad \forall i \in \left\{ {1,2,3} \right\}.$$
From equation \eqref{HamNrrr} one also has that ${\partial ^2}{{\widetilde H}_{NR}}/\partial \omega _i^2 = {\partial ^2}{H_{NR}}/\partial \omega _i^2$. Since we want the $\eta$-deformed integrals $\widetilde{I}_{i}$ to coincide with the undeformed ones in the limit of $\varkappa\rightarrow0$, we will impose that
\begin{align}\label{thetrixk}
\frac{{{\partial ^2}{{\widetilde H}_{NR}}}}{{\partial \omega _i^2}} = {\left. {{{\widetilde I}_i}} \right|_{{\pi _r}=\pi_{\xi} = {\pi _{{\alpha _j}}} = 0}}\quad \forall i \in \left\{ {1,2} \right\},
\end{align}
which is in essence equivalent to imposing $${\left. {{I_i}} \right|_{{\pi _r}=\pi_{\xi} = {\pi _{{\alpha _j}}} = 0}} = {\left. {{{\widetilde I}_i}} \right|_{{\pi _r}=\pi_{\xi} = {\pi _{{\alpha _j}}} = 0}}\ .$$
By replacing the $\widetilde{I}_{i}(A_{i},B_{i})$ on the right hand side of \eqref{thetrixk}, we find conditions for the constant coefficients $A_{i}$ and $B_{i}$. For instance, one can start with $i=1$, where one needs to match both sides of the equation \eqref{thetrixk} term by term in powers of $r(\sigma)$. By doing this, one finds the coefficients $B_{i}$ in terms of the $A_{i}$. Repeating this procedure for the case of $i=2$, equation \eqref{thetrixk} fixes the constants $A_{i}$ after matching both sides in powers of $r(\sigma)$. In analogy to \eqref{propertiesundefNR2}, we define the integral $\widetilde{I}_{3}$ through the relation
\begin{align}\label{ela0}
1 = {{\widetilde I}_1} + {{\widetilde I}_2} + {{\widetilde I}_3}\ .
\end{align}
By explicit calculation, one can check that the deformed integrals of motion obtained in this way satisfy
\begin{align}\label{ela1}
{{\widetilde H}_{NR}} = \frac{1}{2}\sum\limits_{i = 1}^3 {\left( {\omega _i^2{{\widetilde I}_i} + \pi _{{\alpha _i}}^2} \right)} \ ,
\end{align}
and
\begin{align}\label{ela2}
\left\{ {{{\widetilde H}_{NR}},{{\widetilde I}_i}} \right\}_{P.B.} = 0\ ,&&\left\{ {{{\widetilde I}_i},{{\widetilde I}_j}} \right\}_{P.B.} = 0\ ,&& \left\{ {{{\widetilde I}_i},{\pi _{{\alpha _j}}}} \right\}_{P.B.} = 0\ ,
\end{align}
where we used the standard Poisson brackets of the unconstrained $(r,\xi)$ coordinates.
By looking at the integrals $\widetilde{I}_{i}$ obtained by this procedure term by term, one sees that they contain a few constant terms proportional to powers of the momenta $\pi_{\alpha_{i}}$. These terms can be removed such that \eqref{ela0}, \eqref{ela1} and \eqref{ela2} are left unchanged. The final results for the $\eta$-deformed integrals of motion obtained by this construction are presented in equations \eqref{IntDefNR1}, \eqref{IntDefNR2} and \eqref{IntDefNR3}.
\subsection{Moving from $(r,\xi)$ to $x_{i}$ coordinates}
So far, all our calculations have been performed in the unconstrained coordinates $(r,\xi)$. However, it is convenient to transform the Hamiltonian and integrals of motion to the constrained coordinates $x_{i}$ described in equation \eqref{cocord}, since these are the ones commonly used in the literature. The map to the $x_{i}$ coordinates used here was first derived in \cite{Arutyunov:2014cda} and consists of several steps, which we will briefly explain.
First, we write the deformed integrals of motion $\widetilde{I}_{i}$ in terms of $r'\left( \sigma \right)$ and $\xi '\left( \sigma \right)$ instead of momenta $\pi_{r}$ and $\pi_{\xi}$; this is easily achieved by making use of ${\pi _r} = \partial {{\widetilde L}_{NR}}/\partial r'$, ${\pi _\xi} = \partial {{\widetilde L}_{NR}}/\partial \xi'$ and the Lagrangian \eqref{LagNR}. Then we proceed to perform a change of coordinates $\{r,\xi \}\rightarrow\{x_{1},x_{2}\}$ by means of
\begin{align*}
r = \sqrt {x_1^2 + x_2^2} \ ,&& \xi = \arctan \left( {\frac{{{x_2}}}{{{x_1}}}} \right)\ .
\end{align*}
Once the Lagrangian and deformed integrals of motion are written in terms of $x_{1}$, $x_{2}$ and their derivatives, we calculate the momenta ${p_1} = \partial {{\widetilde L}_{NR}}/\partial {x'_1}$ and ${p_2} = \partial {{\widetilde L}_{NR}}/\partial {x'_2}$ conjugated to $x_{1}$ and $x_{2}$, respectively. Using these results, we proceed to rewrite the deformed integrals in terms of phase-space variables such that they have the functional dependence $\widetilde{I}_{i}(x_{1},x_{2},p_{1},p_{2})$. Afterwards, a third coordinate is introduced through the relation
\begin{align}\label{constrait1a}
1 = x_1^2 + x_2^2 + x_3^2\ .
\end{align}
In order to introduce a momentum conjugate to $x_{3}$, we perform the transformation
\begin{align*}
{p_1} \to \frac{{{\pi _1}{x_3} - {\pi _3}{x_1}}}{{{x_3}}}\ ,&& {p_2} \to \frac{{{\pi _2}{x_3} - {\pi _3}{x_2}}}{{{x_3}}}\ ,
\end{align*}
where we have yet to express the new momenta $\pi_{i}$ with $i\in\{1,2,3\}$ in terms of $p_{1}$, $p_{2}$ and the $x_{i}$.
Applying this procedure to the integrals of motion found in subsection \ref{findingI}, we find the expressions for the $\widetilde{I}_{i}$ in terms of the phase-space coordinates $x_{i}$ and $\pi_{i}$. Writing the Hamiltonian in these coordinates is easily achieved by the use of \eqref{ela1}; we defer the presentation of the resulting expressions for $\widetilde{H}_{NR}$ and $\widetilde{I}_{i}$ to Section \ref{Defsystems}.
As seen before, the undeformed system is subject to both constraints of equation \eqref{undefconst}. In the $x_{i}$ coordinates, the deformed model immediately satisfies the first constraint \eqref{constrait1a}, and we would also like the new momenta $\pi_{i}$ to satisfy the second constraint
\begin{align}\label{constrait1b}
\sum\limits_{i = 1}^3 {{\pi _i}{x_i} = 0} \ .
\end{align}
In order to find an explicit expression for the $\pi_{i}$ in terms of $x_{i}$ and $x'_{i}$, we express the Hamiltonian $\widetilde{H}_{NR}$ in terms of the new phase-space coordinates $x_{i}$ and $\pi_{i}$, and then use the equations
\begin{align}\label{piconditions}
{x'_1} = \left\{ {{{\widetilde H}_{NR}},{x_1}} \right\}_{D.B.}\ , && {x'_2} = \left\{ {{{\widetilde H}_{NR}},{x_2}} \right\}_{D.B.}\ ,
\end{align}
where Dirac brackets are required due to the constraints \eqref{constrait1a} and \eqref{constrait1b}. By explicit evaluation of the right hand sides in \eqref{piconditions}, we end up with two extra equations relating the momenta $\pi_{i}$ with the $x_{i}$ and $x'_{i}$. By solving for $\pi_{i}$ in equations \eqref{constrait1b} and \eqref{piconditions}, one obtains
\begin{align}
\pi_{1}=&\frac{1}{u}\left[{x'_1} - \varkappa {x_1}\left( {x_2^2{\omega _1} + x_3^2{\omega _3}} \right) + {\varkappa ^2}{x_2}\left( {{x_2}{x'_1}\left( {1 + x_1^2} \right) - {x_1}{x'_2}\left( {1 - x_2^2} \right)} \right) \right. \nonumber\\
& \left. - {\varkappa ^3}{x_1}x_2^2\left( {x_1^2 + x_2^2} \right)\left( {{\omega _1} + x_3^2{\omega _3}} \right)\right],\nonumber\\
\pi_{2}=&\frac{1}{u}\left[{x'_2} - \varkappa {x_2}\left( {x_3^2{\omega _3} - x_1^2{\omega _1}} \right) + {\varkappa ^2}\left( {\left( {x_1^2 + x_2^4} \right){x'_2} - {x_1}{x_2}\left( {1 - x_2^2} \right){x'_1}} \right)\right.\label{qwerty}\\
& \left. + {\varkappa ^3}{x_2}\left( {x_1^2 + x_2^2} \right)\left( {x_1^2{\omega _1} - x_3^2x_2^2{\omega _3}} \right)\right] ,\nonumber\\
\pi_{3}=&\frac{{{x'_3} + \varkappa {\omega _3}{x_3}\left( {x_1^2 + x_2^2} \right)}}{{1 + {\varkappa ^2}\left( {x_1^2 + x_2^2} \right)}}\ ,\nonumber
\end{align}
where $u$ is given by
\begin{align*}
u = \left( {1 + {\varkappa ^2}\left( {x_1^2 + x_2^2} \right)} \right)\left( {1 + {\varkappa ^2}x_2^2\left( {x_1^2 + x_2^2} \right)} \right)\ .
\end{align*}
Naturally, in the $\varkappa\rightarrow0$ limit we see that ${\pi _i} \to {x'_i}$, as expected for the undeformed model.
As done in \cite{Arutyunov:2014cda}, a consistency check on the definition of the $\pi_{i}$ is to check that the Dirac brackets correctly determine the time evolution of the system
\begin{align*}
{x'_i} = {\left\{ {{{\widetilde H}_{NR}},{x_i}} \right\}_{D.B.}}\ ,&& {\pi'_i} = {\left\{ {{{\widetilde H}_{NR}},{\pi _i}} \right\}_{D.B.}}\ .
\end{align*}
The equation on the left holds by construction for the cases of $i=1,2$. For $i=3$, after evaluating the right hand side, one uses the explicit expressions \eqref{qwerty} for the $\pi_{i}$, the constraint \eqref{constrait1a} and its derivative. Similarly, the equation on the right can be checked by invoking \eqref{qwerty}, rewriting $x_{3}$ and $x'_{3}$ in terms of $x_{1}$, $x_{2}$ and their derivatives, and using the Euler-Lagrange equations coming from $\widetilde{L}_{NR}$ written in terms of $x_{1}$ and $x_{2}$.
For completeness, we also present the angular momenta ${\pi _{{\alpha _i}}}$ written in the $x_{i}$ coordinates
\begin{align*}
{\pi _{{\alpha _1}}} = \frac{{x_1^2{\alpha'_1}}}{{1 + {\varkappa ^2}x_2^2\left( {x_1^2 + x_2^2} \right)}}\ ,\quad\quad {\pi _{{\alpha _2}}} = x_2^2{\alpha'_2}\ ,\quad\quad {\pi _{{\alpha _3}}} = \frac{{x_3^2{\alpha'_3}}}{{1 + {\varkappa ^2}\left( {x_1^2 + x_2^2} \right)}}\ .
\end{align*}
\section{The $\eta$-deformed Neumann-Rosochatius model}\label{Defsystems}
Here we present the main results of our calculation, which are the deformed integrals of motion $\widetilde{I}_{i}$ along with the Hamiltonian $\widetilde{H}_{NR}$ in the $x_{i}$ coordinates. First, the explicit expressions for these quantities are given. Then, we elaborate on the different limits of the $\eta$-deformed Neumann-Rosochatius system by relating its integrals of motion to the ones of the $\eta$-deformed Neumann and Rosochatius models (recall Figure \ref{noche12}). Finally, we briefly discuss truncations of the $\eta$-deformed Neumann-Rosochatius to lower dimensions.
\subsection{Identities and integrals of motion}\label{mainresultsI}
The Hamiltonian for the $\eta$-deformed Neumann-Rosochatius model is given by
\begin{align}
{\widetilde H_{NR}} =& {H_{NR}} - \varkappa \left( {{\omega _1}{x_1}{x_2}{J_{12}} + {\omega _3}{x_1}{x_3}{J_{13}} + {\omega _3}{x_2}{x_3}{J_{23}}} \right)\label{HNRtildex}\\
&+ \frac{{{\varkappa ^2}}}{2}\left[ {\left( {x_2^2 - x_3^2} \right)J_{12}^2 + \left( {x_1^2 + x_2^2} \right)J_{13}^2 + \left( {x_1^2 + x_2^2} \right)J_{23}^2} \right]\nonumber\\
&+ \frac{{{\varkappa ^2}\left( {x_1^2 + x_2^2} \right)}}{2}\left( {\frac{{\pi _{{\alpha _1}}^2x_2^2}}{{x_1^2}} + \frac{{\pi _{{\alpha _3}}^2}}{{x_3^2}}} \right)\ ,\nonumber
\end{align}
where the undeformed Hamiltonian $H_{NR}$ is written in equation \eqref{HNR}. This system is subjected to the constraints of equations \eqref{constrait1a} and \eqref{constrait1b}.
It is interesting to point out that the first two lines in the expression for $\widetilde{H}_{NR}$ contain deformations of the Neumann potential and the kinetic term, while the third line is proportional to $\pi _{{\alpha _1}}^2$ and $\pi _{{\alpha _3}}^2$, and consequently has its origins in deformations of the Rosochatius potential. Moreover, equation \eqref{HNRtildex} exposes the asymmetry of the deformation along the different $x_i$ directions.
Liouville integrability of this model is due to the existence of deformed integrals of motion $\widetilde{I}_{i}$ derived in Section \ref{construyendo}
\begin{align}\label{I1NReta}
{\widetilde I_1} =& {I_1} + \left[ \frac{{\sum\limits_{i = 1}^4 {{n_i}{\varkappa ^i}} }}{{\left( {\omega _1^2 - \omega _2^2} \right)\left( {\omega _1^2 - \omega _3^2} \right)}} - \frac{{2\varkappa {J_{13}}{x_1}{x_3}{\omega _3}}}{{\omega _1^2 - \omega _3^2}} - \frac{{2\varkappa {J_{12}}{x_1}{x_2}{\omega _1}}}{{\omega _1^2 - \omega _2^2}}\right.\\
&\left.+ \frac{{{\varkappa ^2}J_{12}^2x_2^2}}{{\omega _1^2 - \omega _2^2}} + \frac{{{\varkappa ^2}J_{13}^2\left( {x_1^2 + x_2^2} \right)}}{{\omega _1^2 - \omega _3^2}} - \frac{{{\varkappa ^2}J_{12}^2x_3^2\omega _1^2}}{{\left( {\omega _1^2 - \omega _2^2} \right)\left( {\omega _1^2 - \omega _3^2} \right)}} \right]\nonumber\\
&+ \left[ \frac{{\varkappa \left( {\pi _{{\alpha _1}}^4{m_1} + \pi _{{\alpha _1}}^2\pi _{{\alpha _2}}^2{m_2} + \pi _{{\alpha _1}}^2\pi _{{\alpha _3}}^2{m_3} + \pi _{{\alpha _2}}^2\pi _{{\alpha _3}}^2{m_4} + \pi _{{\alpha _1}}^2{m_5} + \pi _{{\alpha _2}}^2{m_6} + \pi _{{\alpha _3}}^2{m_7}} \right)}}{{\left( {\omega _1^2 - \omega _2^2} \right)\left( {\omega _1^2 - \omega _3^2} \right)}}\right.\nonumber\\
&\left.+ \frac{{{\varkappa ^2}\omega _1^2\pi _{{\alpha _1}}^2x_2^2\left( {1 - x_3^2} \right)}}{{\left( {\omega _1^2 - \omega _2^2} \right)\left( {\omega _1^2 - \omega _3^2} \right)x_1^2}} + \frac{{{\varkappa ^2}\omega _1^2\pi _{{\alpha _3}}^2\left( {1 - x_3^2} \right)}}{{\left( {\omega _1^2 - \omega _2^2} \right)\left( {\omega _1^2 - \omega _3^2} \right)x_3^2}} \right]\ ,\nonumber
\end{align}
\begin{align}\label{I2NReta}
{\widetilde I_2} =& {I_2} + \left[ \frac{{\sum\limits_{i = 1}^4 {{n_i}{\varkappa ^i}} }}{{\left( {\omega _2^2 - \omega _1^2} \right)\left( {\omega _2^2 - \omega _3^2} \right)}} - \frac{{2\varkappa {J_{23}}{x_2}{x_3}{\omega _3}}}{{\omega _2^2 - \omega _3^2}} - \frac{{2\varkappa {J_{12}}{x_1}{x_2}{\omega _1}}}{{\omega _2^2 - \omega _1^2}}\right.\\
&\left.+ \frac{{{\varkappa ^2}J_{12}^2x_2^2}}{{\omega _2^2 - \omega _1^2}} + \frac{{{\varkappa ^2}J_{23}^2\left( {x_1^2 + x_2^2} \right)}}{{\omega _2^2 - \omega _3^2}} - \frac{{{\varkappa ^2}J_{12}^2x_3^2\omega _2^2}}{{\left( {\omega _2^2 - \omega _1^2} \right)\left( {\omega _2^2 - \omega _3^2} \right)}} \right]\nonumber\\
&+ \left[ \frac{{\varkappa \left( {\pi _{{\alpha _1}}^4{m_1} + \pi _{{\alpha _1}}^2\pi _{{\alpha _2}}^2{m_2} + \pi _{{\alpha _1}}^2\pi _{{\alpha _3}}^2{m_3} + \pi _{{\alpha _2}}^2\pi _{{\alpha _3}}^2{m_4} + \pi _{{\alpha _1}}^2{m_5} + \pi _{{\alpha _2}}^2{m_6} + \pi _{{\alpha _3}}^2{m_7}} \right)}}{{\left( {\omega _2^2 - \omega _1^2} \right)\left( {\omega _2^2 - \omega _3^2} \right)}}\right.\nonumber \\
&\left. + \frac{{{\varkappa ^2}\omega _2^2\pi _{{\alpha _1}}^2x_2^2\left( {1 - x_3^2} \right)}}{{\left( {\omega _2^2 - \omega _1^2} \right)\left( {\omega _2^2 - \omega _3^2} \right)x_1^2}} + \frac{{{\varkappa ^2}\omega _2^2\pi _{{\alpha _3}}^2\left( {1 - x_3^2} \right)}}{{\left( {\omega _2^2 - \omega _1^2} \right)\left( {\omega _2^2 - \omega _3^2} \right)x_3^2}} \right]\nonumber\ ,\nonumber
\end{align}
\begin{align}\label{I3NReta}
{\widetilde I_3} =& {I_3} + \left[ \frac{{\sum\limits_{i = 1}^4 {{n_i}{\varkappa ^i}} }}{{\left( {\omega _3^2 - \omega _1^2} \right)\left( {\omega _3^2 - \omega _2^2} \right)}} - \frac{{2\varkappa {J_{23}}{x_2}{x_3}{\omega _3}}}{{\omega _3^2 - \omega _2^2}} - \frac{{2\varkappa {J_{13}}{x_1}{x_3}{\omega _3}}}{{\omega _3^2 - \omega _1^2}}\right.\\
&\left. + \frac{{{\varkappa ^2}J_{13}^2\left( {x_1^2 + x_2^2} \right)}}{{\omega _3^2 - \omega _1^2}} + \frac{{{\varkappa ^2}J_{23}^2\left( {x_1^2 + x_2^2} \right)}}{{\omega _3^2 - \omega _2^2}} - \frac{{{\varkappa ^2}J_{12}^2x_3^2\omega _3^2}}{{\left( {\omega _3^2 - \omega _1^2} \right)\left( {\omega _3^2 - \omega _2^2} \right)}} \right]\nonumber\\
&+ \left[ \frac{{\varkappa \left( {\pi _{{\alpha _1}}^4{m_1} + \pi _{{\alpha _1}}^2\pi _{{\alpha _2}}^2{m_2} + \pi _{{\alpha _1}}^2\pi _{{\alpha _3}}^2{m_3} + \pi _{{\alpha _2}}^2\pi _{{\alpha _3}}^2{m_4} + \pi _{{\alpha _1}}^2{m_5} + \pi _{{\alpha _2}}^2{m_6} + \pi _{{\alpha _3}}^2{m_7}} \right)}}{{\left( {\omega _3^2 - \omega _1^2} \right)\left( {\omega _3^2 - \omega _2^2} \right)}}\right.\nonumber\\
&\left. + \frac{{{\varkappa ^2}\omega _3^2\pi _{{\alpha _1}}^2x_2^2\left( {1 - x_3^2} \right)}}{{\left( {\omega _3^2 - \omega _1^2} \right)\left( {\omega _3^2 - \omega _2^2} \right)x_1^2}} + \frac{{{\varkappa ^2}\omega _3^2\pi _{{\alpha _3}}^2\left( {1 - x_3^2} \right)}}{{\left( {\omega _3^2 - \omega _1^2} \right)\left( {\omega _3^2 - \omega _2^2} \right)x_3^2}} \right]\nonumber\ ,\nonumber
\end{align}
where $I_{i}$ denotes the integrals of motion of the undeformed Neumann-Rosochatius model \eqref{IntNR}. In the expressions above, the $n_i$ are given by
\begin{align*}
n_{1}=& - 2{J_{12}}{J_{13}}{J_{23}}{\omega _1}\ ,\\
n_{2}=&- J_{12}^2x_3^2\omega _3^2 + 2{J_{12}}{x_3}{\omega _1}{\omega _3}\left( {{J_{23}}{x_1} + {J_{13}}{x_2}} \right) - J_{12}^2J_{13}^2\ ,\\
n_{3}=&2{J_{12}}{J_{13}}\left[ {{J_{12}}{x_1}{x_3}{\omega _3} - {J_{23}}\left( {x_1^2 + x_2^2} \right){\omega _1}} \right]\ ,\\
n_{4}=& - J_{12}^2J_{13}^2\left( {x_1^2 + x_2^2} \right)\ ,
\end{align*}
while the $m_i$ are defined as
\begin{align*}
{m_1} =& - \frac{{\varkappa {\kern 1pt} x_2^2x_3^2\left( {1 + {\varkappa ^2}\left( {x_1^2 + x_2^2} \right)} \right)}}{{x_1^4}}\ ,\\
{m_2} =& - \frac{{\varkappa \left( {1 + {\varkappa ^2}x_2^2} \right)x_3^2}}{{x_1^2}}\ ,\\
{m_3} =& - \frac{{\varkappa \left( {1 + {\varkappa ^2}} \right)x_2^2}}{{x_1^2}}\ ,\\
{m_4} =& - \frac{{\varkappa \left( {1 + {\varkappa ^2}} \right)x_1^2}}{{x_2^2x_3^2}}\ ,\\
{m_5} =& - \frac{{\varkappa x_3^2\left( {1 + {\varkappa ^2}\left( {1 - x_3^2} \right)} \right)J_{12}^2}}{{x_1^2}} - \frac{{{\varkappa} x_2^2\left( {1 + {\varkappa ^2}\left( {1 - x_3^2} \right)} \right)J_{13}^2}}{{x_1^2}} + \varkappa \left( {1 + {\varkappa ^2}\left( {1 - x_3^2} \right)} \right)J_{23}^2\nonumber\\
& + \frac{{2{\varkappa ^2}{\omega _3}x_2^2{x_3}{J_{13}}}}{{{x_1}}} - \frac{{2{J_{23}}{x_2}{x_3}\left( {{\varkappa ^2}{\omega _3}x_1^2 + {\omega _1}\left( {1 + {\varkappa ^2}\left( {1 - x_3^2} \right)} \right)} \right)}}{{x_1^2}}\nonumber\\
& - \frac{{\varkappa x_2^2}}{{x_1^2}}\left( {\omega _2^2x_3^2 + \omega _3^2 - 2{\omega _1}{\omega _3}x_3^2} \right),\\
{m_6} =& - \frac{{\varkappa J_{13}^2\left( {1 - x_3^2} \right)\left( {1 + {\varkappa ^2}\left( {1 - x_3^2} \right)} \right)}}{{x_2^2}} + \frac{{2{J_{13}}{x_1}{x_3}\left( {{\varkappa ^2}{\omega _3}\left( {1 - x_3^2} \right) + {\omega _1}\left( {1 + {\varkappa ^2}\left( {1 - x_3^2} \right)} \right)} \right)}}{{x_2^2}}\nonumber\\
& - \frac{{\varkappa {{({\omega _1} + {\omega _3})}^2}x_1^2x_3^2}}{{x_2^2}}\ ,\\
{m_7} =& - \frac{1}{{x_3^2}}\left[ {\varkappa \left( {1 + {\varkappa ^2}} \right)\left( {1 - x_2^2} \right)J_{12}^2 + 2\left( {1 + {\varkappa ^2}} \right){\omega _1}{x_1}{x_2}{J_{12}} + \varkappa \left( {\omega _1^2x_2^2 + \omega _2^2x_1^2} \right)} \right]\ .
\end{align*}
From the structure of equations \eqref{I1NReta}, \eqref{I2NReta} and \eqref{I3NReta}, we see that the deformed integrals of motion $\widetilde{I}_{i}$ are composed of three parts. The first part consists of the integrals of motion of the undeformed Neumann-Rosochatius system $I_{i}$, a second part corresponds to deformations already present in the deformed Uhlenbeck integrals of the $\eta$-deformed Neumann model \cite{Arutyunov:2014cda} (these contributions are enclosed in the first bracket $[\ ]$ from top to bottom), and a third contribution introduced by the deformations of the Rosochatius potential (these are in the second bracket $[\ ]$ from top to bottom).
It is curious to note that the deformed integrals of motion have terms with a double pole structure in the frequencies $\omega_{i}$. This feature will play an important role later on, when considering the limit $\omega_{i}\rightarrow0$. From the expressions above, we see that the deformed Hamiltonian $\widetilde{H}_{NR}$ is quadratic in momenta and has terms up to the order $\varkappa^{2}$, while $\widetilde{I}_{i}$ are quartic in momenta and have terms up to the order $\varkappa^{4}$.
The Hamiltonian and the integrals of motion obtained for the $\eta$-deformed Neumann-Rosochatius model in the $x_{i}$ coordinates satisfy
\begin{equation}
\left\{ {{\widetilde{H}_{NR}},{\widetilde{I}_i}} \right\}_{D.B.} = 0\ ,\quad\quad \left\{ {{\widetilde{I}_i},{\widetilde{I}_j}} \right\}_{D.B.} = 0\ ,\quad \quad \left\{ {{\widetilde{I}_i},{\pi _{{\alpha _j}}}} \right\}_{D.B.} = 0\ ,
\end{equation}
\begin{equation}\label{eq433}
\sum\limits_{i = 1}^3 {{\widetilde{I}_i} = 1}\ ,
\end{equation}
\begin{equation}\label{eq434}
{\widetilde{H}_{NR}} = \frac{1}{2}\sum\limits_{i = 1}^3 {\left( {\omega _i^2{\widetilde{I}_i} +\pi_{{\alpha _i}}^2} \right)} \ ,
\end{equation}
which correspond to the deformed analogues of the identities \eqref{propertiesundefNR1}, \eqref{propertiesundefNR2} and \eqref{propertiesundefNR3} of the undeformed Neumann-Rosochatius system.
Naturally, in the undeformed limit, the Hamiltonian and the new integrals of motion reduce to the known expressions for the undeformed Neumann-Rosochatius system
\begin{align}
\mathop {\lim }\limits_{\varkappa \to 0} {{\widetilde H}_{NR}} = {H_{NR}}\ ,&& \mathop {\lim }\limits_{\varkappa \to 0} {{\widetilde I}_i} = {I_i}\ .
\end{align}
\subsection{Connections with the $\eta$-deformed Neumann and Rosochatius systems}\label{connectionsRandN}
As discussed in Section \ref{section3}, in the $({\rm AdS}_5\times {\rm S}^5)_{\eta}$ background, the generalised spinning solution of equation \eqref{solucion2} reduces to the spinning solution studied in \cite{Arutyunov:2014cda} by taking the limit $\alpha_{i}\rightarrow0$ and $\pi_{\alpha_{i}}\rightarrow0$, and it is connected to geodesic solutions by considering the limit $\omega_{i}\rightarrow0$ and changing the time parameter $\sigma\rightarrow\tau$. Therefore, by considering the respective limits for the $\eta$-deformed Neumann-Rosochatius model, the system must reduce to the integrable models describing spinning solutions and geodesics on $({\rm AdS}_5\times {\rm S}^5)_{\eta}$: The $\eta$-deformed Neumann and Rosochatius models, respectively (see Figure \ref{noche12}). In this section, the connection between these deformed integrable models is made explicit by considering their Hamiltonians and integrals of motion.
Reduction of the $\eta$-deformed Neumann-Rosochatius system to the $\eta$-deformed Neumann model follows straightforwardly from the results of \cite{Arutyunov:2014cda} and the expressions for $\widetilde{H}_{NR}$ and $\widetilde{I}_{i}$
\begin{align}\label{limitsystem1}
\mathop {\lim }\limits_{{\pi _{\alpha j}} \to 0} {{\widetilde H}_{NR}} = {{\widetilde H}_N}\ ,&& \mathop {\lim }\limits_{{\pi _{\alpha j}} \to 0} {{\widetilde I}_i} = {{\widetilde F}_i}\ ,
\end{align}
where $\widetilde{H}_{N}$ denotes the Hamiltonian of the $\eta$-deformed Neumann model, while $\widetilde{F}_{i}$ are the deformed Uhlenbeck integrals of the $\eta$-deformed Neumann model obtained in \cite{Arutyunov:2014cda}. Equation \eqref{limitsystem1} is therefore the deformed counterpart of equations \eqref{counterpart1a} and \eqref{counterpart1b}.
For the discussion on the relation between the $\eta$-deformed Neumann-Rosochatius model and geodesic solutions, we will use the expressions in $(r,\xi)$ coordinates presented in appendix \ref{definrxi}, since these are the coordinates in which integrability of geodesics was originally studied (see \cite{Arutyunov:2014cda} for details).
In Section \ref{UndefSystems}, we saw how the integrals of motion of the undeformed Rosochatius system are obtained by taking the limit $\omega_{i}\rightarrow0$ of the expressions for the integrals of the undeformed Neumann-Rosochatius model. We will now perform a similar limit on the integrals of the $\eta$-deformed Neumann-Rosochatius system presented in equations \eqref{IntDefNR1}, \eqref{IntDefNR2} and \eqref{IntDefNR3}, with the aim of obtaining integrals of motion for the $\eta$-deformed Rosochatius model.
Due to the double pole structure that the integrals $\widetilde{I}_{i}$ have on the frequencies $\omega_{i}$, it is not sufficient to multiply by an $(\omega_{i}^{2}-\omega_{j}^{2})$ factor and then take the limit $\omega_{j}\rightarrow\omega_{i}$, as was done for the undeformed case in equation \eqref{NRtoNUndefLimit}. Instead, we will consider the following limit
\begin{equation}\label{NRtoRDeflimit}
\mathop {\lim }\limits_{{\omega _i}\to 0}\ \ \mathop {\lim }\limits_{{\omega _k},{\omega _l}\to {\omega _i} } \ {\widetilde I_i}\ \prod\limits_{j \ne i}^3 {\left( {\omega _i^2 - \omega _j^2} \right)} = - {\varkappa ^2}Q + {\varkappa ^4}\left( {\pi _{{\alpha _1}}^2\pi _{{\alpha _2}}^2 + \pi _{{\alpha _1}}^2\pi _{{\alpha _3}}^2 + \pi _{{\alpha _2}}^2\pi _{{\alpha _3}}^2} \right)\quad \forall i \in \left\{ {1,2,3} \right\}\ ,
\end{equation}
where there is no summation over the index $i$ on the left-hand side and the three indices ($i$, $k$, $l$) all take different values. It can be checked explicitly that the quantity on the right-hand side is indeed an integral of motion for $\eta$-deformed geodesic solutions: it is composed of the angular momenta $\pi_{\alpha_i}$ and the integral of motion $Q$, which was obtained in \cite{Arutyunov:2014cda} through the Lax formalism. The explicit expression for $Q$ is given by
\begin{align*}
Q=&\left( {1 + {\varkappa ^2}{r^2}} \right)\left( {1 - {r^2}} \right)\bigg[ {\frac{{\pi _\xi ^4{{\sin }^2}\xi }}{{{r^2}}}}{ - \frac{{\pi _\xi ^3{\pi _r}\sin 2\xi }}{r}}{ + \pi _r^2\pi _\xi ^2{{\cos }^2}\xi }{ + \frac{{2\pi _\xi ^2\pi _{{\alpha _1}}^2{{\tan }^2}\xi }}{{{r^2}}}}\label{integralQ}\\
&{ + \frac{{\pi _\xi ^2\pi _{{\alpha_2}}^2}}{{{r^2}}}}{ + \pi _r^2\pi _{{\alpha _2}}^2{{\cot }^2}\xi }{ - \frac{{2{\pi _r}{\pi _\xi }\pi _{{\alpha _1}}^2\tan \xi }}{r}}{ - \frac{{2{\pi _r}{\pi _\xi }\pi _{{\alpha _2}}^2\cot \xi }}{r}}{ + \frac{{\pi _{{\alpha _1}}^4{{\tan }^2}\xi\ {{\sec }^2}\xi }}{{{r^2}}}}\bigg]\nonumber\\
&+ \pi _{{\alpha _1}}^2\pi _{\alpha_{3}} ^2\left( {{\varkappa ^2} + {{\sin }^2}\xi } \right){\sec ^2}\xi + \frac{{\pi _{{\alpha _2}}^2\pi _{\alpha_{3}} ^2\left( {{{\cos }^2}\xi + {\varkappa ^2}\left( {1 - {r^2}{{\sin }^2}\xi } \right)} \right){{\csc }^2}\xi }}{{1 - {r^2}}}\nonumber\\
&+ \frac{{\left( {1 + {\varkappa ^2}} \right)\pi _\xi ^2\pi _{\alpha_{3}} ^2\left( {1 - {r^2}{{\sin }^2}\xi } \right)}}{{1 - {r^2}}}+ \frac{{\pi _{{\alpha _1}}^2\pi _{{\alpha _2}}^2\left( {1 + {\varkappa ^2}{r^2} - {r^2}\left( {1 + {\varkappa ^2}{r^2}{{\sin }^2}\xi } \right)} \right){{\sec }^2}\xi }}{{{r^2}}}\nonumber\ .
\end{align*}
It is interesting to point out that when taking the limit $\omega_{i}\rightarrow0$, instead of the deformed analogues of the three integrals $F_{ij}$ of the undeformed Rosochatius system, for $\varkappa>0$ one obtains only one integral of motion $Q$ (independent of the $\pi_{\alpha_{i}}$).
By considering the $\omega_{i}\rightarrow0$ limit of $\widetilde{H}_{NR}$ (recall equation \eqref{HamNrrr}), we see that
\begin{align}
\mathop {\lim }\limits_{{\omega _i} \to 0} {{\widetilde H}_{NR}} = {{\widetilde H}_R}\ ,
\end{align}
where $\widetilde{H}_{R}$ is the Hamiltonian describing geodesics in the deformed sphere, which was first calculated in \cite{Arutyunov:2014cda}. This equation, along with equation \eqref{NRtoRDeflimit}, are the deformed analogues of equations \eqref{HNRtoHR} and \eqref{NRtoNUndefLimit}, respectively.
\subsection{Lower dimensional truncations}\label{truncations}
By examining the Dirac bracket of the Hamiltonian $\widetilde{H}_{NR}$ with the coordinate $x_{i}$ and the conjugate momenta $\pi_{i}$ (with $i
\in\left\{ {1,2,3} \right\}$), it can be checked that the following is a consistent truncation of the system along the $i$-direction
\begin{align*}
x_{i}=0\ ,&& \pi_{i}=0\ ,&&\pi_{\alpha_{i}}=0\ , && \omega_{i}=0\ .
\end{align*}
In this case, the phase-space constraints of equations \eqref{constrait1a} and \eqref{constrait1b} reduce to
\begin{align*}
\sum\limits_{j \ne i}^3 {x_j^2 = 1} \ ,&& \sum\limits_{j \ne i}^3 {{x_j}{\pi _j} = 0} \ ,
\end{align*}
while the deformed integrals of motion reduce to
\begin{align*}
\mathop {\lim }\limits_{{x_i},{\pi _i},{\pi _{{\alpha _i}}},{\omega _i} \to 0} {{\widetilde I}_i} = 0\ , && \mathop {\lim }\limits_{{x_i},{\pi _i},{\pi _{{\alpha _i}}},{\omega _i} \to 0} {{\widetilde I}_j} \ne 0\ \ \ \ \ \forall j \ne i\ .
\end{align*}
The above expressions imply that equations \eqref{eq433} and \eqref{eq434} become
\begin{align*}
\sum\limits_{j \ne i}^3 {{{\left. {{{\widetilde I}_j}} \right|}_{{\pi _i} = {x_i} = {\pi _{{\alpha _i}}} = {\omega _i} = 0}} = 1} \ , && {{\widetilde H}_{NR}} = \frac{1}{2}\sum\limits_{j \ne i}^3 {{{\left. {\left( {\omega _j^2{{\widetilde I}_j} + \pi _{{\alpha _j}}^2} \right)} \right|}_{{\pi _i} = {x_i} = {\pi _{{\alpha _i}}} = {\omega _i} = 0}}} \ .
\end{align*}
Therefore, out of ${\widetilde{H}}_{NR}$ and the 2 integrals of motion $\widetilde{I}_{j}$ (with $j\neq i$), there is only one truly independent integral of motion. This is consistent with Liouville's theorem, as in the truncated system one has a 2-dimensional phase-space, once the constraints are taken into account.
Because of the asymmetry of the system along the different directions, it is possible to truncate the $\eta$-deformed Neumann-Rosochatius model in several ways. In particular, the truncation along $i=3$ corresponds to the system studied in \cite{Hernandez2016}, whose integrals of motion and identities coincide with the ones obtained by the truncation procedure explained above.
\section{Conclusions}
In the present paper we have studied the integrable model describing generalised spinning solutions in the $\eta$-deformed ${\rm AdS}_5\times {\rm S}^5$ background, constituting a one-parameter deformation of the well-known Neumann-Rosochatius integrable system. By explicit construction of a $4\times4$ Lax pair representation and a set of integrals of motion in involution, we exposed the Liouville integrability of the model. The deformed integrals of motion obtained generalise the ones previously found for the Neumann model and for geodesic motion on the $\eta$-deformed sphere. The construction of the integrals of motion and the Lax representation for this model is a necessary first step towards finding its exact solution; however, there are still many open questions to be addressed.
The deformed model we considered corresponds to the $N=3$ Neumann-Rosochatius system, where motion is constrained to a two-sphere. Generally, the Neumann-Rosochatius model is known to be integrable for arbitrary $N$, where motion is constrained to an $(N-1)$-sphere. Thus, it would be interesting to generalise the results found here to $N>3$. The asymmetry of the deformation in the different $x_i$ directions makes this a very non-trivial task. A natural starting point would be to deform the sigma model on the coset space $\rm{SO}(N+1)/\rm{SO}(N)$ and then to consider a generalised spinning solution similar to the one in \eqref{solucion2}.
In Section \ref{connectionsRandN}, we explored the connection between the $\eta$-deformed Neumann-Rosochatius and Neumann models by studying their Hamiltonians and integrals of motion. For the undeformed case, integrability of the $N=3$ Neumann-Rosochatius model also follows from the fact that it can be seen as a special case of an $N=6$ Neumann model with degenerate frequencies. In fact, the undeformed integrals $I_{i}$ can be constructed explicitly by considering convenient linear combinations of the $F_{i}$ integrals of motion of the $N=6$ degenerate Neumann model. It would be interesting to see if this also holds for their $\eta$-deformed counterparts, which again would require an in-depth understanding of the $N>3$ deformed models.
The $\eta$-deformed models considered in the present paper have highly complicated integrals of motion, making separation of variables a very difficult problem. In principle, a solution to this problem is provided by Sklyanin's method \cite{Sklyanin:1995bm}, which yields canonical coordinates in terms of (properly normalized) eigenvalues and poles of the Baker-Akhiezer function. However, because of the sheer size of the Lax pair (see Appendix \ref{laxapendex}), the corresponding equations appear to be rather involved and solutions seem difficult to find. For this reason, it would be desirable to devise a lower-dimensional Lax pair.
Geodesic motion on spheres is a well-known problem, partly because it is superintegrable. Concerning its Liouville integrability for ${\rm S}^5$, a set of integrals of motion in involution is given by the angular momenta $\pi_{{\alpha}_{i}}$, the Hamiltonian $H_{R}$ and one of the three non-abelian integrals $F_{ij}$. The fact that this system has more integrals of motion than required by Liouville's theorem imposes strong constraints on its dynamics: geodesics are closed and the motion is periodic. For the $\eta$-deformed sphere, it was found in \cite{Arutyunov:2014cda} that geodesics are Liouville integrable due to the set of integrals of motion in involution $\pi_{{\alpha}_{i}}$, $\widetilde{H}_{R}$ and $Q$. Here, by considering a geodesic limit of the deformed Neumann-Rosochatius system, we end up with the same set of integrals of motion, leaving superintegrability an open question. An interesting way to approach this problem is to consider a lower-dimensional model obtained from geodesic motion on $({\rm S}^5)_\eta$ under the conditions $\pi_{\xi}=\pi_{\alpha_{1}}=\pi_{\alpha_{2}}=0$, such that, as was shown in \cite{Hoare:2014pna}, the corresponding motion is constrained to the manifold of Fateev's sausage model \cite{Fateev:1992tk}. Investigation of this problem is under way \cite{Arutyunov:2016kve}.
\section*{Acknowledgements}
We would like to thank A. Dekel, J. M. Nieto, and S. C. Vargas for useful discussions and R. Klabbers for useful comments on the manuscript. M.H. thanks NORDITA for hospitality.
The work of G.A. and M.H. is supported by the German Science Foundation (DFG) under the Collaborative Research Center (SFB) 676 Particles, Strings and the Early Universe. The work of D.M. is supported by the ERC advanced grant No 341222.
Imaging electronic dynamics in molecules immediately following photoexcitation is of utmost interest to photochemistry as the first few femtoseconds can determine the fate of ensuing reactions \cite{Worner2017, Wolter2016, Kubel2016}. Electronic and nuclear dynamics have been probed with attosecond precision \cite{Krausz2009RMP,Leone2014NatPhot} by means of high harmonic emission \cite{Li2008,Haessler2010NatPhys,Kraus2015Science}, laser-induced electron diffraction \cite{Meckel2008,Blaga2012,Wolter2016}, and photoelectron holography
\cite{Huismans2011,Porat2018}. The aforementioned techniques rely on the recollision mechanism \cite{Corkum1993, Krause1992}, where the photoionized electron is driven back to the parent ion by the intense laser field and probes the transient molecular or atomic structure. Recollision-free schemes, such as attosecond transient absorption \cite{Goulielmakis2010} and sequential double ionization have also been used to follow electronic \cite{Fleischer2011,Fechner2014, Calegari2014Science} and nuclear dynamics \cite{Ergler2006} on a few-femtosecond time scale.
Attosecond technology not only offers unprecedented time-resolution for ultrafast processes, but also laser-based approaches to imaging electronic structure. Such images can be obtained indirectly by analyzing the high harmonic spectrum as a function of molecular alignment with respect to the laser polarization \cite{Itatani2004,Haessler2010NatPhys,Vozzi2011}, or directly, by measuring the photoelectron angular distribution in the molecular frame \cite{Meckel2008,Staudte2009,Holmegaard2010,Comtois2013}.
Photoelectron angular distributions have been studied to follow electron dynamics on a sub-picosecond time scale \cite{Hockett2011,Forbes2018}. However, the direct imaging of bound electron wave packets on the femtosecond time-scale has yet to be accomplished.
Some of the simplest bound electron wave packets that can be prepared by strong-field ionization are spin-orbit wave packets in noble gas ions \cite{Rohringer2009,Goulielmakis2010, Woerner2011,Sabbar2017}.
As the spin-orbit wave packet evolves, the 3p$^{-1}$ electron-hole in the noble gas ion oscillates between the $m=0$ state and the $|m|=1$ states ($m$ being the magnetic quantum number). This oscillation leads to a time-dependent modulation in the angle-dependent tunnel ionization probability of the ion \cite{Fleischer2011}. Time-resolved measurements of the momentum distribution of photoelectrons, emitted from the ion, would allow for directly imaging the evolving electron-hole \cite{Woerner2011}. The main obstacle is the contamination of the signal with photoelectrons from the pump pulse \cite{Fechner2014}.
Here, we demonstrate the direct imaging of electron density variations with a temporal resolution of only a few femtoseconds. We prepare a bound wave packet in an argon ion using optical tunnel ionization by a few-cycle visible laser pulse. The resulting multi-electron wavepacket is then imaged \textit{via} another tunnel ionization process induced by a second few-cycle visible laser pulse.
Contamination of the probe pulse signal is avoided by superimposing a weak, orthogonally polarized, carrier-envelope phase-stable, mid-infrared streaking field \cite{Kubel2017} onto the probe pulse. This allows us to separate the primary and secondary photoelectrons spatially and thereby enables direct imaging of the valence-shell wave packet.
By inverting the resulting 2D momentum spectra we obtain the autocorrelation functions of the spatial density of the bound electron wavepacket, as seen through the optical tunnel.
\section*{Results}
\subsection{Time-resolved orbital imaging experiment}
\begin{figure}[h]
\centerline{\includegraphics[width=0.5\textwidth]{fig1.jpg}}
\caption{Schematic of the time-resolved orbital imaging experiment. (a) A coherent electron wave packet is prepared in Ar$^+$ \textit{via} strong-field ionization by a few-cycle pump pulse. As the wave packet evolves, a hole (vacancy) oscillates between the $m=0$ and $|m|=1$ states of the valence shell of the Ar$^+$ ion with the spin orbit period $T_\mathrm{SO} = 23.3\,\mathrm{fs}$ \cite{Fleischer2011}. The electron density in Ar$^+$ is probed after a variable time delay using a few-cycle probe pulse in the presence of a phase-stable, mid-infrared deflection field. The deflection field makes the centered momentum distributions produced by the pump pulse alone (b) distinguishable from the off-center distribution produced by the probe pulse (c). Panel (c) was generated by simulating the effect of the deflection field on the data presented in (b). In the total electron distribution shown in (d), the signal with $p_x < \unit[-0.5]{a.u.}$ is dominated by electrons from the probe pulse, as indicated by the red oval. The colorscale indicates the electron yield. Each panel is normalized to its maximum. Source data are provided as a Source Data file.}
\label{fig:experiment}
\end{figure}
Figure \ref{fig:experiment} shows a schematic of our pump-probe experiment. In the pump step, strong field ionization of neutral Ar with a few-cycle visible laser pulse causes the coherent population of the $^2\mathrm{P}_{3/2}$ and $^2\mathrm{P}_{1/2}$ fine-structure states. The resulting spin-orbit wavepacket oscillates with a period $T_\mathrm{SO}=h/\Delta E_\mathrm{SO}=23.3\,\mathrm{fs}$ and is probed at a variable time delay using strong field ionization by a second few-cycle visible laser pulse. Superimposed on the probe pulse is an orthogonally polarized, mid-infrared (mid-IR), 40\,fs pulse that deflects and thereby labels the electron created by the probe pulse.
Three-dimensional ion and electron momenta are measured in coincidence using Cold Target Recoil Ion Momentum Spectroscopy (COLTRIMS). We make use of the fact that the few cycle pulse alone produces photoelectrons with a narrow momentum distribution along $p_z$ centered at zero momentum (Fig.~\ref{fig:experiment}(b)). When the orthogonally polarized mid-IR deflection field is superimposed, the ionized electron wavepacket is shifted, as shown in Fig.~\ref{fig:experiment}(c). In the experiment with all three pulses (Fig.~\ref{fig:experiment}(d)), the probe pulse signal dominates for negative momenta along the direction of the deflection field, as marked by the red oval. In the following, we present results for electrons selected accordingly, see Methods for details.
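The quoted spin-orbit period can be cross-checked directly from the tabulated Ar$^{+}$ fine-structure splitting of roughly $1432\,\mathrm{cm}^{-1}$ (an assumed literature value, not stated in the text); a one-line sketch:
\begin{verbatim}
h_eVs = 4.135667e-15              # Planck constant in eV*s
dE    = 1432 * 1.2398e-4          # Ar+ 2P3/2 - 2P1/2 splitting in eV
print(h_eVs / dE * 1e15, "fs")    # ~23.3 fs, as quoted for T_SO
\end{verbatim}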
\begin{figure*}[t]
\centerline{\includegraphics[width=0.9\textwidth]{fig2.jpg}}
\caption{Snapshots of a spin-orbit wave packet in the argon cation. (a) Measured Ar$^{2+}$ yield as a function of time delay between pump and probe pulses. The cartoons illustrate the electron configuration in Ar$^+$ at the time of interaction with the probe pulse. For the positions marked with dotted lines and fractions of the spin orbit period, $T_\mathrm{SO}$, we present measured electron density plots in momentum space (b) and real space (c). The momentum space images show the positive part of the normalized differences of delay-dependent and delay-averaged electron momentum distributions. The data has been integrated over $p_z$ and a delay range of $\pm 3\,$fs. A low pass frequency filter has been applied. The real space images show the Fourier transform of the momentum space images. The theory plots show the normalized differences between calculated Ar$^+$ momentum space orbitals corresponding to $m=0$ and $|m|=1$ vacancies, and their Fourier transforms, respectively. Source data are provided as a Source Data file.}
\label{fig:results}
\end{figure*}
\subsection{Snapshots of an electronic wave packet}
Figure \ref{fig:results} shows our experimental results. Figure \ref{fig:results}(a) shows the delay-dependent Ar$^{2+}$ yield for one oscillation of the valence shell wave packet. Measured data for several oscillations are shown in Supplementary Figure 1. We observe a strong modulation of the Ar$^{2+}$ yield of $\approx 45 \%$. Ionization is favoured when the $m=0$ state is populated with two electrons. In this case, ionization from the $|m|=1$ state is negligible \cite{Woerner2011}.
On the other hand, at the yield minima, ionization from the donut-shaped $|m|=1$ orbital becomes significant.
In Figure \ref{fig:results}(b) and Supplementary Movie 1, we present a time series of measured electron density plots for the wave packet in Ar$^+$. Each density plot is a normalized difference between the delay-dependent and delay-averaged momentum distributions. The series of snapshots shows a narrow spot in the center of the distributions for $\Delta t = 1/2\,T_\mathrm{SO}$, corresponding to a yield maximum. The central spot becomes weaker for larger delays, $\Delta t = 4/6\,T_\mathrm{SO}$. Eventually, the center spot disappears and a ring around the origin is established at $\Delta t = 5/6\,T_\mathrm{SO}$, which appears with maximum brightness at $\Delta t = T_\mathrm{SO}$, at the minimum of the Ar$^{2+}$ yield.
The experimental images agree qualitatively with the simple calculations at $\Delta t = 1/2\,T_\mathrm{SO}$ and $\Delta t = T_\mathrm{SO}$, respectively. At intermediate values $\Delta t = 4/6\,T_\mathrm{SO}$, and $\Delta t = 5/6\,T_\mathrm{SO}$, the images are essentially identical to the ones at $\Delta t = 1/2\,T_\mathrm{SO}$, and $\Delta t = T_\mathrm{SO}$, respectively, but exhibit a reduced contrast, see Supplementary Figure 2. For the calculated momentum distributions, we use spatial Ar$^+$ valence orbitals for $m=0$ and $|m|=1$, and calculate the transversal momentum space orbitals by Fourier transform, see Supplementary Method 2 for details. Plotted in Fig.~\ref{fig:results} are the normalized differences between the $m=0$ and $|m|=1$ vacancy states.
The expected circular symmetry of the momentum distributions is broken by a noticeable stretch along the $p_x$ axis. This distortion arises from the mid-IR deflection field, which is used to identify the probe pulse signal. The stretch in momentum space corresponds to a contraction in the real space images. Because the $x$ and $y$ directions are equivalent, the distortion does not cause loss of information. The mean momentum shift induced by the deflection field, $\Delta p_x = -1.5\,\mathrm{a.u.}$, has been subtracted from the presented images.
In Figure \ref{fig:results}(c), we show the autocorrelation functions of the spatial electron density, which are obtained from the momentum distributions by Fourier transform, assuming a flat phase. The spatial distributions obtained from the experimental data qualitatively agree with the theoretical results. This indicates that our method allows for the reconstruction of real-space features of the time-dependent valence electron density.
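The flat-phase Fourier inversion used to generate the real-space panels can be sketched in a few lines; the ring and blob densities below are toy stand-ins for the measured $|m|=1$- and $m=0$-type momentum maps, not the actual data.
\begin{verbatim}
import numpy as np

def real_space_map(momentum_density):
    # inverse FFT of a centered momentum-space density, assuming a flat phase
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(momentum_density))))

p = np.linspace(-2, 2, 256)
PX, PY = np.meshgrid(p, p)
ring = np.exp(-((np.hypot(PX, PY) - 1.0)/0.2)**2)   # toy |m|=1-like signal
blob = np.exp(-(np.hypot(PX, PY)/0.4)**2)           # toy m=0-like signal
print(real_space_map(ring).shape, real_space_map(blob).shape)
\end{verbatim}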
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig3.jpg}
\caption{Streaking ionization of an electron wave packet in Ar$^+$. (a) Momentum distributions in the $p_y$/$p_z$ plane for ionization from a coherent wave packet in Ar$^+$. The left (right) half corresponds to delay values with a maximum (minimum) in the Ar$^{2+}$ yield. Each spectrum is normalized to the same number of counts. The normalized difference between the left and right side is displayed in (b). The dotted box indicates the momentum range for which the normalized difference is plotted along $p_z$ in panel (c). The experimental data are compared to the calculated difference in the instantaneous ionization probabilities for the $m=|1|$ and $m=0$ vacancies. Errorbars are s.d. Source data are provided as a Source Data file.}
\label{fig:plong}
\end{figure}
\subsection{Longitudinal momentum distribution}
Next, we turn our attention to the photoelectron momentum component along the ionizing laser field, the z-direction. In Fig.~\ref{fig:plong}, we examine the distributions in the ($p_y/p_z$) plane.
The spectra recorded at maximum ($\Delta t = 1/2\,T_\mathrm{SO}$) and minimum ($\Delta t = T_\mathrm{SO}$) Ar$^{2+}$ yields are qualitatively indistinguishable, see Fig.~\ref{fig:plong}(a). The normalized difference of the two spectra reveals the distinctions between the momentum distributions arising from ionization of $m=0$ and $|m|=1$ states and is presented in Fig.~\ref{fig:plong}(b). A clear pattern is visible: The blue areas at larger perpendicular momenta ($|p_y| \gtrsim 0.5\,\mathrm{a.u.}$) indicate the contribution of the donut shaped $|m|=1$ orbital at the yield minima, as seen at $\Delta t = T_\mathrm{SO}$ in Fig.~\ref{fig:imaging}(b). The red area at small perpendicular momenta ($|p_y| < 0.5\,\mathrm{a.u.}$) indicates the dominance of ionization from the $m=0$ orbital at the yield maxima, as seen at $\Delta t = 1/2\,T_\mathrm{SO}$ in Fig.~\ref{fig:imaging}(b).
Strikingly, the normalized difference exhibits pronounced maxima for large longitudinal momenta ($|p_z| > 2\,\mathrm{a.u.}$). Similar observations have been made in pump-probe experiments on double photodetachment from negative ions \cite{Hultgren2013,Eklund2013}.
The maxima observed at large longitudinal momenta raise the question whether the final momentum distributions are, in fact, influenced by the momentum distribution in the bound state. Even though it is intriguing to speculate whether orbital imaging is not purely two-dimensional, as in very recent work on alignment-dependent molecular ionization \cite{Trabattoni2018}, we offer a different interpretation in Fig.~\ref{fig:plong}(c). The plot shows the normalized difference of the longitudinal momentum distributions recorded at the yield maxima and yield minima. The experimental results are selected for small perpendicular momenta (indicated by the dotted box in Fig.~\ref{fig:plong}(b)) and compared to the results of a computational model, similar to the one proposed in \cite{Woerner2011}, and detailed in the Methods. In the model, we calculate the instantaneous non-adiabatic tunnel ionization rates of the $|m|=1$ and $m=0$ vacancies.
The computational results agree very well with the experimental ones. They indicate that the maxima at large longitudinal momenta arise because the ratio of the ionization probabilities for the two vacancy states varies throughout a laser half cycle.
Specifically, large momenta are produced near the zero crossing of the laser electric field within the optical cycle. At these laser phases, the vector potential is close to its maximum and, correspondingly, the electric field is rather weak. Since the $m=0$ vacancy state is harder to ionize than the $|m|=1$ vacancy, its ionization probability drops faster with decreasing field strength. Hence, ionization near the zero crossing has an increased contribution from the $|m|=1$ vacancy. In the fashion of a streak camera,
the laser vector potential maps the electron emission times to final momenta, leading to the observed maxima in the normalized difference at large longitudinal momenta.
\section*{Discussion}
So far, we have shown that our pump-probe scheme allows us to identify double ionization events where the first and second ionization occurs in the pump and probe pulse, respectively. For these events we can separate the first from the second photoelectron, exploiting the deflection induced by the mid-IR streaking field \cite{Kubel2017}. Recording the transverse momentum distribution of the second photoelectron enables us to image the electron dynamics unfolding in the cation. We have also shown that the longitudinal momentum distribution of the second photoelectron carries information on the ionization dynamics of the cation in a non-stationary state. In the following, we address the question how quantitative information can be extracted from the measured orbital images.
\begin{figure*}
\includegraphics[width=0.9\textwidth]{fig4.jpg}
\caption{Imaging time-dependent valence electron densities. (a) Normalized differences between the momentum distributions recorded for ionization of Ar$^+$ at delays corresponding to yield maxima and yield minima. The data in each plane is integrated over the third dimension. Low-pass frequency filtering has been applied to the experimental data. (b) Results of an analytical imaging model. The color bar applies to both, experimental and theoretical results. (c) Modulus square of the Ar$^+$ 3p orbital wave functions used in the simulation. Blue and red colors are chosen for presentational purposes. (d) Cut through the center of the measured and calculated 3D normalized differences along $p_y$. The experimental signal is multiplied by a factor of 2 to facilitate direct comparison of the measured and calculated widths of the signals. Source data are provided as a Source Data file.}
\label{fig:imaging}
\end{figure*}
Figure \ref{fig:imaging}(a) and Supplementary Movie 2 show the normalized differences of the projected 2D momentum distributions measured at maximum and minimum Ar$^{2+}$ yield. Figure \ref{fig:imaging} (b) presents calculated distributions, based on a simple imaging model.
The model is described in the Methods. It uses the orbitals depicted in Fig.~\ref{fig:imaging}(c), i.e., the Ar$^+$ 3p orbitals with $|m|=1$, and $m=0$. These electronic density functions are multiplied with a transversal filter function, which describes the tunneling probability as a function of the perpendicular momentum \cite{Murray2011}. The acceleration in the combined laser field is simulated, and the sub-cycle streaking effect, discussed above, is taken into account.
The simulated momentum distributions exhibit reasonable agreement with the experimental results. In particular, as shown in Fig.~\ref{fig:imaging}(d), almost quantitative agreement is obtained for the width of the distribution along the laser propagation, which remains unaffected by the laser field after tunneling. This indicates that the model captures the essential imaging mechanism. Thus, valence electron densities in real space could be reconstructed from time-resolved orbital imaging experiments, using appropriate computational techniques. The spatial resolution of the reconstructed real-space distribution is given by the maximum perpendicular photoelectron momentum, which is limited by the transversal filter function and the signal to noise ratio. Time-resolved orbital imaging will greatly benefit from the development of intense few-cycle laser sources approaching the MHz range \cite{Krebs2013}.
Our method offers exciting prospects for time-resolved imaging of molecular orbitals. For example, strong-field ionization can induce purely electronic dynamics, such as charge migration \cite{Kraus2015Science}, or correlated electron-nuclear dynamics such as dissociation or isomerization. Our pump-probe scheme will enable imaging of the electronic rearrangements that take place during such processes. At the same time, nuclear dynamics and configurations can be tracked with coulomb explosion imaging \cite{Amini2018a, Burt2018} or laser-induced electron diffraction \cite{Meckel2008, Blaga2012, Wolter2016, Amini2018b}.
\section*{Methods}
\subsection{Experimental setup}
The experiment relies on the set-up developed for sub-cycle tracing of ionization enabled by infrared (STIER), which has been described in Ref.~\cite{Kubel2017}. Briefly, the output of a 10\,kHz, 2\,mJ titanium:sapphire laser (Coherent Elite) is split in two parts to obtain 5\,fs, few-cycle pulses, centered at 730\,nm, from a gas-filled hollow core fiber, and 40\,fs, phase-stable mid-IR idler pulses at 2330\,nm from an optical parametric amplifier. We extend STIER to pump-probe experiments by further splitting the few-cycle pulses and recombining them in a Mach-Zehnder interferometer in order to obtain pump and probe pulses with adjustable time delay. The few-cycle pulses pass through a broadband half wave plate and are recombined with the mid-IR pulses. In order to avoid overlap between the mid-IR pulse and the visible pump pulse, the pump-probe delay was offset by $t_0 = 6670\,\mathrm{fs}$, much larger than the duration of the mid-IR pulses. The offset is not included in the delay values given in the main text. Choosing this large offset is legitimate, as it has been demonstrated that no notable dephasing of the spin orbit wave packet occurs over the course of several nanoseconds \cite{Fleischer2011}. We have tested in separate experiments that no significant overlap between visible and mid-IR pulse occurs for pump probe delays larger than 50\,fs.
The laser pulses (pulse energies of $\unit[2.5]{\mu J}$ for each of the visible pulses, and $\unit[18]{\mu J}$ for the mid-IR pulse) are focused ($f=\unit[75]{mm}$) into a cold ($T \approx 10\,\mathrm{K}$) argon gas jet in the center of a COLTRIMS \cite{Ullrich2003}. We estimate the focal spot sizes (1/e$^2$ width) as $\unit[7 \pm 2]{\mu m}$ for the visible pulses and $\unit[30 \pm 10]{\mu m}$ for the mid-IR pulse. Photoelectrons and ions arising from the interaction are detected in coincidence, and their three-dimensional momenta are measured using time and position sensitive detectors. The polarization of the mid-IR deflection field is along the $x$ axis, which is defined by the spectrometer axis of the COLTRIMS. The laser propagates along the $y$ axis and the ionizing few-cycle pulses are polarized along $z$. The electron count rate was kept below $0.2$ electron per laser pulse to limit the number of false coincidences. The laser intensity of the visible pulses of $(6.0 \pm 1.0) \times 10^{14} \,\mathrm{W/cm^2}$ was estimated from the carrier-envelope phase-dependent momentum spectra along the laser polarization \cite{Kubel2018}. The intensity of the mid-IR pulses was estimated from the deflection amplitude $\Delta p_x = -1.5\,\mathrm{a.u.}$ as $3 \times 10^{13}\,\mathrm{W/cm^2}$, low enough to not cause notable ionization of Ar or Ar$^+$.
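The quoted mid-IR intensity is consistent with a simple estimate in which the deflection amplitude is identified with the peak vector potential, $|\Delta p_x| \approx E_0/\omega_\mathrm{IR}$ (a standard streaking-type assumption, not spelled out in the text); in atomic units:
\begin{verbatim}
omega_IR = 45.5634 / 2330        # photon energy of 2330 nm light (a.u.)
E0       = omega_IR * 1.5        # peak field from |dp_x| = 1.5 a.u.
print(E0**2 * 3.51e16, "W/cm^2") # ~3e13 W/cm^2, matching the value above
\end{verbatim}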
\subsection{Data analysis}
To obtain images of the transient electron density in the Ar$^+$ valence shell, it is crucial to identify the electrons produced in the second ionization step, $\mathrm{Ar}^+ \rightarrow \mathrm{Ar}^{2+}\,+\,\mathrm{e}^-$ by the probe pulse. This is accomplished as follows. First, recorded electron spectra with and without the deflection field present are compared, see Fig.~\ref{fig:experiment}(b) and (d). This shows that for momenta $p_x<-\unit[0.5]{a.u.}$, the (deflected) probe pulse signal clearly dominates. We estimate the contribution of the pump pulse to the signal in the red oval in Fig.~\ref{fig:experiment}(d) to be less than 10\% at $p_x=\unit[-0.5]{a.u.}$, and approximately 1\% at $p_x=\unit[-2]{a.u.}$. Next, we select events for which an Ar$^{2+}$ ion has been detected in coincidence with one electron. The momentum component of the other electron along the deflection field is calculated using momentum conservation. The events in which the second ionization step occurs in the probe pulse are selected with the following conditions,
\begin{align}
p_x^\mathrm{meas} &< -0.3\,\mathrm{a.u.}, \\
-1\,\mathrm{a.u.} < p_x^\mathrm{calc} &< 0.6\,\mathrm{a.u.},
\end{align}
where $p_x^\mathrm{meas}$ is the measured electron momentum component along the IR polarization, and $p_x^\mathrm{calc}$ is the momentum component calculated from momentum conservation. The rationale for the above conditions is outlined in Supplementary Method 1 and visualized in Supplementary Figure 3.
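In practice the above conditions amount to simple boolean cuts on the coincidence data; a minimal sketch with made-up Gaussian toy data (the widths and centres are illustrative only):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
px_meas = rng.normal(-1.0, 0.8, 100000)   # measured momentum along x (a.u.)
px_calc = rng.normal( 0.0, 0.8, 100000)   # partner momentum from conservation

keep = (px_meas < -0.3) & (px_calc > -1.0) & (px_calc < 0.6)
print("fraction of events kept:", keep.mean())
\end{verbatim}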
Having identified the electrons produced in the second ionization step, the electron density plots are obtained by calculating normalized differences of signal, $S$, and reference, $R$, photoelectron momentum distribution:
\begin{equation}
D = (S - R) / (S + R).
\end{equation}
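A robust implementation of this normalized difference simply has to guard against empty bins; a minimal sketch:
\begin{verbatim}
import numpy as np

def normalized_difference(S, R, eps=1e-12):
    # D = (S - R)/(S + R), with empty bins set to zero
    S, R = np.asarray(S, float), np.asarray(R, float)
    denom = S + R
    D = np.zeros_like(denom)
    np.divide(S - R, denom, out=D, where=denom > eps)
    return D

rng = np.random.default_rng(2)
S = rng.poisson(50, (64, 64)).astype(float)   # signal histogram (toy)
R = rng.poisson(50, (64, 64)).astype(float)   # reference histogram (toy)
print(normalized_difference(S, R).max())       # bounded by +1
\end{verbatim}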
\subsection{Orbital effect in the longitudinal momentum spectra}
To calculate the ionization rates for the two orbitals we build on the model described in Ref.~\cite{Woerner2011}. However, we ignore some ionization pathways with low transition probability. For the simulations, a 5-fs (full width at half maximum of the Gaussian intensity envelope) laser pulse with frequency $\omega=0.06\,\mathrm{a.u.}$ and intensity $I = 1.0 \times 10^{15} \,\mathrm{W/cm^2}$ is used. The ionization probability for either orbital is calculated at every point in time using the rates for non-adiabatic tunneling proposed in Ref.~\cite{Yudin2001}. Longitudinal momentum spectra are obtained from the vector potential of the laser pulse, appropriately weighting each contribution with the calculated rate. The calculations are repeated for 16 different values of the carrier-envelope phase (CEP), and the results are averaged over the CEP.
The normalized difference of the spectra calculated for $|m|=1$ and $m=0$ vacancies is calculated and plotted in Fig.~\ref{fig:plong}(c).
Using the ADK formula \cite{Ammosov1986} instead of the non-adiabatic tunneling formula \cite{Yudin2001} leads to very similar results for the normalized difference of the longitudinal spectra.
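The sub-cycle mapping described above can be illustrated with a stripped-down model; here a quasi-static exponential rate $\propto\exp[-2(2I_p)^{3/2}/(3|E|)]$ is used as a stand-in for the non-adiabatic rates of Ref.~\cite{Yudin2001}, and the numerical values of $I_p$ and $E_0$ are assumptions chosen to be consistent with the parameters quoted above (all quantities in atomic units).
\begin{verbatim}
import numpy as np

Ip, omega, E0 = 1.015, 0.06, 0.17          # ~27.6 eV, carrier, ~1e15 W/cm^2
fwhm = 5.0 / 0.02419                        # 5 fs in atomic units of time

t   = np.linspace(-400, 400, 20001)
env = np.exp(-2*np.log(2)*(t/fwhm)**2)
E   = E0*env*np.cos(omega*t)                        # electric field
A   = -np.cumsum(E)*(t[1]-t[0])                     # vector potential
rate = np.exp(-2*(2*Ip)**1.5/(3*np.abs(E) + 1e-12)) # quasi-static rate

# an electron born at time t acquires the drift momentum p_z = -A(t)
hist, edges = np.histogram(-A, bins=200, weights=rate)
print("most populated p_z bin starts at", edges[np.argmax(hist)], "a.u.")
\end{verbatim}
Repeating this for two different rates and forming the normalized difference, followed by a CEP average as in the text, would be the next step.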
\subsection{Imaging model}
Here, we give a short description of the procedure used to generate the theoretical images shown in Fig.~\ref{fig:imaging}(b). A detailed description can be found in the SI. Real-space wave functions for the Ar$^+$ valence orbital are taken from the computational chemistry software GAMESS. Momentum-real space wave functions are calculated by partial Fourier transform, as described in Ref.~\cite{Murray2011}, and plotted in Fig.~\ref{fig:imaging}(c).
The wavefunctions squared are multiplied with a ``tunnel filter'' \cite{Murray2011} to obtain the transversal momentum distribution at the tunnel exit. The tunnel filter suppresses large momenta perpendicular to the direction of tunneling. In order to obtain the momentum distributions after propagation in the laser field, the momentum distributions at the tunnel exit are convoluted with Gaussian functions representing the ionizing visible and mid-IR deflection fields. The orbital effect in the longitudinal direction is taken into account.
The momentum distributions that correspond to $m=0$ and $|m|=1$ vacancies are given by appropriate linear combinations of the spectra calculated for the $m=0$ and $|m|=1$ wave functions. To calculate the normalized differences in the three momentum planes, the spectra are integrated over the third dimension.
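For orientation, the transversal filter mentioned above can be approximated by the standard quasi-static form $W(p_\perp)\propto\exp(-p_\perp^2\sqrt{2I_p}/E_0)$ (an assumed simplification of the filter of Ref.~\cite{Murray2011}; $I_p$ and $E_0$ are the same assumed values as in the rate sketch above):
\begin{verbatim}
import numpy as np

Ip, E0 = 1.015, 0.17                          # a.u.; assumed values
p_perp = np.linspace(-2, 2, 401)
W = np.exp(-p_perp**2*np.sqrt(2*Ip)/E0)       # transverse tunnel filter
print("1/e half-width:", np.sqrt(E0/np.sqrt(2*Ip)), "a.u.")
\end{verbatim}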
\section*{Data Availability}
The data for Figures 1b-d, 2a-c, 3a-c and 4a-d; and Supplementary Figures 1, 2a-c, and 3 are provided as a Source Data file. The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{References}
Observations of distant Supernovae (SNe Ia) (Perlmutter et al. 1997, 1998, 1999; Riess et al. 1998, 2000; Garnavich et al. 1998a,b; Schmidt et al. 1998; Tonry et al. 2003; Clocchiatti et al. 2006), fluctuation of cosmic microwave background radiation (CMBR) (de Bernardis et al. 1998; Hanany et al. 2000), large scale structure (LSS) (Spergel et al. 2003; Tegmark et al. 2004), sloan digital sky survey (SDSS)
(Seljak et al. 2005; Adelman-McCarthy et al. 2006), Wilkinson microwave anisotropy probe (WMAP) (Bennett et al. 2003) and Chandra x-ray observatory
(Allen et al. 2004) by means of ground and altitudinal experiments have established that our Universe is undergoing a late-time
accelerating expansion, and we live in a privileged spatially flat Universe composed of approximately $4\%$ baryonic
matter, $22\%$ dark matter and $74\%$ dark energy. The simplest candidate for dark energy is the cosmological constant.
Recently, a great number of models have been proposed to explain the current accelerating Universe, such as scalar field models, exotic equations of state (EoS), modified gravity, and inhomogeneous cosmological models. There are
several dark energy models which can be distinguished by, for instance, their EoS ($\omega = \frac{p_{de}}{\rho_{de}}$)
during the evolution of the universe. \\
The introduction of viscosity into cosmology has been investigated from different viewpoints (Gr$\o$n 1990; Padmanabhan $\&$ Chitre 1987; Barrow 1986; Zimdahl 1996; Farzin et al. 2012).
Misner (1966, 1967) noted that the ``measurement of the isotropy of the cosmic background radiation represents the
most accurate observational datum in cosmology''. An explanation of this isotropy was provided by showing that in a large class of homogeneous but anisotropic universes, the anisotropy dies away rapidly. It was found that the most
important mechanism in reducing the anisotropy is neutrino viscosity at temperatures just above $10^{10} K$ (when the
Universe was about 1 s old: cf. Zel'dovich and Novikov (Zel'dovich $\&$ Novikov 1971)). The astrophysical observations also indicate
some evidence that the cosmic medium is not a perfect fluid (Jaffe et al. 2005), and that the viscosity effect could play a role in the evolution of the universe (Brevik $\&$ Gorbunova, 2005; Brevik et al. 2005; Cataldo et al. 2005). On the other hand, in the standard cosmological model, if
the EoS parameter $\omega$ is less than $-1$, so-called phantom, the universe shows the future finite time singularity
called the Big Rip (Caldwell et al. 2003; Nojiri et al. 2005) or Cosmic Doomsday. Several mechanisms are proposed to prevent the future big rip,
like by considering quantum effects terms in the action (Nojiri $\&$ Odintsov 2004; Elizalde et al. 2004), or by including viscosity effects for the Universe
evolution (Meng et al. 2007). A well-known result of the FRW cosmological solutions, corresponding to universes filled with perfect fluid and bulk viscous stresses, is the possibility of violating the dominant energy condition (Barrow 1987, 1988; Folomeev $\&$ Gurovich 2008; Ren $\&$ Meng 2006; Brevikc $\&$ Gorbunovac 2005; Nojiri $\&$ Odintsov 2005).
Setare (Setare 2007a,b,c) and Setare and Saridakis (Setare $\&$ Saridakis 2000) have studied the interacting models of dark energy in
different contexts. Interacting new agegraphic viscous dark energy with varying $G$ has been studied by
Sheykhi and Setare (Sheykhi $\&$ Setare 2010). \\
Recently, Amirhashchi et al. (2011a,b); Pradhan et al. (2011); Saha et al. (2012) have studied
the two-fluid scenario for dark energy in the FRW universe in different contexts. Very recently, Singh and Chaubey (2012) have studied interacting dark energy in Bianchi type I space-time. Some experimental data implied that our
universe is not a perfectly flat universe and recent papers (Spergel et al. 2003; Bennett et al. 2003; Ichikawa et al. 2006) favoured a universe with spatial
curvature. Setare et al. (2009) have studied the tachyon cosmology in non-interacting and interacting cases in
non-flat FRW universe. Due to these considerations and motivations, in this Letter, we study the evolution of
the dark energy parameter within the framework of an open FRW cosmological model filled with two fluids (i.e., barotropic
fluid and bulk viscous stresses). In doing so we consider both interacting and non-interacting cases.
\section{THE METRIC AND FIELD EQUATIONS}
We consider the spherically symmetric Friedmann-Robertson-Walker (FRW) metric as
\begin{equation}
\label{eq1}
ds^{2} = -dt^{2} + a^{2}(t)\left[\frac{dr^{2}}{1 - kr^{2}} + r^{2}(d\theta^{2} +
\sin^{2}\theta d\phi^{2})\right],
\end{equation}
where $a(t)$ is the scale factor and the curvature constant $k$ is $-1, 0, +1$ respectively
for open, flat and closed models of the universe.\\\\
The Einstein's field equations (with $8\pi G = 1$ and $c = 1$) read as
\begin{equation}
\label{eq2}
R^{j}_{i} - \frac{1}{2}R\delta^{j}_{i} = - T^{j}_{i},
\end{equation}
where the symbols have their usual meaning and $T^{j}_{i}$ is the two-fluid energy-momentum tensor due to the bulk viscous dark and barotropic fluids, written in the form
\begin{equation}
\label{eq3}
T^{j}_{i}=(\rho+\bar{p})u^{j}_{i}+\bar{p}g^{j}_{i},
\end{equation}
where
\begin{equation}
\label{eq4} \bar{p}=p-\xi u^{i}_{;i}
\end{equation}
and
\begin{equation}
\label{eq5} u^{i}u_{i} = -1,
\end{equation}
where $\rho$ is the energy density; $p$, the pressure; $\xi$, the bulk-viscous coefficient; and $u^{i}$,
the four-velocity vector of the distribution. Hereafter, a semicolon denotes covariant differentiation.\\\\
The expansion factor $\theta$ is defined by $\theta = u^{i}_{;i} = 3\frac{\dot{a}}{a}$. Hence Eq. (\ref{eq4})
leads to
\begin{equation}
\label{eq6} \bar{p} = p - 3\xi H,
\end{equation}
where $H$ is Hubble's constant defined by
\begin{equation}
\label{eq7} H = \frac{\dot{a}}{a}.
\end{equation}
Now with the aid of Equations (\ref{eq3})-(\ref{eq5}) and metric (\ref{eq1}), the surviving field equations
(\ref{eq2}) take the explicit forms
\begin{equation}
\label{eq8} \rho = 3\left(\frac{\dot{a}^{2}}{a^{2}}+\frac{k}{a^{2}}\right),
\end{equation}
and
\begin{equation}
\label{eq9} \bar{p} = -\left(\frac{\dot{a}^{2}}{a^{2}} + 2\frac{\ddot{a}}{a} + \frac{k}{a^{2}}\right).
\end{equation}
Also in space-time (\ref{eq1}) the Bianchi identity for the bulk-viscous fluid distribution $G^{;j}_{ij} = 0$
leads to $T^{;j}_{ij} = 0$ which yields
\begin{equation}
\label{eq10} \rho_{;i}\,u^{i}+(\rho+\bar{p})\,u^{i}_{;i} = 0,
\end{equation}
which leads to
\begin{equation}
\label{eq11}\dot{\rho} + 3H(\rho + \bar{p})=0.
\end{equation}
Assuming the scale factor in the exponential form $a = Ae^{Ht}$ (with $A$ a constant) and using Eq. (\ref{eq7}) in Eqs. (\ref{eq8}) and (\ref{eq9}), we get
\begin{equation}
\label{eq12} \rho=\left(\frac{3k}{A^{2}}e^{-2Ht}+3H^{2}\right),
\end{equation}
and
\begin{equation}
\label{eq13} \bar{p}=-\left(\frac{k}{A^{2}}e^{-2Ht}+3H^{2}\right),
\end{equation}
where $\bar{p} = p_{m} + \bar{p}_{D}$ and $\rho = \rho_{m} + \rho_{D}$. Here $p_{m}$ and $\rho_{m}$ are
pressure and energy density of barotropic fluid and $p_{D}$ and $\rho_{D}$ are pressure and energy
density of dark fluid respectively.\\
The equation of state (EoS) for the barotropic fluid $\omega_{m}$ and dark field $\omega_{D}$ are given by
\begin{equation}
\label{eq14}\omega_{m} = \frac{p_{m}}{\rho_{m}},
\end{equation}
and
\begin{equation}
\label{eq15}\omega_{D} = \frac{\bar{p}_{D}}{\rho_{D}},
\end{equation}
respectively.\\
From Eqs. (\ref{eq11})-(\ref{eq13}) we obtain
\begin{equation}
\label{eq16} \frac{\dot{\rho}}{3H} = \frac{2k}{a^{2}}e^{-2Ht}.
\end{equation}
Now we assume
\begin{equation}
\label{eq17} \rho = \alpha\theta^{2}~\mbox{or}~\rho=9\alpha H^{2},
\end{equation}
where $\alpha$ is an arbitrary constant. Eq. (\ref{eq17}) ensures that our universe approaches homogeneity (Collins 1977). This condition has also been used by Banerjee et al. (1986) for deriving a viscous-fluid cosmological model with Bianchi type II space-time.\\
Putting Eq. (\ref{eq17}) in Eq. (\ref{eq16}) and integrating, we get
\begin{equation}
\label{eq18} e^{-2Ht} = -\frac{3\alpha A^{2}}{2kt^{2}},
\end{equation}
which yields
\begin{equation}
\label{eq19} H = \frac{1}{2t}\ln\left(-\frac{2kt^{2}}{3\alpha A^{2}}\right),
\end{equation}
where $A$ is an arbitrary constant. From Eq. (\ref{eq19}), we observe that the condition given by (\ref{eq17}) restricts our study to the case $k = -1$ (i.e. only the open universe). In the following sections we deal with two cases: (i) the non-interacting two-fluid model and (ii) the interacting two-fluid model.
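Before proceeding, it is useful to note that Eq. (\ref{eq19}) with $k=-1$ gives a positive Hubble rate once $2t^{2} > 3\alpha A^{2}$; a minimal numerical sketch (parameter values as used in Fig. $3$ below):
\begin{verbatim}
import numpy as np

alpha, A = 0.01, 1.0
t = np.linspace(0.5, 50.0, 500)
H = np.log(2*t**2/(3*alpha*A**2))/(2*t)      # Eq. (19) with k = -1
print("H > 0 for all sampled t:", bool(np.all(H > 0)))
\end{verbatim}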
\section{NON-INTERACTING TWO-FLUID MODEL}
In this section we assume that the two fluids do not interact with each other. Therefore, the general form of the conservation equation (\ref{eq11}) leads us to write the conservation equations for the dark and barotropic fluids separately as,
\begin{equation}
\label{eq20}\dot{\rho}_{m} + 3\frac{\dot{a}}{a}\left(\rho_{m} + p_{m}\right) = 0,
\end{equation}
and
\begin{equation}
\label{eq21}\dot{\rho}_{D} + 3\frac{\dot{a}}{a}\left(\rho_{D} + \bar{p}_{D}\right) = 0.
\end{equation}
Integrating Eq. (\ref{eq20}) and using Eq. (\ref{eq7}) leads to
\begin{equation}
\label{eq22}\rho_{m} = \rho_{0}a^{-3(1 + \omega_{m})}~\mbox{or}~\rho_{m}=\rho_{0}Be^{-3H(1 + \omega_{m})t},
\end{equation}
where $\rho_{0}$ is an integration constant and $B=A^{-3(1 + \omega_{m})}$. By using Eq. (\ref{eq22}) in Eqs. (\ref{eq12}) and (\ref{eq13}), we first obtain $\rho_{D}$ and $\bar{p}_{D}$ in terms of $H$ as
\begin{equation}
\label{eq23}\rho_{D} =\left(\frac{3k}{A^{2}}e^{-2Ht}+3H^{2}\right)-\rho_{0}Be^{-3H(1 + \omega_{m})t},
\end{equation}
and
\begin{equation}
\label{eq24} \bar{p}_{D} =\left(\frac{k}{A^{2}}e^{-2Ht}+3H^{2}\right)-\omega_{m}\rho_{0}B
e^{-3H(1 + \omega_{m})t}.
\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm,height=8cm,angle=0]{ms1197fig1.eps}
\caption{The plot of $\rho_{D}$ vs $t$ for $\alpha = 0.1, A = 100, \omega_{m} = 0.5 $ in both non-interacting
and interacting two-fluid model}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm,height=8cm,angle=0]{ms1197fig2.eps}
\caption{The plot of EoS parameter $\omega^{eff}_{D}$ vs $t$ for $\rho_{0} = 10, \omega_{m} = 0.5,
\alpha = 0.01, B=1$ in non-interacting two-fluid model}
\end{figure}
respectively. By using Eqs. (\ref{eq23}) and (\ref{eq24}) in Eq. (\ref{eq15}), we can find the EoS of
dark energy in terms of time as
\begin{equation}
\label{eq25}\omega_{D} = -\frac{\left(\frac{k}{A^{2}}e^{-2Ht}+3H^{2}\right)+
\omega_{m}\rho_{0}Be^{-3H(1 + \omega_{m})t}}{\left(\frac{3k}{A^{2}}e^{-2Ht} + 3H^{2}\right) -
\rho_{0}Be^{-3H(1 + \omega_{m})t}} .
\end{equation}
Therefore the effective EoS parameter for viscous DE can be written as
\begin{equation}
\label{eq26}\omega^{eff}_{D} =\omega_{D}-\frac{3\xi H}{\rho_{D}}= -\frac{\left(\frac{k}{A^{2}}e^{-2Ht}+3H^{2}\right)+3\xi H +
\omega_{m}\rho_{0}Be^{-3H(1 + \omega_{m})t}}{\left(\frac{3k}{A^{2}}e^{-2Ht} + 3H^{2}\right) -
\rho_{0}Be^{-3H(1 + \omega_{m})t}} .
\end{equation}
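Equation (\ref{eq26}) can be evaluated directly once $H(t)$ from Eq. (\ref{eq19}) is inserted; the sketch below uses the parameter values of Fig. $2$ ($A=1$ is an additional assumption, since it is not quoted in that caption) and prints the late-time value of $\omega^{eff}_{D}$ for two illustrative choices of $\xi$.
\begin{verbatim}
import numpy as np

alpha, A, B, rho0, w_m, k = 0.01, 1.0, 1.0, 10.0, 0.5, -1.0
t  = np.linspace(1.0, 20.0, 400)
H  = np.log(-2*k*t**2/(3*alpha*A**2))/(2*t)      # Eq. (19)
e2 = np.exp(-2*H*t)
rho_m = rho0*B*np.exp(-3*H*(1 + w_m)*t)          # Eq. (22)

for xi in (0.0, 0.5):
    num = (k/A**2)*e2 + 3*H**2 + 3*xi*H + w_m*rho_m
    den = (3*k/A**2)*e2 + 3*H**2 - rho_m
    print("xi =", xi, " w_eff(t_max) =", float(-num[-1]/den[-1]))
\end{verbatim}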
The expressions for the matter-energy density $\Omega_{m}$ and dark-energy density $\Omega_{D}$ are given by
\begin{equation}
\label{eq27}\Omega_{m} = \frac{\rho_{m}}{3H^{2}} = \frac{4t^{2}\rho_{0}Be^{-\frac{3}{2}\ln(\frac{2t^{2}}
{3\alpha A^{2}})(1 + \omega_{m})}}{3\ln^{2}(\frac{2t^{2}}{3\alpha A^{2}})},
\end{equation}
and
\begin{equation}
\label{eq28}\Omega_{D} = \frac{\rho_{D}}{3H^{2}} = -\frac{6\alpha}{\ln^{2}(\frac{2t^{2}}{3\alpha A^{2}})}
+ 1-\frac{4t^{2}\rho_{0}Be^{-\frac{3}{2}\ln(\frac{2t^{2}}{3\alpha A^{2}})(1 + \omega_{m})}}
{3\ln^{2}(\frac{2t^{2}}{3\alpha A^{2}})},
\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm,height=8cm,angle=0]{ms1197fig3.eps}
\caption{The plot of density parameter ($\Omega$) vs $t$ for $A=1, \alpha = 0.01$ in non-interacting
two-fluid model}
\end{figure}
respectively. Adding Eqs. (\ref{eq27}) and (\ref{eq28}), we obtain
\begin{equation}
\label{eq29}\Omega = \Omega_{m} + \Omega_{D} = -\frac{6\alpha}{\ln^{2}(\frac{2t^{2}}{3\alpha A^{2}})} + 1.
\end{equation}
From the right-hand side of Eq. (\ref{eq29}), it is clear that for the open universe $\Omega < 1$, but at late times $\Omega \to 1$, i.e. the flat-universe scenario is approached. Since our model predicts a flat universe for large times and the present-day universe is very close to flat, the derived model is compatible with the observational results. \\\\
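This late-time behaviour follows immediately from Eq. (\ref{eq29}); a two-line numerical check (with the parameter values of Fig. $3$):
\begin{verbatim}
import numpy as np

alpha, A = 0.01, 1.0
t = np.array([1.0, 10.0, 100.0, 1000.0])
print(1.0 - 6*alpha/np.log(2*t**2/(3*alpha*A**2))**2)   # tends to 1 from below
\end{verbatim}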
Fig. $1$ depicts the energy density of DE ($\rho_{D}$) versus $t$. From this figure, we observe that $\rho_{D}$, in both the non-interacting and interacting cases, is a decreasing function of time; it approaches a small positive value at late times and never goes to infinity. Thus, in both cases the universe is free from the big rip. \\\\
The behavior of the EoS parameter for DE as a function of cosmic time $t$ is shown in Fig. $2$. It is observed that for the open universe, $\omega^{eff}_{D}$ is a decreasing function of time; the rapidity of its decrease at the early stage grows with the value of the bulk viscous coefficient. The EoS parameter of the DE begins in the non-dark ($\omega_{D} > -\frac{1}{3}$) region at the early stage, crosses the cosmological constant ($\omega_{D} = -1$) value and then passes over into the phantom ($\omega_{D} < -1$) region. Such a crossing of the Phantom Divide Line (PDL) into the phantom region implies a violation of the null energy condition (NEC) (Rodrigues 2008; Kumar $\&$ Yadav 2011; Pradhan $\&$ Amirhashchi 2011). In theory, despite the observational constraints, extensions of general relativity are the prime candidate class of theories consistent with PDL crossing (Nesseris $\&$ Perivolaropoulos 2007). On the other hand, while the current cosmological data from SN Ia (Supernova Legacy Survey, Gold Sample of Hubble Space Telescope) (Riess et al. 2004; Astier et al. 2006), CMB (WMAP, BOOMERANG) (Komatsu et al. 2009; MacTavish et al. 2006) and large scale structure (SDSS) (Eisenstein et al. 2005) data rule out $\omega_{D} \ll -1$, they mildly favour a dynamically evolving DE crossing the PDL (see Rodrigues 2008; Kumar $\&$ Yadav 2011; Pradhan $\&$ Amirhashchi 2011; Nesseris $\&$ Perivolaropoulos 2007; Zhao et al. 2007; Coperland et al. 2006 for the theoretical and observational status of PDL crossing). Thus our DE model is in good agreement with well-established theoretical results as well as with recent observations. From Fig. $2$, it is observed that in the absence of viscosity (i.e. for $\xi = 0$), the universe does not cross the PDL but approaches the cosmological constant ($\omega_{D} = -1$) scenario. This clearly indicates the impact of viscosity on the evolution of the universe. \\\\
The variation of the density parameter ($\Omega$) with cosmic time $t$ for the open universe is shown in Fig. $3$. From the figure, it can be seen that in an open universe $\Omega$ is an increasing function of time and at late times it approaches the flat-universe scenario.
\section{INTERACTING TWO-FLUID MODEL}
In this section we consider the interaction between dark viscous and barotropic fluids. For this purpose we can write
the continuity equations for barotropic and dark viscous fluids as
\begin{equation}
\label{eq30}\dot{\rho}_{m} + 3\frac{\dot{a}}{a}(\rho_{m} + p_{m}) = Q,
\end{equation}
and
\begin{equation}
\label{eq31}\dot{\rho}_{D} + 3\frac{\dot{a}}{a}(\rho_{D} + \bar{p}_{D}) = -Q,
\end{equation}
where the quantity $Q$ expresses the interaction between the dark components. Since we are interested in an energy transfer from the dark energy to the dark matter, we consider $Q > 0$, which ensures that the second law of thermodynamics is fulfilled (Pavon $\&$ Wang 2009). Here we emphasize that the continuity Eqs. (\ref{eq11}) and (\ref{eq30}) imply that the interaction term ($Q$) should be proportional to a quantity with units of inverse time, i.e. $Q \propto \frac{1}{t}$. Therefore, a first and natural candidate is the Hubble factor $H$ multiplied by the energy density. Following Amendola et al. (2007) and Gou et al. (2007), we consider
\begin{equation}
\label{eq32}Q = 3H \sigma \rho_{m},
\end{equation}
where $\sigma$ is a coupling constant. Using Eq. (\ref{eq32}) in Eq. (\ref{eq30}) and integrating,
we obtain
\begin{equation}
\label{eq33}\rho_{m} = \rho_{0}a^{-3(1 + \omega_{m} - \sigma)} ~ \mbox{or} ~ \rho_{m} =
\rho_{0}Be^{-3H(1 + \omega_{m} - \sigma)t}.
\end{equation}
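For clarity, Eq. (\ref{eq33}) follows by inserting $p_{m}=\omega_{m}\rho_{m}$ from Eq. (\ref{eq14}) and $Q=3H\sigma\rho_{m}$ into Eq. (\ref{eq30}), which gives
\begin{equation*}
\frac{d\rho_{m}}{\rho_{m}} = -3\left(1+\omega_{m}-\sigma\right)\frac{da}{a}\ ,
\end{equation*}
so that the coupling $\sigma$ simply shifts the dilution exponent of the barotropic fluid.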
By using Eq. (\ref{eq33}) in Eqs. (\ref{eq12}) and (\ref{eq13}), we again obtain $\rho_{D}$ and $\bar{p}_{D}$ in terms of $H$ as
\begin{equation}
\label{eq34}\rho_{D} = \left(\frac{3k}{A^{2}}e^{-2Ht} + 3H^{2}\right) - \rho_{0}Be^{-3H(1 + \omega_{m} -
\sigma)t},
\end{equation}
and
\begin{equation}
\label{eq35} \bar{p}_{D} = \left(\frac{k}{A^{2}}e^{-2Ht} + 3H^{2}\right)- (\omega_{m} -
\sigma)\rho_{0}Be^{-3H(1 + \omega_{m} - \sigma)t},
\end{equation}
respectively. By using Eqs. (\ref{eq34}) and (\ref{eq35}) in Eq. (\ref{eq15}), we can find the EoS of
dark energy in terms of time as
\begin{equation}
\label{eq36}\omega_{D} = -\frac{\left(\frac{k}{A^{2}}e^{-2Ht} + 3H^{2}\right)+
(\omega_{m} - \sigma)\rho_{0}Be^{-3H(1 + \omega_{m} - \sigma)t}}{\left(\frac{3k}{A^{2}}e^{-2Ht} +
3H^{2}\right) - \rho_{0}Be^{-3H(1 + \omega_{m} - \sigma)t}}.
\end{equation}
Again we can write the effective EoS parameter of viscous DE as
\begin{equation}
\label{eq37}\omega^{eff}_{D} = -\frac{\left(\frac{k}{A^{2}}e^{-2Ht} + 3H^{2}\right) - 3\xi H +
(\omega_{m} - \sigma)\rho_{0}Be^{-3H(1 + \omega_{m} - \sigma)t}}{\left(\frac{3k}{A^{2}}e^{-2Ht} +
3H^{2}\right) - \rho_{0}Be^{-3H(1 + \omega_{m} - \sigma)t}}.
\end{equation}
\begin{figure}[htbp]
\centering
\includegraphics[width=8cm,height=8cm,angle=0]{ms1197fig4.eps}
\caption{The plot of EoS parameter $\omega^{eff}_{D}$ vs $t$ for $\rho_{0} = 10, \omega_{m} = 0.5, \alpha = 0.01,
B = 1, \sigma = 0.3 $ in interacting two-fluid model}
\end{figure}
The expressions for the matter-energy density $\Omega_{m}$ and dark-energy density $\Omega_{D}$ are given by
\begin{equation}
\label{eq38}\Omega_{m} = \frac{\rho_{m}}{3H^{2}} = \frac{4t^{2}\rho_{0}Be^{-\frac{3}{2}\ln(\frac{2t^{2}}
{3\alpha A^{2}})(1 + \omega_{m} - \sigma)}}{3\ln^{2}(\frac{2t^{2}}{3\alpha A^{2}})},
\end{equation}
and
\begin{equation}
\label{eq39}\Omega_{D} = \frac{\rho_{D}}{3H^{2}} = -\frac{6\alpha}{\ln^{2}(\frac{2t^{2}}{3\alpha A^{2}})}
+ 1 - \frac{4t^{2}\rho_{0}Be^{-\frac{3}{2}\ln(\frac{2t^{2}}{3\alpha A^{2}})(1 + \omega_{m} - \sigma)}}
{3\ln^{2}(\frac{2t^{2}}{3\alpha A^{2}})},
\end{equation}
respectively. Adding Eqs. (\ref{eq38}) and (\ref{eq39}), we obtain
\begin{equation}
\label{eq40}\Omega = \Omega_{m} + \Omega_{D} = -\frac{6\alpha}{\ln^{2}(\frac{2t^{2}}{3\alpha A^{2}})} + 1,
\end{equation}
which is the same expression as in the previous, non-interacting two-fluid case. Fig. $4$ shows a plot of the EoS parameter
($\omega^{eff}_{D}$) versus $t$. The behaviour of $\omega^{eff}_{D}$ in this case is the same as in the previous case.
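For the reader's convenience we note how the explicit time dependence in Eqs. (\ref{eq38}) and (\ref{eq39}) arises: comparing Eq. (\ref{eq33}) with Eq. (\ref{eq38}), one reads off that the Hubble parameter obtained earlier has been substituted in the form
$$
H(t)=\frac{1}{2t}\ln\left(\frac{2t^{2}}{3\alpha A^{2}}\right),
$$
so that $Ht=\frac{1}{2}\ln\left(\frac{2t^{2}}{3\alpha A^{2}}\right)$ and $3H^{2}=\frac{3}{4t^{2}}\ln^{2}\left(\frac{2t^{2}}{3\alpha A^{2}}\right)$; this converts the $H$-dependent expressions above into functions of $t$ only.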
\section{CONCLUSION}
In this Letter, we have studied the evolution of the dark energy parameter within the framework of an open FRW space-time
filled with a barotropic fluid and a bulk viscous dark fluid. In both the non-interacting and interacting cases, we have observed
that for all values of the bulk viscosity coefficient, the universe undergoes a transition from the non-dark region
($\omega^{eff}_{D} > -\frac{1}{3}$) to the phantom region ($\omega^{eff}_{D} < -1$). In summary, we have investigated the possibility of
constructing two-fluid dark energy models whose equation of state ($\omega^{eff}_{D}$) crosses $-1$ naturally through the use of
two fluids (a barotropic fluid and a bulk viscous dark fluid). Therefore, the two-fluid scenario discussed in the present
paper is a viable candidate for dark energy. It is also worth mentioning here that in both the interacting and non-interacting
cases, our models are free from a big rip. \\
\section*{ACKNOWLEDGMENT}
This work has been supported by an FRGS Grant from the Ministry of Higher Education, Malaysia, under
Project Number 02-10-10-969 FR. H. Amirhashchi \& A. Pradhan also thank the Laboratory of Computational
Sciences and Mathematical Physics, Institute for Mathematical Research, Universiti Putra Malaysia for providing
the facility where this work was done.
\section{Introduction}
Since the discovery of multiferroic properties in TbMnO$_3$,\cite{Kimura} the field of multiferroic
materials has attracted considerable interest.\cite{Fiebig,Eerenstein,Maxim} This interest arises from the emergence of
new fundamental physics\cite{electromagnon} and potential
technological applications.\cite{Eerenstein,Maxim} This field also gave rise to the reinvestigation of a large number
of "old" materials such as, for instance, the manganites RMnO$_3$ \cite{RMnO3} or the pyroxene family.\cite{pyroxene} Historically, the first family of multiferroic materials to be investigated was the boracite family.
Boracites are materials exhibiting the general formula M$_3$B$_7$O$_{13}$X where M is a transition metal ion
or alternatively Mg or Cd. The vast majority of the boracites are halogen boracites with compositions X = Cl, Br or I.\cite{boracites} Occasionally X can be OH, F or NO$_3$, and these phases have been much less investigated.\cite{M3B7O13X} The boracites have been widely investigated due to their ferroelectric, ferroelastic and magnetic properties.\cite{multiferroic} Several compositions within the boracites with X = Cl are natural minerals. They are of interest to mineralogists due to their complex twinning and anomalous optical properties.\cite{minerals} Despite the large number of studies dedicated to this family and its wide chemistry, few have addressed the determination of their magnetic ground states using neutron diffraction.\cite{Mn3B7O13I,Ni3B7O13Br,Co3B7O13Br,Ni3B7O13Cl}
Recently, a new composition with M = Fe and X = OH has been reported.\cite{Fe3B7O13OH} It has been shown that this boracite crystallizes in the space group \textit{R3c} (No. 161). This system orders antiferromagnetically below T$_N$ $\simeq$ 4.8 K and potentially exhibits magnetic frustration. Magnetic frustration could arise due to the arrangement of magnetic Fe$^{2+}$ ions which is based on a triangular framework. A magnetic system is considered to be spin frustrated when the
ratio f = $\mid\theta/T_N\mid$ is equal to or greater than 6.\cite{frustration} For Fe$_3$B$_7$O$_{13}$OH, f
is about 5.6,\cite{Fe3B7O13OH} and thus it may exhibit some magnetic frustration. However, in the absence of neutron diffraction data, that study could not probe further the exact nature of the ground state of this material. We aim here to investigate the magnetic ground state using powder and single crystal neutron diffraction. Additionally, we have used neutron diffraction to better characterize the crystal structure, and in particular the hydrogen position, which could not be located from the x-ray single crystal work.
\section{Experiment}
Small single crystals of Fe$_3^{11}$B$_{7}$O$_{13}$(OH) were synthesized
by a hydrothermal method. A mixture of FeO, $^{11}$B$_{2}$O$_3$, and NaOH solution (4 mol/L) was
sealed in a silver capsule. Then it was heated up to 600 $^{\circ}$C
in a test-tube-type autoclave under 150 MPa of hydrostatic
pressure. After the reaction for 3 days, the product was
washed with hot water in order to remove the excess of $^{11}$B$_{2}$O$_3$. Isotopically enriched $^{11}$B$_{2}$O$_3$ was used in order to reduce the strong neutron absorption of natural boron.
Most of the neutron diffraction measurements were carried out on powder
samples. The precise crystal and magnetic structures
were investigated using high resolution powder data at various temperatures
using the D2B diffractometer at the Institut Laue Langevin (ILL). The measurements were
carried out at a wavelength of 1.594 \r{A} corresponding to the
(335) Bragg reflection of a germanium monochromator. The neutron
detection is performed with $^{3}$He counting tubes spaced at
1.25$^{\circ}$ intervals for D2B. A complete diffraction pattern
is obtained after about 25 steps of 0.05$^{\circ}$ in 2$\theta$.
The powder sample for neutron diffraction was obtained by crushing small single crystals, resulting in a fine light brown powder. Measurements were carried out above the N\'eel temperature (T $\sim$ 9 K) and below it (T = 1.8 K). Diffraction data analysis was done using the FullProf refinement package.\cite{fullprof}
Additional data were collected on the high resolution four-circle single crystal diffractometer D9 at the ILL. A few reflections were followed as a function of temperature to determine the critical behavior of the magnetic order. Data collection was done using a wavelength of 0.706 \r{A} obtained by reflection from a Cu(220) monochromator. The wavelength was calibrated using a germanium single crystal. D9 is equipped with a small two-dimensional area detector,\cite{lehmann} which for this measurement allowed optimal delineation of the peak from the background. For all data, background corrections\cite{Wilkinson} and Lorentz corrections were applied.
\section{Results and discussion}
\subsection{Structural properties}\label{Structural properties}
Attempts to solve the crystal structure using single crystal neutron diffraction data were unsuccessful due to the heavy twinning of the crystals. Consequently, only a few reflections measured on the single crystal diffractometer could be used. Powder diffraction data were therefore used to solve the crystal and magnetic structures.
Attempts to refine the crystal structure at 9 K using the x-ray single crystal model were unsuccessful. The best refinement which could be obtained is shown in Figure \ref{Cell9K_No-H}. These data show clearly that some intensity is missing over the whole pattern. This discrepancy results from the fact that the hydrogen atom of the hydroxyl group could not be located from the x-ray single crystal data. Prior to investigating the magnetic properties of this polar iron boracite, we therefore used the neutron diffraction data to locate the hydrogen atom of the hydroxyl group.
\begin{figure}[htb]
\centering
\includegraphics[angle=-90,width=8cm]{9K_No-H}
\caption{(Color Online) Refinement of neutron data at 9 K of the crystal structure of Fe$_3$B$_{7}$O$_{13}$(OH) using the structural model derived from single crystal data. The excluded region around 40 degrees is to remove the cryostat contribution.}\label{Cell9K_No-H}
\end{figure}
Localization of the missing hydrogen atom belonging to the hydroxyl group was done by calculating the difference Fourier map of the refined pattern shown in Figure \ref{Cell9K_No-H}. The difference Fourier map obtained at 9 K is illustrated in Figure \ref{FourierMap}. The hydrogen atom can be localized on the Wyckoff position \textit{6a} at (0, 0, z). The refined atomic position of the hydrogen atom is (0, 0, 0.0426(10)). The final Rietveld refinement at 9 K is shown in Figure \ref{9K_H} and the corresponding atomic positions are given in Table \ref{structure10K}. A representation of the Fe$_3$ trimer unit with the hydroxyl group is shown in Figure \ref{Iron_Polyhedra}. The O - H bond is directed along the polar \textit{c} axis. Its length obtained after refinement is 1.00(3) \r{A}. This bond distance is in excellent agreement with other reports for hydroxyl groups in minerals.\cite{eosphorite}
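As a simple consistency check (a back-of-the-envelope estimate assuming the standard obverse hexagonal setting of \textit{R3c}, in which the lattice centering translation $(\frac{1}{3},\frac{2}{3},\frac{2}{3})$ brings O$_1$ onto the same $(0,0,z)$ axis as H), the refined coordinates of Table \ref{structure10K} reproduce this bond length:
$$
d_{\mathrm{O-H}} \simeq \left[z_{\mathrm{H}}-\left(z_{\mathrm{O}_1}-\frac{1}{3}\right)\right]c
= \left[0.0426-(-0.0052)\right]\times 21.062~\mbox{\r{A}} \approx 1.01~\mbox{\r{A}},
$$
in agreement, within the uncertainty, with the refined value of 1.00(3) \r{A}.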
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{FourierMap}
\caption{(Color Online) Difference Fourier map showing the presence of the hydrogen atom sitting on the Wyckoff position 6a (0 0 z $\sim$ 0.03).}\label{FourierMap}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[angle=-90,width=8cm]{9K_H}
\caption{(Color Online) Refinement of neutron data at 9 K of the crystal structure of Fe$_3$B$_{7}$O$_{13}$(OH) including the hydrogen atom of the hydroxyl group in (0, 0, z = 0.0426(10)). The excluded region around 40 degrees is to remove the cryostat contribution. Statistics: R$_p$=3.34\% and R$_{Bragg}$ = 5.57\%}\label{9K_H}
\end{figure}
\begin{table}[htb]
\centering
\begin{tabular}{c c c c c c}
\hline \hline
Atom & Wyckoff &x& y& z& U$_{iso}$ \\
\hline
Fe & 18b & 0.5247(7) & 1.0580(4) & 0.2993(5) & 0.0057(4)\\
O$_1$ & 6a & 0.66666 & 0.33333 & 0.32818(-) & 0.0137(20)\\
H & 6a & 0.00000 & 0.00000 & 0.0426(10)& 0.017(3)\\
O$_2$ & 18b & 0.7085(9) & 0.9748(10)& 0.3254(6) & 0.0094(13)\\
O$_3$ & 18b & 0.6445(8) & 0.1092(9) & 0.2097(5) & 0.0068(12)\\
O$_4$ & 6a & 0.00000 & 0.00000 & 0.2999(7) & 0.0047(16)\\
O$_5$ & 18b & 0.3397(9) & 0.8310(9) & 0.3502(6) & 0.0082(12)\\
O$_6$ & 18b & 0.3043(10)& 0.0814(9) & 0.2676(6) & 0.0073(11)\\
B$_1$ & 18b & 0.1740(9) & 0.8349(8) & 0.3712(5) & 0.0080(9)\\
B$_2$ & 6a & 0.33333 & 0.66666 & 0.3524(6) & 0.0049(15)\\
B$_3$ & 18b & 0.8973(10)& 0.0996(9) & 0.3165(6) & 0.0070(8)\\
\hline \hline
\end{tabular}
\\
\caption{Crystallographic coordinates extracted from the Rietveld
refinement carried out on powder neutron diffraction (D2B) using
the space group \textit{R3c} at 9 K with cell parameters \emph{a}
= \emph{b} = 8.56080(5) \r{A} and \emph{c} =
21.06236(19) \r{A}. The z coordinate of the O$_1$ has been fixed in order to define the origin.}\label{structure10K}
\end{table}
\begin{figure}[htb]
\centering
\includegraphics[width=3cm]{Iron_Polyhedra}
\includegraphics[width=3cm]{Iron_Polyhedra_bis}\\
\caption{(Color Online) a) Detail of the Fe$_3$ trimer unit. The center oxygen is actually a OH$^{-}$ ion which is illustrated in b). The O - H bond is directed along the \textit{c} axis. Drawing was made using the software VESTA.\cite{VESTA}}\label{Iron_Polyhedra}
\end{figure}
\subsection{Magnetic structure}\label{Magnetic structure}
As mentioned in the previous section, the magnetic structure was determined using the powder neutron diffraction data. The 1.8 K neutron diffraction pattern collected on D2B indicates the presence of additional magnetic reflections at reciprocal lattice positions of the nuclear cell as shown in Figure \ref{Difference}.
\begin{figure}[htb]
\centering
\includegraphics[angle=-90,width=8cm]{Difference}
\caption{(Color Online) Powder diffraction patterns recorded at 9 and 2 K, shown respectively in red and blue. All the magnetic reflections can be indexed on the chemical unit cell.}\label{Difference}
\end{figure}
Using single crystal data, despite the twinning, we could follow a few magnetic reflections as a function of temperature. This enabled us to probe the nature of the magnetic phase transition. We present in Figure \ref{205_vs_T} the temperature evolution of the (205) reflection. Fitting the data close to T$_N$ with a phenomenological power law yields T$_N$ = 4.86(4) K. This N\'eel temperature is in excellent agreement with the previously reported value.\cite{Fe3B7O13OH} The critical exponent that we obtain is $\beta$ = 0.47(6), which is close to $\frac{1}{2}$, suggesting that the magnetic ordering in Fe$_3$B$_{7}$O$_{13}$(OH) follows typical mean-field behavior.
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{205_vs_T}
\caption{(Color Online) Integrated intensity of the (205) magnetic reflection as a
function of temperature. The line corresponds to a fit to the power law
I = I$_0$(T$_N$-T)$^\beta$ in the vicinity of T$_N$ = 4.86(4) K and constant above.}\label{205_vs_T}
\end{figure}
The possible magnetic structures compatible with the
symmetry of Fe$_3$B$_{7}$O$_{13}$(OH) were determined using BasIreps.\cite{BasiReps} For
the propagation vector $\overrightarrow{k}$ = $\overrightarrow{0}$, the small group G$_{\overrightarrow{k}}$, formed by
those elements of the space group that leave $\overrightarrow{k}$ invariant,
coincides with the space group \textit{R3c}. For $\overrightarrow{k}$ = $\overrightarrow{0}$, the irreducible
representations of the group G$_{\overrightarrow{k}}$ are those shown in Table
\ref{irreps}.
\begin{table*}[htb]
\centering
\caption{Irreducible representations of the space group \textit{R3c} for $\protect\overrightarrow{k}$=$\protect\overrightarrow{0}$. The symmetry elements are written according to Seitz notation (Ref. \cite{Seitz})}
\begin{tabular}{c c c c c c c}
\hline \hline
& 1$\vert$0,0,0 & {3+$_{00z}\vert$000} & {3-$_{00z}\vert$000} & (m$_{x\overline{x}z}\vert$0,0,$\frac{1}{2}$) & (m$_{x2xz}\vert$0,0,$\frac{1}{2}$) & (m$_{2xxz}\vert$0,0,$\frac{1}{2}$)\\
\hline
$\Gamma_{1}$ & 1 & 1 & 1 & 1 & 1 & 1\\
$\Gamma_{2}$ & 1 & 1 & 1 & -1 & -1 & -1\\
$\Gamma_{3}$ & $\begin{pmatrix}
1&0\\
0&1
\end{pmatrix}$ & $\begin{pmatrix}
-\frac{1}{2}+\frac{\sqrt{3}}{2}i&0\\
0&-\frac{1}{2}-\frac{\sqrt{3}}{2}i
\end{pmatrix}$ & $\begin{pmatrix}
-\frac{1}{2}-\frac{\sqrt{3}}{2}i&0\\
0&-\frac{1}{2}+\frac{\sqrt{3}}{2}i
\end{pmatrix}$& $\begin{pmatrix}
0&1\\
1&0
\end{pmatrix}$ & $\begin{pmatrix}
0 & -\frac{1}{2}-\frac{\sqrt{3}}{2}i\\
-\frac{1}{2}+\frac{\sqrt{3}}{2}i & 0
\end{pmatrix}$ & $\begin{pmatrix}
0 & -\frac{1}{2}+\frac{\sqrt{3}}{2}i\\
-\frac{1}{2}-\frac{\sqrt{3}}{2}i & 0
\end{pmatrix}$\\
\hline \hline
\end{tabular}
\label{irreps}
\end{table*}
A representation $\Gamma$ is constructed with the Fourier components
\textbf{m$^k$} corresponding to the Fe atoms of the Wyckoff
position 18b. The decomposition of the representation
$\Gamma$ in terms of the irreducible representations $\Gamma_{\overrightarrow{k}}$ is for the Wyckoff 18b site,
\begin{equation}
\Gamma_{\overrightarrow{k}}(18b) = \Gamma_1 + \Gamma_2 + 2\Gamma_3
\end{equation}
\begin{figure}[htb]
\centering
\includegraphics[angle=-90,width=8cm]{Rietveld_Mag}
\caption{(Color Online) Refinement of neutron data at 1.8 K of the magnetic structure of Fe$_3$B$_{7}$O$_{13}$(OH). The excluded region around 40 degrees is to remove the cryostat contribution. Statistics: R$_p$=3.27\%, R$_{Bragg}$ = 5.86\% and R$_{mag}$ = 7.6\%.}\label{Rietveld_Mag}
\end{figure}
The best refinement of the powder neutron data was obtained considering the magnetic structure associated with the irreducible representation $\Gamma_1$. The resulting magnetic moment for the Fe$^{2+}$ ions is 4.5(2) $\mu_B$. This value is higher than the spin-only value of 4 $\mu_B$. This is likely related to the orbital contribution to the magnetic moment of the Fe$^{2+}$ ions. This is in agreement with the reported M\"{o}ssbauer data, where a large quadrupole splitting is observed.\cite{Fe3B7O13OH} The resulting fit of the powder data at 1.8 K is presented in Figure \ref{Rietveld_Mag}. A representation of the magnetic structure is shown in Figure \ref{MagneticStructure}. The spins lie mostly within the (ab) plane with a small out-of-plane component. This out-of-plane component is necessary in order to describe properly the first magnetic reflection at 2$\theta\sim$13.1$^{\circ}$. In Figure \ref{MagneticStructure}, the out-of-plane component points down. Moving along the trigonal axis, the orientation of the spins changes by 60$^{\circ}$ and the out-of-plane component points alternately down and up. The overall magnetic structure is purely antiferromagnetic without any weak ferromagnetic component. The resulting magnetic symmetry is \textit{R3c}.
\begin{figure}[htb]
\centering
\includegraphics[width=6cm]{MagneticStructure}
\caption{(Color Online) Representation of the magnetic structure obtained at 1.8 K within the \textit{ac} plane. For the next layer along the \textit{c} axis, the spins are rotated by 60$^{\circ}$. Graphical representation was made using the software VESTA.\cite{VESTA}}\label{MagneticStructure}
\end{figure}
These results are in contrast to the reported literature on the magnetic ground state of the boracites.\cite{boracites,Mn3B7O13I,Ni3B7O13Br,Co3B7O13Br,Ni3B7O13Cl} Fe$_3$B$_{7}$O$_{13}$(OH) crystallizes in the trigonal space group \textit{R3c}, which is potentially ferroelectric, but does not exhibit any weak ferromagnetic component, in contrast to all the other reported boracites.\cite{boracites} Even Co$_3$B$_7$O$_{13}$Cl, which also exhibits the trigonal symmetry \textit{R3c} at room and low temperature, changes symmetry below T$_N$ = 12 K, giving rise to a weak ferromagnetic component.\cite{Co3B7O13Cl} It would be of interest to investigate the other compositions exhibiting the trigonal \textit{R3c} symmetry at room temperature (X = OH, NO$_3$ for instance) in order to establish whether Fe$_3$B$_{7}$O$_{13}$(OH) is the exception to the rule or not.
Another interesting point in the magnetic properties of Fe$_3$B$_{7}$O$_{13}$(OH) is the absence of any reduction of the magnetic moment despite the expected presence of magnetic frustration (f = 5.6). While all the compositions investigated by neutron diffraction exhibit the same magnetic symmetry Pc$^{'}$a2$_1^{'}$, a lower symmetry than that of Fe$_3$B$_{7}$O$_{13}$(OH), their resulting magnetic moments are lower than the spin-only values. In the paramagnetic space group Pca2$_1$1$^{'}$, there are 3 different crystallographic sites for M. Depending on the metal M, the neutron experiments show that one or two crystallographic sites exhibit magnetic frustration, giving rise to a reduced magnetic moment. For M = Co and X = Br, despite the presence of the expected large orbital momentum contribution, the third site exhibits a magnetic moment of 1.7(1) $\mu_B$ while the other two show magnetic moments of 4.7(2) and 4.0(2) $\mu_B$, respectively.\cite{Co3B7O13Br} For Mn$_3$B$_{7}$O$_{13}$I, only one site shows a saturated magnetic moment of 5.4(2) $\mu_B$ while the other two sites are reported with a magnetic moment of 3.8(2) $\mu_B$.\cite{Mn3B7O13I} Similar results are reported for the other compositions investigated by neutron diffraction.\cite{Ni3B7O13Br,Ni3B7O13Cl} In boracites, the frustration parameter f increases going from X = Cl to Br to I.\cite{Mn3B7O13I,Fe3B7O13X} Using the results from the literature for M = Fe, we can further extend the rule to X = OH, and we notice that the frustration parameter f increases along X = OH $>$ I $>$ Br $>$ Cl, going from 1.4 (X = Cl) to 5.6 (X = OH). The f parameter for the halogen boracites remains small irrespective of the chemical composition and much below 6 (at most f $\sim$ 3 for X = I). Consequently, the lowering of symmetry from \textit{R3c} to \textit{Pca2$_1$} surprisingly goes along with a larger reduction of the magnetic moment, even though the frustration parameter f decreases. DFT calculations would be necessary in order to investigate in more detail the magnetic frustration in the boracites.
\section{Conclusion}
We have investigated by neutron diffraction the crystal and magnetic structures of the newly reported trigonal iron boracite Fe$_3$B$_{7}$O$_{13}$(OH). We were able to locate the hydrogen atom within the structure by difference Fourier maps. The hydroxyl group is characterized by an oxygen-hydrogen bond distance in excellent agreement with hydroxyl groups reported in other minerals. In agreement with a previous report, we find that below T$_N$ = 4.86(4) K an antiferromagnetic state takes place, characterized by $\overrightarrow{k}$ = $\overrightarrow{0}$. The resulting magnetic moment of 4.5(2) $\mu_B$ is larger than the spin-only value of 4 $\mu_B$. This difference is probably related to the orbital contribution to the magnetic moment. We show that the magnetic frustration in boracites increases along X = OH $>$ I $>$ Br $>$ Cl, although without giving rise to a reduced magnetic moment for X = OH, as would be expected for a magnetically frustrated system. We demonstrate that Fe$_3$B$_{7}$O$_{13}$(OH) is a very unusual system within the boracite family. We expect that this work will stimulate experimental investigations of the other compositions within the boracite family with X = OH and NO$_3$.
\section*{ACKNOWLEDGEMENTS}
The authors acknowledge the allocation of beamtime at the Institut Laue Langevin and the technical support provided during the experiment.
\section{Introduction}
In 1976 Ribe~\cite{Ribe76} (see
also~\cite{Ribe78}, \cite{HM82}, \cite{Bourgain87}, \cite{BL00}) proved that if $X$ and
$Y$ are uniformly homeomorphic Banach spaces then $X$ is finitely
representable in $Y$, and vice versa ($X$ is said to be finitely
representable in $Y$ if there exists a constant $K>0$ such that
any finite dimensional subspace of $X$ is $K$-isomorphic to a
subspace of $Y$). This theorem suggests that ``local properties"
of Banach spaces, i.e. properties whose definition involves
statements about finitely many vectors, have a purely metric
characterization. Finding explicit manifestations of this
phenomenon for specific local properties of Banach spaces (such as
type, cotype and super-reflexivity), has long been a major driving
force in the bi-Lipschitz theory of metric spaces (see Bourgain's
paper~\cite{Bourgain86-trees} for a discussion of this research
program). Indeed, as will become clear below, the search for
concrete versions of Ribe's theorem has fueled some of the field's
most important achievements.
The notions of type and cotype of Banach spaces are the basis of a
deep and rich theory which encompasses diverse aspects of the
local theory of Banach spaces. We refer
to~\cite{MS86}, \cite{Pisier86}, \cite{Pisier86-book},
\cite{T-J89}, \cite{Pisier89}, \cite{LT91}, \cite{DJT95}, \cite{Woj96}, \cite{Maurey03}
and the references therein for background on these topics. A
Banach space $X$ is said to have (Rademacher) type $p> 0$ if there
exists a constant $T<\infty$ such that for every $n$ and every
$x_1,\ldots,x_n\in X$,
\begin{eqnarray}\label{eq:def type}
\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_j x_j\Biggr\|_X^p\le T^p\sum_{j=1}^n
\|x_j\|_X^p.
\end{eqnarray}
where the expectation $\mathbb{E}_\varepsilon$ is with respect to a uniform choice
of signs $\varepsilon=(\varepsilon_1,\ldots,\varepsilon_n)\in \{-1,1\}^n$. $X$ is said to have
(Rademacher) cotype $q>0$ if there exists a constant $C<\infty$
such that for every $n$ and every $x_1,\ldots,x_n\in X$,
\begin{eqnarray}\label{eq:def Rademacher cotype}
\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_j x_j\Biggr\|_X^q\ge
\frac{1}{C^q}\sum_{j=1}^n \|x_j\|_X^q.
\end{eqnarray}
These notions are clearly {\em linear} notions, since their
definition involves addition and multiplication by scalars. Ribe's
theorem implies that these notions are preserved under uniform
homeomorphisms of Banach spaces, and therefore it would be
desirable to reformulate them using only distances between points
in the given Banach space. Once this is achieved, one could define
the notion of type and cotype of a metric space, and then
hopefully transfer some of the deep theory of type and cotype to
the context of arbitrary metric spaces. The need for such a theory
has recently received renewed impetus due to the discovery of
striking applications of metric geometry to theoretical computer
science (see~\cite{Mat01}, \cite{Ind01}, \cite{Linial02} and the references
therein for part of the recent developments in this direction).
Enflo's pioneering work~\cite{Enflo69}, \cite{Enflo69-groups}, \cite{Enflo70}, \cite{Enflo78} resulted in
the formulation of a nonlinear notion of type, known today as {\em Enflo type}. The basic idea is
that given a Banach space $X$ and $x_1,\ldots,x_n\in X$, one can
consider the {\em linear} function $f:\{-1,1\}^n\to X$ given by
$f(\varepsilon)=\sum_{j=1}^n\varepsilon_j x_j$. Then~\eqref{eq:def type} becomes
\begin{multline}\label{eq:enflo type norm}
\mathbb{E}_\varepsilon \|f(\varepsilon)-f(-\varepsilon)\|_X^p
\le T^p\sum_{j=1}^n
\mathbb{E}_\varepsilon\Big\|f(\varepsilon_1,\ldots,\varepsilon_{j-1},\varepsilon_j,\varepsilon_{j+1},\ldots,\varepsilon_n)\\
-f(\varepsilon_1,\ldots,\varepsilon_{j-1},-\varepsilon_j,\varepsilon_{j+1},\ldots,\varepsilon_n)\Big\|_X^p.
\end{multline}
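Indeed, for this linear choice of $f$, inequality~\eqref{eq:enflo type norm} is just a rescaled form of~\eqref{eq:def type}: since
$$
f(\varepsilon)-f(-\varepsilon)=2\sum_{j=1}^n\varepsilon_j x_j \qquad \mathrm{and}\qquad f(\varepsilon)-f(\varepsilon_1,\ldots,-\varepsilon_j,\ldots,\varepsilon_n)=2\varepsilon_jx_j,
$$
both sides of~\eqref{eq:enflo type norm} are exactly $2^p$ times the corresponding sides of~\eqref{eq:def type}.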
One can thus say that a metric space $(\mathcal{M},d_\mathcal{M})$ has Enflo type
$p$ if there exists a constant $T$ such that for every $n\in
\mathbb N$ and {\em every} $f:\{-1,1\}^n\to \mathcal{M}$,
\begin{multline}\label{eq:enflo type}
\mathbb{E}_\varepsilon d_\mathcal{M}\left(f(\varepsilon),f(-\varepsilon)\right)^p\le T^p\sum_{j=1}^n \mathbb{E}_\varepsilon
d_\mathcal{M}\Big(f(\varepsilon_1,\ldots,\varepsilon_{j-1},\varepsilon_j,\varepsilon_{j+1},\ldots,\varepsilon_n),\\
f(\varepsilon_1,\ldots,\varepsilon_{j-1},-\varepsilon_j,\varepsilon_{j+1},\ldots,\varepsilon_n)\Big)^p.
\end{multline}
There are two natural concerns about this definition. First of all,
while in the category of Banach spaces~\eqref{eq:enflo type} is
clearly a strengthening of~\eqref{eq:enflo type norm} (as we are not
restricting only to linear functions $f$), it isn't clear
whether~\eqref{eq:enflo type} follows from~\eqref{eq:enflo type
norm}. Indeed, this problem was posed by Enflo in~\cite{Enflo78},
and in full generality it remains open. Secondly, we do not know
if~\eqref{eq:enflo type} is a useful notion, in the sense that it
yields metric variants of certain theorems from the linear theory of
type (it should be remarked here that Enflo found striking
applications of his notion of type to Hilbert's fifth problem in
infinite dimensions~\cite{Enflo69-groups}, \cite{Enflo70}, \cite{Enflo78}, and to
the uniform classification of $L_p$ spaces~\cite{Enflo69}). As we
will presently see, in a certain sense both of these issues turned
out not to be problematic. Variants of Enflo type were studied by
Gromov~\cite{Gro83} and Bourgain, Milman and Wolfson~\cite{BMW86}.
Following \cite{BMW86} we shall say that
a metric space $(\mathcal{M},d_\mathcal{M})$ has BMW type $p>0$ if there exists a
constant $K<\infty$ such that for every $n\in \mathbb N$ and every
$f:\{-1,1\}^n\to \mathcal{M}$,
\begin{multline}\label{eq:BMW}
\mathbb{E}_\varepsilon d_\mathcal{M}(f(\varepsilon),f(-\varepsilon))^2\le K^2n^{\frac{2}{p}-1}\sum_{j=1}^n
\mathbb{E}_\varepsilon
d_\mathcal{M}\Big(f(\varepsilon_1,\ldots,\varepsilon_{j-1},\varepsilon_j,\varepsilon_{j+1},\ldots,\varepsilon_n),
\\ f(\varepsilon_1,\ldots,\varepsilon_{j-1},-\varepsilon_j,\varepsilon_{j+1},\ldots,\varepsilon_n)\Big)^2.
\end{multline}
Bourgain, Milman and Wolfson proved in~\cite{BMW86} that if a
Banach space has BMW type $p>0$ then it also has Rademacher type
$p'$ for all $0<p'<p$. They also obtained a nonlinear version of
the Maurey-Pisier theorem for type~\cite{Pisier74}, \cite{MP76}, yielding
a characterization of metric spaces which contain bi-Lipschitz
copies of the Hamming cube. In~\cite{Pisier86} Pisier proved that
for Banach spaces, Rademacher type $p$ implies Enflo type $p'$ for
every $0<p'<p$. Variants of these problems were studied by Naor
and Schechtman in~\cite{NS02}. A stronger notion of nonlinear
type, known as Markov type, was introduced by Ball~\cite{Ball92}
in his study of the {\em Lipschitz extension problem}. This
important notion has since found applications to various
fundamental problems in metric
geometry~\cite{Naor01}, \cite{LMN02}, \cite{BLMN05}, \cite{NPSS04}, \cite{MN05-proc}
Despite the vast amount of research on nonlinear type, a
nonlinear notion of cotype remained elusive. Indeed, the problem
of finding a notion of cotype which makes sense for arbitrary
metric spaces, and which coincides (or almost coincides) with the
notion of Rademacher type when restricted to Banach spaces, became
a central open problem in the field.
There are several difficulties involved in defining nonlinear
cotype. First of all, one cannot simply reverse
inequalities~\eqref{eq:enflo type} and~\eqref{eq:BMW}, since the
resulting condition fails to hold true even for Hilbert space
(with $p=2$). Secondly, if Hilbert space satisfies an inequality
such as~\eqref{eq:enflo type}, then it must satisfy the same
inequality where the distances are raised to any power $0<r<p$.
This is because Hilbert space, equipped with the metric
$\|x-y\|^{r/p}$, is isometric to a subset of Hilbert space
(see~\cite{Schoenberg38}, \cite{WW75}). In the context of nonlinear type,
this observation makes perfect sense, since if a Banach space has
type $p$ then it also has type $r$ for every $0<r<p$. But, this is
no longer true for cotype (in particular, no Banach space has
cotype less than $2$). One viable definition of cotype of a metric
space $X$ that was suggested in the early 1980s is the following:
Let $\mathcal{M}$ be a metric space, and denote by $\mathrm{Lip}(\mathcal{M})$ the Banach
space of all real-valued Lipschitz functions on $\mathcal{M}$, equipped
with the Lipschitz norm. One can then define the nonlinear cotype
of $\mathcal{M}$ as the (Rademacher) cotype of the (linear) dual
$\mathrm{Lip}(\mathcal{M})^*$. This is a natural definition when $\mathcal{M}$ is a Banach
space, since we can view $\mathrm{Lip}(\mathcal{M})$ as a nonlinear substitute for
the dual space $\mathcal{M}^*$ (note that in~\cite{Lin64} it is shown that
there is a norm $1$ projection from $\mathrm{Lip}(\mathcal{M})$ onto $\mathcal{M}^*$). With
this point of view, the above definition of cotype is natural due
to the principle of local reflexivity~\cite{LR69}, \cite{JRZ71}.
Unfortunately, Bourgain~\cite{Bourgain86-trees} has shown that
under this definition subsets of $L_1$ need not have finite
nonlinear cotype (while $L_1$ has cotype $2$). Additionally, the
space $\mathrm{Lip}(M)^*$ is very hard to compute, for example it is an
intriguing open problem whether even the unit square $[0,1]^2$ has
nonlinear cotype $2$ under the above definition.
In this paper we introduce a notion of cotype of metric spaces,
and show that it coincides with Rademacher cotype when restricted
to the category of Banach spaces. Namely, we introduce the
following concept:
\begin{definition}[Metric cotype]\label{def:cotype} Let $(\mathcal{M},d_\mathcal{M})$ be a
metric space and\break $q>0$. The space
$(\mathcal{M},d_\mathcal{M})$ is said to have {\it metric cotype $q$ with constant} $\Gamma$ if for
every integer $n\in \mathbb N$, there exists an even integer $m$,
such that for every $f:\mathbb{Z}_m^n\to \mathcal{M}$,
\begin{eqnarray}\label{eq:def cotype}
\sum_{j=1}^n\mathbb{E}_x\left[d_\mathcal{M}\left(f\left(x+\frac{m}{2}e_j\right),f(x)\right)^q\right]\le
\Gamma^q m^q\mathbb{E}_{\varepsilon,x}\left[d_\mathcal{M}(f(x+\varepsilon),f(x))^q\right],
\end{eqnarray}
where the expectations above are taken with respect to uniformly
chosen $x\in \mathbb{Z}_m^n$ and $\varepsilon\in\{-1,0,1\}^n$ (here, and in what
follows we denote by $\{e_j\}_{j=1}^n$ the standard basis of
$\mathbb{R}^n$). The smallest constant $\Gamma$ with which
inequality~\eqref{eq:def cotype} holds true is denoted
$\Gamma_q(\mathcal{M})$.
\end{definition}
Several remarks on Definition~\ref{def:cotype} are in order. First
of all, in the case of Banach spaces, if we apply
inequality~\eqref{eq:def cotype} to linear functions
$f(x)=\sum_{j=1}^n x_j v_j$, then by homogeneity $m$ would cancel,
and the resulting inequality will simply become the Rademacher
cotype $q$ condition (this statement is not precise due to the
fact that addition on $\mathbb{Z}_m^n$ is performed modulo $m$ --- see
Section~\ref{section:easy direction} for the full argument). Secondly,
it is easy to see that in any metric space which contains at least
two points, inequality~(\ref{eq:def cotype}) forces the scaling
factor $m$ to be large (see Lemma~\ref{lem:lower m}) --- this is an
essential difference between Enflo type and metric cotype.
Finally, the averaging over $\varepsilon\in \{-1,0,1\}^n$ is natural here,
since this forces the right-hand side of~\eqref{eq:def cotype} to be a
uniform average over all pairs in $\mathbb{Z}_m^n$ whose distance is at
most $1$ in the $\ell_\infty$ metric.
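To illustrate the first of these remarks, here is the heuristic computation (it ignores the fact that addition in $\mathbb{Z}_m^n$ is performed modulo $m$, which is why it is only a sketch; the precise argument appears in Section~\ref{section:easy direction}). For $f(x)=\sum_{j=1}^n x_jv_j$ we have $f\left(x+\frac{m}{2}e_j\right)-f(x)=\frac{m}{2}v_j$ and $f(x+\varepsilon)-f(x)=\sum_{j=1}^n\varepsilon_jv_j$, so that~\eqref{eq:def cotype} becomes
$$
\left(\frac{m}{2}\right)^q\sum_{j=1}^n\|v_j\|_X^q\le \Gamma^qm^q\,\mathbb{E}_{\varepsilon}\Biggl\|\sum_{j=1}^n\varepsilon_jv_j\Biggr\|_X^q,
$$
in which $m$ cancels, leaving the Rademacher cotype $q$ inequality~\eqref{eq:def Rademacher cotype} with constant $C=2\Gamma$, except that the average here is over $\varepsilon\in\{-1,0,1\}^n$ rather than over $\{-1,1\}^n$.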
\medbreak
The following theorem is the main result of this paper:
\begin{theorem}\label{thm:cotype} Let $X$ be a Banach space{\rm ,} and $q\in [2,\infty)$. Then
$X$ has metric cotype $q$ if and only if $X$ has Rademacher cotype
$q$. Moreover{\rm ,}
$$
\frac{1}{2\pi}C_q(X)\le \Gamma_q(X)\le 90C_q(X).
$$
\end{theorem}
Apart from settling the nonlinear cotype problem described above,
this notion has various applications. Thus, in the remainder of
this paper we proceed to study metric cotype and some of its
applications, which we describe below. We believe that additional
applications of this notion and its variants will be discovered in
the future. In particular, it seems worthwhile to study the
interaction between metric type and metric cotype (such as in
Kwapien's theorem~\cite{Kwapien72}), the possible ``Markov"
variants of metric cotype (\`a la Ball~\cite{Ball92}) and their
relation to the Lipschitz extension problem, and the relation
between metric cotype and the nonlinear Dvoretzky theorem
(see~\cite{BFM86}, \cite{BLMN05} for information about the nonlinear
Dvoretzky theorem, and~\cite{FLM77} for the connection between
cotype and Dvoretzky's theorem).
\subsection{Some applications of metric cotype}
\smallbreak
\subsubsection*{1) \textsl{A nonlinear version of the Maurey-Pisier
theorem}.} Given two metric spaces $(\mathcal{M},d_\mathcal{M})$ and $(\mathcal{N},d_\mathcal{N})$,
and an injective mapping $f:\mathcal{M}\hookrightarrow \mathcal{N}$, we denote the
{\em distortion} of $f$ by
$$
\mathrm{dist}(f):= \|f\|_{\mathrm{Lip}}\cdot\|f^{-1}\|_{\mathrm{Lip}}=\sup_{\substack{x,y\in
\mathcal{M}\\ x\neq y}} \frac{d_\mathcal{N}(f(x),f(y))}{d_\mathcal{M}(x,y)}\cdot
\sup_{\substack{x,y\in \mathcal{M}\\ x\neq y}}
\frac{d_\mathcal{M}(x,y)}{d_\mathcal{N}(f(x),f(y))}.
$$
The smallest distortion with which $\mathcal{M}$ can be embedded into $\mathcal{N}$
is denoted $c_\mathcal{N}(\mathcal{M})$; i.e.,
$$
c_\mathcal{N}(\mathcal{M}):= \inf\{\mathrm{dist}(f):\ f:\mathcal{M}\hookrightarrow \mathcal{N}\}.
$$
If $c_\mathcal{N}(\mathcal{M})\le \alpha$ then we sometimes use the notation $ \mathcal{M}
\overset{\alpha}{\hookrightarrow} \mathcal{N}$. When $\mathcal{N}=L_p$ for some
$p\ge 1$, we write $c_\mathcal{N}(\cdot)=c_p(\cdot)$.
For a Banach space $X$ write
$$
p_X=\sup\{p\ge 1:\ T_p(X)<\infty\}\quad \mathrm{and}\quad
q_X=\inf\{q\ge 2:\ C_q(X)<\infty\}.
$$
$X$ is said to have nontrivial type if $p_X>1$, and $X$ is said
to have nontrivial cotype if $q_X<\infty$.
In~\cite{Pisier74} Pisier proved that $X$ has no nontrivial type
if and only if for every $n\in \mathbb N$ and every $\varepsilon>0$,
$\ell_1^n\overset{1+\varepsilon}{\hookrightarrow} X$. A nonlinear analog
of this result was proved by Bourgain, Milman and
Wolfson~\cite{BMW86} (see also Pisier's proof in~\cite{Pisier86}).
They showed that a metric space $\mathcal{M}$ does not have BMW type larger
than $1$ if and only if for every $n\in \mathbb N$ and every
$\varepsilon>0$,
$(\{0,1\}^n,\|\cdot\|_1)\overset{1+\varepsilon}{\hookrightarrow}\mathcal{M}$.
In~\cite{MP76} Maurey and Pisier proved that a Banach space $X$
has no nontrivial cotype if and only if for every $n\in \mathbb N$
and every $\varepsilon>0$, $\ell_\infty^n \overset{1+\varepsilon}{\hookrightarrow}
X$. To obtain a nonlinear analog of this theorem we need to
introduce a variant of metric cotype (which is analogous to the
variant of Enflo type that was used in~\cite{BMW86}).
\begin{definition}[Variants of metric cotype \`a la Bourgain, Milman and\break Wolfson]
Let $(\mathcal{M},d_\mathcal{M})$ be a metric space and
$1\le p\le q$. We denote by $\Gamma^{(p)}_q(\mathcal{M})$ the least constant
$\Gamma$ such that for every integer $n\in \mathbb N$ there exists
an even integer $m$, such that for every $f:\mathbb{Z}_m^n\to \mathcal{M}$,
\begin{multline}\label{eq:def weak cotype}
\sum_{j=1}^n\mathbb{E}_x\left[d_\mathcal{M}\left(f\left(x+\frac{m}{2}e_j\right),f(x)\right)^p\right]\\
\le
\Gamma^p m^p
n^{1-\frac{p}{q}}\mathbb{E}_{\varepsilon,x}\left[d_\mathcal{M}(f(x+\varepsilon),f(x))^p\right],
\end{multline}
where the expectations above are taken with respect to uniformly
chosen $x\in \mathbb{Z}_m^n$ and $\varepsilon\in\{-1,0,1\}^n$.
Note that
$\Gamma^{(q)}_q(\mathcal{M})=\Gamma_q(\mathcal{M})$. When $1\le p<q$ we shall refer
to~\eqref{eq:def weak cotype} as a weak metric cotype $q$
inequality with exponent $p$ and constant $\Gamma$.
\end{definition}
\setcounter{theorem}{3}
The following theorem is analogous to Theorem~\ref{thm:cotype}.
\begin{theorem}\label{thm:weak cotype}
Let $X$ be a Banach space{\rm ,} and assume that for some $1\le p< q${\rm ,}
$\Gamma_q^{(p)}(X)<\infty$. Then $X$ has cotype $q'$ for every
$q'>q$. If $q=2$ then $X$ has cotype $2$. On the other hand{\rm ,}
$$
\Gamma_q^{(p)}(X)\le c_{pq}C_q(X),
$$
where $c_{pq}$ is a universal constant depending only on $p$ and
$q$.
\end{theorem}
In what follows, for $m,n\in \mathbb N$ and $p\in [1,\infty]$ we
let $[m]_p^n$ denote the set $\{0,1,\ldots,m\}^n$, equipped with
the metric induced by $\ell_p^n$. The following theorem is a
metric version of the Maurey-Pisier theorem (for cotype):
\begin{theorem}\label{thm:MPcotype} Let $\mathcal{M}$ be a metric space
such that $\Gamma_q^{(2)}(\mathcal{M})=\infty$ for all $q<\infty$. Then for
every $m,n\in \mathbb N$ and every $\varepsilon>0${\rm ,}
$$
[m]_\infty^n \overset{1+\varepsilon}{\hookrightarrow}\mathcal{M}.
$$
\end{theorem}
We remark that in~\cite{MP76} Maurey and Pisier prove a stronger
result, namely that for a Banach space $X$, for every $n\in
\mathbb N$ and every $\varepsilon>0$, $\ell_{p_X}^n
\overset{1+\varepsilon}{\hookrightarrow} X$ and $\ell_{q_X}^n
\overset{1+\varepsilon}{\hookrightarrow} X$. Even in the case of nonlinear
type, the results of Bourgain, Milman and Wolfson yield an
incomplete analog of this result in the case of BMW type greater
than $1$. The same phenomenon seems to occur when one tries to
obtain a nonlinear analog of the full Maurey-Pisier theorem for
cotype. We believe that this issue deserves more attention in
future research.
\subsubsection*{{\rm 2)} \textsl{Solution of a problem posed by Arora{\rm ,} Lov\'{a}sz{\rm ,} Newman{\rm ,}
Rabani{\rm ,}\break Rabinovich and Vempala.}}
The following question appears in \cite[Conj.~5.1]{ALNRRV05}:
\begin{quote}
Let $\mathcal F$ be a \emph{baseline} metric class which does not contain
all finite metrics with distortion arbitrarily close to $1$. Does
this imply that there exists $\alpha>0$ and arbitrarily large
$n$-point metric spaces $\mathcal{M}_n$ such that for every $\mathcal{N}\in \mathcal
F$, $c_\mathcal{N}(\mathcal{M}_n)\ge (\log n)^\alpha$?
\end{quote}
We refer to~\cite[\S 2]{ALNRRV05} for the definition of baseline
metrics, since we will not use this notion in what follows. We also
refer to~\cite{ALNRRV05} for background and motivation from
combinatorial optimization for this problem, where several partial
results in this direction are obtained. An extended abstract of the
current paper~\cite{MN06} also contains more information on the
connection to Computer Science. Here we apply metric cotype to
settle this conjecture positively, without any restriction on the
class $\mathcal F$.
To state our result we first introduce some notation. If $\mathcal{F}$ is a
family of metric spaces we write
$$
c_\mathcal{F}(\mathcal{N})=\inf \left\{c_\mathcal{M}(\mathcal{N}): \mathcal{M}\in \mathcal{F}\right\}.
$$
For an integer $n\ge 1$ we define
$$
\mathcal D_n(\mathcal{F})= \sup\{c_\mathcal{F}(\mathcal{N}):\ \mathcal{N}\ \text{is a metric space},\
|\mathcal{N}|\le n\}.
$$
Observe that if, for example, $\mathcal{F}$ consists of all the subsets of
Hilbert space (or $L_1$), then Bourgain's embedding
theorem~\cite{Bou85} implies that $\mathcal D_n(\mathcal{F})=O(\log n)$.
For $K>0$ we define the $K$-cotype (with exponent $2$) of a family
of metric spaces $\mathcal{F}$ as
$$
q_\mathcal{F}^{(2)}(K)=\sup_{\mathcal{M}\in \mathcal{F}} \inf \left\{q\in (0,\infty]:\
\Gamma_q^{(2)}(\mathcal{M})\le K\right\}.
$$
Finally we let
$$
q^{(2)}_\mathcal{F}=\inf_{\infty>K>0} q_\mathcal{F}^{(2)}(K).
$$
The following theorem settles positively the problem stated above:
\begin{theorem}\label{thm:dicho}
Let $\mathcal{F}$ be a family of metric spaces. Then the following
conditions are equivalent\/{\rm :}\/
\begin{enumerate} \itemsep 0mm
\item
There exists a finite metric space $\mathcal{M}$ for which $c_{\mathcal{F}}(\mathcal{M})>1$.
\item $q_\mathcal{F}^{(2)}<\infty$. \item There exists $0<\alpha<\infty$
such that $\mathcal D_n(\mathcal{F})=\Omega\left((\log n)^\alpha\right)$.
\end{enumerate}
\end{theorem}
\subsubsection*{ 3) \textsl{A quantitative version of Matou\v{s}ek's {\rm BD} Ramsey theorem.}}
following result, which he calls the Bounded Distortion (BD)
Ramsey theorem. We refer to~\cite{Mat92} for motivation and
background on these types \pagebreak of results.
\vskip8pt{{\sc Theorem 1.7} {\rm (Matou\v{s}ek's BD Ramsey theorem).}}
{\it Let $ X$ be a finite metric space and $\varepsilon>0${\rm ,} $\gamma>1$. Then
there exists a metric space $ Y=Y(X,\varepsilon,\gamma)${\rm ,} such that for
every metric space} $Z${\rm ,}
$$
c_Z(Y)<\gamma\implies c_Z(X)<1+\varepsilon.
$$
\vskip6pt
We obtain a new proof of Theorem~1.7, which is
quantitative and concrete:
\vskip6pt {{\sc Theorem 1.8} {\rm (Quantitative version of Matou\v{s}ek's BD Ramsey
theorem).}} {\it There exists a universal constant $C$
with the following properties. Let $X$ be an $n$-point metric
space and $\varepsilon\in (0,1)${\rm ,} $\gamma>1$. Then for every integer $N\ge
(C\gamma)^{2^{5A}}${\rm ,} where
$$
A=\max\left\{\frac{4\diam(X)}{\varepsilon\cdot \min_{x\neq y}
d_X(x,y)},n\right\},
$$
if a metric space $Z$ satisfies $c_Z(X)>1+\varepsilon$ then{\rm ,}
$c_Z\left(\left[N^5\right]_\infty^N\right)>\gamma$.}
\vskip6pt
\setcounter{theorem}{8}
We note that Matou\v{s}ek's argument in~\cite{Mat92} uses Ramsey
theory, and is nonconstructive (at best it can yield tower-type
bounds on the size of $Y$, which are much worse than what the
cotype-based approach gives).
\subsubsection*{{\rm 4)} \textsl{Uniform embeddings and Smirnov\/{\rm '}\/s problem}.} Let
$(\mathcal{M},d_\mathcal{M})$ and $(\mathcal{N},d_\mathcal{N})$ be metric spaces. A mapping $f:\mathcal{M}\to
\mathcal{N}$ is called a {\em uniform embedding} if $f$ is injective, and
both $f$ and $f^{-1}$ are uniformly continuous. There is a large
body of work on the uniform classification of metric spaces --- we
refer to the survey article~\cite{Lin98}, the book~\cite{BL00},
and the references therein for background on this topic. In spite
of this, several fundamental questions remain open. For example,
it was not known for which values of $0<p, q<\infty$, $L_p$ embeds
uniformly into $L_q$. As we will presently see, our results yield
a complete characterization of these values of $p,q$.
In the late 1950's Smirnov asked whether every separable metric
space embeds uniformly into $L_2$ (see~\cite{Gorin59}). Smirnov's
problem was settled negatively by Enflo in~\cite{Enflo69-smirnov}.
Following Enflo, we shall say that a metric space $\mathcal{M}$ is a {\em
universal uniform embedding space} if every separable metric space
embeds uniformly into $\mathcal{M}$. Since every separable metric space is
isometric to a subset of $C[0,1]$, this is equivalent to asking
whether $C[0,1]$ is uniformly homeomorphic to a subset of $\mathcal{M}$
(the space $C[0,1]$ can be replaced here by $c_0$ due to Aharoni's
theorem~\cite{Aha74}). Enflo proved that $c_0$ does not uniformly
embed into Hilbert space. In~\cite{AMM85}, Aharoni, Maurey and
Mityagin systematically studied metric spaces which are uniformly
homeomorphic to a subset of Hilbert space, and obtained an elegant
characterization of Banach spaces which are uniformly homeomorphic
to a subset of $L_2$. In particular, the results of~\cite{AMM85}
imply that for $p>2$, $L_p$ is not uniformly homeomorphic to a
subset of $L_2$.
Here we prove that in the class of Banach spaces with nontrivial
type, if $Y$ embeds uniformly into $X$, then $Y$ inherits the
cotype of $X$. More precisely:
\begin{theorem}\label{thm:uniform} Let $X$ be a
Banach space with nontrivial type. Assume that $Y$ is a Banach
space which uniformly embeds into $X$. Then $q_Y\le q_X$.
\end{theorem}
As a corollary, we complete the characterization of the values of
$0<p$,\break $q<\infty$ for which $L_p$ embeds uniformly into $L_q$:
\begin{theorem}\label{thm:uniformL_p}
For $p,q> 0${\rm ,} $L_p$ embeds uniformly into $L_q$ if and only if
$p\le q$ or $q\le p\le 2$.
\end{theorem}
We believe that the assumption that $X$ has nontrivial type in
Theorem~\ref{thm:uniform} can be removed --- in
Section~\ref{section:problems} we present a concrete problem which
would imply this fact. If true, this would imply that cotype is
preserved under uniform embeddings of Banach spaces. In
particular, it would follow that a universal uniform embedding
space cannot have nontrivial cotype, and thus by the
Maurey-Pisier theorem~\cite{MP76} it must contain
$\ell_\infty^n$'s with distortion uniformly bounded in $n$.
\subsubsection*{{\rm 5)} \textsl{Coarse embeddings}.} Let $(\mathcal{M},d_\mathcal{M})$ and
$(\mathcal{N},d_\mathcal{N})$ be metric spaces. A mapping $f:\mathcal{M}\to \mathcal{N}$ is called a
{\em coarse embedding} if there exists two nondecreasing
functions $\alpha,\beta:[0,\infty)\to[0,\infty)$ such that
$\lim_{t\to\infty} \alpha(t)=\infty$, and for every $x,y\in \mathcal{M}$,
$$
\alpha(d_\mathcal{M}(x,y))\le d_\mathcal{N}(f(x),f(y))\le \beta(d_\mathcal{M}(x,y)).
$$
This (seemingly weak) notion of embedding was introduced by Gromov
(see \cite{Gro99}), and has several important geometric
applications. In particular, Yu~\cite{Yu00} obtained a striking
connection between the Novikov and Baum-Connes conjectures and
coarse embeddings into Hilbert spaces. In~\cite{KY04} Kasparov and
Yu generalized this to coarse embeddings into arbitrary uniformly
convex Banach spaces. It was unclear, however, whether this is
indeed a strict generalization, i.e. whether or not the existence
of a coarse embedding into a uniformly convex Banach space implies
the existence of a coarse embedding into a Hilbert space. This was
resolved by Johnson and Randrianarivony in~\cite{JR04}, who proved
that for $p>2$, $L_p$ does not coarsely embed into $L_2$.
In~\cite{Ran04}, Randrianarivony proceeded to obtain a
characterization of Banach spaces which embed coarsely into $L_2$,
in the spirit of the result of Aharoni, Maurey and
Mityagin~\cite{AMM85}. There are very few known methods of proving
coarse nonembeddability results. Apart from the
papers~\cite{JR04}, \cite{Ran04} quoted above, we refer
to~\cite{Gro03}, \cite{DGLY02}, \cite{Oza04} for results of this type. Here we use
metric cotype to prove the following coarse variants of
Theorem~\ref{thm:uniform} and Theorem~\ref{thm:uniformL_p}, which
generalize, in particular, the theorem of Johnson and
Randrianarivony.
\begin{theorem}\label{thm:coarse}
Let $X$ be a Banach space with nontrivial type. Assume that $Y$
is a Banach space which coarsely embeds into $X$. Then $q_Y\le
q_X$. In particular{\rm ,} for $p,q> 0${\rm ,} $L_p$ embeds coarsely into
$L_q$ if and only if $p\le q$ or $q\le p\le 2$.
\end{theorem}
\subsubsection*{\textrm{6)} \textsl{Bi-Lipschitz embeddings of the integer lattice.}}
Bi-Lipschitz embeddings of the integer lattice $[m]_p^n$ were
investigated by Bourgain in~\cite{Bourgain87} and by the present
authors in~\cite{MN05-proc} where it was shown that
if $2\le p<\infty$ and $Y$ is a Banach space which admits an
equivalent norm whose modulus of uniform convexity has power type
$2$, then
\begin{equation}\label{eq:phase}
c_Y\left([m]_p^n\right)=\Theta\left(\min\left\{n^{\frac12-\frac{1}{p}},m^{1-\frac{2}{p}}\right\}\right).
\end{equation}
The implied constants in the above asymptotic equivalence
depend on $p$ and on the $2$-convexity constant of $Y$. Moreover,
it was shown in~\cite{MN05-proc} that
$$
c_Y([m]_\infty^n)=\Omega\left(\min\left\{\sqrt{\frac{n}{\log
n}},\frac{m}{\sqrt{\log m}}\right\}\right).
$$
It was conjectured in~\cite{MN05-proc} that the logarithmic terms
above are unnecessary. Using our results on metric cotype we
settle this conjecture positively, by proving the following
general theorem:
\begin{theorem}\label{thm:infty grid} Let $Y$ be a Banach space with nontrivial type which has
cotype $q$. Then
$$
c_Y([m]_\infty^n)=\Omega\left(\min\left\{n^{1/q},m\right\}\right).
$$
\end{theorem}
Similarly, our methods imply that~\eqref{eq:phase} holds true for
any Banach space $Y$ with nontrivial type and cotype $2$ (note
that these conditions are strictly weaker than being $2$-convex,
as shown e.g. in~\cite{LTII77}). Moreover, it is possible to
generalize the lower bound in~\eqref{eq:phase} to Banach spaces
with nontrivial type, and cotype $2\le q\le p$, in which case the
lower bound becomes
$\min\left\{n^{\frac{1}{q}-\frac{1}{p}},m^{1-\frac{q}{p}}\right\}$.
\subsubsection*{{\rm 7)} \textsl{Quadratic inequalities on the cut-cone.}} An
intriguing aspect of Theorem~\ref{thm:cotype} is that $L_1$ has
metric cotype $2$. Thus, we obtain a nontrivial inequality on
$L_1$ which involves distances {\em squared}. To the best of our
knowledge, all the known nonembeddability results for $L_1$ are
based on Poincar\'e type inequalities in which distances are
raised to the power $1$. Clearly, any such inequality reduces to
an inequality on the real line. Equivalently, by the cut-cone
representation of $L_1$ metrics (see~\cite{DL97}) it is enough to
prove any such inequality for {\em cut metrics}, which are
particularly simple. Theorem~\ref{thm:cotype} seems to be the
first truly ``infinite dimensional" metric inequality in $L_1$, in
the sense that its nonlinearity does not allow a straightforward
reduction to the one-dimensional case. We believe that
understanding such inequalities on $L_1$ deserves further
scrutiny, especially as they hint at certain nontrivial (and
nonlinear) interactions between cuts.
\section{Preliminaries and notation}
We start by setting notation and conventions. Consider the
standard $\ell_\infty$ Cayley graph on $\mathbb{Z}_m^n$, namely $x,y\in
\mathbb{Z}_m^n$ are joined by an edge if and only if they are distinct and
$x-y\in \{-1,0,1\}^n$. This induces a shortest-path metric on
$\mathbb{Z}_m^n$ which we denote by $d_{\mathbb{Z}_m^n}(\cdot,\cdot)$.
Equivalently, the metric space $(\mathbb{Z}_m^n,d_{\mathbb{Z}_m^n})$ is precisely
the quotient $(\mathbb{Z}^n,\|\cdot\|_\infty)/(m\mathbb{Z})^n$ (for background on
quotient metrics see~\cite{BH99}, \cite{Gro99}). The ball of radius $r$
around $x\in \mathbb{Z}_m^n$ will be denoted $B_{\mathbb{Z}_m^n}(x,r)$. We denote
by $\mu$ the normalized counting measure on $\mathbb{Z}_m^n$ (which is
clearly the Haar measure on this group). We also denote by
$\sigma$ the normalized counting measure on $\{-1,0,1\}^n$. In
what follows, whenever we average over uniformly chosen signs
$\varepsilon\in\{-1,1\}^n$ we use the probabilistic notation $\mathbb{E}_\varepsilon$ (in
this sense we break from the notation used in the introduction,
for the sake of clarity of the ensuing arguments).
In what follows all Banach spaces are assumed to be over the
complex numbers $\mathbb C$. All of our results hold for real
Banach spaces as well, by a straightforward complexification
argument.
Given a Banach space $X$ and $p,q\in [1,\infty)$ we denote by
$C_q^{(p)}(X)$ the infimum over all constants $C>0$ such that for
every integer $n\in \mathbb N$ and every $x_1,\ldots,x_n\in X$,
\begin{eqnarray}\label{eq:pass to p}
\Biggl(\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_j
x_j\Biggr\|_X^p\Biggr)^{1/p}\ge \frac{1}{C}\Biggl(\sum_{j=1}^n
\|x_j\|_X^q\Biggr)^{1/q}.
\end{eqnarray}
Thus, by our previous notation, $C_q^{(q)}(X)=C_q(X)$. Kahane's
inequality~\cite{kahane64} says that for $1\le p,q<\infty$ there
exists a constant $1\le A_{pq}<\infty$ such that for every Banach
space $X$, every integer $n\in \mathbb N$, and every
$x_1,\ldots,x_n\in X$,
\begin{eqnarray}\label{eq:kahane}
\Biggl(\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n
\varepsilon_j x_j\Biggr\|_X^p\Biggr)^{1/p}\le
A_{pq}\Biggl(\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_j
x_j\Biggr\|_X^q\Biggr)^{1/q}.
\end{eqnarray}
Here clearly $A_{pq}=1$ if $p\le q$, and for every $1\le
q<p<\infty$, $A_{pq}=O\left(\sqrt{p}\right)$ (see~\cite{Tal88}).
It follows in particular from~\eqref{eq:kahane} that if $X$ has
cotype $q$ then for every $p\in [1,\infty)$,
$C_q^{(p)}(X)=O_{p,q}(C_q(X))$, where the implied constant may
depend on $p$ and $q$.
Given $A\!\subseteq\!\{1,\ldots,n\}$,
we consider the Walsh functions \hbox{$W_A:\{-1,1\}^n \to \mathbb{C}$,} defined as
\[ W_A(\varepsilon_1,\ldots,\varepsilon_n)=\prod_{j\in A} \varepsilon_j .\]
Every $f:\{-1,1\}^n\to X$ can be written as
$$
f(\varepsilon_1,\ldots,\varepsilon_n)=\sum_{A\subseteq \{1,\ldots,n\}}\widehat
f(A)W_A(\varepsilon_1,\ldots,\varepsilon_n),
$$
where $\widehat f(A)\in X$ are given by
$$
\widehat f(A)=\mathbb{E}_\varepsilon \Bigl(f(\varepsilon)W_A(\varepsilon) \Bigr).
$$
The {\em Rademacher projection} of $f$ is defined by
$$
{\mathrm{\bf Rad}}(f)=\sum_{j=1}^n \widehat f(\{j\})W_{\{j\}}.
$$
The $K$-convexity constant of $X$, denoted $K(X)$, is the smallest
constant $K$ such that for every $n$ and every $f:\{-1,1\}^n\to
X$,
$$
\mathbb{E}_\varepsilon\|{\mathrm{\bf Rad}}(f)(\varepsilon)\|_X^2\le K^2 \mathbb{E}_\varepsilon \|f(\varepsilon)\|_X^2.
$$
In other words,
$$
K(X)=\sup_{n\in \mathbb N} \|{\mathrm{\bf Rad}}\|_{L_2(\{-1,1\}^n,X)\to
L_2(\{-1,1\}^n,X)}.
$$
$X$ is said to be $K$-convex if $K(X)<\infty$. More generally, for
$p\ge 1$ we define
$$
K_p(X)=\sup_{n\in \mathbb N} \|{\mathrm{\bf Rad}}\|_{L_p(\{-1,1\}^n,X)\to
L_p(\{-1,1\}^n,X)}.
$$
It is a well known consequence of Kahane's inequality and duality
that for every $p>1$,
$$
K_p(X)\le O\Biggl(\frac{p}{\sqrt{p-1}}\Biggr)\cdot K(X).
$$
The following deep theorem was proved by Pisier in~\cite{Pis82}:
\begin{theorem}[Pisier's $K$-convexity theorem~\cite{Pis82}]
Let $X$ be a Banach space. Then
$$
q_X>1\iff K(X)<\infty.
$$
\end{theorem}
Next, we recall some facts concerning Fourier analysis on the
group $\mathbb{Z}_m^n$. Given $k=(k_1,\ldots,k_n)\in \mathbb{Z}_m^n$ we consider
the Walsh function $W_k:\mathbb{Z}_m^n\to \mathbb C$:
$$
W_{k}(x)=\exp\Biggl(\frac{2\pi i}{m}\sum_{j=1}^n k_jx_j\Biggr).
$$
Then, for any Banach space $X$, any $f:\mathbb{Z}_m^n\to X$ can be
decomposed as follows:
$$
f(x)=\sum_{k\in \mathbb Z_m^n} W_k(x)\widehat f(k),
$$
where
$$
\widehat f(k)=\int_{\mathbb{Z}_m^n} f(y)\overline{W_k(y)}d\mu(y)\in X.
$$
If $X$ is a Hilbert space then Parseval's identity becomes:
$$
\int_{\mathbb{Z}_m^n} \|f(x)\|_X^2d\mu(x)=\sum_{k\in \mathbb{Z}_m^n}
\left\|\widehat f(k)\right\|_X^2.
$$
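These formulas rest on the standard orthogonality of the characters $\{W_k\}_{k\in\mathbb{Z}_m^n}$ in $L_2(\mathbb{Z}_m^n,\mu)$, which we record for the reader's convenience:
$$
\int_{\mathbb{Z}_m^n} W_k(x)\overline{W_\ell(x)}\,d\mu(x)=\prod_{j=1}^n\Biggl(\frac{1}{m}\sum_{s=0}^{m-1}e^{\frac{2\pi i}{m}(k_j-\ell_j)s}\Biggr)=
\begin{cases}
1 & k=\ell,\\
0 & k\neq \ell.
\end{cases}
$$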
\subsection{Definitions and basic facts related to metric cotype}
\begin{definition}
Given $1\le p\le q$, an integer $n$ and an even integer $m$, let
$\Gamma_q^{(p)}(\mathcal{M};n,m)$ be the infimum over all $\Gamma>0$ such
that for every $f:\mathbb{Z}_m^n\to \mathcal{M}$, \begin{multline}\label{eq:two
parameter} \sum_{j=1}^n \int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f\left(x+\frac{m}{2}e_j\right),f(x)\right)^pd\mu(x)\\
\le
\Gamma^pm^pn^{1-\frac{p}{q}}\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}d_\mathcal{M}\left(f\left(x+\varepsilon\right),f(x)\right)^pd\mu(x)d\sigma(\varepsilon).
\end{multline}
When $p=q$ we write $\Gamma_q(\mathcal{M}; n,m):= \Gamma_q^{(q)} (\mathcal{M}; n,m)$.
With this notation,
$$
\Gamma_q^{(p)}(\mathcal{M})=\sup_{n\in \mathbb N}\inf_{m\in 2\mathbb
N}\Gamma_q^{(p)}(\mathcal{M};n,m).
$$
We also denote by $m_q^{(p)}(\mathcal{M};n,\Gamma)$ the smallest even
integer $m$ for which~\eqref{eq:two parameter} holds. As usual,
when $p=q$ we write $m_q(\mathcal{M};n,\Gamma):= m_q^{(q)}(\mathcal{M};n,\Gamma)$.
\end{definition}
The following lemma shows that for nontrivial metric spaces $\mathcal{M}$,\break
$m_q(\mathcal{M};n,\Gamma)$ must be large.
\begin{lemma}\label{lem:lower m}
Let $(\mathcal{M},d_\mathcal{M})$ be a metric space which contains at least two
points. Then for every integer $n${\rm ,} every $\Gamma>0${\rm ,} and every
$p,q>0${\rm ,}
$$
m_q^{(p)}(\mathcal{M};n,\Gamma)\ge \frac{n^{1/q}}{\Gamma}.
$$
\end{lemma}
\begin{proof} Fix $u,v\in \mathcal{M}$, $u \ne v$, and without loss of generality
normalize the metric so that $d_\mathcal{M}(u,v)=1$. Denote
$m=m_q^{(p)}(\mathcal{M};n,\Gamma)$. Let $f:\mathbb{Z}_m^n\to \mathcal{M}$ be the random
mapping such that for every $x\in \mathbb{Z}_m^n$,
$\Pr[f(x)=u]=\Pr[f(x)=v]=\frac12$, and $\{f(x)\}_{x\in \mathbb{Z}_m^n}$
are independent random variables. Then for every distinct $x,y\in
\mathbb{Z}_m^n$, $\mathbb{E} \left[d_\mathcal{M}(f(x),f(y))^p\right]=\frac12$. Thus, the
required result follows by applying~\eqref{eq:two parameter} to $f$ and taking expectation.
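In more detail: since $m$ is even, $x+\frac{m}{2}e_j\neq x$ for every $x\in \mathbb{Z}_m^n$ and every $j$, and $x+\varepsilon\neq x$ whenever $\varepsilon\neq 0$, so each of the expectations above equals $\frac12$ (or $0$ when $\varepsilon=0$). Taking expectations in~\eqref{eq:two parameter} therefore gives
$$
\frac{n}{2}\le \Gamma^pm^pn^{1-\frac{p}{q}}\cdot \frac12,
$$
that is, $m\ge n^{1/q}/\Gamma$.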
\end{proof}
\begin{lemma}\label{lem:multip} For every two integers $n,k${\rm ,} and every even integer
$m${\rm ,}
$$
\Gamma_q^{(p)}(\mathcal{M};n,km)\le \Gamma_q^{(p)}(\mathcal{M};n,m).
$$
\end{lemma}
\begin{proof} Fix $f:\mathbb{Z}_{km}^n\to \mathcal{M}$. For every $y\in \mathbb{Z}_k^n$
define $f_y: \mathbb{Z}_m^n\to \mathcal{M}$ by
$$
f_y(x)=f(kx+y).
$$
Fix $\Gamma> \Gamma_q^{(p)}(\mathcal{M};n,m)$. Applying the definition of
$\Gamma_q^{(p)}(\mathcal{M};n,m)$ to $f_y$, we get that
\begin{multline*}
\sum_{j=1}^n \int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f\left(kx+\frac{km}{2}e_j+y\right),f(kx+y)\right)^pd\mu_{\mathbb{Z}_m^n}(x)\\\le
\Gamma^pm^pn^{1-\frac{p}{q}}\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}d_\mathcal{M}\left(f\left(kx+k\varepsilon+y\right),f(kx+y)\right)^pd\mu_{\mathbb{Z}_m^n}(x)d\sigma(\varepsilon).
\end{multline*}
Integrating this inequality with respect to $y\in \mathbb{Z}_k^n$ we see
that
\begin{small}
\begin{eqnarray*}
&& \sum_{j=1}^n \int_{\mathbb{Z}_{km}^n}
d_\mathcal{M}\left(f\left(z+\frac{km}{2}e_j\right),f(z)\right)^pd\mu_{\mathbb{Z}_{km}^n}(z)\\&=&\sum_{j=1}^n
\int_{\mathbb{Z}_k^n}\int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f\left(kx+\frac{km}{2}e_j+y\right),f(kx+y)\right)^pd\mu_{\mathbb{Z}_m^n}(x)d\mu_{\mathbb{Z}_k^n}(y)\\
&\le&
\Gamma^pm^pn^{1-\frac{p}{q}}\hskip-4pt\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_k^n}\int_{\mathbb{Z}_m^n}d_\mathcal{M}\left(f\left(kx+k\varepsilon+y\right),f(kx+y)\right)^pd\mu_{\mathbb{Z}_m^n}(x)d\mu_{\mathbb{Z}_k^n}(y)d\sigma(\varepsilon)\\
&=&
\Gamma^pm^pn^{1-\frac{p}{q}}\hskip-4pt\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_{km}^n}d_\mathcal{M}\left(f\left(z+k\varepsilon\right),f(z)\right)^pd\mu_{\mathbb{Z}_{km}^n}(z)d\sigma(\varepsilon)\\
&\le&
\Gamma^pm^pn^{1-\frac{p}{q}}\hskip-4pt\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_{km}^n}k^{p-1}\sum_{s=1}^k
d_\mathcal{M}\left(f\left(z+s\varepsilon\right),f(z+(s-1)\varepsilon)\right)^pd\mu_{\mathbb{Z}_{km}^n}(z)d\sigma(\varepsilon)\\
&=&\Gamma^p(km)^pn^{1-\frac{p}{q}}\hskip-4pt\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_{km}^n}
d_\mathcal{M}\left(f\left(z+\varepsilon\right),f(z)\right)^pd\mu_{\mathbb{Z}_{km}^n}(z)d\sigma(\varepsilon).
\end{eqnarray*}\end{small}
\end{proof}
\begin{lemma}\label{lem:monotone} Let $k,n$ be integers such that $k\le n${\rm ,} and let
$m$ be an even integer. Then
$$
\Gamma_q^{(p)}(\mathcal{M};k,m)\le
\left(\frac{n}{k}\right)^{1-\frac{p}{q}}\cdot\Gamma_q^{(p)}(\mathcal{M};n,m).
$$
\end{lemma}
\begin{proof}
Given an
$f:\mathbb{Z}_m^k\to \mathcal{M}$, we define an $\mathcal{M}$-valued function on
$\mathbb{Z}_m^n\cong \mathbb{Z}_m^k\times \mathbb{Z}_m^{n-k}$ by $g(x,y)=f(x)$. Applying
the definition $\Gamma_q^{(p)}(\mathcal{M};n,m)$ to $g$ yields the required
inequality.
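In more detail: for $j>k$ the function $g$ does not depend on the $j$-th coordinate, so the corresponding terms on the left-hand side of~\eqref{eq:two parameter} vanish, while $d_\mathcal{M}(g(\cdot+\varepsilon),g(\cdot))$ depends only on the first $k$ coordinates of $\varepsilon$, which are themselves distributed according to $\sigma$ on $\{-1,0,1\}^k$. Hence, for every $\Gamma>\Gamma_q^{(p)}(\mathcal{M};n,m)$,
$$
\sum_{j=1}^k \int_{\mathbb{Z}_m^k}d_\mathcal{M}\left(f\left(x+\frac m2 e_j\right),f(x)\right)^pd\mu(x)\le \Gamma^pm^pn^{1-\frac pq}\int_{\{-1,0,1\}^k}\int_{\mathbb{Z}_m^k}d_\mathcal{M}(f(x+\varepsilon),f(x))^pd\mu(x)d\sigma(\varepsilon),
$$
and $n^{1-\frac pq}=\left(\frac nk\right)^{1-\frac pq}k^{1-\frac pq}$.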
\end{proof}
We end this section by recording some general inequalities which
will be used in the ensuing arguments. In what follows $(\mathcal{M},d_\mathcal{M})$
is an arbitrary metric space.
\begin{lemma}\label{lem:pass to diagonals}
For every $f:\mathbb{Z}_m^n\to \mathcal{M}${\rm ,}
\begin{multline*}
\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f(x+e_j),f(x)\right)^pd\mu(x)\\
\le 3\cdot2^{p-1}n\cdot
\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f(x+\varepsilon),f(x)\right)^pd\mu(x)d\sigma(\varepsilon).
\end{multline*}
\end{lemma}
\begin{proof} For every $x\in \mathbb{Z}_m^n$ and $\varepsilon\in \{-1,0,1\}^n$,
$$
d_\mathcal{M}(f(x+e_j),f(x))^p\le 2^{p-1}
d_\mathcal{M}(f(x+e_j),f(x+\varepsilon))^p+2^{p-1}d_\mathcal{M}(f(x+\varepsilon),f(x))^p.
$$
Thus
\begin{align*}
\frac23\int_{\mathbb{Z}_m^n} &
d_\mathcal{M}\left(f(x+e_j),f(x)\right)^pd\mu(x)\\
&= \sigma(\{\varepsilon\in
\{-1,0,1\}^n:\ \varepsilon_j\neq -1\})
\cdot \int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f(x+e_j),f(x)\right)^pd\mu(x)\\
&\le2^{p-1}\int_{\{\varepsilon\in
\{- 1,0,1\}^n:\ \varepsilon_j\neq
-1\}}\int_{\mathbb{Z}_m^n}\Big(d_\mathcal{M}\left(f(x+e_j),f(x+\varepsilon)\right)^p\\
&\qquad +d_\mathcal{M}(f(x+\varepsilon),f(x))^p\Big)d\mu(x)d\sigma(\varepsilon)\\
& =2^{p-1}\int_{\{\varepsilon\in
\{- 1,0,1\}^n:\ \varepsilon_j\neq
1\}}\int_{\mathbb{Z}_m^n}d_\mathcal{M}(f(y+\varepsilon),f(y))^pd\mu(y)d\sigma(\varepsilon)\\
&\qquad +2^{p-1}\int_{\{\varepsilon\in
\{- 1,0,1\}^n:\ \varepsilon_j\neq
-1\}}\int_{\mathbb{Z}_m^n}d_\mathcal{M}(f(x+\varepsilon),f(x))^pd\mu(x)d\sigma(\varepsilon)\\
& \le
2^p\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}d_\mathcal{M}(f(x+\varepsilon),f(x))^pd\mu(x)d\sigma(\varepsilon).
\end{align*}
Summing over $j=1,\ldots,n$ yields the required result.
\end{proof}
\begin{lemma}\label{lem:with zeros} Let $(\mathcal{M},d_\mathcal{M})$ be a metric
space. Assume that for an integer $n$ and an even integer $m$, we have that for every $\ell\le n$, and every
$f:\mathbb{Z}_m^\ell \to \mathcal{M}$,
\begin{multline*}
\sum_{j=1}^\ell\int_{\mathbb{Z}_m^\ell}
d_\mathcal{M}\left(f\left(x+\frac{m}{2}e_j\right),f\left(x\right)\right)^pd\mu(x)\\ \le
C^pm^pn^{1-\frac{p}{q}}\Bigg(\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^\ell}d_\mathcal{M}\left(f(x+\varepsilon),f(x)\right)^pd\mu(x)\\
+\frac{1}{\ell}\sum_{j=1}^\ell\int_{\mathbb{Z}_m^\ell}
d_\mathcal{M}\left(f(x+e_j),f(x)\right)^pd\mu(x)\Bigg).
\end{multline*}
Then
$$
\Gamma_q^{(p)}(\mathcal{M};n,m)\le 5C.
$$
\end{lemma}
\begin{proof} Fix $f:\mathbb{Z}_m^n\to \mathcal{M}$ and $\emptyset \neq A\subseteq
\{1,\ldots,n\}$. Our assumption implies that\pagebreak
\begin{multline*}
\sum_{j\in A}\int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f\left(x+\frac{m}{2}e_j\right),f\left(x\right)\right)^pd\mu(x)\\[2pt]
\le
C^pm^pn^{1-\frac{p}{q}}\Biggl(\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}d_\mathcal{M}\Biggl(f\Biggl(x+\sum_{j\in
A}\varepsilon_je_j\Biggr),f(x)\Biggr)^pd\mu(x)\\[2pt]
+\frac{1}{|A|}\sum_{j\in
A}\int_{\mathbb{Z}_m^n} d_\mathcal{M}\left(f(x+e_j),f(x)\right)^pd\mu(x)\Biggr).
\end{multline*}
Multiplying this inequality by $\frac{2^{|A|}}{3^n}$, and summing
over all $\emptyset \ne A\subseteq \{1,\ldots,n\}$, we see that
\begin{eqnarray}&&\label{eq:manor's catch}\\[6pt]
&& \frac23 \sum_{j=1}^n\int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f\left(x+\frac{m}{2}e_j\right),f\left(x\right)\right)^pd\mu(x)\nonumber\\[6pt]
&&\quad =\sum_{\emptyset\ne
A\subseteq \{1,\ldots,n\}}\frac{2^{|A|}}{3^n}\sum_{j\in
A}\int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f\left(x+\frac{m}{2}e_j\right),f\left(x\right)\right)^pd\mu(x)
\nonumber\\[6pt]
&&\quad \le C^pm^pn^{1-\frac{p}{q}}\Biggl(\sum_{\emptyset
\ne A\subseteq
\{1,\ldots,n\}}\frac{2^{|A|}}{3^n}\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}d_\mathcal{M}\Biggl(f\Biggl(x+\sum_{j\in
A}\varepsilon_je_j\Biggr),f(x)\Biggr)^pd\mu(x)\nonumber \end{eqnarray}
\begin{eqnarray}&&\qquad +
\sum_{\emptyset \ne A\subseteq
\{1,\ldots,n\}}\frac{2^{|A|}}{|A|3^n}\sum_{j\in A}\int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f(x+e_j),f(x)\right)^pd\mu(x)\Biggr)\nonumber\\[6pt] \label{eq:use
triangle}&&\quad \le
C^pm^pn^{1-\frac{p}{q}}\Bigg(\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f\left(x+\delta\right),f(x)\right)^pd\mu(x)d\sigma(\delta)\\[6pt]
&&\nonumber\quad\phantom{\le
C^pm^pn^{1-\frac{p}{q}}\Bigg(}
+\frac{1}{n}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(f(x+e_j),f(x)\right)^pd\mu(x)\Bigg)\\[6pt] &&\le
C^pm^pn^{1-\frac{p}{q}}\left(3^p+1\right)
\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}d_\mathcal{M}\left(f\left(x+\delta\right),f(x)\right)^pd\mu(x)d\sigma(\delta),\nonumber
\end{eqnarray}
where we used the fact that in~\eqref{eq:manor's catch}, the
coefficient of $d_\mathcal{M}\left(f(x+e_j),f(x)\right)^p$ equals
$\sum_{k=1}^{n}\frac{2^k}{k3^n}\binom{n-1}{k-1}\le \frac{1}{n}$,
and in~\eqref{eq:use triangle} we used Lemma~\ref{lem:pass to
diagonals}.
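For the reader's convenience we verify the coefficient estimate: since $\frac1k\binom{n-1}{k-1}=\frac1n\binom{n}{k}$,
$$
\sum_{k=1}^{n}\frac{2^k}{k3^n}\binom{n-1}{k-1}=\frac1n\sum_{k=1}^{n}\binom{n}{k}\frac{2^k}{3^n}=\frac{3^n-1}{n\,3^n}\le\frac1n.
$$
Similarly, the first summand was bounded using the fact that if $A$ is chosen with probability $\frac{2^{|A|}}{3^n}$ and then $\varepsilon\in\{-1,1\}^A$ uniformly, every nonzero $\delta\in\{-1,0,1\}^n$ arises with weight exactly $3^{-n}=\sigma(\{\delta\})$, so that the resulting sum is at most $\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}d_\mathcal{M}\left(f(x+\delta),f(x)\right)^pd\mu(x)d\sigma(\delta)$.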
\end{proof}
\section{Warmup: the case of Hilbert space}
The fact that Hilbert spaces have metric cotype $2$ is
particularly simple to prove. This is contained in the following
proposition.
\begin{prop}\label{prop:hilbert}
Let $H$ be a Hilbert space. Then for every integer $n${\rm ,} and every
integer $m\ge \frac23\pi\sqrt{n}$ which is divisible by $4${\rm ,}
$$
\Gamma_2(H;n,m)\le \frac{\sqrt{6}}{\pi}.
$$
\end{prop}
\begin{proof} Fix $f:\mathbb{Z}_m^n\to H$ and decompose it into Fourier coefficients:
$$
f(x)=\sum_{k\in \mathbb Z_m^n} W_k(x)\widehat f(k).
$$
For every $j=1,2,\ldots,n$ we have that
$$
f\left(x+\frac{m}{2}e_j\right)-f(x)=\sum_{k\in \mathbb Z_m^n}
W_k(x)\left(e^{\pi ik_j}-1\right)\widehat f(k).
$$
Thus
\begin{eqnarray*}
&&\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\frac{m}{2}e_j\right)-f(x)\right\|_H^2d\mu(x) \\&=& \sum_{k\in
\mathbb{Z}_m^n}\Biggl(\sum_{j=1}^n\left|e^{\pi ik_j}-1
\right|^2\Biggr)\left\|\widehat f(k)\right\|_H^2 = 4\sum_{k\in
\mathbb{Z}_m^n}|\{j: k_j\equiv 1\hbox{ mod}\, 2\}|\cdot\left\|\widehat
f(k)\right\|_H^2.
\end{eqnarray*}
Additionally, for every $\varepsilon\in \{-1,0,1\}^n$,
$$
f(x+\varepsilon)-f(x)=\sum_{k\in \mathbb Z_m^n} W_k(x)(W_k(\varepsilon)-1)\widehat
f(k).
$$
Thus
\begin{multline*}
\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_H^2d\mu(x)d\sigma(\varepsilon)\\
=\sum_{k\in
\mathbb
Z_m^n}\left(\int_{\{-1,0,1\}^n}\left|W_k(\varepsilon)-1\right|^2d\sigma(\varepsilon)\right)
\left\|\widehat f(k)\right\|^2_H.
\end{multline*}
Observe that
\begin{eqnarray*}
\int_{\{-1,0,1\}^n}\left|W_k(\varepsilon)-1\right|^2d\sigma(\varepsilon)&=&\int_{\{-1,0,1\}^n}\Biggl|\exp\Biggl(\frac{2\pi
i}{m}\sum_{j=1}^n
k_j\varepsilon_j\Biggr)-1\Biggr|^2d\sigma(\varepsilon)\\
&=&2-2\, \mathrm{Re}\prod_{j=1}^n\int_{\{-1,0,1\}^n}\exp\left(\frac{2\pi
i }{m}k_j\varepsilon_j\right)d\sigma(\varepsilon)\\
&=& 2-2\prod_{j=1}^n\frac{1+2\cos\left(\frac{2\pi
}{m}k_j\right)}{3}\\
&\ge&
2-2\prod_{j:\ k_j\equiv 1\mod 2}\frac{1+2\left|\cos\left(\frac{2\pi
}{m}k_j\right)\right|}{3}.
\end{eqnarray*}
Note that if $m$ is divisible by $4$ and $\ell\in
\{0,\ldots,m-1\}$ is an odd integer, then
$$
\left|\cos\left(\frac{2\pi\ell}{m}\right)\right|\le
\left|\cos\left(\frac{2\pi}{m}\right)\right|\le
1-\frac{\pi^2}{m^2}.
$$
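To verify this, note that since $m$ is divisible by $4$, every multiple of $\frac{m}{2}$ is even, so an odd $\ell$ lies at distance at least $1$ from each multiple of $\frac m2$; consequently $\frac{2\pi\ell}{m}$ lies at distance at least $\frac{2\pi}{m}$ (and at most $\frac{\pi}{2}$) from the nearest multiple of $\pi$, which gives the first inequality. The second inequality follows from
$$
\cos\theta\le 1-\frac{\theta^2}{2}+\frac{\theta^4}{24}\le 1-\frac{\theta^2}{4}\qquad \left(0\le\theta\le\frac{\pi}{2}\right),
$$
applied with $\theta=\frac{2\pi}{m}$.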
Hence
\begin{eqnarray*}
\int_{\{-1,0,1\}^n} \left|W_k(\varepsilon)-1\right|^2 d \sigma(\varepsilon) &\ge&
2\Biggl(1-\Biggl(1-\frac{2\pi^2}{3m^2}\Biggr)^{|\{j:\ k_j\equiv
1\mod 2\}|}\Biggr)\\&\ge& 2\Biggl(1-e^{-\frac{2|\{j:\ k_j\equiv
1\mod 2\}|\pi^2}{3m^2}}\Biggr)\\&\ge& |\{j:\ k_j\equiv 1\mod
2\}|\cdot \frac{2\pi^2}{3m^2},
\end{eqnarray*}
provided that $m\ge \frac23\pi \sqrt{n}$ (indeed, $1-e^{-t}\ge t/2$ for $0\le t\le \frac32$, and here $t=\frac{2\pi^2}{3m^2}|\{j:\ k_j\equiv 1\mod 2\}|\le \frac{2\pi^2 n}{3m^2}\le \frac32$). Comparing the two Parseval expansions above, we conclude that
$$
\sum_{j=1}^n\int_{\mathbb{Z}_m^n}\left\|f\left(x+\frac m2 e_j\right)-f(x)\right\|_H^2d\mu(x)\le \frac{6m^2}{\pi^2}\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_H^2d\mu(x)d\sigma(\varepsilon),
$$
which is the required estimate with $\Gamma=\frac{\sqrt6}{\pi}$.
\end{proof}
\section{$K$-convex spaces}
In this section we prove the ``hard direction'' of
Theorem~\ref{thm:cotype} and Theorem~\ref{thm:weak cotype} when
$X$ is a $K$-convex Banach space; namely, we show that in this case
Rademacher cotype $q$ implies metric cotype $q$. There are two
reasons why we single out this case before passing to the proofs
of these theorems in full generality. First of all, the proof for
$K$-convex spaces is different and simpler than the general case.
More importantly, in the case of $K$-convex spaces we are able to
obtain optimal bounds on the value of $m$ in
Definition~\ref{def:cotype} and Definition~1.3.
Namely, we show that if $X$ is a $K$-convex Banach space of cotype
$q$, then for every $1\le p\le q$,
$m_q^{(p)}(X;n,\Gamma)=O(n^{1/q})$, for some $\Gamma=\Gamma(X)$.
This is best possible due to Lemma~\ref{lem:lower m}. In the case
of general Banach spaces we obtain worse bounds, and this is why
we have the restriction that $X$ is $K$-convex in
Theorem~\ref{thm:uniform} and Theorem~\ref{thm:coarse}. This issue
is taken up again in Section~\ref{section:problems}.
\begin{theorem}\label{thm:K} Let $X$ be a $K$-convex Banach space with cotype $q$. Then for
every integer $n$ and every integer $m$ which is divisible by $4${\rm ,}
$$
m\ge \frac{2n^{1/q}}{C_q^{(p)}(X)K_p(X)} \implies
\Gamma_q^{(p)}(X;n,m)\le 15C_q^{(p)}(X)K_p(X).
$$
\end{theorem}
\begin{proof} For $f:\mathbb{Z}_m^n\to X$ we define the following operators:
\begin{eqnarray*}
\widetilde \partial_j f(x)&=&f(x+e_j)-f(x-e_j),
\\
\mathcal E_j f(x)&=&\mathbb{E}_\varepsilon f\Biggl(x+\sum_{\ell\neq j} \varepsilon_\ell
e_\ell\Biggr),
\end{eqnarray*}
and for $\varepsilon\in \{-1,0,1\}^n$,
$$
\partial_\varepsilon f(x)=f(x+\varepsilon)-f(x).
$$
These operators operate diagonally on the Walsh basis
$\{W_k\}_{k\in \mathbb{Z}_m^n}$ as follows:
\begin{eqnarray}\label{eq:partial tilde}
\widetilde \partial_j W_k&= &\left(W_k(e_j)-W_k(-e_j)\right)W_k=
2i\sin\left(\frac{2\pi k_j}{m}\right)\cdot W_k,
\\
\label{eq:Ej}
\mathcal E_j W_k &=&\Biggl(\mathbb{E}_\varepsilon \prod_{\ell\neq j}e^{\frac{2\pi i
\varepsilon_\ell k_\ell}{m}}\Biggr)W_k=\Biggl(\prod_{\ell\neq j}
\cos\left(\frac{2\pi k_\ell}{m}\right)\Biggr)W_k,
\end{eqnarray}
and for $\varepsilon\in \{-1,1\}^n$,
\begin{eqnarray}\label{eq:partial}
\partial_\varepsilon
W_k&=&\left(W_k(\varepsilon)-1\right)W_k\\\nonumber&=&\Biggl(\prod_{j=1}^n
e^{\frac{2\pi i \varepsilon_j k_j}{m}}-1\Biggr)W_k\\&=&\nonumber
\Biggl(\prod_{j=1}^n\Biggl(\cos\left(\frac{2\pi
\varepsilon_jk_j}{m}\right)+i\sin\left(\frac{2\pi
\varepsilon_jk_j}{m}\right)\Biggr)-1\Biggr)W_k\\&=&
\Biggl(\prod_{j=1}^n\left(\cos\left(\frac{2\pi
k_j}{m}\right)+i\varepsilon_j\sin\left(\frac{2\pi
k_j}{m}\right)\right)-1\Biggr)W_k.\nonumber
\end{eqnarray}
The last step was a crucial observation, using the fact that
$\varepsilon_j\in \{-1,1\}$. Thinking of $\partial_\varepsilon W_k$ as a function of
$\varepsilon\in \{-1,1\}^n$, equations~\eqref{eq:partial tilde},
\eqref{eq:Ej} and~\eqref{eq:partial} imply that
\begin{eqnarray*}
{\mathrm{\bf Rad}}(\partial_\varepsilon W_k)&=&i\Biggl(\sum_{j=1}^n
\varepsilon_j\sin\left(\frac{2\pi k_j}{m}\right)\cdot \prod_{\ell\neq j}
\cos\left(\frac{2\pi
k_\ell}{m}\right)\Biggr)W_k\\
&=&\frac{1}{2}\Biggl(\sum_{j=1}^n \varepsilon_j
\widetilde \partial_j \mathcal E_j\Biggr)W_k.
\end{eqnarray*}
Thus for every $x\in \mathbb{Z}_m^n$ and $f:\mathbb{Z}_m^n\to X$,
$$
{\mathrm{\bf Rad}}(\partial_\varepsilon f(x))=\frac{1}{2}\Biggl(\sum_{j=1}^n \varepsilon_j
\widetilde
\partial_j \mathcal E_j\Biggr)f(x).
$$
It follows that
\begin{eqnarray}\label{eq:rademacher case}
&&\hskip-16pt \int_{\mathbb{Z}_m^n} \mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_j
\bigl[ \mathcal E_j
f(x+e_j)-\mathcal E_j f(x-e_j) \bigr ]\Biggr\|_X^pd\mu(x)\\
&&\qquad\qquad = \int_{\mathbb{Z}_m^n}
\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n\varepsilon_j \widetilde \partial_j \mathcal E_j
f(x) \Biggr\|_X^pd\mu(x)\nonumber\\&&\qquad\qquad =\int_{\mathbb{Z}_m^n}
\mathbb{E}_\varepsilon\|{\mathrm{\bf Rad}}(\partial_\varepsilon f(x))\|_X^pd\mu(x)\nonumber\\ &&\qquad\qquad \le
K_p(X)^p\int_{\mathbb{Z}_m^n} \mathbb{E}_\varepsilon\|\partial_\varepsilon f(x)\|_X^pd\mu(x).
\nonumber
\end{eqnarray}
By~\eqref{eq:rademacher case} and the definition of
$C_q^{(p)}(X)$, for every $C>C^{(p)}_q(X)$ we have that
\begin{eqnarray}\label{eq:holder}
&&
[K_p(X)C]^p\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}
\|f(x+\varepsilon)-f(x)\|_X^pd\mu(x)\\\nonumber&&\qquad \ge C^p\cdot\mathbb{E}_\varepsilon
\int_{\mathbb{Z}_m^n}\Biggl\|\sum_{j=1}^n \varepsilon_j[\mathcal E_j
f(x+e_j)-\mathcal E_j
f(x-e_j)]\Biggr\|_X^pd\mu(x)\\&&\qquad \ge\int_{\mathbb{Z}_m^n}\Biggl(\sum_{j=1}^n
\left\|\mathcal E_j f(x+e_j)-\mathcal E_j
f(x-e_j)\right\|_X^q\Biggr)^{p/q}d\mu(x)\nonumber\\&&\qquad
\ge\frac{1}{n^{1-p/q}}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}\left\|\mathcal E_j f(x+e_j)-\mathcal E_j
f(x-e_j)\right\|_X^pd\mu(x). \nonumber
\end{eqnarray}
Now, for $j\in \{1,\ldots,n\}$,
\begin{eqnarray}\label{eq:after integral}
\qquad &&\int_{\mathbb{Z}_m^n} \left\|\mathcal
E_jf\left(x+\frac{m}{2}e_j\right)-\mathcal
E_jf\left(x\right)\right\|_X^pd\mu(x)\\
&&\qquad \le\left(\frac{m}{4}\right)^{p-1}
\sum_{s=1}^{m/4}\int_{\mathbb{Z}_m^n} \left\|\mathcal
E_jf\left(x+2se_j\right)-\mathcal
E_jf\left(x+2(s-1)e_j\right)\right\|_X^pd\mu(x)\nonumber\\&&\qquad =
\left(\frac{m}{4}\right)^p\int_{\mathbb{Z}_m^n} \left\|\mathcal E_j
f(x+e_j)-\mathcal E_j f(x-e_j)\right\|_X^pd\mu(x).\nonumber
\end{eqnarray}
Plugging~\eqref{eq:after integral} into~\eqref{eq:holder} we get
\begin{eqnarray*}
&& \hskip-38pt
\left(\frac{m}{4}\right)^{p}n^{1-\frac{p}{q}}[K_p(X)C]^p\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}
\|f(x+\varepsilon)-f(x)\|_X^pd\mu(x)\\
&&\quad \ge \sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left\|\mathcal E_jf\left(x+\frac{m}{2}e_j\right)-\mathcal
E_jf\left(x\right)\right\|_X^pd\mu(x)\\ &&\quad \ge\frac{1}{3^{p-1}}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\frac{m}{2}e_j\right)-f\left(x\right)\right\|_X^pd\mu(x)\\
&&\qquad -2\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left\|\mathcal
E_jf\left(x\right)-f\left(x\right)\right\|_X^pd\mu(x)\\&&\quad
=\frac{1}{3^{p-1}}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\frac{m}{2}e_j\right)-f\left(x\right)\right\|_X^pd\mu(x)\\
&&\qquad -2\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\Biggl\|\mathbb{E}_\varepsilon\Biggl(f\Biggl(x+\sum_{\ell\neq j} \varepsilon_\ell
e_\ell\Biggr)-f\left(x\right)\Biggr)\Biggr\|_X^pd\mu(x)\nonumber\\&&\quad \ge
\frac{1}{3^{p-1}}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\frac{m}{2}
e_j\right)-f\left(x\right)\right\|_X^pd\mu(x)\\*
&&\qquad -2\sum_{j=1}^n\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}
\Biggl\|f\Biggl(x+\sum_{\ell\neq j} \varepsilon_\ell
e_\ell\Biggr)-f\left(x\right)\Biggr\|_X^pd\mu(x)\\
&&\quad \ge \frac{1}{3^{p-1}}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\frac{m}{2}
e_j\right)-f\left(x\right)\right\|_X^pd\mu(x)\\
&&\qquad -2^pn\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\varepsilon\right)-f\left(x\right)\right\|_X^pd\mu(x)\\& &\qquad -
2^p\sum_{j=1}^n\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\varepsilon_je_j\right)-f\left(x\right)\right\|_X^pd\mu(x).
\end{eqnarray*}
Thus, applying the above argument in every dimension $\ell\le n$, and using that $m\ge \frac{2n^{1/q}}{C_q^{(p)}(X)K_p(X)}$ implies $2^pn\le \left[C_q^{(p)}(X)K_p(X)m\right]^pn^{1-\frac{p}{q}}$, we see that the hypothesis of Lemma~\ref{lem:with zeros} is satisfied with $C$ replaced by $3C_q^{(p)}(X)K_p(X)$; the required result now follows from Lemma~\ref{lem:with zeros}.
\end{proof}
The above argument actually gives the following
generalization of Theorem~\ref{thm:K}, which holds for products of
arbitrary compact Abelian groups.
\begin{theorem}\label{thm:groups}Let $G_1,\ldots,G_n$ be compact
Abelian groups{\rm ,} $(g_1,\ldots,g_n)\in G_1\times\cdots \times G_n${\rm ,}
and let $X$ be a $K$-convex Banach space. Then for every integer
$k$ and every $f:G_1\times\cdots\times G_n\to X${\rm ,}
\vglue-18pt
\begin{small}
\begin{multline*}
\sum_{j=1}^n \int_{G_1\times \cdots\times G_n}
\|f(x+2kg_je_j)-f(x)\|_X^pd(\mu_{G_1}\otimes\cdots
\otimes\mu_{G_n})(x)\\\le C^p \int_{\{-1,0,1\}^n}\int_{G_1\times
\cdots\times
G_n}\Biggl\|f\Biggl(x+\sum_{j=1}^n\varepsilon_jg_je_j\Biggr)-f(x)\Biggr\|_X^pd(\mu_{G_1}\otimes\cdots
\otimes\mu_{G_n})(x)d\sigma(\varepsilon),
\end{multline*} \end{small}
\vglue-8pt\noindent
where
$$
C\le
5\max\left\{C_q^{(p)}(X)K_p(X)kn^{\frac{1}{p}-\frac{1}{q}},n^{\frac{1}{p}}\right\}.
$$
\end{theorem}
Here $\mu_G$ denotes the normalized Haar measure on a compact
Abelian group $G$. We refer the interested reader to the
book~\cite{Rudin90}, which contains the necessary background
required to generalize the proof of Theorem~\ref{thm:K} to this
setting.
\section{The equivalence of Rademacher cotype and metric cotype}
We start by establishing the easy direction in
Theorem~\ref{thm:cotype} and Theorem~\ref{thm:weak cotype}, i.e.
that metric cotype implies Rademacher cotype.
\subsection{Metric cotype implies Rademacher
cotype}\label{section:easy direction}
Let $X$ be a Banach space and assume that
$\Gamma_q^{(p)}(X)<\infty$ for some $1\le p\le q$. Fix
$\Gamma>\Gamma_q^{(p)}(X)$, $v_1,\ldots,v_n\in X$, and let $m$ be
an even integer. Define $f:\mathbb{Z}_m^n\to X$ by
$$
f(x_1,\ldots,x_n)=\sum_{j=1}^n e^{\frac{2\pi i x_j}{m}}v_j.
$$
Then
\begin{eqnarray}\label{eq:reverse cotype}
\sum_{j=1}^n\int_{\mathbb{Z}_m^n}\left\|f\left(x+\frac{m}{2}e_j\right)-f(x)\right\|_X^pd\mu(x)=2^p\sum_{j=1}^n
\|v_j\|_X^p,
\end{eqnarray}
and
\begin{multline}\label{eq:reverse cotype2}
\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\left\|f\left(x+\delta\right)-f(x)\right\|_X^pd\mu(x)d\sigma(\delta)\\
=
\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\Biggl\|\sum_{j=1}^n e^{\frac{2\pi
i x_j}{m}}\left(e^{\frac{2\pi i \delta_j}{m}}-1\right)v_j
\Biggr\|_X^pd\mu(x)d\sigma(\delta).
\end{multline}
We recall the {\em contraction principle} (see~\cite{LT91}), which
states that for every $a_1,\ldots, a_n\in \mathbb{R}$,
$$
\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_ja_jv_j\Biggr\|_X^p\le
\left(\max_{1\le j\le n} |a_j|\right)^p\cdot
\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_jv_j\Biggr\|_X^p.
$$
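In the computation below the coefficients multiplying the $v_j$ are complex rather than real. Splitting such coefficients into real and imaginary parts and applying the above inequality to each part (together with the triangle inequality in $L_p$) yields the variant that we will actually use, at the cost of a factor $2$: for every $a_1,\ldots,a_n\in \mathbb{C}$ and $v_1,\ldots,v_n\in X$ (complex scalar multiples being interpreted in the complexification of $X$ if $X$ is a real space),
$$
\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_ja_jv_j\Biggr\|_X^p\le
2^p\left(\max_{1\le j\le n} |a_j|\right)^p\cdot
\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_jv_j\Biggr\|_X^p.
$$
This accounts for the factor $2^p$ appearing in~\eqref{eq:reverse cotype 2nd} below.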
Observe that for every $\varepsilon=(\varepsilon_1,\ldots,\varepsilon_n)\in \{-1,1\}^n$,
\begin{eqnarray*}
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\Biggl\|\sum_{j=1}^n
e^{\frac{2\pi i x_j}{m}}\left(e^{\frac{2\pi i
\delta_j}{m}}-1\right)v_j
\Biggr\|_X^pd\mu(x)d\sigma(\delta)\\&=&\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\Biggl\|\sum_{j=1}^n
e^{\frac{2\pi i}{m}
\left(x_j+\frac{m(1-\varepsilon_j)}{4}\right)}\left(e^{\frac{2\pi i
\delta_j}{m}}-1\right)v_j
\Biggr\|_X^pd\mu(x)d\sigma(\delta)\\
&=&\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\Biggl\|\sum_{j=1}^n \varepsilon_j
e^{\frac{2\pi i x_j}{m}}\left(e^{\frac{2\pi i
\delta_j}{m}}-1\right)v_j \Biggr\|_X^pd\mu(x)d\sigma(\delta).
\end{eqnarray*}
Taking expectation with respect to $\varepsilon$, and using the contraction
principle, we see that
\begin{eqnarray}\label{eq:reverse cotype 2nd}
&& \int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\Biggl\|\sum_{j=1}^n
e^{\frac{2\pi i x_j}{m}}\left(e^{\frac{2\pi i
\delta_j}{m}}-1\right)v_j
\Biggr\|_X^pd\mu(x)d\sigma(\delta)\\&&\quad=
\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_j
e^{\frac{2\pi i x_j}{m}}\left(e^{\frac{2\pi i
\delta_j}{m}}-1\right)v_j
\Biggr\|_X^pd\mu(x)d\sigma(\delta)\nonumber\\&&\quad \le
\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}2^p\left(\max_{1\le j\le
n}\left|e^{\frac{2\pi i
\delta_j}{m}}-1\right|\right)^p\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_j v_j
\Biggr\|_X^pd\mu(x)d\sigma(\delta)\nonumber\\*&&\quad
\le\left(\frac{4\pi}{m}\right)^p\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_j v_j
\Biggr\|_X^p,\nonumber
\end{eqnarray}
where in the last inequality above we used the fact that for
$\theta\in [0,\pi]$, $|e^{i\theta}-1|=2\sin(\theta/2)\le \theta$.
Combining~\eqref{eq:def weak cotype}, \eqref{eq:reverse cotype},
\eqref{eq:reverse cotype2}, and~\eqref{eq:reverse cotype 2nd}, we
get that
\begin{eqnarray*}
2^p\sum_{j=1}^n \|v_j\|_X^p\le
\Gamma^pm^p\left(\frac{4\pi}{m}\right)^pn^{1-\frac{p}{q}}\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n
\varepsilon_jv_j
\Biggr\|_X^p\!\!=\!\left(4\pi\Gamma\right)^pn^{1-\frac{p}{q}}\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n
\varepsilon_jv_j \Biggr\|_X^p.
\end{eqnarray*}
If $p=q$ we see that $C_q(X)\le 2\pi \Gamma_q(X)$. If $p<q$ then
when $\|v_1\|_X=\cdots=\|v_n\|_X=1$ we get that
$$
\Biggl(\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_jv_j \Biggr\|_X^q\Biggr)^{1/q}\ge
\Biggl(\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_jv_j
\Biggr\|_X^p\Biggr)^{1/p}=\Omega\left(\frac{n^{1/q}}{\Gamma}\right).
$$
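Explicitly, taking $\|v_1\|_X=\cdots=\|v_n\|_X=1$ in the inequality above shows that the implied constant may be taken to be $\frac{1}{2\pi}$:
$$
\Biggl(\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_jv_j \Biggr\|_X^p\Biggr)^{1/p}\ge \frac{2n^{1/q}}{4\pi\Gamma}=\frac{n^{1/q}}{2\pi\Gamma}.
$$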
This means that $X$ has ``equal norm cotype $q$'', implying that
$X$ has cotype $q'$ for every $q'>q$ (see~\cite{Tza79}, \cite{KT81}, \cite{T-J89}
for quantitative versions of this statement). When $q=2$ this
implies that $X$ has cotype $2$ (see~\cite{Tza79} and the
references therein).
\subsection{Proof of Theorem~{\rm \ref{thm:cotype}} and Theorem~{\rm \ref{thm:weak cotype}}}
The proof of Theorem~\ref{thm:cotype} and Theorem~\ref{thm:weak
cotype} is based on several lemmas. Fix an odd integer $k\in
\mathbb N$, with $k< \frac{m}{2}$, and assume that $1\le p\le q$.
Given $j\in \{1,\ldots,n\}$, define $S(j,k)\subseteq \mathbb{Z}_m^n$ by
$$
S(j,k):= \left\{y\in [-k,k]^n\subseteq \mathbb{Z}_m^n:\ y_j\equiv
0\mod 2\ \mathrm{and}\ \forall\ \ell\neq j,\ y_\ell\equiv 1\mod
2\right\}.
$$
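For later use we record the cardinality of these sets: the $j$-th coordinate of an element of $S(j,k)$ ranges over the $k$ even residues in $[-k,k]$ (recall that $k$ is odd), while each of the remaining $n-1$ coordinates ranges over the $k+1$ odd residues in $[-k,k]$, so that
$$
|S(j,k)|=k(k+1)^{n-1}\qquad\text{and}\qquad \mu(S(j,k))=\frac{k(k+1)^{n-1}}{m^n}.
$$
This is the normalization appearing in~\eqref{eq:def Ek} below, as well as in~\eqref{eq:A} and~\eqref{eq:B}.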
For $f:\mathbb{Z}_m^n\to X$ we define
\begin{eqnarray}\label{eq:def Ek}
\mathcal{E}_j^{(k)}f(x)=\left(f*\frac{\mathbf{1}_{S(j,k)}}{\mu(S(j,k))}\right)(x)=\frac{1}{\mu(S(j,k))}\int_{S(j,k)}
f(x+y)d\mu(y).
\end{eqnarray}
\begin{lemma}\label{lem:approx} \hskip-5pt
For every $p\ge 1${\rm ,} every $j\in \{1,\ldots,n\}${\rm ,} and every
$f:\mathbb{Z}_m^n\to X${\rm ,}
\begin{eqnarray*}
\int_{\mathbb{Z}_m^n}\left\|\mathcal{E}^{(k)}_jf(x)-f(x)\right\|_X^pd\mu(x)& \le&
2^pk^p
\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_X^pd\mu(x)\\&&+2^{p-1}\int_{\mathbb{Z}_m^n}
\|f(x+e_j)-f(x)\|_X^pd\mu(x).
\end{eqnarray*}
\end{lemma}
\begin{proof}
By convexity, for every $x\in \mathbb{Z}_m^n$,
\begin{eqnarray}\label{eq:convexity}\qquad
\left\|\mathcal{E}_j^{(k)}f(x)-f(x)\right\|_X^p&=&\left\|\frac{1}{\mu(S(j,k))}\int_{S(j,k)}
[f(x+y)-f(x)]d\mu(y)\right\|_X^p\\&\le&
\frac{1}{\mu(S(j,k))}\int_{S(j,k)} \|f(x)-f(x+y)\|_X^pd\mu(y).\nonumber
\end{eqnarray}
Let $x\in \{0,\ldots,k\}^n$ be such that for all $j\in \{1,\ldots,n\}$,
$x_j$ is a positive odd integer. Observe that there exists a
geodesic $\gamma:\{0,1,\ldots,\|x\|_\infty\}\to \mathbb{Z}_m^n$ such that
$\gamma(0)=0$, $\gamma(\|x\|_\infty)=x$ and for every
$t\in\{1,\ldots,\|x\|_\infty\}$,\break $\gamma(t)-\gamma(t-1)\in
\{-1,1\}^n$. Indeed, we define $\gamma(t)$ inductively as follows:
$\gamma(0)=0$, $\gamma(1)=(1,1,\ldots,1)$, and if $t\ge 2$ is even
then
$$
\gamma(t)=\gamma(t-1)+\sum_{s=1}^n e_s\quad \mathrm{and}\quad
\gamma(t+1)= \gamma(t-1)+2\sum_{\substack{s\in \{1,\ldots, n\}\\
\gamma(t-1)_s<x_s}} e_s.
$$
Since all the coordinates of $x$ are odd,
$\gamma(\|x\|_\infty)=x$. In what follows we fix an arbitrary
geodesic $\gamma_x:\{0,1,\ldots,\|x\|_\infty\}\to \mathbb{Z}_m^n$ as
above. For $x\in (\mathbb{Z}_m\setminus \{0\})^n$ we denote
$|x|=(|x_1|,\ldots,|x_n|)$ and
$\mathrm{sign}(x)=(\mathrm{sign}(x_1),\ldots,\mathrm{sign}(x_n))$. If $x\in [-k,k]^n$ is
such that all of its coordinates are odd, then we define
$\gamma_x=\mathrm{sign}(x)\cdot \gamma_{|x|}$ (where the multiplication is
coordinate-wise).
If $y\in S(j,k)$ then all the coordinates of $y\pm e_j$ are odd.
We can thus define two geodesic paths
$$
\gamma_{x,y}^{+1}=x+e_j+\gamma_{y-e_j}\quad \mathrm{and}\quad
\gamma_{x,y}^{-1}=x-e_j+\gamma_{y+e_j},
$$
where the addition is point-wise.
For $z\in \mathbb{Z}_m^n$ and $\varepsilon\in \{-1,1\}^n$ define
\begin{multline*}
F^{+1}(z,\varepsilon)=\Big\{(x,y)\in \mathbb{Z}_m^n\times S(j,k):\ \exists t\in
\{1,\ldots,\|y-e_j\|_\infty\},\\ \gamma^{+1}_{x,y}(t-1)=z,\
\gamma^{+1}_{x,y}(t)=z+\varepsilon\Big\},
\end{multline*}
and
\begin{multline*}
F^{-1}(z,\varepsilon)=\Big\{(x,y)\in \mathbb{Z}_m^n\times S(j,k):\ \exists t\in
\{1,\ldots,\|y+e_j\|_\infty\},\\ \gamma^{-1}_{x,y}(t-1)=z,\
\gamma^{-1}_{x,y}(t)=z+\varepsilon\Big\}.
\end{multline*}
\begin{claim}\label{claim:independence} For every $z,w\in \mathbb{Z}_m^n$ and $\varepsilon,\delta\in
\{-1,1\}^n${\rm ,}
$$
|F^{+1}(z,\varepsilon)|+|F^{-1}(z,\varepsilon)|=|F^{+1}(w,\delta)|+|F^{-1}(w,\delta)|.
$$
\end{claim}
\vglue8pt
\begin{proof}
Define $\psi:\mathbb{Z}_m^n\times S(j,k)\to \mathbb{Z}_m^n\times S(j,k)$ by
$$
\psi(x,y)=(w-\varepsilon\delta z+\varepsilon\delta x,\varepsilon\delta y).
$$
We claim that $\psi$ is a bijection between $F^{+1}(z,\varepsilon)$ and
$F^{\varepsilon_j\delta_j}(w,\delta)$, and also $\psi$ is a bijection
between $F^{-1}(z,\varepsilon)$ and $F^{-\varepsilon_j\delta_j}(w,\delta)$. Indeed,
if $(x,y)\in F^{+1}(z,\varepsilon)$ then there exists $t\in
\{1,\ldots,\|y-e_j\|_\infty\}$ such that
$\gamma^{+1}_{x,y}(t-1)=z$ and $\gamma^{+1}_{x,y}(t)=z+\varepsilon$. The
path $w-\varepsilon\delta z+\varepsilon\delta \gamma_{x,y}^{+1}$ equals the path
$\gamma^{\varepsilon_j\delta_j}_{\psi(x,y)}$, which by definition goes
through $w$ at time $t-1$ and $w+\delta$ at time $t$. Since these
transformations are clearly invertible, we obtain the required
result for $F^{+1}(z,\varepsilon)$. The proof for $F^{-1}(z,\varepsilon)$ is
analogous.
\end{proof}
\begin{claim}\label{claim:fubini}
Denote $N=|F^{+1}(z,\varepsilon)|+|F^{-1}(z,\varepsilon)|${\rm ,} which is independent of
$z\in \mathbb{Z}_m^n$ and $\varepsilon\in \{-1,1\}^n${\rm ,} by
Claim~{\rm \ref{claim:independence}}. Then
$$
N\le \frac{k\cdot |S(j,k)|}{2^{n-1}}.
$$
\end{claim}
\begin{proof}
We have that
\begin{eqnarray*}
N\cdot m^n\cdot 2^n&=&\sum_{(z,\varepsilon)\in \mathbb{Z}_m^n\times
\{-1,1\}^n}\left(|F^{+1}(z,\varepsilon)|+|F^{-1}(z,\varepsilon)|\right)\\[6pt]&=&\hskip-3pt\sum_{(z,\varepsilon)\in
\mathbb{Z}_m^n\times \{-1,1\}^n}\hskip-3pt\Biggl(\sum_{(x,y)\in \mathbb{Z}_m^n\times
S(j,k)}\sum_{t=1}^{\|y-e_j\|_\infty}\mathbf
{1}_{\{\gamma_{x,y}^{+1}(t-1)=z\ \wedge\
\gamma_{x,y}^{+1}(t)=z+\varepsilon\}}\hskip-3pt\Biggr)\\[6pt]& &+\hskip-3pt\sum_{(z,\varepsilon)\in
\mathbb{Z}_m^n\times \{-1,1\}^n}\hskip-3pt\Biggl(\sum_{(x,y)\in \mathbb{Z}_m^n\times
S(j,k)}\sum_{t=1}^{\|y+e_j\|_\infty}\mathbf
{1}_{\{\gamma_{x,y}^{-1}(t-1)=z\ \wedge\
\gamma_{x,y}^{-1}(t)=z+\varepsilon\}}\hskip-3pt\Biggr)\\[6pt] &=&\sum_{(x,y)\in \mathbb{Z}_m^n\times
S(j,k)}\|y-e_j\|_\infty+ \sum_{(x,y)\in \mathbb{Z}_m^n\times
S(j,k)}\|y+e_j\|_\infty\\[6pt]
&\le& 2k \cdot m^n\cdot |S(j,k)|,
\end{eqnarray*}
since $\|y\pm e_j\|_\infty\le k$ for every $y\in S(j,k)$ (the $j$-th coordinate of such a $y$ is even, hence has absolute value at most $k-1$).
\end{proof}
We now conclude the proof of Lemma~\ref{lem:approx}. Observe that
for $x\in \mathbb{Z}_m^n$ and $y\in S(j,k)$,
\begin{eqnarray}\label{eq:geodesic+}&&\\[-2pt] \nonumber
\frac{\|f(x)-f(x+y)\|_X^p}{2^{p-1}}&\le&
\|f(x)-f(x+e_j)\|_X^p\\
&&\nonumber+
\|y-e_j\|_\infty^{p-1}\sum_{t=1}^{\|y-e_j\|_\infty}
\|f(\gamma_{x,y}^{+1}(t))-f(\gamma_{x,y}^{+1}(t-1))\|_X^p\\
&\le& \|f(x)-f(x+e_j)\|_X^p\nonumber\\
&&+
k^{p-1}\sum_{t=1}^{\|y-e_j\|_\infty}
\|f(\gamma_{x,y}^{+1}(t))-f(\gamma_{x,y}^{+1}(t-1))\|_X^p,\nonumber
\\[-9pt]
\noalign{\noindent and}
\label{eq:geodesic-}&&\\[-2pt]
\nonumber\frac{\|f(x)-f(x+y)\|_X^p}{2^{p-1}}&\le&
\|f(x)-f(x-e_j)\|_X^p\\
&&+
\|y+e_j\|_\infty^{p-1}\sum_{t=1}^{\|y+e_j\|_\infty}
\|f(\gamma_{x,y}^{-1}(t))-f(\gamma_{x,y}^{-1}(t-1))\|_X^p\nonumber\\
&\le& \|f(x)-f(x-e_j)\|_X^p\nonumber\\ &&+
k^{p-1}\sum_{t=1}^{\|y+e_j\|_\infty}
\|f(\gamma_{x,y}^{-1}(t))-f(\gamma_{x,y}^{-1}(t-1))\|_X^p.\nonumber
\end{eqnarray}
Averaging inequalities~\eqref{eq:geodesic+}
and~\eqref{eq:geodesic-}, and integrating, we get that
\begin{eqnarray}\label{eq:use N}
&&\frac{1}{\mu(S(j,k))}\int_{\mathbb{Z}_m^n}\int_{S(j,k)}
\|f(x)-f(x+y)\|_X^pd\mu(y)d\mu(x)\\&&\quad \le \nonumber
2^{p-1}\int_{\mathbb{Z}_m^n}\|f(x+e_j)-f(x)\|_X^pd\mu(x)\nonumber\\
&&\nonumber\qquad +
(2k)^{p-1}\frac{N\cdot
2^{n}}{|S(j,k)|}\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}\|f(z+\varepsilon)-f(z)\|_X^pd\mu(z)\\ \label{eq:use
bound on N} &&\quad \le
2^{p-1}\int_{\mathbb{Z}_m^n}\|f(x+e_j)-f(x)\|_X^pd\mu(x)\\
&&\qquad +
(2k)^{p}\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}\|f(z+\varepsilon)-f(z)\|_X^pd\mu(z),\nonumber
\end{eqnarray}
where in~\eqref{eq:use N} we used Claim~\ref{claim:independence}
and in~\eqref{eq:use bound on N} we used Claim~\ref{claim:fubini}.
By~\eqref{eq:convexity}, this completes the proof of
Lemma~\ref{lem:approx}.
\end{proof}
Lemma~\ref{lem:pass to average on e} below is the heart of our
proof. It contains the cancellation of terms which is key to the
validity of Theorem~\ref{thm:cotype} and Theorem~\ref{thm:weak
cotype}.
\begin{lemma}\label{lem:pass to average on e} For every $f:\mathbb{Z}_m^n\to X${\rm ,}
every integer $n${\rm ,} every even integer~$m${\rm ,} every $\varepsilon\in \{-1,1\}^n${\rm ,}
every odd integer $k<m/2$,
and every $p\ge 1${\rm ,}
\begin{eqnarray*}&&\hskip-30pt
\int_{\mathbb{Z}_m^n} \Biggl\|\sum_{j=1}^n
\varepsilon_j\left[\mathcal{E}_j^{(k)}f(x+e_j)-\mathcal{E}_j^{(k)}f(x-e_j)\right]\Biggr\|_X^pd\mu(x)\\
&&\qquad \le
3^{p-1}
\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x-\varepsilon)\|_X^pd\mu(x)\\
&&\qquad\quad +\frac{24^pn^{2p-1}}{k^{p}}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\|f(x+e_j)-f(x)\|_X^pd\mu(x) .
\end{eqnarray*}
\end{lemma}
We postpone the proof of Lemma~\ref{lem:pass to average on e} to
Section~\ref{section:the lemma}, and proceed to prove
Theorem~\ref{thm:cotype} and Theorem~\ref{thm:weak cotype}
assuming its validity.
\begin{proof}[Proof of Theorem~{\rm \ref{thm:cotype}} and Theorem~{\rm \ref{thm:weak cotype}}]
Taking
expectations with respect to $\varepsilon\in \{-1,1\}^n$ in
Lemma~\ref{lem:pass to average on e} we get that
\begin{eqnarray}\label{eq:E}
&& \\[-4pt]
&&\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n} \Biggl\|\sum_{j=1}^n
\varepsilon_j\left[\mathcal{E}_j^{(k)}f(x+e_j)-\mathcal{E}_j^{(k)}f(x-e_j)\right]\Biggr\|_X^pd\mu(x)\nonumber\\
& &\quad\le 3^{p-1}\mathbb{E}_\varepsilon
\int_{\mathbb{Z}_m^n}2^{p-1}\left(\|f(x+\varepsilon)-f(x)\|_X^p+\|f(x)-f(x-\varepsilon)\|_X^p\right)d\mu(x)\nonumber\\
&&\qquad +
\frac{24^pn^{2p-1}}{k^{p}}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\|f(x+e_j)-f(x)\|_X^pd\mu(x)\nonumber\\&&\quad \le
\frac{6^p}{3}\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_X^pd\mu(x)\nonumber\\
&&\qquad +\frac{24^pn^{2p-1}}{k^{p}}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\|f(x+e_j)-f(x)\|_X^pd\mu(x).\nonumber
\end{eqnarray}
Assume from now on that $m$ is divisible by $4$ and that $m\ge
6n^{2+1/q}$, and fix $x\in \mathbb{Z}_m^n$. Fixing $C>C_q^{(p)}(X)$, and
applying the definition of $C_q^{(p)}(X)$ to the vectors
$\left\{\mathcal{E}_j^{(k)} f(x+e_j)-\mathcal{E}_j^{(k)}
f(x-e_j)\right\}_{j=1}^n$, we get
\begin{multline}\label{eq:just used cotype}
\mathbb{E}_\varepsilon\Biggl\|\sum_{j=1}^n \varepsilon_j\left[\mathcal{E}_j^{(k)} f(x+e_j)-\mathcal{E}_j^{(k)}
f(x-e_j)\right]\Biggr\|_X^p\\
\ge \frac{1}{C^p\cdot
n^{1-p/q}}\sum_{j=1}^n \left\|\mathcal{E}_j^{(k)} f(x+e_j)-\mathcal{E}_j^{(k)}
f(x-e_j)\right\|_X^p.
\end{multline}
Now, for every $j\in \{1,\ldots,n\}$,
\begin{multline}\label{eq:before average}
\sum_{s=1}^{m/4}
\left\|\mathcal{E}_j^{(k)}f\left(x+2se_j\right)-\mathcal{E}_j^{(k)}f\left(x+2(s-1)e_j\right)\right\|_X^p\\
\ge
\left(\frac{4}{m}\right)^{p-1}\left\|\mathcal{E}_j^{(k)}f\left(x+\frac{m}{2}e_j\right)-\mathcal{E}_j^{(k)}f\left(x\right)\right\|_X^p.
\end{multline}
Averaging~\eqref{eq:before average} over $x\in \mathbb{Z}_m^n$ we get that
\begin{multline}\label{eq:after average}
\int_{\mathbb{Z}_m^n}
\left\|\mathcal{E}_j^{(k)}f(x+e_j)-\mathcal{E}_j^{(k)}f(x-e_j)\right\|_X^pd\mu(x)\\
\ge\left(\frac{4}{m}\right)^p\int_{\mathbb{Z}_m^n}
\left\|\mathcal{E}_j^{(k)}f\left(x+\frac{m}{2}e_j\right)-\mathcal{E}_j^{(k)}f\left(x\right)\right\|_X^pd\mu(x).
\end{multline}
Combining~\eqref{eq:just used cotype} and~\eqref{eq:after average}
we get the inequality
\begin{multline}\label{eq:before lemma}
\mathbb{E}_\varepsilon \int_{\mathbb{Z}_m^n}\Biggl\|\sum_{j=1}^n \varepsilon_j\left[\mathcal{E}_j^{(k)}
f(x+e_j)-\mathcal{E}_j^{(k)} f(x-e_j)\right]\Biggr\|_X^pd\mu(x)\\\ge
\frac{1}{C^p\cdot n^{1-p/q}}\cdot
\left(\frac{4}{m}\right)^p\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left\|\mathcal{E}_j^{(k)}f\left(x+\frac{m}{2}e_j\right)-\mathcal{E}_j^{(k)}f\left(x\right)\right\|_X^pd\mu(x).
\end{multline}
Now, for every $j\in \{1,\ldots,n\}$,
\begin{eqnarray}\label{eq:each j}
\qquad&&\hskip-35pt \int_{\mathbb{Z}_m^n}
\left\|\mathcal{E}_j^{(k)}f\left(x+\frac{m}{2}e_j\right)-\mathcal{E}_j^{(k)}f\left(x\right)\right\|_X^pd\mu(x)\\
&&\quad \ge\frac{1}{3^{p-1}}\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\frac{m}{2}e_j\right)-f\left(x\right)\right\|_X^pd\mu(x)
\nonumber\\&
& \qquad-\int_{\mathbb{Z}_m^n}
\left\|\mathcal{E}_j^{(k)}f\left(x+\frac{m}{2}e_j\right)
-f\left(x+\frac{m}{2}e_j\right)\right\|_X^pd\mu(x)\nonumber\\
&&\qquad-\int_{\mathbb{Z}_m^n}
\left\|\mathcal{E}_j^{(k)}f\left(x\right)-f\left(x\right)\right\|_X^pd\mu(x)\nonumber\\
&&\quad =\frac{1}{3^{p-1}}\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\frac{m}{2}e_j\right)-f\left(x\right)\right\|_X^pd\mu(x)\nonumber\\
&&\qquad-2\int_{\mathbb{Z}_m^n}
\left\|\mathcal{E}_j^{(k)}f\left(x\right)-f\left(x\right)\right\|_X^pd\mu(x)\nonumber\\&&\quad \ge
\frac{1}{3^{p-1}}\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\frac{m}{2}e_j\right)-f\left(x\right)\right\|_X^pd\mu(x)\nonumber\\
&&\qquad -
2^{p+1}k^p
\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_X^pd\mu(x) \nonumber\\& &\qquad - 2^{p}\int_{\mathbb{Z}_m^n}
\|f(x+e_j)-f(x)\|_X^pd\mu(x) ,\nonumber
\end{eqnarray} where we used Lemma~\ref{lem:approx}.
Combining~\eqref{eq:each j} with~\eqref{eq:before lemma},
we see that
\begin{eqnarray}\label{eq:almost done0}&&\\[-2pt]
&& \sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left\|f\left(x+\frac{m}{2}e_j\right)-f\left(x\right)\right\|_X^pd\mu(x)\nonumber\\&&\quad \le
\frac{(3Cm)^p n^{1-\frac{p}{q}}}{3\cdot 4^p}\mathbb{E}_\varepsilon
\int_{\mathbb{Z}_m^n}\Biggl\|\sum_{j=1}^n \varepsilon_j\left[\mathcal{E}_j^{(k)}
f(x+e_j)-\mathcal{E}_j^{(k)}
f(x-e_j)\right]\Biggr\|_X^pd\mu(x)\nonumber\\ \nonumber& &\qquad +
6^pk^pn\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_X^pd\mu(x) \\ \nonumber& &\qquad
+6^p\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\|f(x+e_j)-f(x)\|_X^pd\mu(x)\nonumber\\
&&\quad \le \Biggl(\frac{(18Cm)^p n^{1-\frac{p}{q}}}{ 4^p}
+6^pk^pn\Biggr)\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_X^pd\mu(x)\nonumber\\
& &\qquad+ \Biggl(\frac{(3Cm)^p
n^{1-\frac{p}{q}}}{
4^p}\cdot\frac{24^pn^{2p-1}}{k^{p}}+6^p\Biggr)\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\|f(x+e_j)-f(x)\|_X^pd\mu(x)
\nonumber\\&&\label{eq:almost done}\\[-8pt]
&&\quad \le (18Cm)^p
n^{1-\frac{p}{q}}
\left(\mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_X^pd\mu(x)\right.
\nonumber\\
&&\qquad \left.+\frac{1}{n}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\|f(x+e_j)-f(x)\|_X^pd\mu(x)\right),\nonumber
\end{eqnarray}
where in~\eqref{eq:almost done0} we used~\eqref{eq:E},
and~\eqref{eq:almost done} holds true when we choose $4n^2\le k\le
\frac{3m}{4n^{1/q}}$ (which is possible if we assume that $m\ge
6n^{2+1/q}$). By Lemma~\ref{lem:with zeros}, this completes the
proof of Theorem~\ref{thm:weak cotype}.
\end{proof}
\subsection{Proof of Lemma~{\rm \ref{lem:pass to average on e}}}\label{section:the lemma}
Fix $\varepsilon\in \{-1,1\}^n$, and $x\in \mathbb{Z}_m^n$. Consider
the following two sums:
\begin{eqnarray}\label{eq:A}
A_f(x,\varepsilon)&= &\sum_{j=1}^n
\varepsilon_j\left[\mathcal{E}_j^{(k)}f(x+e_j)-\mathcal{E}_j^{(k)}f(x-e_j)\right]\\
&=&\frac{1}{k(k+1)^{n-1}}\sum_{y\in
\mathbb{Z}_m^n} a_y(x,\varepsilon)f(y),\nonumber
\end{eqnarray}
and
\begin{eqnarray}\label{eq:B}
B_f(x,\varepsilon)&= &\frac{1}{k(k+1)^{n-1}} \sum_{z- x \in (-k,k)^n\cap
(2\mathbb{Z})^n}[f(z+\varepsilon)-f(z-\varepsilon)]\\
&=&\frac{1}{k(k+1)^{n-1}}\sum_{y\in \mathbb{Z}_m^n}
b_y(x,\varepsilon)f(y),\nonumber
\end{eqnarray}
where $a_y(x,\varepsilon),b_y(x,\varepsilon)\in \mathbb{Z}$ are appropriately chosen
coefficients, which are independent of $f$.
For $x\in \mathbb{Z}_m^n$ define $S(x)\subset \mathbb{Z}_m^n$,
\begin{multline*}
S(x)=\Big\{y\in x+ (2\mathbb{Z}+1)^n:\ d_{\mathbb{Z}_m^n}(y,x)=k,\\ \text{and}\
|\{j:\ |y_j-x_j|\equiv k \mod m\}|\ge2\Big\}.
\end{multline*}
\begin{claim}\label{claim:cases} For $x\in \mathbb{Z}_m^n$ and $y\notin
S(x)${\rm ,} $a_y(x,\varepsilon)=b_y(x,\varepsilon)$.
\end{claim}
\begin{proof} If there exists a coordinate $j\in \{1,\ldots,n\}$
such that $x_j-y_j$ is even, then it follows from our definitions
that $a_y(x,\varepsilon)=b_y(x,\varepsilon)=0$. Similarly, if $d_{\mathbb{Z}_m^n}(x,y)>k$
then $a_y(x,\varepsilon)=b_y(x,\varepsilon)=0$ (because $k$ is odd). Assume that
$x-y\in (2\mathbb{Z}+1)^n$. If $d_{\mathbb{Z}_m^n}(y,x)<k$ then for each $j$ the
term $f(y)$ cancels in $\mathcal{E}_j^{(k)}f(x+e_j)-\mathcal{E}_j^{(k)}(x-e_j)$,
implying that $a_y(x,\varepsilon)=0$. Similarly, in the sum defining
$B_f(x,\varepsilon)$ the term $f(y)$ appears twice, with opposite signs, so
that $b_y(x,\varepsilon)=0$.
It remains to deal with the case $|\{j:\ |y_j-x_j|\equiv k \mod
m\}|=1$. We may assume without loss of generality that
$$|y_1-x_1|\equiv k\mod m
\quad \text{ and for $j\ge 2$},\quad y_j-x_j\in (-k,k)\mod m.
$$
If $y_1-x_1\equiv k\mod m$ then $a_y(x,\varepsilon)=\varepsilon_1$, since in the
terms corresponding to $j\ge 2$ in the definition of $A_f(x,\varepsilon)$
the summand $f(y)$ cancels out. We also claim that in this case
$b_y(x,\varepsilon)=\varepsilon_1$. Indeed, if $\varepsilon_1=1$ then $f(y)$ appears in the
sum defining $B_f(x,\varepsilon)$ only in the term corresponding to
$z=y-\varepsilon$, while if $\varepsilon_1=-1$ then $f(y)$ appears in this sum only
in the term corresponding to $z=y+\varepsilon$, in which case its
coefficient is $-1$. In the case $y_1-x_1\equiv -k\mod m$ the same
reasoning shows that $a_y(x,\varepsilon)=b_y(x,\varepsilon)=-\varepsilon_1$.
\end{proof}
By Claim~\ref{claim:cases} we have
\begin{eqnarray}\label{eq:for later}
A_f(x,\varepsilon)-B_f(x,\varepsilon)=\frac{1}{k(k+1)^{n-1}}\sum_{y\in
S(x)}[a_y(x,\varepsilon)-b_y(x,\varepsilon)]f(y).
\end{eqnarray}
Thus,
\begin{eqnarray*}
\int_{\mathbb{Z}_m^n}\left\|A_f(x,\varepsilon)\right\|_X^pd\mu(x)&\le&
3^{p-1}\int_{\mathbb{Z}_m^n}\left\|B_f(x,\varepsilon)\right\|_X^pd\mu(x)\\
&&+3^{p-1}\int_{\mathbb{Z}_m^n}\Biggl\|\frac{1}{k(k+1)^{n-1}}\sum_{y\in
S(x)}a_y(x,\varepsilon)f(y)\Biggr\|_X^pd\mu(x)
\\& &+3^{p-1}\int_{\mathbb{Z}_m^n}\Biggl\|\frac{1}{k(k+1)^{n-1}}\sum_{y\in
S(x)}b_y(x,\varepsilon)f(y)\Biggr\|_X^pd\mu(x).
\end{eqnarray*}
Thus Lemma~\ref{lem:pass to average on e} will be proved once we
establish the following inequalities
\begin{equation}\label{eq:goal0}
\int_{\mathbb{Z}_m^n}\left\|B_f(x,\varepsilon)\right\|_X^pd\mu(x)\le
\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x-\varepsilon)\|_X^pd\mu(x),
\end{equation}
\begin{multline}\label{eq:goal}
\int_{\mathbb{Z}_m^n}\Biggl\|\frac{1}{k(k+1)^{n-1}}\sum_{y\in
S(x)}a_y(x,\varepsilon)f(y)\Biggr\|_X^pd\mu(x)\\
\le
\frac{8^{p}n^{2p-1}}{k^p}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}\left\|f(x+e_j)-f(x)\right\|_X^p,
\end{multline}
and
\begin{multline}\label{eq:goal2}
\int_{\mathbb{Z}_m^n}\Biggl\|\frac{1}{k(k+1)^{n-1}}\sum_{y\in
S(x)}b_y(x,\varepsilon)f(y)\Biggr\|_X^pd\mu(x)\\
\le
\frac{8^{p}n^{2p-1}}{k^p}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}\left\|f(x+e_j)-f(x)\right\|_X^p.
\end{multline}
Inequality~\eqref{eq:goal0} follows directly from the definition
of $B_f(x,\varepsilon)$, by convexity. Thus, we pass to the proof
of~\eqref{eq:goal} and~\eqref{eq:goal2}.
For $j=1,2,\ldots,n$ define for $y\in
S(x)$,
$$
\tau_j^{x}(y)=\left\{\begin{array}{ll} y-2ke_j & y_j-x_j\equiv k\mod m,\\
y & \text{otherwise},
\end{array}\right.
$$
and set $\tau_j^{x}(y)=y$ when $y\notin S(x)$. Observe that the
following identity holds true:
\begin{eqnarray}\label{eq:invariance}
\tau_j^x(y)=\tau_j^0(y-x)+x.
\end{eqnarray}
\begin{claim}\label{claim:convexity} Assume that for every
$j\in\{1,2,\ldots,n\}${\rm ,} $x,y\in \mathbb{Z}_m^n$ and $\varepsilon\in \{-1,1\}^n${\rm ,} we
are given a real number $\eta_j(x,y,\varepsilon)\in [-1,1]$. Then
\begin{multline*}
\int_{\mathbb{Z}_m^n}\Biggl\|\frac{1}{k(k+1)^{n-1}}\sum_{j=1}^n\sum_{y\in
\mathbb{Z}_m^n}\eta_j(x,y,\varepsilon)\left[f(y)-f(\tau_j^{x}(y))\right]\Biggr\|_X^pd\mu(x)\\
\le
\frac{8^{p}n^{2p-1}}{2k^p}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}\left\|f(x+e_j)-f(x)\right\|_X^pd\mu(x).
\end{multline*}
\end{claim}
\begin{proof} Denote by $N(x,\varepsilon)$ the number of nonzero summands in
$$
\sum_{j=1}^n\sum_{y\in
\mathbb{Z}_m^n}\eta_j(x,y,\varepsilon)\left[f(y)-f(\tau_j^{x}(y))\right].
$$
For every $\ell\ge 2$ let $S^\ell(x)$ be the set of all $y\in
S(x)$ for which the number of coordinates $j$ such that
$y_j-x_j\in \{k,-k\}\mod m$ equals $\ell$. Then $|S^\ell(x)|=
\binom{n}{\ell}2^\ell (k-1)^{n-\ell}$. Moreover, for $y\in
S^\ell(x)$ we have that $y\neq \tau_j^x(y)$ for at most $\ell$
values of $j$. Hence
\begin{eqnarray*}
N(x,\varepsilon)\le \sum_{\ell=2}^n |S^\ell(x)|\ell&=&\sum_{\ell=2}^n
\binom{n}{\ell}2^\ell (k-1)^{n-\ell}\ell\\
&=&2n\left[
(k+1)^{n-1}-(k-1)^{n-1}\right] \le \frac{4n^2}{k^2}k(k+1)^{n-1}.
\end{eqnarray*}
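Here the middle equality follows from the identity $\ell\binom{n}{\ell}=n\binom{n-1}{\ell-1}$ and the binomial theorem,
$$
\sum_{\ell=1}^n \ell\binom{n}{\ell}2^\ell(k-1)^{n-\ell}=2n\sum_{\ell=1}^n\binom{n-1}{\ell-1}2^{\ell-1}(k-1)^{n-\ell}=2n(k+1)^{n-1},
$$
after subtracting the $\ell=1$ term $2n(k-1)^{n-1}$; the last inequality uses $(k+1)^{n-1}-(k-1)^{n-1}\le 2(n-1)(k+1)^{n-2}\le \frac{2n}{k}(k+1)^{n-1}$.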
Now, using \eqref{eq:invariance}, we get
\begin{eqnarray}\label{eq:step1}&&\\
&& \int_{\mathbb{Z}_m^n}\Biggl\|\frac{1}{k(k+1)^{n-1}}\sum_{j=1}^n\sum_{y\in
\mathbb{Z}_m^n}\eta_j(x,y,\varepsilon)\left[f(y)-f(\tau_j^{x}(y))\right]\Biggr\|_X^pd\mu(x)
\nonumber\\&=&\int_{\mathbb{Z}_m^n}\left(\frac{N(x,\varepsilon)}{k(k+1)^{n-1}}\right)^p
\Biggl\|\frac{1}{N(x,\varepsilon)}\sum_{j=1}^n\sum_{y\in
\mathbb{Z}_m^n}\eta_j(x,y,\varepsilon)\left[f(y)-f(\tau_j^{x}(y))\right]\Biggr\|_X^pd\mu(x)\nonumber\\
&\le&
\int_{\mathbb{Z}_m^n}\frac{N(x,\varepsilon)^{p-1}}{k^{p}(k+1)^{(n-1)p}}\sum_{j=1}^n\sum_{y\in
\mathbb{Z}_m^n}\left\|f(y)-f(\tau_j^{x}(y))\right\|_X^p d\mu(x)\nonumber
\\&\le&\frac{4^{p-1}n^{2p-2}}{k^{2p-1}(k+1)^{n-1}}
\sum_{j=1}^n\sum_{y\in
\mathbb{Z}_m^n}\int_{\mathbb{Z}_m^n}\left\|f(y)-f(\tau_j^{x}(y))\right\|_X^p d\mu(x)\nonumber\\
&=&
\frac{4^{p-1}n^{2p-2}}{k^{2p-1}(k+1)^{n-1}}\sum_{j=1}^n\sum_{z\in
\mathbb{Z}_m^n}\int_{\mathbb{Z}_m^n}\left\|f(z+x)-f(\tau_j^0(z)+x)\right\|_X^pd\mu(x). \nonumber
\end{eqnarray}
Consider the following set:
$$
E_j=\{z\in \mathbb{Z}_m^n:\ \tau_j^0(z)=z-2ke_j\}.
$$
Observe that for every $j$,
\begin{eqnarray}\label{eq:estimate Ej}
|E_j|&=&\sum_{\ell=1}^{n-1} \binom{n-1}{\ell}2^\ell
(k-1)^{n-1-\ell}\\
&\le&(k+1)^{n-1}-(k-1)^{n-1}\le
\frac{2n}{k}(k+1)^{n-1}.\nonumber
\end{eqnarray}
Using the translation invariance of the Haar measure on $\mathbb{Z}_m^n$
we get that
\begin{eqnarray}\label{eq:binom}
&& \sum_{j=1}^n\sum_{z\in
\mathbb{Z}_m^n}\int_{\mathbb{Z}_m^n}\left\|f(z+x)-f(\tau_j^0(z)+x)\right\|_X^pd\mu(x) \\
& =&\sum_{j=1}^n\sum_{z\in
E_j}\int_{\mathbb{Z}_m^n}\|f(z+x)-f(z+x-2ke_j)\|_X^pd\mu(x)\nonumber\\
&=&\sum_{j=1}^n
|E_j|\int_{\mathbb{Z}_m^n}\|f(w)-f(w-2ke_j)\|_X^pd\mu(w)\nonumber
\\&\le&
\frac{2n}{k}(k+1)^{n-1}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\|f(w)-f(w-2ke_j)\|_X^pd\mu(w)\nonumber\\
&\le&\label{eq:step2} \frac{2n}{k}(k+1)^{n-1}\\
&&\times\sum_{j=1}^n\int_{\mathbb{Z}_m^n}
\left((2k)^{p-1}\sum_{t=1}^{2k}\|f(w-(t-1)e_j)
-f(w-te_j)\|_X^p\right)d\mu(w)\nonumber\\
&\le&2^{p+1}nk^{p-1}(k+1)^{n-1}\sum_{j=1}^n\int_{\mathbb{Z}_m^n}\|f(z+e_j)-f(z)\|_X^pd\mu(z) ,
\nonumber
\end{eqnarray}
where in~\eqref{eq:binom} we used~\eqref{eq:estimate Ej}.
Combining \eqref{eq:step1} and \eqref{eq:step2} completes the
proof of Claim~\ref{claim:convexity}.
\end{proof}
\medskip
By Claim~\ref{claim:convexity}, inequalities \eqref{eq:goal} and
\eqref{eq:goal2}, and hence also Lemma~\ref{lem:pass to average on
e}, will be proved once we establish the following identities:
\begin{eqnarray}\label{eq:identity}
\sum_{y\in S(x)}a_y(x,\varepsilon)f(y)=\sum_{j=1}^n\sum_{y\in
\mathbb{Z}_m^n}\varepsilon_j\left[f(y)-f(\tau_j^{x}(y))\right].
\end{eqnarray}
and
\begin{eqnarray}\label{eq:identity2}
\sum_{y\in S(x)}b_y(x,\varepsilon)f(y)=\sum_{j=1}^n\sum_{y\in
\mathbb{Z}_m^n}\delta_j(x,y,\varepsilon)\left[f(y)-f(\tau_j^{x}(y))\right],
\end{eqnarray}
for some $\delta_j(x,y,\varepsilon)\in \{-1,0,1\}$.
Identity \eqref{eq:identity} follows directly from the fact that
\eqref{eq:A} implies that for every $y\in S(x)$,
$$
a_y(x,\varepsilon)=\sum_{j:\ y_j-x_j\equiv k\mod m} \varepsilon_j-\sum_{j:\
y_j-x_j\equiv-k\mod m} \varepsilon_j.
$$
It is enough to prove identity \eqref{eq:identity2} for $x=0$,
since $b_y(x,\varepsilon)=b_{y-x}(0, \varepsilon)$. To this end we note that it
follows directly from \eqref{eq:B} that for every $y\in S(0)$
$$
b_y(0,\varepsilon)=\left\{\begin{array}{ll} 1 & \exists j \ y_j\equiv
\varepsilon_jk \mod m\text{ and } \forall \ell\
y_\ell\not\equiv-\varepsilon_\ell k\mod m\\
-1 & \exists j \ y_j\equiv -\varepsilon_jk \mod m\text{ and } \forall
\ell\
y_\ell\not\equiv\varepsilon_\ell k\mod m\\
0& \text{otherwise}.\end{array}\right.
$$
For $y\in S(0)$ define
$$
y^\ominus_j=\left\{ \begin{array}{ll} -y_j& y_j\in \{k,-k\}\mod m\\
y_j& \text{otherwise}.\end{array}\right.
$$
Since $b_y(0,\varepsilon)=-b_{y^\ominus}(0,\varepsilon)$ we get that
\begin{eqnarray}\label{eq:passtotheta}
\sum_{y\in S(0)}b_y(0,\varepsilon)f(y)=\frac12\sum_{y\in
S(0)}b_y(0,\varepsilon)\left[f(y)-f(y^\ominus)\right].
\end{eqnarray}
Define for $\ell\in \{1,\ldots,n+1\}$ a vector
$y^{\ominus_\ell}\in \mathbb{Z}_m^n$ by
$$
y^{\ominus_\ell}_j=\left\{ \begin{array}{ll} -y_j& j<\ell \text{ and } y_j\in \{k,-k\}\mod m\\
y_j& \text{otherwise}.\end{array}\right.
$$
Then $y^{\ominus_{n+1}}=y^\ominus$, $y^{\ominus_1}=y$ and by
\eqref{eq:passtotheta}
$$
\sum_{y\in S(0)}b_y(0,\varepsilon)f(y)= \frac12\sum_{\ell=1}^n\sum_{y\in
S(0)}b_y(0,\varepsilon)\left[f(y^{\ominus_\ell})-f(y^{\ominus_{\ell+1}})\right].
$$
Since whenever $y^{\ominus_\ell}\neq y^{\ominus_{\ell+1}}$, each
of these vectors is obtained from the other by flipping the sign
of the $\ell$-th coordinate, which is in $\{k,-k\}\mod m$, this
implies the representation \eqref{eq:identity2}. The proof of
Lemma~\ref{lem:pass to average on e} is complete.\hfill \qed
\section{A nonlinear version of the Maurey-Pisier theorem}
In what follows we denote by $\mathrm{\bf diag}(\mathbb{Z}_m^n)$ the graph on $\mathbb{Z}_m^n$
in which $x,y\in \mathbb{Z}_m^n$ are adjacent if for every $i\in
\{1,\ldots,n\}$,\ $x_i-y_i\in \{\pm 1 \mod m\}$.
For technical reasons that will become clear presently, given
$\ell,n\in \mathbb N$ we denote by $\mathcal B(\mathcal{M};n,\ell)$ the infimum
over $\mathcal B>0$ such that for every even $m\in\mathbb N$ and for every
$f:\mathbb{Z}_m^n \to \mathcal{M}$,
\begin{eqnarray*}
\sum_{j=1}^n\int_{\mathbb{Z}_m^n}d_\mathcal{M}\left(f\left(x+\ell
e_j\right),f(x)\right)^2d\mu(x)\le \mathcal B^2 \ell^2
n\EE_{\varepsilon}\int_{\mathbb{Z}_m^n}d_\mathcal{M}(f(x+\varepsilon),f(x))^2d\mu(x).
\end{eqnarray*}
\begin{lemma}\label{lem:a priori} For every metric space
$(\mathcal{M},d_\mathcal{M})${\rm ,} every $n,a\in \mathbb N${\rm ,} every even $m,r\in \mathbb
N$ with $0\le r<m${\rm ,} and every \pagebreak $f:\mathbb{Z}_m^n\to \mathcal{M}${\rm ,}
\begin{multline}\label{eq:mod}
\sum_{j=1}^n\int_{\mathbb{Z}_m^n}d_\mathcal{M}\left(f\left(x+(am+r)
e_j\right),f(x)\right)^2d\mu(x)\\* \le
\min\left\{r^2,(m-r)^2\right\}\cdot
n\EE_{\varepsilon}\int_{\mathbb{Z}_m^n}d_\mathcal{M}(f(x+\varepsilon),f(x))^2d\mu(x).
\end{multline}
In particular{\rm ,} $\mathcal B(\mathcal{M};n,\ell)\le 1$ for every $n\in \mathbb N$ and
every even $\ell\in \mathbb N$.
\end{lemma}
\begin{proof} The left-hand side of~\eqref{eq:mod} depends only on
$r$, and remains unchanged if we replace $r$ by $m-r$. We may thus
assume that $a=0$ and $r\le m-r$. Fix $x\in \mathbb{Z}_m^n$ and $j\in
\{1,\ldots n\}$. Observe that
$$
\left\{x+\frac{1-(-1)^k}{2}\sum_{\ell\neq j}e_\ell
+ke_j\right\}_{k=0}^{r}
$$
is a path of length $r$ joining $x$ and $x+r e_j$ in the graph
$\mathrm{\bf diag}(\mathbb{Z}_m^n)$. Thus the distance between $x$ and $x+r e_j$ in
the graph $\mathrm{\bf diag}(\mathbb{Z}_m^n)$ equals $r$. If
$(x=w_0,w_1,\ldots,w_{r}=x+r e_j)$ is a geodesic joining $x$ and
$x+re_j$ in $\mathrm{\bf diag}(\mathbb{Z}_m^n)$, then by the triangle inequality
\begin{eqnarray}\label{eq:geo}
d_\mathcal{M}(f(x+r e_j),f(x))^2\le
r\sum_{k=1}^{r}d_\mathcal{M}(f(w_k),f(w_{k-1}))^2.
\end{eqnarray}
Observe that if we sum~\eqref{eq:geo} over all geodesics joining
$x$ and $x+r e_j$ in $\mathrm{\bf diag}(\mathbb{Z}_m^n)$, and then over all $x\in
\mathbb{Z}_m^n$, then in the resulting sum each edge in $\mathrm{\bf diag}(\mathbb{Z}_m^n)$
appears the same number of times. Thus, averaging this inequality
over $x\in \mathbb{Z}_m^n$ we get
\begin{eqnarray*}
\int_{\mathbb{Z}_m^n} d_\mathcal{M}(f(x+r e_j),f(x))^2d\mu(x)\le
r^2\,\EE_{\varepsilon}\int_{\mathbb{Z}_m^n}d_\mathcal{M}(f(x+\varepsilon),f(x))^2d\mu(x).
\end{eqnarray*}
Summing over $j=1,\ldots n$ we obtain the required result.
\end{proof}
\begin{lemma}\label{lem:sub}
For every four integers $\ell,k,s,t\in \mathbb N${\rm ,}
\[
\mathcal B\left(\mathcal{M};\ell k,st\right)\le \mathcal B\left(\mathcal{M};\ell,s\right)\cdot
\mathcal B\left(\mathcal{M};k,t\right).
\]
\end{lemma}
\begin{proof} Let $m$ be an even integer and take a function
$f:\mathbb{Z}_m^{\ell k}\to \mathcal{M}$. Fix $x\in \mathbb{Z}_m^{\ell k}$ and $\varepsilon\in
\{-1,1\}^{\ell k}$. Define $g:\mathbb{Z}_m^{\ell}\to \mathcal{M}$ by
\[
g(y)=f\Biggl(x+\sum_{r=1}^{k}\sum_{j=1}^{\ell} \varepsilon_{j+(r-1)\ell}\cdot
y_j\cdot e_{j+(r-1)\ell}\Biggr).
\]
By the definition of $\mathcal B\left(\mathcal{M};{\ell},s\right)$, \pagebreak applied to $g$,
for every $\mathcal B_1> \mathcal B\left(\mathcal{M};{\ell},s\right)$,
\begin{eqnarray*}\hskip-8pt
&&\hskip-12pt \sum_{a=1}^{\ell}\int_{\mathbb{Z}_m^{\ell}}
d_\mathcal{M}\Biggl(f\Biggl(x+\sum_{r=1}^{k}\sum_{j=1}^{\ell}
\varepsilon_{j+(r-1)\ell}\cdot y_j\cdot
e_{j+(r-1)\ell}+s\sum_{r=1}^{k}\varepsilon_{a+(r-1)\ell}\cdot
e_{a+(r-1)\ell}\Biggr),\\&\phantom{\le}&
f\Biggl(x+\sum_{r=1}^{k}\sum_{j=1}^{\ell} \varepsilon_{j+(r-1)\ell}\cdot
y_j\cdot
e_{j+(r-1)\ell}\Biggr)\Biggr)^2d\mu_{\mathbb{Z}_m^{\ell}}(y)\\
&\le& \mathcal B_1^2 s^2\ell\cdot \EE_{\delta} \int_{\mathbb{Z}_m^{\ell}}
d_\mathcal{M}\Biggl(f\Biggl(x+\sum_{r=1}^{k}\sum_{j=1}^{\ell}
\varepsilon_{j+(r-1)\ell}\cdot (y_j+\delta_j)\cdot
e_{j+(r-1)\ell}\Biggr),\\&\phantom{\le}&
f\Biggl(x+\sum_{r=1}^{k}\sum_{j=1}^{\ell} \varepsilon_{j+(r-1)\ell}\cdot
y_j\cdot e_{j+(r-1)\ell}\Biggr)\Biggr)^2d\mu_{\mathbb{Z}_m^{\ell}}(y).
\end{eqnarray*}
Averaging this inequality over $x\in \mathbb{Z}_m^{{\ell k}}$ and $\varepsilon\in
\{-1,1\}^{{\ell k}}$, and using the translation invariance of the
Haar measure, we get that
\begin{multline}\label{eq:ell}
\EE_{\varepsilon}\sum_{a=1}^{\ell}\int_{\mathbb{Z}_m^{{\ell k}}}
d_\mathcal{M}\Biggl(f\Biggl(x+s\sum_{r=1}^{k}\varepsilon_{a+(r-1)\ell}\cdot
e_{a+(r-1)\ell}\Biggr),f(x)\Biggr)^2d\mu_{\mathbb{Z}_m^{{\ell k}}}(x)\\
\le \mathcal B_1^2 s^2\ell \EE_{\varepsilon}
\int_{\mathbb{Z}_m^{\ell k}}
d_\mathcal{M}\left(f\left(x+\varepsilon\right),f\left(x\right)\right)^2d\mu_{\mathbb{Z}_m^{{\ell k}}}(x).
\end{multline}
Next we fix $x\in \mathbb{Z}_m^{{\ell k}}$, $u\in \{1,\ldots, \ell\}$, and
define $h_u:\mathbb{Z}_m^{k}\to \mathcal{M}$ by
$$
h_u(y)=f\Biggl(x+{s}\sum_{r=1}^{k}y_r
\cdot e_{u+(r-1)\ell}\Biggr).
$$
By the definition of $\mathcal B\left(\mathcal{M};{k},{t}\right)$, applied to
$h_u$, for every $\mathcal B_2>\mathcal B\left(\mathcal{M};{k},{t}\right)$ we have
\begin{eqnarray*}
&& \hskip-36pt
\sum_{j=1}^{k}\int_{\mathbb{Z}_m^{k}}
d_\mathcal{M}\Biggl(f\Biggl(x+{s}\sum_{r=1}^{k}y_r \cdot
e_{u+(r-1)\ell}+{st} \cdot e_{u+(j-1)\ell}\Biggr),\\
&& \hskip100pt f\Biggl(x+{s}
\sum_{r=1}^{k}y_r \cdot
e_{u+(r-1)\ell}\Biggr)\Biggr)^2d\mu_{\mathbb{Z}_m^{{k}}}(y)\\
&=&
\sum_{j=1}^{k}\int_{\mathbb{Z}_m^{k}}
d_\mathcal{M}\Bigl(h_u\Bigl(y+{t}e_j\Bigr),h_u(y)\Bigr)^2
d\mu_{\mathbb{Z}_m^{{k}}}(y) \\
&\le&\mathcal B_2^2t^2k\cdot\EE_{\varepsilon}
\int_{\mathbb{Z}_m^{k}}d\left(h_u\left(y+\varepsilon\right),h_u(y)\right)^2d\mu_{\mathbb{Z}_m^{k}}(y)\\
&=& \mathcal B_2^2t^2k \EE_{\varepsilon} \int_{\mathbb{Z}_m^{k}}d_\mathcal{M}\Biggl(f\Biggl(x+{s}
\sum_{r=1}^{k}(y_r+\varepsilon_{u+(r-1)\ell})\cdot
e_{u+(r-1)\ell}\Biggr),\\
&& \hskip100pt f\Biggl(x+{s}\sum_{r=1}^{k}y_r \cdot
e_{u+(r-1)\ell}\Biggr)\Biggr)^2d\mu_{\mathbb{Z}_m^{k}}(y).
\end{eqnarray*}
Summing this inequality over $u\in \{1,\ldots,\ell\}$ and
averaging over $x\in \mathbb{Z}_m^{{\ell k}}$, we get,
using~\eqref{eq:ell}, that
\begin{eqnarray*}
&&\hskip-5pt \sum_{a=1}^{{\ell
k}}\int_{\mathbb{Z}_m^{{\ell k}}}
d_\mathcal{M}\left(f\left(x+{st}e_a\right),f(x)\right)^2d\mu(x)\\&&\quad \le
\mathcal B_2^2t^2k \EE_{\varepsilon}\sum_{u=1}^{\ell} \int_{\mathbb{Z}_m^{{\ell k}}}
d_\mathcal{M}\Biggl( f\Biggl(x+{s}\sum_{r=1}^{k}\varepsilon_{u+(r-1)\ell}\cdot
e_{u+(r-1)\ell}\Biggr),f\left(x\right)\Biggr)^2
d\mu(x)\\
&&\quad \le \mathcal B_2^2t^2k\cdot\mathcal B_1^2s^2\ell\mathbb{E}_\varepsilon \int_{\mathbb{Z}_m^{{\ell
k}}}d_\mathcal{M}\left(f\left(x+\varepsilon\right),f\left(x\right)\right)^2d\mu(x).
\end{eqnarray*}
This implies the required result.
\end{proof}
\begin{lemma}\label{lem:use the sub} Assume that there exist integers $n_0,\ell_0>1$ such
that\break $\mathcal B(\mathcal{M};n_0,\ell_0) <1$. Then there exists $0<q<\infty$ such that
for every integer $n${\rm ,}
$$
m_q^{(2)}(\mathcal{M};n,3n_0)\le 2\ell_0n^{\log_{n_0}\ell_0}.
$$
In particular{\rm ,} $\Gamma_q^{(2)}(\mathcal{M})<\infty$.
\end{lemma}
\begin{proof} Let $q<\infty$ satisfy
$\mathcal B(\mathcal{M},n_0,\ell_0)<n_0^{-1/q}$. Iterating Lemma~\ref{lem:sub} we
get that for every integer $k$, $\mathcal B(\mathcal{M};n_0^k,\ell_0^k)\le
n_0^{-k/q}$. Denoting $n=n_0^k$ and $m=2\ell_0^k$, this implies
that for every $f:\mathbb{Z}_{m}^n\to \mathcal{M}$,
\begin{multline*}
\sum_{j=1}^n\int_{\mathbb{Z}_m^n}d_\mathcal{M}\left(f\left(x+\frac{m}{2}
e_j\right),f(x)\right)^2d\mu(x)\\
\le\frac14 m^2n^{1-\frac{2}{q}}
\EE_{\varepsilon}\int_{\mathbb{Z}_m^n}d_\mathcal{M}(f(x+\varepsilon),f(x))^2d\mu(x).
\end{multline*}
For $f:\mathbb{Z}_m^{n'} \to \mathcal{M}$, where $n'\le n$, we define $g:\mathbb{Z}_m^{n'}\times \mathbb{Z}_m^{n-n'} \to \mathcal{M}$ by $g(x,y)=f(x)$. Applying the above inequality to $g$ we obtain
\begin{multline*} \sum_{j=1}^{n'} \int_{\mathbb{Z}_m^{n'}} d_{\mathcal{M}} \left( f\left(x+\tfrac m2 e_j \right), f(x) \right )^2 d\mu(x) \\
\le \frac 14 m^2 n^{1-\frac 2q} \EE_{\varepsilon} \int_{\mathbb{Z}_m^{n'}} d_{\mathcal{M}}(f(x+\varepsilon), f(x))^2 d\mu(x) . \end{multline*}
Hence, by
Lemma~\ref{lem:with zeros} we deduce that
$\Gamma_q^{(2)}(\mathcal{M};n_0^k,2\ell_0^k)\le 3$. For general $n$, let
$k$ be the minimal integer such that $n\le n_0^k$. By
Lemma~\ref{lem:monotone} we get that $\Gamma_q^{(2)}(\mathcal{M};n,2\ell_0^k)\le
3n_0^{1-2/q}\le 3n_0$. In other words,
\vskip12pt
\hfill $
\displaystyle{m_q^{(2)}(\mathcal{M};n,3n_0)\le 2\ell_0^k\le 2\ell_0n^{\log_{n_0}\ell_0}.}
$
\end{proof}
\begin{theorem}\label{thm:reverse cotype}
Let $n>1$ be an integer{\rm ,} $m$ an even integer{\rm ,} and $s$ an integer divisible
by $4$.
Assume that $\eta\in (0,1)$ satisfies
$8^{sn}\sqrt{\eta}<\frac12${\rm ,} and that there exists a mapping
$f:\mathbb{Z}_m^{n}\to \mathcal{M}$ such that
\begin{multline}\label{eq:condition}
\sum_{j=1}^{n}\int_{\mathbb{Z}_m^{n}}d_\mathcal{M}\left(f\left(x+s
e_j\right),f(x)\right)^2d\mu(x)
\\
> (1-\eta)s^2 n
\mathbb{E}_{\varepsilon}\int_{\mathbb{Z}_m^{n}}d_\mathcal{M}(f(x+\varepsilon),f(x))^2d\mu(x).
\end{multline}
Then
$$
c_\mathcal{M}\left(\left[s/4\right]_\infty^{n}\right)\le
1+8^{sn}\sqrt{\eta}.
$$
In particular{\rm ,} if $\mathcal B(\mathcal{M};n,s)=1$ then
$c_\mathcal{M}\left([s/4]_\infty^{n}\right)=1$.
\end{theorem}
\begin{proof} Observe first of all that~\eqref{eq:condition} and Lemma~\ref{lem:a
priori} imply that $m\ge 2s\sqrt{1-\eta}>2s-1$, so that $m\ge 2s$.
In what follows we will use the following numerical fact: If
$a_1,\ldots,a_r\ge 0$ and $0\le b\le \frac{1}{r}\sum_{j=1}^r a_j$,
then
\begin{eqnarray}\label{eq:b}
\sum_{j=1}^r \left(a_j-b\right)^2\le
\sum_{j=1}^r a_j^2-rb^2.
\end{eqnarray}
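Indeed, \eqref{eq:b} follows by expanding the square and using $b\sum_{j=1}^{r}a_j\ge rb^{2}$ (which holds since $b\ge 0$ and $\sum_{j=1}^r a_j\ge rb$):
$$
\sum_{j=1}^r \left(a_j-b\right)^2=\sum_{j=1}^r a_j^2-2b\sum_{j=1}^r a_j+rb^2\le
\sum_{j=1}^r a_j^2-2rb^2+rb^2=\sum_{j=1}^r a_j^2-rb^2.
$$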
For $x\in \mathbb{Z}_m^n$ let $\mathcal{G}_j^+(x)$ (resp. $\mathcal{G}_j^-(x)$) be the set
of all geodesics joining $x$ and $x+se_j$ (resp. $x-se_j$) in the
graph $\mathrm{\bf diag}(\mathbb{Z}_m^n)$.
As we have seen
in the proof of Lemma~\ref{lem:a priori}, since $s$ is even, these
sets are nonempty.
Notice that if $m=2s$ then $\mathcal{G}_j^+(x) = \mathcal{G}_j^-(x)$; otherwise
$\mathcal{G}_j^+(x) \cap \mathcal{G}_j^-(x) = \emptyset$.
Denote $\mathcal{G}_j^{\pm}(x)=\mathcal{G}_j^+(x) \cup
\mathcal{G}_j^-(x)$, and for $\pi \in \mathcal{G}_j^\pm(x)$,
\[ \sgn(\pi)=\begin{cases} +1 & \text{ if } \pi \in \mathcal{G}_j^+(x)\\
-1 & \text{ otherwise} . \end{cases} \]
Each geodesic in $\mathcal{G}_j^{\pm}(x)$ has
length $s$. We write each $\pi\in \mathcal{G}_j^{\pm}(x)$ as a sequence of
vertices $\pi=(\pi_0=x,\pi_1,\ldots,\pi_{s}=x+ \sgn(\pi) se_j)$.
Using~\eqref{eq:b} with $a_j=d_\mathcal{M}(f(\pi_j),f(\pi_{j-1}))$ and
$b=\frac{1}{{s}}d_\mathcal{M}\left(f\left(x+se_j\right),f(x)\right)$, which
satisfy the conditions of~\eqref{eq:b} due to the triangle
inequality, we get that for each $\pi\in \mathcal{G}_j^\pm(x)$,
\begin{multline}\label{eq:before average on paths}
\sum_{\ell=1}^{s}\left[d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1}))-
\frac{1}{{s}}d_\mathcal{M}\left(f\left(x +\sgn(\pi)se_j\right),f(x)\right)\right]^2\\
\le
\sum_{\ell=1}^{{s}}d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1}))^2-\frac{1}{{s}}
d_\mathcal{M}\left(f\left(x + \sgn(\pi)se_j\right),f(x)\right)^2.
\end{multline}
By symmetry $|\mathcal{G}_j^+(x)|=|\mathcal{G}_j^-(x)|$, and this value is
independent of $x\in \mathbb{Z}_m^{n}$ and $j\in\{1,\ldots,n\}$. Denote
$g=|\mathcal{G}^\pm_j(x)|$, and observe that $g\le 2\cdot 2^{ns}$.
Averaging~\eqref{eq:before average on paths} over all $x\in
\mathbb{Z}_m^{n}$ and $\pi\in \mathcal{G}_j^\pm(x)$, and summing over $j\in
\{1,\ldots,n\}$, we get that
\begin{eqnarray}\label{eq:average geo}
&&\frac{1}{g}\sum_{j=1}^{n}\int_{\mathbb{Z}_m^{n}}\sum_{\pi\in \mathcal{G}_j^\pm (x)}
\sum_{\ell=1}^{{s}}\Bigg[d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1})) \\
&&\hskip1.5in-
\frac{1}{{s}}d_\mathcal{M}\left(f\left(x+ \sgn(\pi)
se_j\right),f(x)\right)\Bigg]^2d\mu(x)\nonumber\\
\nonumber &&\quad \le
sn\mathbb{E}_\varepsilon
\int_{\mathbb{Z}_m^{n}}d_\mathcal{M}(f(x+\varepsilon),f(x))^2\,d\mu(x)\nonumber\\*
&&\qquad -\frac{1}{s}\sum_{j=1}^{n}\int_{\mathbb{Z}_m^{n}}
d_\mathcal{M}\left(f\left(x+se_j\right),f(x)\right)^2\,d\mu(x)\nonumber\\*
&&\quad < \eta sn \mathbb{E}_\varepsilon\int_{\mathbb{Z}_m^{n}}d_\mathcal{M}(f(x+\varepsilon),f(x))^2\,d\mu(x).\nonumber
\end{eqnarray}
Define $\psi:\mathbb{Z}_{m}^{n}\to \mathbb{R}$ by
\begin{multline*}
\psi(x)= 2\eta sn2^{sn}
\mathbb{E}_\varepsilon[d_\mathcal{M}(f(x+\varepsilon),f(x))^2]\\ -\sum_{j=1}^{n}\sum_{\pi\in \mathcal{G}_j^\pm(x)}
\sum_{\ell=1}^{s}\left[d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1}))-
\frac{1}{{s}}d_\mathcal{M}\left(f\left(x+ \sgn(\pi)
se_j\right),f(x)\right)\right]^2.
\end{multline*}
Inequality~\eqref{eq:average geo}, together with the bound on
$g$, implies that
$$
0<\int_{\mathbb{Z}_m^{n}} \psi(x)d\mu(x)=\frac{1}{(2s-1)^n}\int_{\mathbb{Z}_m^{n}}
\sum_{\substack{y\in \mathbb{Z}_m^{n}\\
d_{\mathbb{Z}_m^{n}}(x,y)< s}}\psi(y)d\mu(x).
$$
It follows that there exists $x^0\in \mathbb{Z}_m^{n}$ such that
\begin{eqnarray}\label{eq:subgrid}
&&\sum_{\substack{y\in \mathbb{Z}_m^{n}\\
d_{\mathbb{Z}_m^{n}}(x^0,y)< s}}\sum_{j=1}^{n}\sum_{\pi\in
\mathcal{G}_j^+(y)\cup \mathcal{G}_j^-(y)} \\&&\qquad
\sum_{\ell=1}^{{s}}\left[d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1}))-
\frac{1}{{s}}d_\mathcal{M}\left(f\left(y+ \sgn(\pi) se_j\right),f(y)\right)\right]^2\nonumber\\
&&\quad
<2\eta sn2^{sn}\sum_{\substack{y\in \mathbb{Z}_m^{n}\\
d_{\mathbb{Z}_m^{n}}(x^0,y)< s}}\mathbb{E}_\varepsilon\left[d_\mathcal{M}(f(y+\varepsilon),f(y))^2\right].\nonumber
\end{eqnarray}
By scaling the metric $d_\mathcal{M}$ we may assume without loss of
generality that
\begin{eqnarray}\label{eq:average subgrid}
\frac{1}{(2s-1)^n}\sum_{\substack{y\in \mathbb{Z}_m^{n}\\
d_{\mathbb{Z}_m^{n}}(x^0,y)<s}}\mathbb{E}_\varepsilon\left[d_\mathcal{M}(f(y+\varepsilon),f(y))^2\right]=1.
\end{eqnarray}
It follows that there exists $y^0\in \mathbb{Z}_m^{n}$ satisfying
$d_{\mathbb{Z}_m^{n}}(x^0,y^0)< s$ such that
\begin{eqnarray}\label{eq:good point}
\mathbb{E}_\varepsilon\left[d_\mathcal{M}(f(y^0+\varepsilon),f(y^0))^2\right]\ge 1.
\end{eqnarray}
By translating the argument of $f$, and multiplying
(coordinate-wise) by an appropriate sign vector in $\{-1,1\}^{n}$,
we may assume that $y^0=0$ and all the coordinates of $x^0$ are
nonnegative. Observe that this implies that every $y\in
\{0,1,\ldots,s-1\}^{n}$ satisfies $d_{\mathbb{Z}_m^{n}}(x^0,y)<s$.
Thus~\eqref{eq:subgrid}, and \eqref{eq:average subgrid} imply that
for every $y\in \{0,1,\ldots,s-1\}^{n}$, every $j\in
\{1,\ldots,n\}$, every $\pi\in \mathcal{G}_j^\pm(y)$, and every $\ell\in
\{1,\ldots,s\}$,
\begin{multline}\label{eq:paths constant}
\left|d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1}))-
\frac{1}{{s}}d_\mathcal{M}\left(f\left(y+ \sgn(\pi)
se_j\right),f(y)\right)\right|\\
\le \sqrt{2\eta (2s-1)^nsn2^{sn}}\le
2^{2sn}\sqrt{\eta}.
\end{multline}
\begin{claim}\label{claim:adjacent} For every $\varepsilon,\delta\in
\{-1,1\}^{n}$ and every $x\in \mathbb{Z}_m^{n}${\rm ,} such that $x+\varepsilon\in
\{0,1,\ldots,s-1\}^{n}${\rm ,}
$$
\left|d_\mathcal{M}(f(x+\varepsilon),f(x))-d_\mathcal{M}(f(x+\delta),f(x))\right|\le
2\sqrt{\eta}\cdot 2^{2sn}.
$$
\end{claim}
\begin{proof} If $\varepsilon=\delta$ then there is nothing to prove, so assume that $\varepsilon_\ell=-\delta_\ell$ for some $\ell\in \{1,\ldots,n\}$.
Denote $S=\{j\in \{1,\ldots,n\}:\ \varepsilon_j=-\delta_j\}$ and define
$\theta,\tau\in \{-1,1\}^{n}$ by
\begin{equation*}
\theta_j=\begin{cases} -\varepsilon_\ell & j=\ell\\ \varepsilon_j & j\in S\setminus\{\ell\} \\
1 & j\notin S \end{cases} \qquad \mathrm {and} \qquad
\tau_j=\begin{cases} -\varepsilon_\ell & j=\ell\\ \varepsilon_j & j\in S\setminus\{\ell\} \\
-1 & j\notin S .\end{cases}
\end{equation*}
Consider the following path $\pi$ in $\mathrm{\bf diag}(\mathbb{Z}_m^{n})$: Start at
$x+\varepsilon\in \{0,1,\ldots,s-1\}^{n}$, go in direction $-\varepsilon$ (i.e. pass
to $x$), then go in direction $\delta$ (i.e. pass to $x+\delta$),
then go in direction $\theta$ (i.e. pass to $x+\delta+\theta$),
then go in direction $\tau$ (i.e. pass to $x+\delta+\theta+\tau$),
and repeat this process $s/4$ times. It is clear from the
construction that $\pi\in \mathcal{G}_\ell^{-\varepsilon_\ell}(x+\varepsilon)$. Thus,
by~\eqref{eq:paths constant} we get that
\begin{multline*}
\left|d_\mathcal{M}(f(x+\varepsilon),f(x))-d_\mathcal{M}(f(x+\delta),f(x))\right|
\\ =\left|d_\mathcal{M}(f(\pi_1),f(\pi_0))-d_\mathcal{M}(f(\pi_2),f(\pi_1))\right|\le
2\sqrt{\eta}\cdot 2^{2sn}.
\end{multline*}
\end{proof}
\begin{corollary}\label{coro:near zero} There exists a number
$A\ge 1$ such that for every $\varepsilon\in \{-1,1\}^{n}${\rm ,}
$$
\left(1-4\sqrt{\eta}\cdot 2^{2sn}\right)A\le d_\mathcal{M}(f(\varepsilon),f(0))\le
\left(1+4\sqrt{\eta}\cdot 2^{2sn}\right)A.
$$
\end{corollary}
\begin{proof}
Denote $e=\sum_{j=1}^{n}e_j=(1,1,\ldots,1)$ and take
$$
A=\left(\mathbb{E}_\delta
\left[d_\mathcal{M}(f(\delta),f(0))^2\right]\right)^{1/2}.
$$
By~\eqref{eq:good point}, $A\ge 1$. By Claim~\ref{claim:adjacent}
we know that for every $\varepsilon,\delta\in \{-1,1\}^{n}$,
$$
d_\mathcal{M}(f(\varepsilon),f(0))\le d_\mathcal{M}(f(e),f(0))+2\sqrt{\eta}\cdot 2^{2sn}\le
d_\mathcal{M}(f(\delta),f(0))+4\sqrt{\eta}\cdot 2^{2sn}.
$$
Averaging over $\delta$, and using the Cauchy-Schwarz inequality,
we get that
\begin{eqnarray*}
d_\mathcal{M}(f(\varepsilon),f(0))&\le& \left(\mathbb{E}_\delta
\left[d_\mathcal{M}(f(\delta),f(0))^2\right]\right)^{1/2}+4\sqrt{\eta}\cdot
2^{2sn}\\
&=& A+4\sqrt{\eta}\cdot 2^{2sn} \le
\left(1+4\sqrt{\eta}\cdot 2^{2sn}\right)A.
\end{eqnarray*}
In the reverse direction we also know that
$$
A^2=\mathbb{E}_\delta [d_\mathcal{M}(f(\delta),f(0))^2]\le
\left[d_\mathcal{M}(f(\varepsilon),f(0))+4\sqrt{\eta}\cdot 2^{2sn}\right]^2,
$$
which implies the required result since $A\ge 1$.
\end{proof}
\begin{claim}\label{claim:path} Denote
\begin{eqnarray}\label{eq:defV}
V=\left\{x\in \mathbb{Z}_m^{n}:\ \forall j\ 0\le x_j\le\frac{s}{2}\
\mathrm{and}\ x_j\ \mathrm{is\ even}\right\}.
\end{eqnarray}
Then the following assertions hold true\/{\rm :}\/
\begin{enumerate}
\item For every $x,y\in V$ there is some $z\in \{x,y\}${\rm ,} $j\in
\{1,\ldots,n\}${\rm ,} and a path $\pi\in \mathcal{G}_j^+(z)$ of length $s$ which
goes through $x$ and $y$. Moreover{\rm ,} we can ensure that if
$\pi=(\pi_0,\ldots,\pi_{s})$ then for all $\ell\in
\{1,\ldots,s\}${\rm ,} $\{\pi_\ell,\pi_{\ell-1}\}\cap
\{0,\ldots,s-1\}^{n}\neq \emptyset$.
\item For every $x,y\in V${\rm ,}
$d_{\mathrm{\bf diag}(\mathbb{Z}_m^{n})}(x,y)=d_{\mathbb{Z}_m^{n}}(x,y)=\|x-y\|_\infty$.
\end{enumerate}
\end{claim}
\begin{proof} Let $j\in \{1,\ldots,n\}$ be such that
$|y_j-x_j|=\|x-y\|_\infty:= t$. Without loss of generality
$y_j\ge x_j$. We will construct a path of length $s$ in
$\mathcal{G}_j^+(x)$ which goes through $y$. To begin with, we define
$\varepsilon^\ell,\delta^\ell\in \{-1,1\}^{n}$ inductively on $\ell$ as follows:
\begin{eqnarray*}
\varepsilon_r^\ell&=&\begin{cases} 1 & x_r+2\sum_{k=1}^{\ell-1}(\varepsilon^k_r+\delta^k_r)<y_r\\ -1 &
x_r+2\sum_{k=1}^{\ell-1}(\varepsilon^k_r+\delta^k_r)>y_r \\
1 & x_r+2\sum_{k=1}^{\ell-1}(\varepsilon^k_r+\delta^k_r)=y_r \end{cases} \\
\noalign{\noindent and}
\delta_r^\ell &=&\begin{cases} 1 & x_r+2\sum_{k=1}^{\ell-1}(\varepsilon^k_r+\delta^k_r)<y_r\\ -1 &
x_r+2\sum_{k=1}^{\ell-1}(\varepsilon^k_r+\delta^k_r)>y_r \\
-1 & x_r+2\sum_{k=1}^{\ell-1}(\varepsilon^k_r+\delta^k_r)=y_r. \end{cases}
\end{eqnarray*}
If we define $a_\ell=x+\sum_{k=1}^\ell
\varepsilon^k+\sum_{k=1}^{\ell-1}\delta^k$ and $b_\ell=x+\sum_{k=1}^\ell
\varepsilon^k+\sum_{k=1}^{\ell}\delta^k$ then the sequence
$$(x,a_1,b_1,a_2,b_2,\ldots, a_{t/2},b_{t/2}=y)$$ is a path of
length $t$ in $\mathrm{\bf diag}(\mathbb{Z}_m^{n})$ joining $x$ and $y$. This proves
the second assertion above. We extend this path to a path of
length $s$ (in $\mathrm{\bf diag}(\mathbb{Z}_m^{n})$) from $x$ to $x+se_j$ as follows.
Observe that for every $1\le \ell\le t/2$,
$\varepsilon^\ell_j=\delta^\ell_j=1$. Thus
$-\varepsilon^\ell+2e_j,-\delta^\ell+2e_j\in \{-1,1\}^n$. If we define
$c_\ell=y+\sum_{k=1}^\ell
(-\varepsilon^k+2e_j)+\sum_{k=1}^{\ell-1}(-\delta^k+2e_j)$ and
$d_\ell=y+\sum_{k=1}^\ell
(-\varepsilon^k+2e_j)+\sum_{k=1}^{\ell}(-\delta^k+2e_j)$, then
$d_{t/2}=x+2te_j$. Observe that by the definition of $V$, $2t\le
s$, and $s-2t$ is even. Thus we can continue the path from
$x+2te_j$ to $x+se_j$ by alternatively using the directions
$e_j+\sum_{\ell\neq j} e_\ell$ and $e_j-\sum_{\ell\neq j} e_\ell$.
\end{proof}
\begin{corollary}\label{coro:pass to all x}
Assume that $x\in V$. Then
for $A$ as in Corollary~{\rm \ref{coro:near zero},} we have for all
$\varepsilon\in \{-1,1\}^{n}${\rm ,}
$$
\left(1-10\sqrt{\eta}\cdot 2^{2sn}\right)A\le
d_\mathcal{M}(f(x+\varepsilon),f(x))\le \left(1+10\sqrt{\eta}\cdot 2^{2sn}\right)A.
$$
\end{corollary}
\begin{proof}
By Claim~\ref{claim:path} (and its proof), there exist $j\in
\{1,\ldots,n\}$ and $\pi\in \mathcal{G}_j^+(0)$ such that
$\pi_1=e=(1,\ldots,1)$ and for some $k\in \{1,\ldots,s\}$,
$\pi_k=x$. Now, by~\eqref{eq:paths constant} we have that
$$
\left|d_\mathcal{M}\left(f(e),f(0)\right)-d_\mathcal{M}\left(f(\pi_{k-1}),f(x)\right)\right|\le
2\sqrt{\eta}\cdot 2^{2sn}.
$$
Observe that since $x\in V$, $x+e\in \{0,\ldots,s-1\}^{n}$. Thus
by Claim~\ref{claim:adjacent}
\begin{eqnarray*}
&&\hskip-36pt \left|d_\mathcal{M}\left(f(x+\varepsilon),f(x)\right)-d_\mathcal{M}\left(f(e),f(0)\right)\right|
\\
&&\quad \le
\left|d_\mathcal{M}\left(f(e),f(0)\right)-d_\mathcal{M}\left(f(\pi_{k-1}),f(x)\right)\right|\\& &\qquad +
\left|d_\mathcal{M}\left(f(\pi_{k-1}),f(x)\right)-d_\mathcal{M}\left(f(x+e),f(x)\right)\right|\\
&&\qquad +
\left|d_\mathcal{M}\left(f(x+\varepsilon),f(x)\right)-d_\mathcal{M}\left(f(x+e),f(x)\right)\right|
\\&&\quad \le 6\sqrt{\eta}\cdot 2^{2sn},
\end{eqnarray*}
so that the required inequalities follow from
Corollary~\ref{coro:near zero}.
\end{proof}
\begin{corollary}\label{coro:in V}
For every distinct $x,y\in V${\rm ,}
$$
\left(1-12\sqrt{\eta}\cdot 2^{2sn}\right)A\le
\frac{d_\mathcal{M}(f(x),f(y))}{\|x-y\|_\infty}\le\left(1+12\sqrt{\eta}\cdot
2^{2sn}\right)A,
$$
where $A$ is as in Corollary~{\rm \ref{coro:near zero}.}
\end{corollary}
\begin{proof}
Denote $t=\|x-y\|_\infty$; we may assume that there exists
$j\in \{1,\ldots,n\}$ such that $y_j-x_j=t$. By
Claim~\ref{claim:path} there is a path $\pi\in \mathcal{G}_j^+(x)$ of
length $s$ such that $\pi_{t}=y$. By~\eqref{eq:paths constant} and
Corollary~\ref{coro:pass to all x} we have for every $\ell\in
\{1,\ldots,s\}$
\begin{eqnarray*}
\left|d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1}))-\frac{1}{s}d_\mathcal{M}\left(f\left(x+se_j\right),f(x)\right)\right|\le
\sqrt{\eta}\cdot 2^{2sn},
\end{eqnarray*}
and
$$
\left(1-10\sqrt{\eta}\cdot 2^{2sn}\right)A\le
d_\mathcal{M}(f(\pi_0),f(\pi_1)) \le \left(1+10\sqrt{\eta}\cdot
2^{2sn}\right)A.
$$
Thus, for all $\ell\in \{1,\ldots,s\}$,
$$
\left(1-12\sqrt{\eta}\cdot 2^{2sn}\right)A\le
d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1}))\le \left(1+12\sqrt{\eta}\cdot
2^{2sn}\right)A.
$$
Thus
\begin{eqnarray*}
d_\mathcal{M}(f(x),f(y))&\le& \sum_{\ell=1}^{t}
d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1}))\le t\cdot
\left(1+12\sqrt{\eta}\cdot 2^{2sn}\right)A\\
&=&\|x-y\|_\infty\cdot
\left(1+12\sqrt{\eta}\cdot 2^{2sn}\right)A.
\end{eqnarray*}
On the other hand
\begin{eqnarray*}
d_\mathcal{M}(f(x),f(y))&\ge&
d_\mathcal{M}(f(x+se_j),f(x))-d_\mathcal{M}(f(x+se_j),f(y))\\
&\ge& sd_\mathcal{M}(f(x),f(\pi_1))-s\sqrt{\eta}\cdot 2^{2sn}-\sum_{\ell=t+1}^{s}d_\mathcal{M}(f(\pi_\ell),f(\pi_{\ell-1}))\\
&\ge& s\left(1-10\sqrt{\eta}\cdot
2^{2sn}\right)A-s\sqrt{\eta}\cdot
2^{2sn}\\
&&-\left(s-t\right)\left(1-12\sqrt{\eta}\cdot
2^{2sn}\right)A\\
&\ge& \|x-y\|_\infty\cdot\left(1-12\sqrt{\eta}\cdot
2^{2sn}\right)A.
\end{eqnarray*}
\vglue-20pt
\end{proof}
\medskip
This concludes the proof of Theorem~\ref{thm:reverse cotype},
since the mapping $x\mapsto x/2$ is a distortion $1$ bijection
between $(V,d_{\mathbb{Z}_m^n})$ and $[s/4]_\infty^n$.
\end{proof}
We are now in position to prove Theorem~\ref{thm:MPcotype}.
\begin{proof}[Proof of Theorem~{\rm \ref{thm:MPcotype}}]
We assume that $\Gamma_q^{(2)}(\mathcal{M})=\infty$ for all
$q<\infty$. By Lemma~\ref{lem:use the sub} it follows that for
every two integers $n,s>1$, $\mathcal B(\mathcal{M};n,s)=1$. Now the required
result follows from Theorem~\ref{thm:reverse cotype}.
\end{proof}
\begin{lemma}\label{lem:cotype implies no grid} Let $\mathcal{M}$ be a
metric space and $K>0$. Fix $q<\infty$ and assume that
$m:= m_q^{(2)}(\mathcal{M};n,K)<\infty$. Then
$$
c_\mathcal{M}\left(\mathbb{Z}_m^n\right)\ge \frac{n^{1/q}}{2K}.
$$
\end{lemma}
\begin{proof} Fix a bijection $f:\mathbb{Z}_m^n\to \mathcal{M}$. Then
\begin{eqnarray*}
\frac{nm^2}{4\|f^{-1}\|_{\mathrm{Lip}}^2}&\le&
\sum_{j=1}^n\int_{\mathbb{Z}_m^n}d_\mathcal{M}\left(f\left(x+\frac{m}{2}e_j\right),f(x)\right)^2d\mu(x)\\&\le&
K^2 m^2
n^{1-\frac{2}{q}}\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}d_\mathcal{M}(f(x+\varepsilon),f(x))^2d\mu(x)d\sigma(\varepsilon)\\&\le&
K^2 m^2 n^{1-\frac{2}{q}}\|f\|_{\mathrm{Lip}}^2.
\end{eqnarray*}
It follows that $ \mathrm{dist}(f)\ge \frac{n^{1/q}}{2K}$.
\end{proof}
\begin{corollary}\label{coro:power}
Let $\mathcal{F}$ be a family of metric spaces and $0<q,K,\break c<\infty$. Assume
that for all $n\in \mathbb N${\rm ,} $\Gamma_q^{(2)}(\mathcal{M};n,n^c)\le K$ for
every $\mathcal{M}\in \mathcal{F}$. Then for every integer $N${\rm ,}
$$
\mathcal D_N(\mathcal{F})\ge \frac{1}{2cK}\left(\frac{\log N}{\log \log
N}\right)^{1/q}.
$$
\end{corollary}
We require the following simple lemma, which shows that the
problems of embedding $[m]_\infty^n$ and $\mathbb{Z}_m^n$ are essentially
equivalent.
\begin{lemma}\label{lem:torus grid} The grid
$[m]_\infty^n$ embeds isometrically into $\mathbb{Z}_{2m}^n$. Conversely{\rm ,}
$\mathbb{Z}_{2m}^{n}$ embeds isometrically into $[m+1]_\infty^{2mn}$.
Moreover{\rm ,} for each $\varepsilon>0${\rm ,} $\mathbb{Z}_{2m}^n$ embeds with distortion
$1+6\varepsilon$ into $[m+1]_\infty^{(\lceil 1/\varepsilon\rceil+1) n}$.
\end{lemma}
\begin{proof} The first assertion follows by consideration of only
elements of $\mathbb{Z}_{2m}^n$ whose coordinates are at most $m-1$. Next,
the Fr\'echet embedding \[x\mapsto
(d_{\mathbb{Z}_{2m}}(x,0),d_{\mathbb{Z}_{2m}}(x,1),\ldots,d_{\mathbb{Z}_{2m}}(x,2m-1))\in
[m+1]_\infty^{2m},\] is an isometric embedding of $\mathbb{Z}_{2m}$. Thus
$\mathbb{Z}_{2m}^n$ embeds isometrically into $[m+1]_\infty^{2mn}$. The
final assertion is proved analogously by showing that $\mathbb{Z}_{2m}$
embeds with distortion $1+6\varepsilon$ into $[m+1]_\infty^{\lceil
1/\varepsilon\rceil+1}$. This is done by consideration of the embedding
\begin{multline*}
x\mapsto (d_{\mathbb{Z}_{2m}}(x,0),d_{\mathbb{Z}_{2m}}(x,\lfloor 2\varepsilon
m\rfloor),d_{\mathbb{Z}_{2m}}(x,\lfloor 4\varepsilon m\rfloor),d_{\mathbb{Z}_{2m}}(x,\lfloor 6\varepsilon
m\rfloor),\ldots
\\
\ldots ,d_{\mathbb{Z}_{2m}}(x,\lfloor 2\lceil 1/\varepsilon\rceil \varepsilon
m\rfloor)),
\end{multline*}
which is easily seen to have distortion at most
$1+6\varepsilon$.
\end{proof}
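For concreteness, the following short Python sketch (ours; purely illustrative and not part of the argument) verifies numerically that the Fr\'echet-type map above is an isometry of $\mathbb{Z}_{2m}$ into $[m+1]_\infty^{2m}$ for a small value of $m$. The equality $\max_k|d_{\mathbb{Z}_{2m}}(x,k)-d_{\mathbb{Z}_{2m}}(y,k)|=d_{\mathbb{Z}_{2m}}(x,y)$ holds because the triangle inequality gives ``$\le$'', while the choice $k=x$ gives equality.
\begin{verbatim}
# Illustrative check that x -> (d(x,0), ..., d(x,2m-1)) embeds the cycle
# Z_{2m} isometrically into the grid [m+1]^{2m} with the sup-metric.
def d_cycle(x, y, n):
    """Graph distance between x and y on the cycle Z_n."""
    return min((x - y) % n, (y - x) % n)

def frechet(x, m):
    """Image of x under the embedding; every coordinate lies in {0, ..., m}."""
    return [d_cycle(x, k, 2 * m) for k in range(2 * m)]

def sup_dist(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

m = 5
ok = all(sup_dist(frechet(x, m), frechet(y, m)) == d_cycle(x, y, 2 * m)
         for x in range(2 * m) for y in range(2 * m))
print("isometric for m =", m, ":", ok)   # prints True
\end{verbatim}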
We are now in position to prove Theorem~\ref{thm:dicho}.
\begin{proof}[Proof of Theorem~{\rm \ref{thm:dicho}}]
We first prove the implication $1)\implies 2)$. Let $Z$ be the
disjoint union of all finite subsets of members of $\mathcal{F}$, i.e.
$$
Z= \bigsqcup\left\{\mathcal{N}:\ |\mathcal{N}|<\infty\ \mathrm{and}\ \exists\ \mathcal{M}\in
\mathcal{F},\ \mathcal{N}\subseteq \mathcal{M}\right\}.
$$
For every $k>1$ we define a metric $d_k$ on $Z$ by
$$
d_k(x,y)=\left\{ \begin{array}{ll} \frac{d_\mathcal{N}(x,y)}{\diam(\mathcal{N})}&
\exists\ \mathcal{M}\in \mathcal{F},\ \exists\ \mathcal{N}\subseteq \mathcal{M}\ \mathrm{s.t.}\
|\mathcal{N}|<\infty\ \mathrm{and}\ x,y\in \mathcal{N}\\
k& \mathrm{otherwise}.
\end{array}\right.
$$
Clearly $d_k$ is a metric. Moreover, by construction, for every
$K,k>1$,
$$
q^{(2)}_{(Z,d_k)}(K)\ge q^{(2)}_\mathcal{F}(K).
$$
Assume for the sake of contradiction that for every $K,k>1$,
$q^{(2)}_{(Z,d_k)}(K)=\infty$. In other words, for every $q<\infty$, and
$k\geq 1$, $\Gamma^{(2)}_q(Z,d_k)=\infty$. By Lemma~\ref{lem:use
the sub} it follows that for every $k\geq 1$, and every two
integers $n,s>1$,
$$
\mathcal B\left((Z,d_k);n,s\right)=1.
$$
Theorem~\ref{thm:reverse cotype} implies that
$c_{(Z,d_k)}\left([m]_\infty^{n}\right)=1$.
By our assumption there exists a metric space $X$ such that
$c_\mathcal{F}(X):= D>1$. Define a metric space $X'=X\times\{1,2\}$
via $d_{X'}((x,1),(y,1))=d_{X'}((x,2),(y,2))=d_X(x,y)$ and
$d_{X'}((x,1),(y,2))=2\diam(X)$. For large enough $s$ we have that
$c_{[2^{s-3}]_\infty^{2^s}}(X')<D$. Thus $c_{(Z,d_k)}(X')<D$ for
all $k$. Define
$$
k=\frac{4\diam(X)}{\min_{\substack{x,y\in X\\x\neq y}}d_X(x,y)}.
$$
Then there exists a bijection $f:X'\to (Z,d_k)$ with
$\mathrm{dist}(f)<\min\{2,D\}$. Denote $L=\|f\|_{\mathrm{Lip}}$.
We first claim that there exists $\mathcal{M}\in \mathcal{F}$, and a finite subset
$\mathcal{N}\subseteq \mathcal{M}$, such that $|f(X')\cap \mathcal{N}|\ge 2$. Indeed,
otherwise, by the definition of $d_k$, for all $x',y'\in X'$,
$d_k(f(x'),f(y'))=k$. Choosing distinct $x,y\in X$, we deduce that
$$
k=d_k(f(x,1),f(y,1))\le Ld_X(x,y)\le L\diam (X),
$$
and
\begin{eqnarray*}
k&=&d_k(f(x,1),f(y,2))
\geq \frac{L}{\mathrm{dist}(f)} \cdot d_{X'}((x,1),(y,2))\\
&>&
\frac{L}{2}\cdot 2\diam(X)=L\diam (X),
\end{eqnarray*}
which is a contradiction.
Thus, there exists $\mathcal{M}\in \mathcal{F}$ and a finite subset $\mathcal{N}\subseteq \mathcal{M}$
such that $|f(X')\cap \mathcal{N}|\ge 2$. We claim that this implies that
$f(X')\subseteq \mathcal{N}$. This will conclude the proof of
1)~$\Longrightarrow$~2), since the metric induced by $d_k$ on $\mathcal{N}$
is a re-scaling of $d_\mathcal{N}$, so that $X$ embeds with distortion
smaller than $D$ into $\mathcal{N}\subseteq \mathcal{M}\in \mathcal{F}$, which is a
contradiction of the definition of $D$.
Assume for the sake of a contradiction that there exists $x'\in
X'$ such that $f(x')\notin \mathcal{N}$. By our assumption there are
distinct $a',b'\in X'$ such that $f(a'),f(b')\in \mathcal{N}$. Now,
$$
1\ge d_k(f(a'),f(b'))
\geq \frac{L}{\mathrm{dist}(f)} \cdot d_{X'}(a',b') >
\frac{L}{2} \cdot \min_{\substack{u,v\in X\\u\neq v}}d_X(u,v),
$$
while
\begin{eqnarray*}
\frac{4\diam(X)}{\min_{\substack{u,v\in X\\u\neq
v}}d_X(u,v)}=k&=&d_k(f(x'),f(a'))\\
&\le& L d(x',a')\le
L\diam(X')=2L\diam(X),
\end{eqnarray*}
which is a contradiction.
To prove the implication $2)\implies 3)$ observe that in the above
argument we have shown that there exists $k,q<\infty$ such that
$\Gamma^{(2)}_q(Z,d_k)<\infty$. It follows that for some integer
$n_0$, $\mathcal B((Z,d_k);n_0,n_0)<1$, since otherwise by
Theorem~\ref{thm:reverse cotype} we would get that $(Z,d_k)$
contains, uniformly in $n$, bi-Lipschitz copies of $[n]_\infty^n$.
Combining Lemma~\ref{lem:torus grid} and Lemma~\ref{lem:cotype
implies no grid} we arrive at a contradiction. By
Lemma~\ref{lem:use the sub}, the fact that
$\mathcal B((Z,d_k);n_0,n_0)<1$, combined with Corollary~\ref{coro:power},
implies that $\mathcal D_n(Z,d_k)=\Omega((\log n)^{\alpha})$ for
some $\alpha>0$. By the definition of $(Z,d_k)$, this implies the
required result.
\end{proof}
We end this section by proving Theorem~1.8:
\begin{proof}[Proof of Theorem~{\rm 1.8}]
Denote $|X|=n$ and
$$
\Phi=\frac{\diam(X)}{\min_{x\neq y} d(x,y)}.
$$
Write $t=4\Phi/\varepsilon$ and let $s$ be an integer divisible by $4$ such
that $s\ge\max\{n,t\}$. Then $c_{[s]^{s}_\infty}(X)\le
1+\frac{\varepsilon}{4}$. Fix a metric space $Z$ and assume that
$c_Z(X)>1+\varepsilon$. It follows that $c_Z([s]_\infty^{s})\ge
1+\frac{\varepsilon}{2}$. By Theorem~\ref{thm:reverse cotype} we deduce
that
$$
\mathcal B(Z,s,4s)\le 1-\frac{\varepsilon^2}{2^{s^2}}.
$$
By Lemma~\ref{lem:use the sub} we have that $m_q^{(2)}(Z;n,3s)\le
8sn^{\log_{s}(4s)}$, where $q\le \frac{10^s}{\varepsilon^2}$. Thus by
Lemma~\ref{lem:cotype implies no grid} and Lemma~\ref{lem:torus
grid} we see that for any integer $n\ge 8s$,
$$
c_Z\left(\left[n^5\right]_\infty^n\right)\ge
\frac{n^{1/q}}{4s}=\frac{n^{\varepsilon^2/10^s}}{4s}.
$$
Choosing $N\approx (C\gamma)^{\frac{2^{4s}}{\varepsilon^2}}$, for an
appropriate universal constant $C$, yields the required result.
\end{proof}
\section{Applications to bi-Lipschitz, uniform, and coarse
embeddings}
Let $(\mathcal{N},d_\mathcal{N})$ and $(\mathcal{M},d_\mathcal{M})$ be metric spaces. For $f:\mathcal{N}\to \mathcal{M}$
and $t>0$ we define
$$
\Omega_f(t)=\sup\{d_\mathcal{M}(f(x),f(y));\ d_\mathcal{N}(x,y)\le t\},
$$
and
$$
\omega_f(t)=\inf\{d_\mathcal{M}(f(x),f(y));\ d_\mathcal{N}(x,y)\ge t\}.
$$
Clearly $\Omega_f$ and $\omega_f$ are nondecreasing, and for
every $x,y\in \mathcal{N}$,
$$
\omega_f\left(d_\mathcal{N}(x,y)\right)\le d_\mathcal{M}(f(x),f(y))\le
\Omega_f\left(d_\mathcal{N}(x,y)\right).
$$
With these
definitions, $f$ is uniformly continuous if $\lim_{t\to
0}\Omega_f(t)=0$, and $f$ is a uniform embedding if $f$ is
injective and both $f$ and $f^{-1}$ are uniformly continuous.
Also, $f$ is a coarse embedding if $\Omega_f(t)<\infty$ for all
$t>0$ and $\lim_{t\to \infty} \omega_f(t)=\infty$.
\begin{lemma}\label{lem:coarse restriction} Let $(\mathcal{M},d_\mathcal{M})$ be a metric space{\rm ,} $n$ an integer{\rm ,}
$\Gamma>0${\rm ,} and $0< p\le
q\le r$. Then for every function $f:\ell_r^n\to \mathcal{M}${\rm ,} and every
$s>0${\rm ,}
$$
n^{1/q}\omega_f(2s)\le \Gamma
m_q^{(p)}(\mathcal{M};n,\Gamma) \cdot \Omega_f\Biggl(\frac{2\pi
sn^{1/r}}{m_q^{(p)}(\mathcal{M};n,\Gamma)}\Biggr).
$$
\end{lemma}
\begin{proof} Denote $m=m_q^{(p)}(\mathcal{M};n,\Gamma)$, and define
$g:\mathbb{Z}_m^n\to \mathcal{M}$ by
$$
g(x_1,\ldots,x_n)=f\Biggl(\sum_{j=1}^n se^{\frac{2\pi i
x_j}{m}}e_j\Biggr).
$$
Then
\begin{multline*}
\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}
d_\mathcal{M}(g(x+\varepsilon),g(x))^pd\mu(x)d\sigma(\varepsilon)\\
\le \max_{ \varepsilon\in
\{-1,0,1\}^n}\Omega_f\Biggl(s\Biggl(\sum_{j=1}^n \left|e^{\frac{2\pi i
\varepsilon_j}{m}}-1\right|^r\Biggr)^{1/r}\Biggr)^p \le \Omega_f\left(\frac{2\pi
sn^{1/r}}{m}\right)^p.
\end{multline*}
On the other hand,
\begin{eqnarray*}
\sum_{j=1}^n \int_{\mathbb{Z}_m^n}
d_\mathcal{M}\left(g\left(x+\frac{m}{2}e_j\right),g(x)\right)^pd\mu(x)\ge
n\omega_f(2s)^p.
\end{eqnarray*}
By the definition of $m_q^{(p)}(\mathcal{M};n,\Gamma)$ it follows that
$$
n\omega_f(2s)^p\le \Gamma^pm^p
n^{1-\frac{p}{q}}\Omega_f\left(\frac{2\pi sn^{1/r}}{m}\right)^p,
$$
as required.
\end{proof}
\begin{corollary}\label{coro:no coarse} Let $\mathcal{M}$ be a metric space and assume that there exist constants $c,\Gamma>0$ such that for
infinitely many integers $n${\rm ,} $m_q^{(p)}(\mathcal{M};n,\Gamma)\le
cn^{1/q}$. Then for every $r>q${\rm ,} $\ell_r$ does not uniformly or
coarsely embed into $\mathcal{M}$.
\end{corollary}
\begin{proof} To rule out the existence of a coarse embedding
choose $s=n^{\frac{1}{q}-\frac{1}{r}}$ in Lemma~\ref{lem:coarse
restriction}. Using Lemma~\ref{lem:lower m} we get that
\begin{equation} \label{eq:ref-coarse}
\omega_f\left(2n^{\frac{1}{q}-\frac{1}{r}}\right)\le c\Gamma
\Omega_f\left(2\pi\Gamma\right). \nonumber
\end{equation}
Since $q<r$, it follows that $\liminf_{t\to \infty}
\omega_f(t)<\infty$, so that $f$ is not a coarse embedding.
To rule out the existence of a uniform embedding, assume that
$f:\ell_r\to \mathcal{M}$ is invertible and $f^{-1}$ is uniformly
continuous. Then there exists $\delta>0$ such that for $x,y\in
\ell_r$, if $d_\mathcal{M}(f(x),f(y))< \delta$ then $\|x-y\|_r<2$. It
follows that $\omega_f(2)\ge \delta$. Choosing $s=1$ in
Lemma~\ref{lem:coarse restriction}, and using Lemma~\ref{lem:lower
m}, we get that
\begin{equation}\label{eq:ref-uniform}
0<\delta\le \omega_f(2)\le c\Gamma\Omega_f\left(2\pi\Gamma\cdot
n^{\frac{1}{r}-\frac{1}{q}} \right). \nonumber
\end{equation}
Since $r>q$ it follows that $\limsup_{t\to 0} \Omega_f(t)>0$, so
that $f$ is not uniformly continuous.
\end{proof}
The following corollary contains Theorem~\ref{thm:uniform},
Theorem~\ref{thm:uniformL_p} and Theorem~\ref{thm:coarse}.
\begin{corollary} Let $X$ be a $K$-convex
Banach space. Assume that $Y$ is a Banach space which coarsely or
uniformly embeds into $X$. Then $q_Y\le q_X$. In particular{\rm ,} for
$p,q> 0${\rm ,} $L_p$ embeds uniformly or coarsely into $L_q$ if and
only if $p\le q$ or $q\le p\le 2$.
\end{corollary}
\begin{proof} By the Maurey-Pisier theorem~\cite{MP76}, for every
$\varepsilon>0$ and every $n\in \mathbb N$, $Y$ contains a $(1+\varepsilon)$ distorted copy of
$\ell_{q_Y}^n$. By Theorem~\ref{thm:K}, since $X$ is $K$-convex,
for every $q>q_X$ there exists $\Gamma<\infty$ such that
$m_q(X;n,\Gamma) =O\left(n^{1/q}\right)$. Thus, by the proof of
Corollary~ \ref{coro:no coarse}, if $Y$ embeds coarsely or
uniformly into $X$ then $q_Y\le q$, as required.
The fact that if $p\le q$ then $L_p$ embeds coarsely and uniformly
into $L_q$ follows from the fact that in this case $L_p$, equipped
with the metric $\|x-y\|_p^{p/q}$, embeds {\em isometrically} into
$L_q$ (for $p\le q\le 2$ this is proved in~\cite{BD-CK65}, \cite{WW75}.
For the remaining cases see Remark 5.10 in~\cite{MN04}). If $2\ge
p\ge q$ then $L_p$ is linearly isometric to a subspace of $L_q$
(see e.g.~\cite{Woj96}). It remains to prove that if $p>q$ and
$p>2$ then $L_p$ does not coarsely or uniformly embed into $L_q$.
We may assume that $q\ge 2$, since for $q\le 2$, $L_q$ embeds
coarsely and uniformly into $L_2$. But now the required result
follows from the fact that $L_q$ is $K$-convex and $q_{L_q}=q$,
$q_{L_p}=p$ (see~\cite{MS86}).
\end{proof}
We now pass to the proof of Theorem~\ref{thm:infty grid}. Before
doing so we remark that Theorem~\ref{thm:infty grid} is almost
optimal in the following sense. The identity mapping embeds
$[m]_\infty^n$ into $\ell_q^n$ with distortion $n^{1/q}$. By the
Maurey-Pisier theorem~\cite{MP76}, $Y$ contains a copy of
$\ell_{q_Y}^n$ with distortion $1+\varepsilon$ for every $\varepsilon>0$. Thus $
c_Y([m]_\infty^n)\le n^{1/q_Y}$. Additionally, $[m]_\infty^n$ is
$m$-equivalent to an equilateral metric. Thus, if $Y$ is infinite
dimensional then $c_Y([m]_\infty^n)\le m$. It follows that
$$
c_Y([m]_\infty^n)\le \min\left\{n^{1/q_Y},m\right\}.
$$
\begin{proof}[Proof of Theorem~{\rm \ref{thm:infty grid}}] Assume that
$m$ is divisible by $4$ and
$$
m\ge \frac{2n^{1/q}}{C_q(Y)K(Y)}.
$$
By Theorem~\ref{thm:K}, for every $f:\mathbb{Z}_m^n \to Y$,
\begin{multline*}
\sum_{j=1}^n
\int_{\mathbb{Z}_m^n}\left\|f\left(x+\frac{m}{2}e_j\right)-f(x)\right\|_Y^qd\mu(x)\\
\le
\left[15C_q(Y)K(Y)\right]^qm^q\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_Y^qd\mu(x)d\sigma(\varepsilon).
\end{multline*}
Thus, assuming that $f$ is bi-Lipschitz we get that
$$
\frac{nm^q}{2^q\|f^{-1}\|_{\mathrm{Lip}}^q}\le
\left[15C_q(Y)K(Y)\right]^qm^q\cdot \|f\|_{\mathrm{Lip}}^q,
$$
i.e.
$$
\mathrm{dist}(f)\ge \frac{n^{1/q}}{30C_q(Y)K(Y)}.
$$
By Lemma~\ref{lem:torus grid} this shows that for $m\ge
\frac{2n^{1/q}}{C_q(Y)K(Y)}$, such that $m$ is divisible by $4$, $
c_Y([m]_\infty^n)=\Omega\left(n^{1/q}\right)$. If $m<
\frac{2n^{1/q}}{C_q(Y)K(Y)}$ then the required lower bound follows
from the fact that $[m]_\infty^n$ contains an isometric copy of
$[m_1]_\infty^{n_1}$, where $m_1$ is an integer divisible by $4$,
$m_1\ge \frac{2n_1^{1/q}}{C_q(Y)K(Y)}$, and $m_1 =\Theta(m)$,
$n_1=\Theta(m^q)$. Passing to integers $m$ which are not
necessarily divisible by $4$ is just as simple.
\end{proof}
\begin{remark} Similar arguments yield bounds on $c_Y([m]_p^n)$,
which strengthen the bounds in~\cite{MN05-proc}.
\end{remark}
\begin{remark}\label{rem:L1}
Although $L_1$ is not $K$-convex, we can still show that
$$
c_1([m]_\infty^n)=\Theta\left(\min\left\{\sqrt{n},m\right\}\right).
$$
This is proved as follows. Assume that $f:\mathbb{Z}_m^n\to L_1$ is
bi-Lipschitz. If $m$ is divisible by $4$, and $m\ge \pi\sqrt{n}$,
then the fact that $L_1$, equipped with the metric
$\sqrt{\|x-y\|_1}$, is isometric to a subset of Hilbert
space~\cite{WW75}, \cite{DL97}, together with
Proposition~\ref{prop:hilbert}, shows that
\begin{multline*}
\sum_{j=1}^n
\int_{\mathbb{Z}_m^n}\left\|f\left(x+\frac{m}{2}e_j\right)-f(x)\right\|_1d\mu(x)\\
\le
m^2\int_{\{-1,0,1\}^n}\int_{\mathbb{Z}_m^n}\|f(x+\varepsilon)-f(x)\|_1d\mu(x)d\sigma(\varepsilon).
\end{multline*}
Arguing as in the proof of Theorem~\ref{thm:infty grid}, we see
that for $m\approx \sqrt{n}$,
$c_1([m]_\infty^n)=\Omega\left(\sqrt{n}\right)$. This implies the
required result, as in the proof of Theorem~\ref{thm:infty grid}.
\end{remark}
\section{Discussion and open problems}\label{section:problems}
1.
Perhaps the most important open problem related to the
nonlinear cotype inequality on Banach spaces is whether for every
Banach space $X$ with cotype $q<\infty$, for every $1\le p\le q$
there is a constant $\Gamma<\infty$ such that
$m_q^{(p)}(X;n,\Gamma)=O\left(n^{1/q}\right)$.
By Lemma~\ref{lem:lower m} this is best possible.
In Theorem~\ref{thm:K} we proved that this is indeed the case when
$X$ is $K$-convex, while our proof of Theorem~\ref{thm:cotype}
only gives $m_q^{(p)}(X;n,\Gamma)=O\left(n^{2+1/q}\right)$.
\smallbreak 2. $L_1$ is not $K$-convex, yet we do know that
$m_2^{(1)}(L_1;n,4)=O\left(\sqrt{n}\right)$. This follows directly
from Remark~\ref{rem:L1}, Lemma~\ref{lem:multip} and
Lemma~\ref{lem:monotone}. It would be interesting to prove the
same thing for $m_2(L_1;n,\Gamma)$.
\smallbreak 3. We conjecture that the $K$-convexity assumption in
Theorem~\ref{thm:uniform} and Theorem~\ref{thm:coarse} is not
necessary. Since $L_1$ embeds coarsely and uniformly into $L_2$,
these theorems do hold for $L_1$. It seems to be unknown whether
any Banach space with finite cotype embeds uniformly or coarsely
into a $K$-convex Banach space. The simplest space for which we do
not know the conclusion of these theorems is the Schatten trace
class $C_1$ (see~\cite{Woj96}. In~\cite{T-J74} it is shown that
this space has cotype $2$). The fact that $C_1$ does not embed
uniformly into Hilbert space follows from the results
of~\cite{AMM85}, together with~\cite{Pisier78}, \cite{Kalton85}. For more
details we refer to the discussion in~\cite{BL00} (a similar
argument works for coarse embeddings of $C_1$ into Hilbert space,
by use of~\cite{Ran04}). We remark that the arguments presented here
show that a positive solution of the first problem stated above
would yield a proof of Theorem~\ref{thm:uniform} and
Theorem~\ref{thm:coarse} without the $K$-convexity assumption.
\section{Acknowledgments} We are grateful to Keith Ball for several valuable discussions. We also thank Yuri Rabinovich for
pointing out the connection to Matou\v{s}ek's BD Ramsey theorem.
The
recent tunneling experiments on Ga$_{1-x}$Mn$_x$As by
Richardella et al.~\cite{Yazdani} have demonstrated critical,
multifractal correlations in the local
tunneling density of states at the
metal-insulator transition in this semiconductor. While this would
be expected from the theory of Anderson localization in
non-interacting fermions, the experiment bears clear signs of the
relevance of electron-electron interactions. Most strikingly, the
critical correlations were found to persist very close to the Fermi
level, even upon doping further into the metallic regime. This
phenomenon clearly originates in interactions, which single out the
Fermi level as a special energy.
The effect of interaction
has been studied on either side of the metal-insulator (MI)
transition~\cite{50Anderson}, but rather little is known about its
role close to criticality. In a seminal work Efros and Shklovskii
showed \cite{ES, ES-Springer}, that deep in the insulator the
Coulomb interactions between localized electrons create a pseudogap
in the density of states (DoS) near the Fermi level
$\varepsilon_{F}$, where the DoS vanishes as $\rho(\varepsilon)\sim
|\varepsilon-\varepsilon_{F}|^{2}$ in 3D. This is reflected in the
Efros-Shklovskii law of variable-range hopping conductivity at low temperatures,
$\ln \sigma(T)\propto -\sqrt{T_0/T}$. In the opposite limit of
weakly disordered metals, Altshuler and Aronov discovered
\cite{AA-Elsevier-book}
interaction corrections to both the DoS near $\varepsilon_F$ and the
low $T$ conductance. In particular, for spinless fermions, disorder
and repulsive interactions both enhance the tendency to localize. In
the weakly localized regime the tunneling DoS has a dip at
$\varepsilon_F$ with~\cite{AA-Elsevier-book}
$\delta\rho=\rho(\varepsilon_{F}+\omega)-\rho(\varepsilon_{F})\propto
\sqrt{\omega}$. However, in contrast to the classical Efros-Shklovskii pseudo-gap,
the Altshuler-Aronov corrections are of purely quantum (exchange) origin:
$\delta\rho\propto \hbar^{3/2}$, if the diffusion coefficient and
the Fermi-velocity are held fixed.
The quantum corrections of Ref.~\cite{AA-Elsevier-book} can be
effectively summed to obtain a non-perturbative result near $d=2$ by
using the formalism of the non-linear sigma-model due to
Finkel'stein \cite{Fin-review, Bel-Kirk}. An effective action
approach was suggested by Levitov and Shytov~\cite{LevShyt} and
Kamenev and Andreev \cite{KamenevAndreev} to derive the
non-perturbative expression for the tunneling DoS in a weakly
disordered 2D system near the Fermi energy. Remarkably, in the lower
critical dimension $d=2$, the DoS at $\varepsilon_F$ vanishes
exactly in the thermodynamic limit. In higher dimensions $d>2$
instead, the above results suggest the following qualitative picture
\cite{LMNS, Bel-Kirk, Vojta}: The pseudo-gap in the one-particle DoS
gradually grows with increasing disorder or repulsion strength.
$\rho(\varepsilon_F)$ eventually vanishes at the localization
transition, and remains zero in the insulator. The shape of the
pseudo-gap evolves from the quantum behavior $\delta\rho\propto
\sqrt{\omega}$ in the metal to some non-trivial power
$\rho(\varepsilon_{F}+\omega)\propto \omega^{\mu}$, $\mu>0$ at the
Anderson transition point, to the classical $\omega^{2}$ behavior in
the deep insulator. A power law suppressed density of states at criticality, $\rho(\omega)\propto \omega^\mu$ is also predicted within an $\epsilon$-expansion in $d=2+\epsilon$ dimensions \cite{Bel-Kirk}. However, the actual scenario might be more
complex if the localization transition is accompanied, or even
preceded by a transition to glassy or other density-modulated
phases, aspects, which so far have not been taken into account
by existing theories.
A fascinating property of electronic eigenfunctions near the
localization transition of non-interacting particles is their
multifractality~\cite{Mirlin-rep}. It is an exact property of
critical states at the mobility edge $\varepsilon=\varepsilon_m$,
but, also off-critical states exhibit multifractal character inside
their localization or correlation radius $\xi$~\cite{KrCue}. This
fractality has important effects on interactions (both repulsive and
attractive), as it enhances their local matrix elements. In the case
of predominantly attractive interactions, it may induce local
pairing gaps in weak Anderson insulators. In more conducting
systems, it may lead to enhanced superconducting transition
temperatures~\cite{IFK, IFKCue, Bur, KrProc}.
It has remained an unresolved theoretical question whether such
subtle wavefunction correlations survive in the presence of Coulomb
interactions.
The reason to doubt their survival is most easily seen on the level of a Hartree-Fock (HF) approach,
where the combinations optimizing one-particle HF orbitals in the presence of interaction
are a linear combination of non-interacting wave functions of different energies.
If one na\"{\i}vely assumes that the fractal patterns of such wave functions are only weakly correlated,
one may expect partial or even complete degradation of the fractal structure in the HF wavefunctions, due to a superposition of a large number of random uncorrelated fractal patterns. In reality the fractal patterns of non-interacting wave functions are strongly correlated even at large energy separation ~\cite{IFK, IFKCue, KrProc}. Nevertheless, the question remains whether such interaction-induced superpositions give rise to a change of the fractal dimension of HF wave functions or may destroy fractality completely, despite of the correlations. There is indeed a subtle interplay between the strength of correlations and the effective number of non-interacting wave functions which superpose in the HF wavefunctions.
Another na\"{\i}ve argument, advocating the opposite conclusion, puts forward that the HF Hamiltonian is essentially a one-particle Hamiltonian of the same basic symmetry as for the non-interacting case. Hence, invoking universality, one would expect the same statistics of both non-interacting and HF wave functions. The flaw in this argument is that the matrix elements of the HF Hamiltonian which are {\em self-consistently} determined, possess correlations which could be long-range in the presence of badly screened long-range interactions between particles. Thus the two different na\"{\i}ve arguments lead to two opposite conclusions. According to one of them the fractal pattern of the HF wave function should be smeared out while the other one advocates unchanged fractal patterns. One of the main results of this study is to show that the first argument is in fact closer to reality.
On the other hand, the direct observation of
multifractality in the tunneling spectra of Ref.~\cite{Yazdani}
strongly suggests that multifractality survives the interaction.
Indeed, the measured auto-correlation function
of the local DoS (LDoS) showed a well-established power-law decay
with distance on the sample surface. Surprisingly, this critical
behavior appeared to be nearly pinned to the Fermi energy without any
fine-tuning of the ${\rm Mn}$ impurity concentration, implying that
the mobility edge $\varepsilon_m$ remains close to $\varepsilon_F$
in a broad range of disorder strengths. As mentioned above this
indicates the importance of interactions, since they single out
$\varepsilon_{F}$ as the center of the pseudogap. In this Letter, we
address the problem of multifractality at the interacting
localization transition theoretically, and study the mechanism by
which interactions pin the mobility edge $\varepsilon_m$ nearly to
$\varepsilon_F$ in a broad parameter range.
As a na\"{\i}ve rationale for this pinning of the mobility edge one
may consider that localization occurs earlier
where the density of states is lower. Thus localization is naturally
prone to occur first within the interaction-induced pseudogap,
making $\varepsilon_m$ track $\varepsilon_F$ rather closely.
However, it is only the single-particle (tunneling) DoS that has a
pronounced dip near $\varepsilon_{F}$, while the global
thermodynamic DoS, $dn/d\mu$, that enters the conductivity via the
Einstein relation, usually shows a different behavior. Hence, it is
not obvious which notion of DoS is relevant for localization and
transport purposes (cf. discussions in \cite{BES84,LMNS,Lee} about
global vs. local $(dn/d\mu)^{-1}$). A more detailed analysis is thus
required in order to show that indeed the LDoS close to
$\varepsilon_F$ can be critical, while in the bulk of the spectrum
the correlations are still metallic.
\subsection{Model}
To address these questions we consider a model of spinless fermions on a 3D
cubic lattice of size $N=10^3$ with the tight-binding Hamiltonian
\begin{equation} \label{tbH}
H_{0}=\sum_{i}(\epsilon_{i}-\mu)\,c^{\dagger}_{i}c_{i}-t
\sum_{\langle ij \rangle}c^{\dagger}_{i}c_{j}+{\rm h.c.}, \end{equation}
interacting via long-range Coulomb repulsion:
\begin{equation}\label{H-int}
H_{1}=\frac{U}{2}\sum_{i,j}\frac{n_{i}n_{j}}{r_{ij}}. \end{equation}
We employ periodic boundary conditions and choose $t=1$ as the unit
of energy. The on-site energies $\epsilon_{i}$ are random,
independently and uniformly distributed in $\epsilon_{i}\in
[-\frac{W}{2},\frac{W}{2}]$. The chemical potential $\mu$ depends on
interaction and is chosen so as to keep the average density $1/2$.
For non-interacting particles ($U=0$) the localization transition is
known to occur at the disorder strength $W_{c}=16.5$ in this model
\cite{16-5}. In the present work we choose $W=14 < W_{c}$ (not
particularly close to $W_{c}$), so as to mimic conditions of
Ref.~\cite{Yazdani} where the impurity concentration was not
specially tuned.
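For reference, a minimal Python sketch of the non-interacting part of the model, Eq.~(\ref{tbH}), is given below (illustrative only; the array layout and names are ours, and the chemical potential $\mu$ as well as the Coulomb term of Eq.~(\ref{H-int}) enter only at the Hartree-Fock stage described next):
\begin{verbatim}
import numpy as np

L, W, t = 10, 14.0, 1.0        # linear size, disorder strength, hopping (energy unit)
N = L**3
rng = np.random.default_rng(0)

def site(ix, iy, iz):
    """Linear index of the lattice site (ix, iy, iz)."""
    return (ix * L + iy) * L + iz

eps = rng.uniform(-W / 2, W / 2, size=N)   # random on-site energies epsilon_i
H0 = np.diag(eps)                          # mu is fixed later by half filling
for ix in range(L):
    for iy in range(L):
        for iz in range(L):
            i = site(ix, iy, iz)
            for j in (site((ix + 1) % L, iy, iz),   # periodic boundary conditions
                      site(ix, (iy + 1) % L, iz),
                      site(ix, iy, (iz + 1) % L)):
                H0[i, j] = H0[j, i] = -t
\end{verbatim}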
We attack this problem numerically by considering the interactions
in the Hartree-Fock (HF) approximation. This amounts to studying an
effective single-particle model with self-consistent on-site
energies and hopping amplitudes. In order to clarify the role of long-range interactions, we truncated the Coulomb interaction at a finite range, and then progressively increased its range up to the
size $L=10$ of the 3D system, defining $r_{ij}$ as the shortest distance on the torus.
We first took into account the Hartree
terms (occupation numbers $n_{j}$ in the sum
$U\sum_{j}r_{ij}^{-1}\,n_{j}$) and the Fock terms (expectation values of
$c^{\dagger}_{i}c_{j}$) up to the $5^{\rm th}$ nearest
neighbors. Then we considered the Fock terms up to the $5^{\rm th}$ nearest
neighbors while the Hartree terms were considered up to the $20^{\rm th}$ nearest
neighbors. Finally we tackled the full self-consistent problem for all neighbors.
\subsection{Overview of results}
The main result of our paper is to establish the
persistence of multifractality in the presence of full-range Coulomb interaction.
Notably, the fractal
dimension we find, $d_{2}\approx 1.57\pm 0.05$, appears to be significantly larger than in the non-interacting case.
With decreasing range of interaction the effective $d_{2}$ in a finite sample decreases ($d_{2}=1.38\pm 0.05$ for interaction up to the $5^{\rm th}$ nearest neighbor) until it reaches its value $d_{2}=1.35\pm 0.05$
for the non-interacting case. This marks essential
progress in comparison to earlier works based on the HF approach~\cite{Vojta}. As we will
describe below, the critical behavior exhibits various further
interesting features that are specific to the interacting case. Most
importantly, we establish that within the insulating
phase, even considerably far from the metal-insulator transition, the mobility edge remains very close to the Fermi level.
Further, we study the evolution of the pseudo-gap in the HF density of states (DoS) $\rho(\omega)$ as the increasing interaction drives the system towards the localization transition. In particular we confirm (within our accuracy) the scaling relationships suggested by McMillan and Shklovskii ~\cite{McMillan,LMNS} which relate the critical power law of the pseudo-gap $\rho\propto \omega^\mu$ with the dynamical scaling exponent $\eta$ and the exponent that describes the dependence of the static dielectric constant $\kappa_{0}\propto \xi^{\eta-1}$ on the localization radius $\xi$ in the insulator phase.
Finally, for the first time we address the question of multiplicity of HF solutions, and the competition of the related glassy features and localization. We show that within our accuracy the onset of multiplicity of solutions with increasing interaction strength (an indication of an emerging glassy energy landscape) coincides with the localization transition. In contrast, the charge ordering (typical for a Mott transition) occurs at much stronger interaction. Thus we argue that the localization transition should better be called an Anderson-glass transition, rather than an Anderson-Mott transition.
A preliminary version of this paper was published as a preprint \cite{cond-mat}.
\section{Hartree-Fock calculations}
The effective Hartree-Fock (HF) Hamiltonian which corresponds to the
model given in Eqs.~(\ref{tbH},\ref{H-int}) is:
\begin{equation}\label{HF-H}
H_{\rm HF}=\sum_{i}\tilde{V}_{i}\,c^{\dagger}_{i}c_{i}-\sum_{ij}\left( \tilde{t}_{ij}\,c^{\dagger}_{i}c_{j}+{\rm h.c.}\right)\,.
\end{equation}
Here
\begin{equation}\label{tildeV} \tilde{V}_{i}=\epsilon_{i}+\sum_{j}\frac{U}{|{\bf
r}_{i}-{\bf r}_{j}|}\,\langle c^{\dagger}_{j}c_{j}\rangle_{0}-\mu,
\end{equation}
\begin{equation} \label{tilde-t} \tilde{t}_{ij}=t_{ij}+ \frac{U}{|{\bf
r}_{i}-{\bf r}_{j}|}\,\langle c^{\dagger}_{j}c_{i}\rangle_{0}, \end{equation}
where $t_{ij}=t$ is the bare nearest-neighbor hopping and $\langle
...\rangle_{0}$ denotes the quantum-mechanical ground-state
expectation value evaluated on the Slater determinant formed by the
lowest $N/2$ HF levels. The effective on-site energy $\tilde{V}_{j}$
contains the interaction-induced {\it Hartree} term which leads to
correlated on-site energies (potentially at long range), while the effective hopping
$\tilde{t}_{ij}$ contains the {\it Fock} term which may be
long-range as well.
We carried out calculations on a cubic 3D lattice of size $L=10$, using 3 ranges of interactions.
The first one took into account up to the 20$^{\rm th}$ nearest neighbors (460 sites $j$
nearest to $i$) in the Hartree term and Fock terms corresponding
to the 5$^{\rm th}$ nearest neighbors (the 56 nearest sites up to distance
$\sqrt{5}$).
A second calculation restricted the
Hartree and the Fock terms equally to the 5$^{\rm th}$ nearest neighbors
in order to check the importance of Hartree terms and to ensure a correct implementation of the Pauli principle. A third calculation performed the self-consistent Hartree-Fock calculations with the full range of Coulomb interactions.
Note that the role of the Hartree and Fock terms is not the same in the metal and the insulator.
Deep on the insulating side, it is important to
keep the longer range Hartree terms to obtain the full classical
Efros-Shklovskii gap, while long range Fock terms are negligible due
to strong localization of the wavefunctions. In the metal, however, the role of
the Fock terms is expected to be more significant, while the Hartree terms incorporate an effective screening at long distances.
\subsection{Numerical implementation}
Even though completely standard, we briefly review the main steps involved in finding solutions of the HF equations.
The set $X$ of parameters to be
found self-consistently comprises all the $\langle
c^{\dagger}_{i}c_{j}\rangle_{0}$, ($\sim L^{6}/2$ parameters for the
full-scale Coulomb interaction) plus the
$L^{3}$ diagonal parameters $\langle n_{i}\rangle_{0}=\langle
c^{\dagger}_{i}c_{i}\rangle_{0}$ (i.e., $\sim 500{,}000$ parameters in total for $L=10$).
The chemical potential $\mu$ is
always adjusted to assure half filling, as described below.
In order to find a self-consistent solution we begin with a
random initial guess for all the parameters $X_{\rm in}^{(0)}$
satisfying the condition
\begin{equation} \label{fixN} \sum_{i}\langle n_{i}\rangle_{0}={\cal N}_{e}, \end{equation}
where the number of particles ${\cal N}_{e}=N/2$ is fixed in our
calculation (half-filling). Diagonalizing the effective Hamiltonian
Eqs.~(\ref{HF-H})-(\ref{tilde-t}) using the initial $X_{\rm
in}^{(0)}$ one obtains the eigenfunctions $\psi_{m}({\bf r})$ and
eigenvalues $\varepsilon_{m}$, from which we compute the {\it
output} parameters $X_{\rm out}^{(0)}$:
\begin{eqnarray}\label{self-con} \langle
n_{i}\rangle_{0}&=&\sum_{m}|\psi_{m}({\bf
r}_{i})|^{2}\,f(\varepsilon_{m}), \\ \langle
c^{\dagger}_{i}c_{j}\rangle_{0}&=&\sum_{m}\psi_{m}^{*}({\bf
r}_{i})\psi_{m}({\bf r}_{j})\,f(\varepsilon_{m}), \end{eqnarray}
where $f(\varepsilon)$ is the Fermi distribution function with the
chemical potential $\mu$. It is to be found from the condition:
\begin{equation} \label{chem} {\cal N}_{e}=\sum_{m}f(\varepsilon_{m}). \end{equation}
At $T=0$ considered in this paper there is an uncertainty of the
position of $\mu$ between the two energy levels $\varepsilon_{{\cal
N}_{e}+1}$ and $\varepsilon_{{\cal N}_{e}}$. For most of the
calculations we have chosen $\mu = \frac{1}{2}(\varepsilon_{{\cal
N}_{e}+1}+\varepsilon_{{\cal N}_{e}})$. However, when studying the DoS dip, in order to avoid
an artificial hard minigap at its bottom we used a parametric mixing
$\mu=(1-a)\,\varepsilon_{{\cal N}_{e}+1}+a\,\varepsilon_{{\cal
N}_{e}}$ with $0<a<1$; $a$ was fixed for a given disorder
realization, but taken at random for different disorder
realizations.
An updated set of parameters to be used as initial parameters
$X_{\rm in}^{(n+1)}$ for the next, i.e., $(n+1)$-th iteration is chosen as
follows ($n=0,1...$):
\begin{equation}\label{update}
X_{\rm in}^{(n+1)}=(1-\alpha)\,X_{\rm in}^{(n)}+\alpha\,X_{\rm out}^{(n)}.
\end{equation}
The parameter $\alpha\in[0,1]$ is chosen such that the iteration
process is stable and leads to a convergent solution. The iteration
procedure is terminated and the output set of parameters is taken as
the converged solution if the absolute value of the difference
between the values of all the parameters $X$ of the previous and the
final iteration is less than $10^{-4}$ for the truncated Coulomb interaction, and
$10^{-5}$ for the full range Coulomb interaction. Once a converged solution is obtained, and the final set of
HF eigenfunctions $\psi_{m}({\bf r})$ and eigenvalues
$\varepsilon_{m}$ have been calculated, one can compute any quantity
expressible in terms of $\psi_{m}({\bf r})$ and $\varepsilon_{m}$.
The procedure is then repeated for different realizations of
disorder to obtain disorder averaged quantities, such as the DoS and the LDoS correlation functions.
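Schematically, the loop described above can be summarized by the following Python sketch (ours, not the production code: it uses a fixed linear mixing $\alpha$, a dense diagonalization, and enforces half filling by simply occupying the lowest ${\cal N}_{e}$ HF levels rather than tracking $\mu$ explicitly). The inputs are the non-interacting Hamiltonian and a precomputed matrix of inverse torus distances with zero diagonal.
\begin{verbatim}
import numpy as np

def hartree_fock(H0, invr, U, Ne, alpha=0.3, tol=1e-4, max_iter=5000):
    """Schematic T=0 Hartree-Fock loop.
    H0   : N x N non-interacting Hamiltonian (disorder + hopping)
    invr : N x N matrix, invr[i, j] = 1/r_ij on the torus, invr[i, i] = 0
    U    : interaction strength, Ne : number of particles (half filling: N // 2)
    Returns HF eigenvalues, orbitals, and the density matrix rho[i, j] = <c_i^+ c_j>.
    """
    N = H0.shape[0]
    rng = np.random.default_rng(1)
    psi0 = np.linalg.qr(rng.standard_normal((N, Ne)))[0]   # random initial guess
    rho = psi0 @ psi0.T                                    # with trace(rho) = Ne
    for _ in range(max_iter):
        hartree = np.diag(invr @ np.diag(rho))   # Hartree shift: sum_j <n_j>/r_ij
        fock = rho * invr                        # Fock (exchange) term: <c_j^+ c_i>/r_ij
        H = H0 + U * (hartree - fock)
        e, psi = np.linalg.eigh(H)
        occ = psi[:, :Ne]                        # occupy the lowest Ne HF levels (T = 0)
        rho_out = occ @ occ.T
        if np.max(np.abs(rho_out - rho)) < tol:  # convergence of all parameters X
            return e, psi, rho_out
        rho = (1 - alpha) * rho + alpha * rho_out   # linear mixing update
    raise RuntimeError("HF iteration did not converge")
\end{verbatim}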
We point out that the solution of the HF equations is unique only at small enough $U$ within the metallic regime. At large $U$, one expects a number of solutions that grows exponentially with the volume. In this regime, we analyze {\em typical} solutions of the HF equations, without optimizing the HF energy among different solutions. This choice will be discussed and justified in Sec.~\ref{s:glass}.
The typical number of iterations needed to obtain a HF solution for one realization of
disorder was $\sim 2000$ for the full-scale Coulomb interaction. The total computational time to obtain one HF solution was mostly limited by the time $\sim L^{9}$ of diagonalization of a matrix Hamiltonian
of the size $L^{3}\times L^{3}$ needed in each iteration. With a typical number of disorder realizations $\sim 2000$ the total time at $L=10$ was of the order of $1000/(\#cores)$ hours for each parameter set of interaction and disorder strengths. For all values ($\sim 20$) of interaction strengths $U$ necessary for our scaling analysis and the average number of cores $\sim 50$ used for parallel computing the total time was of the order of 400 hours for $L=10$.
\section{McMillan-Shklovskii scaling.}
The metal-insulator transition in disordered systems is expected to occur as a second-order phase transition at some interaction strength $U_c$. Close to criticality, where $\tau\equiv |1-U/U_c|\ll1$, one expects a scaling form for the density of states
as
\begin{eqnarray}\label{f-rho} \rho(\omega\equiv
\varepsilon-\varepsilon_F)=\Delta^{-1}\; f_\rho(\omega/\delta), \end{eqnarray}
with
\begin{eqnarray}\label{gamma-over-mu} \Delta\propto \tau^{-\gamma},
\;\;\;\;\delta \propto \tau^{\gamma/{\mu}}. \end{eqnarray}
In the critical regime,
\begin{eqnarray}\label{large-x} f_\rho(|x|\gg 1)&\sim& |x|^{\mu}, \end{eqnarray}
whereas in the metal and the insulator,
\begin{eqnarray}\label{small-x}
f_{\rho,M}(|x|\ll 1)&=&{\rm const.},\\
f_{\rho,I}(|x|\ll 1)&\sim& x^{2}, \end{eqnarray}
which capture the shape of the Altshuler-Aronov and Efros-Shklovskii pseudogaps as limiting cases.
The exponents in Eq.~(\ref{gamma-over-mu}) are chosen such that the dependence on the critical parameter $\tau$ disappears at criticality.
The scaling \cite{LMNS,McMillan} is based on an assumption about the potential of a point
charge within the critical regime, i.e., at a distance $a\ll r\ll
\xi$. Here $\xi$ is the correlation length which diverges at the
transition as,
\begin{equation}\label{xi-nuu}\xi\propto |1-U/U_{c}|^{-\nu}, \end{equation}
and $a$ is a certain microscopic length (e.g., the distance between
donors in doped semiconductors \cite{LMNS}). In our simulations it
can be taken equal to the lattice spacing. The assumption is that
the potential behaves as a modified power law,
\begin{equation}\label{screening} V(r)\sim
U\, \left( \frac{a}{r}\right)^{\eta},\;\;\;\;U\sim e^{2}/a, \;\;\;\;\;a\ll r \ll \xi. \end{equation}
As Eq.~(\ref{screening}) is essentially the relationship between the
length scale $r$ and the energy (or inverse-time) scale $V$, the exponent $\eta$
should coincide with the dynamical scaling exponent $z$. For the
non-interacting case the dynamical exponent takes its maximum value
$\eta = d = 3$, while the smallest theoretically admissible value is set
by the exponent of the bare Coulomb
potential~\cite{McMillan}, $\eta\geq 1$.
The exponent $\eta$ also governs the scaling of the static
dielectric constant in the insulator~\cite{McMillan}:
\begin{equation} \label{Macmillan} \kappa_{0}\propto
(1-U/U_{c})^{-\zeta},\;\;\;\;\zeta=\nu\,(\eta-1). \end{equation}
To show this it is enough to assume that in the insulator at
distances $r\gg\xi$ the potential $V(r)$ takes the usual form
of dielectric screening,
\begin{equation} \label{diel-scr} V(r)=\frac{e^{2}}{\kappa_{0}\,r},\;\;\;\;r\gg
\xi \end{equation}
and matches with the potential in Eq.~(\ref{screening}) at distances $r\sim \xi$.
The characteristic energy scale $\delta$ in Eq.~(\ref{f-rho}) is set by the
potential $V(r)$ at $r\sim \xi$:
\begin{equation}\label{delta-eta} \delta=V(\xi) = e^{2}a^{\eta-1}\,\xi^{-\eta}.
\end{equation}
In a system of finite size $L$, in the critical region
where $\xi\gg L$, one should replace $\delta$ by the
mean level spacing $\delta_{L}\sim V(L)$.
Finally, the characteristic scale $\Delta$ of the DoS can be
expressed through $\delta$ and $\xi$ by a relationship following from dimensional arguments:
\begin{equation} \label{Delta-xi}
\Delta^{-1}=\frac{1}{\delta\,\xi^{3}}=\frac{\xi^{-(3-\eta)}}{U\,a^{\eta}},\;\;\;U\equiv
e^{2}/a. \end{equation}
Eqs.~(\ref{delta-eta},\ref{Delta-xi}), as well as the scaling
Eq.~(\ref{f-rho}) are valid for:
\begin{equation} \label{limits} a\ll \xi,\;\;\;\;\;U\gg \omega\gg
\delta_{L}\equiv U\, (a/L)^{\eta}. \end{equation}
From Eqs.~(\ref{delta-eta},\ref{Delta-xi}) and (\ref{gamma-over-mu})
one immediately obtains the following scaling relations~\cite{McMillan,LMNS}:
\begin{equation} \label{Shkl} \gamma=\nu(3-\eta),\;\;\;\;\mu=\frac{3}{\eta}-1,
\end{equation}
in terms of $\eta$ and the correlation length exponent $\nu$.
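For completeness, the two relations in Eq.~(\ref{Shkl}) follow by combining Eqs.~(\ref{xi-nuu}), (\ref{delta-eta}) and (\ref{Delta-xi}) with the definitions in Eq.~(\ref{gamma-over-mu}):
\[
\delta = V(\xi)\propto \xi^{-\eta}\propto \tau^{\nu\eta}
\;\Rightarrow\;
\frac{\gamma}{\mu}=\nu\eta,
\qquad
\Delta = \delta\,\xi^{3}\propto \tau^{-\nu(3-\eta)}
\;\Rightarrow\;
\gamma=\nu(3-\eta),
\]
so that $\mu=\gamma/(\nu\eta)=(3-\eta)/\eta=3/\eta-1$.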
Note that the above scaling assumes only one critical scale $\xi$ separating different regimes of
$V(r)$. Should an additional scale (e.g., one related to a ``screening transition'') appear, the exponent $\mu$ would become independent of the dynamical scaling exponent $\eta$.
In the next section we analyze the evolution of the density of states across the transition in the light of the above scaling assumptions.
\section{Pseudo-gap in the density of states (DoS).}
\begin{figure}[h]
\center{\includegraphics[width=1\linewidth]{F1_NJP_Volodya_req2}}
\caption{(Color online)
Disorder-averaged DoS $\rho(\varepsilon)$ of
the HF states at $W=14$ at different interaction strengths $U$. The
crossover from the quantum Altshuler-Aronov correction $\delta\rho\sim
\sqrt{|\varepsilon-\varepsilon_{F}|}$ to the classical Efros-Shklovskii gap
$\rho\sim (\varepsilon-\varepsilon_{F})^{2}$ is seen. The bandwidth progressively increases with increasing $U$. Inset:
$\rho(\varepsilon_{F})$ as a function of $U$. For $U>1.5$ the DoS
$\rho(\varepsilon_{F})\approx 0.5/(UL^{2})$ follows the classical Efros-Shklovskii
law, where $L=10$ is the system size.} \label{Fig:DoS}
\end{figure}
In Fig.~\ref{Fig:DoS} we present the DoS of
the HF levels.
One can see that deep in the metallic and in the insulating
regimes, HF correctly captures the Altshuler-Aronov and Efros-Shklovskii pseudogap features
discussed above, while it provides a non-trivial mean field approach
to describing various interesting phenomena happening at and close
to the MI transition. The curvature of $\rho(\varepsilon)$ at small
$\omega=\varepsilon-\varepsilon_{F}$ is seen to change sign as $U$
increases.
From RG and scaling arguments~\cite{McMillan,LMNS} as presented above, one expects a critical power law
$\rho(\omega)\sim\omega^\mu$ in a frequency regime where $\omega>[\rho(\omega)\,\xi^d]^{-1}$.
\begin{figure}[h]
\center{
\includegraphics[width=0.7\linewidth]{DOS_both_collapse}}
\caption{Collapse of data for the DoS in the window $U\gg
\omega\gg\delta_{L}$ (Eq.~(\ref{limits})) onto the scaling function
$f_{\rho}(x)$: (a) metallic side with $f_{\rho,M}(x)=1+x^{0.53}$ (b)
insulating side with $f_{\rho,I}(x)=[x^{-2}+x^{-0.68}]^{-1}$. }
\label{fig:collapse}
\end{figure}
In Fig.~\ref{fig:collapse} we verified the scalings
Eqs.~(\ref{f-rho},\ref{large-x},\ref{small-x})
by collapsing the ``low-energy'' data (in the regime~(\ref{limits})) for
$\rho(\varepsilon)$ close to $\varepsilon_{F}$ onto the universal
scaling functions $f_{\rho,M}(x)$ and $f_{\rho,I}(x)$. From the
power-law behavior of $f_{\rho,M}$ at $x\gg 1$ we found for the
exponent $\mu$:
\begin{equation}\label{mu-met} \mu_{M}=0.53, \end{equation}
very close to the value experimentally observed in the tunneling DoS close to criticality~\cite{LeePRL}. We
cross-checked this result in Fig.~\ref{fig:loga-logb} by plotting
$\ln\Delta$ versus $\ln(1/\delta)$. We obtained an almost linear curve
in accordance with Eq.~(\ref{gamma-over-mu}), with the same slope
$\mu_M=0.53$ as in Fig.~\ref{fig:collapse}(a).
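As an illustration of how such a collapse can be carried out in practice, the sketch below fits the scales $\Delta(U)$ and $\delta(U)$ for each curve against the metallic scaling function of Fig.~\ref{fig:collapse}(a) and then extracts the slope of $\ln\Delta$ vs. $\ln(1/\delta)$ as in Fig.~\ref{fig:loga-logb}; the data arrays, initial guesses and fitting ranges are placeholders rather than the exact procedure used here.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def f_metal(x, mu):
    # metallic scaling function f_M(x) = 1 + x^mu
    return 1.0 + x ** mu

def fit_scales(omega, rho, mu):
    # fit Delta(U) and delta(U) for one curve, assuming
    # rho(omega) = f_metal(omega/delta) / Delta  [Eq. (f-rho)]
    model = lambda w, Delta, delta: f_metal(w / delta, mu) / Delta
    (Delta, delta), _ = curve_fit(model, omega, rho, p0=(1.0, 0.1), maxfev=10**4)
    return Delta, delta

def cross_check_mu(curves, mu):
    # slope of ln(Delta) vs ln(1/delta); Eq. (gamma-over-mu) predicts slope = mu
    scales = np.array([fit_scales(w, r, mu) for (w, r) in curves])
    slope, _ = np.polyfit(-np.log(scales[:, 1]), np.log(scales[:, 0]), 1)
    return slope
\end{verbatim}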
\begin{figure}[t]
\center{\includegraphics[width=0.5\linewidth]{scaling_DOS_loga_b_NJP}}
\caption{Log-log plot of $\Delta$ vs. $1/\delta$, as obtained from
the metallic side of the data collapse for the DoS. It is almost
linear with the slope $\mu_M=0.53$.} \label{fig:loga-logb}
\end{figure}
In the insulator the best collapse corresponds to:
\begin{equation}\label{mu-ins} \mu_{I}=0.68, \end{equation}
but the reliability of this exponent is not as high as the one on the
metallic side (for instance a test similar to Fig.~\ref{fig:loga-logb}
yields a slope 0.54, consistent rather with $\mu_{M}$, but smaller than
$\mu_{I}=0.68$). To be conservative, we may thus conclude that
the critical exponent $\mu$ lies in the range:
\begin{equation}\label{mu-mu} \mu=0.60\pm 0.15. \end{equation}
From the obtained exponent $\mu$ we can estimate the dynamical
scaling exponent $\eta$:
\begin{equation}\label{eta} \eta=\frac{3}{1+\mu}=1.9\pm 0.2. \end{equation}
This yields the exponent $\zeta=0.9\pm 0.2$ characterizing the divergence of the dielectric constant in Eq.~(\ref{Macmillan}),
in a reasonably good agreement with the experimental value
\cite{LMNS} $\zeta\approx 0.71$.~\footnote{The exponent $\nu\approx 1.0$ in the insulator is obtained in the next section.}
Thus we conclude that our results are compatible with the McMillan-Shklovskii scaling as well as with the available experimentally obtained values of the exponents $\mu$ and $\zeta$.
\section{ Auto-correlation of the local DoS and the fractal dimension $d_{2}$.}
\begin{figure}[h]
\center{\includegraphics[width=0.7\linewidth]{F2-full}}
\caption{(Color online) Data collapse of the auto-correlation
of the LDoS, $K(R;\varepsilon_{F})$ at the Fermi energy for disorder strength
$W=14$ and a full-scale Coulomb interaction in a sample with $L=10$
onto (a) metallic and (b) insulating scaling functions $f_{M,I}$
[Eqs.~(\ref{f-met},\ref{f-ins})]. We find $d_2=1.57\pm 0.05$. Control calculations for the non-interacting system with the same protocol and the same system size gave $d_{2}=1.34\pm 0.05$. }\label{fig:K-collapse}
\end{figure}
To study multifractality of the local DoS, we have computed the spatial correlations of
the HF wavefunctions $|\psi_{n}({\bf r})|^{2}$:
\begin{equation} \label{K-r} K(R;\varepsilon)=\frac{\left\langle
\sum_{n,\varepsilon_{n}\in\Omega(\varepsilon)}\sum_{{\bf r}}|\psi_{n}({\bf r})|^{2}|\psi_{n}({\bf r+R})|^{2}
\right\rangle}{\left\langle\sum_{n,\varepsilon_{n}\in\Omega(\varepsilon)}\sum_{{\bf r}}|\psi_{n}({\bf r})|^{4}\right\rangle}, \end{equation}
where $\varepsilon_{n}$
are the associated eigenvalues,
$\langle ...\rangle$ denotes the ensemble average over random
realizations of on-site energies $\epsilon_{n}$, and $\Omega(\varepsilon)$ is a narrow interval of energies
of the order of the mean level spacing $\delta$, centered at
$\varepsilon$. Multifractal correlations \cite{IFKCue, Mirlin-rep} imply that in the range of distances
$\ell_{0}<R<\xi$ the correlation function is the same as at the
localization transition, $K(R;\varepsilon)\sim
(\ell_{0}/R)^{d-d_{2}}$. Here $\xi$ is the localization or correlation length
which diverges at the transition, and $\ell_{0}$ is
of the order of the lattice constant. $d=3$ is the dimensionality of space
and $d_{2}<d$ is the correlation fractal dimension. For $R>\xi$ the
correlation function $K$ distinguishes delocalized and localized regimes, saturating
to a constant in a metal and decreasing exponentially in an insulator. Close to
criticality one expects scaling behavior, that is: the correlations should collapse
to a single curve upon rescaling $R$ by $\alpha$ (a finite size corrected version of $\xi$),
and amplitudes by $\beta$, and expressing $K(R;\varepsilon)=\beta^{-1}\,f_{M,I}(R/\alpha)$,
where $f_{M,I}(x)$ are universal scaling functions on the metallic and insulating sides of the transition, respectively.
In order to optimize the choice of $\alpha, \beta$ (which both depend on $U$ and $\varepsilon$, while $W=14$ is fixed) and to
determine the correlation dimension $d_{2}$
we use a simple analytical ansatz for $f_{M,I}$, which
captures the multifractal characteristics of the eigenfunction
correlations:
\begin{eqnarray} \label{f-met} f_{M}(x) &=& x^{-(d-d_2)}\,e^{-x/B}+1, \\
\label{f-ins} f_{I}(x)&=& x^{-(d-d_2)}\,e^{-x/B}. \end{eqnarray}
The best fits were obtained with $B\approx 2$.
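As a technical aside, the evaluation of Eq.~(\ref{K-r}) on the cubic lattice reduces to an autocorrelation of $|\psi_{n}({\bf r})|^{2}$, which can be computed efficiently with fast Fourier transforms. The sketch below illustrates this for a single disorder realization, assuming periodic boundary conditions for simplicity; the binning in $|{\bf R}|$ and the outer averages over the energy window and over disorder are left schematic.
\begin{verbatim}
import numpy as np

def ldos_correlation(psi_list, L):
    # accumulate numerator and denominator of Eq. (K-r) for the HF states
    # psi_list (each a length-L**3 vector) falling in the energy window
    num, den = np.zeros((L, L, L)), 0.0
    for psi in psi_list:
        p = (np.abs(psi) ** 2).reshape(L, L, L)
        corr = np.fft.ifftn(np.abs(np.fft.fftn(p)) ** 2).real  # sum_r p(r) p(r+R)
        num += corr
        den += corr[0, 0, 0]                                   # sum_r p(r)^2
    return num, den

def radial_profile(num, den, L):
    # bin K(R) = num/den by the distance |R| (minimal-image, integer bins)
    x = np.arange(L)
    x = np.minimum(x, L - x)
    R = np.sqrt(x[:, None, None]**2 + x[None, :, None]**2 + x[None, None, :]**2)
    bins = np.rint(R).astype(int).ravel()
    K = np.bincount(bins, weights=(num / den).ravel()) / np.bincount(bins)
    return np.arange(K.size), K
\end{verbatim}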
We first discuss the DoS correlations $K(R;\varepsilon_F)$ at the Fermi
level, upon varying the strength of the interaction $U$, cf.~Fig.~\ref{fig:K-collapse}.
The good quality of the collapse demonstrates that the behavior of
$K(R;\varepsilon)$ is consistent (using full-range Coulomb interactions) with multifractal correlations
with dimension:
\begin{equation}
d_{2}\approx 1.57\pm 0.05, \;\;\;\;{\rm full-range}\;\;{\rm Coulomb}.
\end{equation}
This is significantly larger than the fractal dimension found for the non-interacting case in the limit of large sample sizes
$d_{2}=1.29\pm 0.05$~\cite{Mirlin-rep} from the multifractal analysis of the moments of $|\psi_{n}({\bf r})|^{2}$. To gauge the finite-size effects in the non-interacting case we performed calculations of the correlation function $K(R;\varepsilon_{m})$ with collapse of the data for different disorder strengths $W$ (and $U=0$) similar to Fig.\ref{fig:K-collapse}. This yielded the effective $d_{2}=1.34\pm 0.05$ for the sample size $L=10$. From this we conclude that the presence of full-range Coulomb interactions strongly affects the multifractal correlations at the Fermi level, which are governed by a new interacting critical point with a correlation fractal dimension $d_{2}$ {\it larger} than for the non-interacting case.
This result is in full agreement with the qualitative picture outlined in the Introduction. It is also in line with recent results obtained via the $\epsilon=d-2$ expansion in the unitary ensemble \cite{BurMirGor}. According to that study:
\begin{equation}\label{eps-d-2}
d_{2}^{{\rm inter}}=2-\frac{\epsilon}{2},\;\;\;\;d_{2}^{{\rm non-inter}}=2-\sqrt{2\epsilon}.
\end{equation}
Although the $\epsilon$ expansion fails to give an accurate prediction for $d=3$, the tendency for $\epsilon\leq 1$ is clearly that $d_{2}^{{\rm inter}}>d_{2}^{{\rm non-inter}}$.
An increase of $d_2$ is also expected from studies of systems with frustrating
interactions on the Bethe lattice, where the Efros-Shklovskii- (or Hartree-) type
suppression of the density of states around the chemical potential is found to
reduce the abundance of resonances (i.e., small denominators in the locator expansion). Therefore, the wavefunctions have less tendency to follow rare paths to increase the number of resonant sites visited, and thus
form less sparse fractals~\cite{Yu}.
The same analysis was repeated at higher interaction strength $U=1.5$ and $U=3.0$, where the critical HF states appear away from the Fermi energy. The result is that away from the Fermi energy $d_{2}$ is practically indistinguishable from the non-interacting case.
To assess the effect of the range of the interactions, we also computed $d_{2}$ at the Fermi level
when the Coulomb interaction was truncated as described in Sec.~2 (the critical interaction strength in this case was $U_{c}\approx 0.75$, a bit smaller than for the full-scale Coulomb interaction). With the Coulomb interaction restricted to 5 nearest neighbors in the Fock terms, the correlation dimension was $d_{2}\approx 1.39\pm 0.05$ both when the Coulomb interactions in the Hartree terms were restricted to 5 or to 20 nearest neighbors. This result shows that Coulomb interactions of full range are essential to change the fractal dimension $d_{2}$ significantly. With truncated Coulomb interaction, the effective $d_{2}$ in a finite sample gradually decreases and approaches its non-interacting value.
\begin{figure}[h]
\center{\includegraphics[width=0.7\linewidth]{alpha_both_full_1_nu_NJP}}
\caption{(Color online) Evolution of the finite size
correlation length $\alpha$ with the interaction strength $U$ at $\varepsilon=\varepsilon_F$ ($W=14$ being fixed).
The rising and falling parts correspond to the metal and the insulator, respectively. In the interval $0.8<U<0.9$
the error-bars are too large to distinguish
between various scenarios discussed in the main text.
}
\label{Fig:alpha}
\end{figure}
\section{ Metal-insulator transition.}
Fig.~\ref{Fig:alpha} shows the evolution
of the finite size-corrected correlation length $\alpha(U)$, as
obtained from the scaling collapse of Fig.~\ref{fig:K-collapse}. It exhibits strong
non-monotonicity, indicating a localization transition: For
$U<U_{<}\approx 0.79$, $\alpha(U)$ increases with increasing $U$ while for
$U>U_{>}\approx 0.89$, it decreases. The best fits to
critical power laws yield $\xi(U)=a\,|U-U_{<(>)}|^{-\nu_{M(I)}}$
with:
\begin{equation} \label{nu-nu}\nu_{M}\approx 0.50\pm 0.05,\;\;\;\; \nu_{I}\approx
0.96\pm 0.05,\;\;\;\;\;{\rm full}-{\rm range}\;\;\;{\rm Coulomb}.\end{equation}
The difference in the fit exponents is too big to be a mere result
of statistical errors, or of systematic errors related to the
fitting procedure. Indeed, as an independent check we
computed the critical exponents for a non-interacting system of
equal size, using the same method. We obtained \cite{fse}
much closer exponents $\nu_{M}\approx
1.20\pm 0.05$, $\nu_{I}\approx 1.08\pm 0.08$. We thus believe that
the difference in the fit exponents~(\ref{nu-nu})
is a genuine interaction effect, which persists to fairly large scales. Possible interpretations of these findings are discussed further below.
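For reference, the exponents quoted in Eq.~(\ref{nu-nu}) amount to power-law fits of the collapse scale $\alpha(U)$ on either side of the transition; a schematic version of such a fit (with illustrative initial guesses) reads:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def fit_nu(U, alpha, U_c_guess, side='metal'):
    # fit alpha(U) = a * |U - U_c|**(-nu) on one side of the transition
    mask = (U < U_c_guess) if side == 'metal' else (U > U_c_guess)
    model = lambda u, a, U_c, nu: a * np.abs(u - U_c) ** (-nu)
    popt, pcov = curve_fit(model, U[mask], alpha[mask],
                           p0=(1.0, U_c_guess, 1.0), maxfev=10**4)
    return popt, np.sqrt(np.diag(pcov))   # (a, U_c, nu) and 1-sigma errors
\end{verbatim}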
In this context it is interesting
to note that the exponent $\nu_{M}\approx 0.5$ has been reported in
earlier experiments on ${\rm Si:P}$, which remained a puzzle for
theorists for a long time~\cite{Bel-Kirk}.
Eqs.~(\ref{nu-nu}) might reflect the degradation of the multifractal
pattern due to the interaction-induced mixing of non-interacting
wavefunctions, which we expect to be much stronger in the
delocalized than in the localized phase (where fewer of the non-interacting
wavefunctions involved have a significant overlap). Another phenomenon that
undoubtedly influences the MI transition is the gradual breakdown of
screening in the metallic phase. The interactions, which are well
screened deep in the metal, must become long range somewhere on the
way to the insulator~\cite{LMNS}. This entails a crossover, or even a phase
transition, to a glassy phase~\cite{dobro,MuellerPankov}. We observe
a trace of the latter via the onset of non-uniqueness of the HF
solutions roughly at the same point as the MI
transition, but our resolution is not
sufficient to determine whether the two phenomena coincide. It also
remains an interesting open question whether screening breaks down
at the MI transition only, or already within the metal, as it
happens in mean field models with similar
ingredients~\cite{MuellerStrack}.
It is interesting to compare our results for the exponent $\nu_{I}$ with the $\epsilon$-expansion \cite{BurMirGor} obtained from the Finkel'stein's theory \cite{Fin-review} in the unitary ensemble. According to this theory:
\begin{equation}
\nu^{{\rm inter}}=\frac{1}{\epsilon}-1.64,\;\;\;\;\nu^{{\rm non-inter}}=\frac{1}{2\epsilon}-\frac{3}{4}.
\end{equation}
These expressions are meaningless at $\epsilon=1$, where they evaluate to negative $\nu$. One may think, however, that
for small $\epsilon$ they give a correct relationship between $\nu^{{\rm inter}}$ and $\nu^{{\rm non-inter}}$, as was the case for the fractal dimension $d_{2}$ in Eq.~(\ref{eps-d-2}).
However, the relationship between $\nu^{{\rm inter}}$ and $\nu^{{\rm non-inter}}$ is ambiguous in the region of $\epsilon<1$ where both of them are still positive. Indeed, one can see that for very small $\epsilon$, one has $\nu^{\rm{inter}}>\nu^{{\rm non-inter}}$. However, as $\epsilon$ increases, $\nu^{{\rm non-inter}}$ catches up with
$\nu^{{\rm inter}}$, and at $\epsilon > 0.55$ we have $\nu^{\rm{inter}}<\nu^{{\rm non-inter}}$, as in our results for $d=3$. This may indicate that two competing mechanisms are at play, whose relative importance depends on the dimensionality.
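For reference, the value of $\epsilon$ at which the two exponents cross follows from equating the expressions above,
\[
\frac{1}{\epsilon}-1.64=\frac{1}{2\epsilon}-\frac{3}{4}
\;\;\Longrightarrow\;\;
\epsilon^{*}=\Big[2\Big(1.64-\frac{3}{4}\Big)\Big]^{-1}\approx 0.56,
\]
consistent with the threshold quoted above.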
We also note that the exponent $\nu_{I}$ increases when the Coulomb interaction is truncated or when the mobility edge moves away from the Fermi level:
\begin{equation}
\nu_{I}=1.31\pm0.1,\;\;(5^{{\rm th}}\; {\rm neighbors});\;\;\;\nu_{I}=1.36\pm 0.1,\;\;(\varepsilon_{m}-\varepsilon_{F}=1.7).
\end{equation}
In contrast, the exponent $\nu_{M}$ is almost insensitive to truncation, but decreases with increasing interaction strength:
\begin{equation}
\nu_{M}=0.40\pm0.03,\;\;\;(U=3,\varepsilon_{m}-\varepsilon_{F}=1.7).
\end{equation}
The large error bars in the interval $U\in[0.8,0.9]$ do not allow us
to determine $\xi(U)$ by an accurate treatment of the finite-size
scaling $\alpha(U)=\xi(U)\,f_{\alpha}(\xi/L)$ \footnote{In our fitting procedure we used $f_{\alpha}(y)=[1+Cy^{1/\nu}]^{-\nu}$.}. Two different
scenarios may be envisioned to reconcile the fit exponents
(\ref{nu-nu}) with standard theoretical considerations: (a) There is
a single localization transition close to $U_{c}=0.9$ with a
shoulder in the dependence of $\xi(U)$ on the metallic side, due to an additional phase transition
or a crossover in a different sector, such as the breakdown of
screening or the onset of glassiness. In that case $\nu_M$ would be
expected to approach $\nu_I$ sufficiently close to $U_c$ and on
large scales. (b) There are two separate transitions at
$U_{c1}\approx 0.79$ and $U_{c2}\approx 0.89$, with critical
wavefunctions in an entire finite interval $U\in[U_{c1},U_{c2}]$. In
this more exotic (and less probable) scenario, there would be no a priori reason for the
two exponents to coincide.
\section{ Finite-energy mobility edge in the insulator.}
A similar scaling analysis as above may be performed for $\varepsilon$
away from $\varepsilon_F$, which determines a critical line, the mobility edge
$\varepsilon_m(U)$. Of course, such a mobility edge is defined sharply only at the
mean field level of the HF equations, which neglect the finite life-time
of higher energy excitations due to inelastic processes involving either phonons or delocalized excitations of purely electronic origin~\cite{AndersonFleishman}.
Nevertheless, the phase space
for decay processes at low energies is strongly suppressed, and due to the
pseudogap even more severely so than in an ordinary Fermi liquid.
This gives us confidence that the features of single particle HF
levels are representative of the fully interacting system. In
particular, the statement that $\varepsilon_m$ remains close to $\varepsilon_F$
is a result which we believe to be robust beyond the Hartree-Fock
approximation. Here $\varepsilon_m$ should be interpreted as the (approximate) location where
the LDoS correlations become critical up to the relevant length scale set by inelastic decay processes.
Our result is shown in Fig.~\ref{fig:sketch} together
with the bandedge, defined as the energy where $\rho(\varepsilon)$ drops to half of its maximal value.
Fig.~\ref{fig:sketch} demonstrates that the mobility edge is indeed
trapped in a narrow range around $\varepsilon_F$. This holds for values of $U$ nearly all
the way up to $U_*\approx 4$ where the last states around the
maximum of the DoS localize (at $W=14$). This confirms the
expectations of Ref.~\cite{Yazdani} that in a relatively broad
region of the parameters $W$ and $U$, states near $\varepsilon_F$ are
almost critical. Note also that $U_*$ is almost 5 times larger than
$U_c\approx 0.85$ where the MI transition occurs. In fact $U>
U_*$ brings the system already very close to a Mott-type transition
where charge-density wave order sets in. It remains an interesting
question to study how such charge ordering effects and glassiness
(i.e., the multiplicity of HF solutions) affect the localization as
the interaction strength increases.
\begin{figure}[h]
\center{\includegraphics[width=1.0\linewidth]{sketch}} \caption{(Color
online) Phase diagram. In a wide range of the interaction strength
$U$ the mobility edge (solid blue line) stays close to
$\varepsilon_{F}$ as compared to the bandedges (green dashed line). This behavior is in qualitative agreement with the conjecture \cite{BurMirGor-arX} that the mobility edge behaves as $U(\varepsilon-\varepsilon_{F})-U_{c}\propto (\varepsilon-\varepsilon_{F})^{\frac{1}{\eta\nu}}$ with $\nu\eta\approx 2$.}
\label{fig:sketch}
\end{figure}
\section{Multiplicity of HF solutions: glassiness and charge ordering.}
\label{s:glass}
Finally we briefly address the issue of multiple solutions of the
Hartree-Fock equations and its relation to the onset of glassy
behavior as the interaction $U$ increases.
Our iterative procedure to solve the HF equations begins with an
input configuration of occupation numbers $n_{\rm in}({\bf r})$ on
each of the $N$ lattice sites. At sufficiently large $U$ the
converged output HF solution $n_{\rm out}({\bf r})$ is generally
different for runs with different inputs. To quantify this
difference statistically we studied the quantity:
\begin{equation}\label{form} D(U)=\frac{1}{N_{\rm sol}}\sum_{m}\frac{1}{N}\sum_{{\bf
r}}|n_{\rm out}^{(m)}({\bf r})-n_{\rm out}^{(0)}({\bf r})|^{2},
\end{equation}
where the superscript $m$ labels the set of $N_{\rm sol}$ different solutions which were obtained from
initial density patterns $n_{\rm in}^{(m)}({\bf r})$, while $0$ denotes a
reference solution. In the simplest test we have chosen $N_{\rm sol}=10$ solutions, out of
which 8 were obtained from random inputs and 2 had a checkerboard order as
input. The results for $D(U)$ are presented in Fig.~\ref{fig:sol}. One can see that for $U<4$ the average deviation of the solutions from
the reference solution is small. It is thus reasonable to assume that
physical properties evaluated on the various solutions are
statistically very similar. However, starting from $U\approx 4$ the
function $D(U)$ sharply increases.
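In practice, the statistic of Eq.~(\ref{form}) amounts to a few lines of code once the converged occupation patterns from independent runs are available; a minimal sketch, in which the reference pattern is simply the first entry of the list, is:
\begin{verbatim}
import numpy as np

def solution_spread(n_out_list):
    # Eq. (form): average over solutions m of the per-site mean squared
    # deviation of n_out^(m)(r) from the reference pattern n_out^(0)(r)
    ref = n_out_list[0]
    return np.mean([np.mean(np.abs(n - ref) ** 2) for n in n_out_list])
\end{verbatim}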
\begin{figure}[h]
\center{\includegraphics[width=0.5\linewidth]{glass1}}
\caption{Average variance of on-site occupation numbers $n({\bf r})$
as a function of interaction strength.} \label{fig:sol}
\end{figure}
\begin{figure}[h]
\center{\includegraphics[width=1.0\linewidth]{glass2}}
\caption{Difference between the output on-site occupation number and
the checkerboard input occupation number: (a) $U=4$ output is random
and uncorrelated with the input, (b) $U=5$ output has almost the
same checkerboard structure as the input. } \label{fig:checker}
\end{figure}
In order to check whether this increase in $D(U)$ is due to a significant
variation between {\it random} HF solutions or to the stabilization of a
checkerboard density pattern in the HF solution, we consider the solution obtained from
initializing with a checkerboard input and plot the difference
between the solution and the corresponding input pattern.
The result for $U=4$ shows (see Fig.~\ref{fig:checker}(a)) that the
difference has a clear checkerboard structure, which implies that the
output was random. However, as $U$ increases to $U=5$ the difference
reduces significantly (see Fig.~\ref{fig:checker}(b)), which signals
the tendency to retain the checkerboard order in the solution.
Comparing also with free energies of random solutions, we concluded
that the transition to a checkerboard structure (charge density
wave) occurs somewhere in the range $4< U <5$.
A more precise identification of the onset of multiplicity of
solutions shows that it starts at a much smaller value, $U\approx
0.7$, which roughly coincides with the $U_{c}$ at which localization
at the Fermi energy occurs.\footnote{These preliminary results were obtained with the Coulomb interaction truncated to $5^{{\rm th}}$ neighbors in the Fock terms and to $20^{{\rm th}}$ neighbors in the Hartree terms.} In order to show this we generated 10
different HF solutions at $U=1.0$ and characterized them globally by
the total energy $E$ per site. The fact that the iterative HF
procedure at $U=1.0$ converges to different values of $E$ is a
manifestation of the existence of multiple local minima. Physically,
one may expect that this will be reflected in the onset of glassy
behavior associated with the slow dynamics or relaxation between
the minima that correspond to the various HF solutions.
We then used the above solutions as inputs at slightly decreased
interaction strength $U=0.9$, the resulting solutions still
being of different total energy (see Fig.~\ref{fig:onset}(a)). Upon
decreasing the interaction in steps of $0.1$, and using the
solutions of the previous step as initial condition, we found that
at $U\approx 0.7$ the total energies, after a large number of
iterations, coincided (Fig.~\ref{fig:onset}(b)). That value of $U$ can thus
be interpreted, at this HF mean field level, as the border of a
glassy regime. Upon further decrease of $U$ the solutions did not
diverge anymore, implying the existence of a unique HF solution
(Fig.~\ref{fig:onset}(c)).
The fact that in the insulating regime the HF equations develop a
number of solutions which grows exponentially with the volume is to
be expected, as this is well-known to be the case in the classical
limit of vanishing hopping, $t=0$. One may wonder whether and how
key features like the Efros-Shklovskii Coulomb gap are present in {\em typical}
solutions to the HF equations, or whether they occur only in the lowest-energy solutions, which are very difficult to find. We argue here that all typical
solutions are expected to exhibit a parabolic Coulomb gap, as we
indeed observed numerically.
To understand how such a Coulomb gap comes about, consider the HF equations
in the limit of vanishing hopping, $t=0$, where all HF orbitals are
completely localized.
\begin{figure}[h]
\includegraphics[width=1.0\linewidth]{glass-onset}
\caption{Convergence of the HF procedure for the total energy per
site: (a) $U=0.9$, the total energy converges to different output
values, depending on the initial configuration of on-site occupation
numbers; (b) $U=0.7$, different initial configurations of occupation
numbers obtained in the previous step converge to the same value of
the total energy, and thus to the same HF solution; (c) $U=0.6$, no
further divergence in the total energy is observed.}
\label{fig:onset}
\end{figure}
A HF solution consists in an assignment of
occupation numbers $n_i \in \{0,1\}$ to the sites $i$, according to
whether the local potential,
\begin{eqnarray} E_i = \epsilon_i +\sum_{j\neq i} \frac{n_j}{r_{ij}} \end{eqnarray}
is above ($\to n_i=0$) or below ($\to n_i =1$) the chemical
potential $\mu$ ($\mu \approx 0$ is always adjusted to assure half
filling). In the classical limit, the HF procedure consists in
updating the occupation numbers until convergence to a stable point
is reached. The final HF solution is a minimum of the HF energy with
respect to the change of any of the $n_i$, if the HF energy is
written in grand-canonical form, including the term $-\mu\sum_i
n_i$. That is, there is stability with respect to single particle
addition or removal.
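A minimal sketch of this classical limit is shown below for illustration; positions and on-site energies are placeholders, distances are measured in units of $a$, the chemical potential is kept fixed rather than adjusted to enforce half filling, and the parallel sweep shown here would in practice be replaced by a sequential update to guarantee convergence.
\begin{verbatim}
import numpy as np

def classical_hf(eps, pos, mu=0.0, max_sweeps=10000):
    # t = 0 limit of the HF iteration: occupy site i iff the local potential
    # E_i = eps_i + sum_{j != i} n_j / r_ij lies below the chemical potential mu
    N = len(eps)
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                 # exclude self-interaction
    inv_r = 1.0 / dist
    n = (np.random.rand(N) < 0.5).astype(float)    # random initial occupations
    for _ in range(max_sweeps):
        E = eps + inv_r @ n                        # local potentials E_i
        n_new = (E < mu).astype(float)
        if np.array_equal(n_new, n):               # stable against single-particle
            break                                  # addition or removal
        n = n_new
    return n, eps + inv_r @ n                      # occupations and final E_i,
                                                   # whose histogram shows the gap
\end{verbatim}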
As long as there is no suppression in the low-energy distribution of
the local potentials $E_i$, many rearrangements occur in each
update. These die out only once an (at least parabolic) pseudogap
develops in the distribution of the $E_i$'s, such that the probability of a change of occupation triggering further rearrangements becomes small. Note that the $E_i$'s
are just the classical limit of the HF eigen-energies. Thus the
convergence of the HF procedure essentially guarantees the presence
of a Coulomb gap in the LDoS. This happens even though we do not
impose explicitly the stability of HF solutions with respect to
single particle moves, i.e., swaps between configurations $(n_i,n_j)
=(0,1)$ and $(1,0)$. The latter are the elementary moves considered
in standard arguments for the Efros-Shklovskii Coulomb gap in classical Coulomb
glasses, but the above reasoning shows that one does not need to impose that extra
stability constraint to obtain a well-developed Coulomb
gap.~\footnote{Similar observations were made by A. Amir,
M.~Palassini, B.~Shklovskii and B. Skinner, private discussion}.
When, on top of the HF equations, stability with respect to
particle-hole excitations and more complex rearrangements is
imposed, the Coulomb gap hardens a bit, but no essential new
features appear~\cite{Moebius}. For this reason we contented ourselves
with an analysis of {\em typical} solutions of the HF equations, without
further minimizing the HF energy among the
exponentially many solutions. We expect that the localization
properties, multifractality etc. evolve only very weakly as one
biases the considered HF solutions towards lower-lying and more
stable solutions. Indeed, our scalings work well when evaluated in
typical solutions on the insulating side, and key physical
observables behave as we expected, even in the limit $t=0$.
\section{ Conclusion.} We have studied numerically the localization
transition in a 3D Anderson model of spinless fermions, with Coulomb
interactions treated within the HF approximation. The metal-insulator transition was
identified via the localization at the Fermi level, determined from
a detailed study of the auto-correlation function of the HF
eigenfunctions. Our main results are: {\em (i)} Multifractal power
law scalings in the local DoS
survive the presence of interactions, and extend up to a (large) correlation length $\xi(U,\varepsilon)$.
{\em (ii)} A critical Coulomb gap in the weakly insulating phase pins the mobility edge close
to $\varepsilon_F$ for a wide range of parameters, while most higher energy excitations are still
delocalized. At disorder strength $W=14$ (moderately close, but not fine-tuned, to the non-interacting
critical disorder $W=16.5$) the critical $U_{c}(\varepsilon_F)$ for
the metal-insulator transition is $\sim 5$ times smaller than the $U_*$ required
for localization of the entire HF spectrum. This is in qualitative agreement with
the experimental observations of Ref.~\cite{Yazdani}.
{\em (iii)} A scaling analysis of the DoS reveals a critical Coulomb anomaly $\rho(\omega)\sim\omega^{0.6\pm0.15}$, and scaling laws as anticipated in Refs.~\cite{McMillan,LMNS}.
{\em (iv)} The apparent correlation length exponents display a significant asymmetry between the metallic and
insulating sides, similar to tendencies reported in experiments. We conjecture that they arise from
crossover phenomena in the metallic phase related to the breakdown of screening or the onset of glassy
metastability seen in HF.
These deserve further study.
\subsection*{Acknowledgments}
We thank I. Girotto for help with parallel
programming and A. Yazdani, M. Feigel'man and B. I. Shklovskii for stimulating discussions.
MA is grateful to S. A. Jafari and F. Shahbazi for
useful comments and interest in this work and to the CM\&SP section of
ICTP for hospitality.
\section*{References}
\section{Introduction}
Resonance energy transfer (RET) \cite{Forster1946,GovorovBook,Jones2019} constitutes an important mechanism through which an excited quantum emitter (donor) may transfer its energy to a neighboring one in the ground state (acceptor). Amid the several situations where RET plays a relevant role, a remarkable example is the light harvesting process in plants, in which chlorophyll molecules are excited by the absorption of light and can efficiently transfer this excitation energy to their neighboring molecules \cite{Scholes2011,Bredas2016}.
Different energy transfer mechanisms have been extensively discussed not only in physics, but also in several areas like chemistry, biology and engineering. An efficient energy transfer allows for a variety of applications, such as photovoltaics \cite{Chanyawadee2009}, luminescence \cite{Baldo2000,Song2020}, sensing \cite{Diaz2018}, quantum information \cite{Unold2005,Argyropoulos2019}, and many others. Due to these numerous applications and to advances in different areas combined with the great development of new technologies, controlled modification of the RET rate has also become a topic of huge interest. In this context, substantial theoretical and experimental efforts have been dedicated to investigate the influence of different geometries and materials, such as planar geometries \cite{Marocico2011,Poddubny2015,Bouchet2016}, cavities \cite{Andrew2000,Ghenuche2014}, nanoparticles \cite{Xie2009,Vincent2011,Aissoui2017,Vetrone2018,Schatz2018,Bohlen2019}, cylinders \cite{Marocico2009,Karanikolas2014} and waveguides \cite{Argyropoulos2019,Marocico2011,Marticano2010,deRoque2015,Fiscelli2018}.
Among the progress in so many areas, the field of plasmonics stands out with intense growth in recent decades. Plasmonics is the study of the science and applications of surface plasmon polaritons, which are electromagnetic surface waves coupled to the conduction electrons to form collective charge excitations that propagate at the interface between a dielectric and a conductor \cite{MaierBook,NunoBook}. In particular, surface plasmons supported by graphene are confined much more strongly and present longer propagation lengths when compared to those in conventional noble metals \cite{NunoBook,Iranzo2018,Ni2018}. Another important advantage is their chemical-potential tunability, which can be achieved by gating and doping \cite{NunoBook,Grigorenko2012,deAbajo2014}. In this sense, graphene provides a suitable platform for the manipulation of light-matter interactions, and the influence on the RET rate between two emitters has already been analyzed both for the case of a monolayer \cite{Velizhanin2012,Biehs2013,Karanikolas2015} and for a nanodisk \cite{Karanikolas2016}. In all of these works, the authors explore precisely the change in the RET rate enabled by tuning the chemical potential.
However, when submitted to an external magnetic field, plasmons and cyclotron excitations hybridize, originating new modes in graphene, named magnetoplasmon polaritons (MPPs) \cite{NunoBook,Ferreira2012}. The MPPs may enhance light-matter interactions even further, creating a new opportunity to actively control the RET. In this paper we take advantage of graphene's magneto-optical response and propose a setup that takes the degree of RET manipulation to unprecedented levels: two emitters placed in the vicinity of a suspended graphene monolayer in vacuum, submitted to an external magnetic field applied perpendicularly to the monolayer. We demonstrate that the RET rate may change dramatically with respect to the result in free space even for small modulations of the magnetic field. Furthermore, this giant effect may be obtained even for somewhat modest values of the field. Interestingly, our results suggest that magnetoactive materials could act as a logic gate in some practical circumstances, meaning that they could be turned on and off without the need for physical contact, especially at room temperature. Our findings show that a magnetic field applied to the graphene monolayer can be used as an external agent for continuously tuning RET rates.
This paper is organized as follows. In Sec. \ref{SecGreenFunction} we introduce the system under investigation, the Green's tensor formalism used in the calculation of the RET rate between two emitters in the presence of an arbitrary environment, and some important features related to graphene's response to the externally applied magnetic field. In particular, we provide an analysis of how graphene's conductivities vary as a function of the magnetic field, exploring their behavior for distinct values of chemical potential and temperature. Section \ref{SecResults} comprises our main results on the resonance energy transfer between the emitters. There, we highlight the role of the MPPs as the fundamental agents behind the intense variations of the RET rate. Section \ref{SecConclusions} is left for final comments and conclusions.
\section{\label{SecGreenFunction}Resonance energy transfer close to a graphene sheet in a magnetic field}
In this work we shall be concerned with the RET rate between a pair of two-level quantum emitters $A$ (in the excited state) and $B$ (in the ground state), separated by a distance $r$, both at the same distance $z$ from a suspended graphene sheet in vacuum in thermal equilibrium at temperature $T$. Moreover, the graphene sheet is subjected to a uniform and static external magnetic field $\bm{B} = B \bm{\hat{z}}$ applied perpendicularly to it, as sketched in Fig. \ref{System}.
\begin{figure}
\begin{center}
\includegraphics[width=7.8cm]{System.png}
\end{center}
\vskip -0.6cm
\caption{A pair of two-level emitters separated by a distance $r$, both at a distance $z$ from a suspended graphene sheet. An external magnetic field $\bm{B} = B \bm{\hat{z}}$ is applied perpendicularly to the sheet.}
\label{System}
\end{figure}
In the following subsections, we briefly introduce the Green function approach commonly used to calculate the modified RET rate between two quantum emitters when placed in the vicinity of any medium. Then, we move on to the description of the graphene's response to the applied magnetic field, presenting the main equations needed to determine the new RET rate in this particular case.
\subsection{\label{SubsecGreenForm}Methodology}
In the presence of an arbitrary environment, the RET rate $\Gamma$ between two quantum emitters in vacuum located at $\bm{r}_A$ and $\bm{r}_B$, such that $r = |\bm{r}_B - \bm{r}_A|$, normalized by the RET rate in free space $\Gamma^{(0)}$ can be written as \cite{Marocico2009}
\begin{equation}
\frac{\Gamma}{\Gamma^{(0)}} = \frac{\big\vert \bm{d}_B \cdot \mathds{G} (\bm{r}_B, \bm{r}_A, \omega_0)\cdot \bm{d}_A \big\vert^2}{\big\vert \bm{d}_B \cdot \mathds{G}^{(0)} (\bm{r}_B, \bm{r}_A, \omega_0)\cdot \bm{d}_A \big\vert^2} \,,
\label{RETRate}
\end{equation}
\noindent where $\omega_0$ is the transition frequency of the emitters, $\bm{d}_A$ and $\bm{d}_B$ are their transition electric dipole moments and $\mathds{G}$ and $\mathds{G}^{(0)}$ are the electromagnetic Green dyadics of the full setup and in free space, respectively. The electromagnetic Green dyadic satisfies
\begin{equation}
\left[ \nabla \! \times \! \nabla \! \times \, -
\epsilon (\omega, \bm{r}) \frac{\omega^2}{c^2} \right] \mathds{G} (\bm{r}, \bm{r}^\prime, \omega) = - \delta (\bm{r} - \bm{r}^\prime)\, \mathds{I}
\label{EMGDHelmoltz}
\end{equation}
\noindent with the appropriate boundary conditions \cite{NovotnyNanoOptics}, where $c$ is the light velocity in vacuum and $\epsilon (\omega, \bm{r})$ stands for the electric permittivity of the medium. In our case, we take $\epsilon (\omega, \bm{r}) = \epsilon_0$, where $\epsilon_0$ is the electric permittivity of vacuum. It will be convenient to separate the Green dyadic as a sum of two contributions, namely
\begin{equation}
\mathds{G} (\bm{r}_B, \bm{r}_A, \omega_0) = \mathds{G}^{(0)} (\bm{r}_B, \bm{r}_A, \omega_0) + \mathds{G}^{(\textrm{S})} (\bm{r}_B, \bm{r}_A, \omega_0) \,.
\end{equation}
\noindent In this expression $\mathds{G}^{(0)} (\bm{r}_B, \bm{r}_A, \omega_0)$ is the solution to Eq. (\ref{EMGDHelmoltz}) in the absence of any object and $\mathds{G}^{(\textrm{S})} (\bm{r}_B, \bm{r}_A, \omega_0)$ represents the scattered part of the Green function and must obey the electromagnetic field boundary conditions \cite{NovotnyNanoOptics} at the graphene sheet. The procedure to evaluate the scattered part of the total Green function follows from the equation \cite{NovotnyNanoOptics}
\begin{equation}
\mathds{G}^{(\textrm{S})} = \frac{i}{2} \int \frac{d^2 \bm{k}_{\|}}{\left( 2\pi \right)^2}\, \mathds{R} \, \frac{e^{i\left[ \bm{k}_{\|} \cdot \left( \bm{r}_B - \bm{r}_A \right) + k_{0z} \left( z_B + z_A \right) \right]}}{k_{0z}} \,,
\end{equation}
\noindent where
\begin{equation}
\mathds{R} = \!\! \sum_{p, q = \{\textrm{TE,TM}\}} \!\! r^{p, q} \, \bm{\epsilon}_{p}^{+} \otimes \bm{\epsilon}_{q}^{-}
\end{equation}
\noindent denotes the reflection matrix with $r^{p, q}$ corresponding to the reflection coefficient for an incoming $q$-polarized wave that is reflected as a $p$-polarized one \cite{NovotnyNanoOptics}. In addition, the TE- and TM-polarization unit vectors are defined as
\begin{align}
\bm{\epsilon}_{\textrm{TE}}^{+} &= \bm{\epsilon}_{\textrm{TE}}^{-} = \frac{- k_y \bm{\hat{x}} + k_x \bm{\hat{y}}}{k_{\|}} \,, \\
\bm{\epsilon}_{\textrm{TM}}^{\pm} &= \frac{\pm k_{0z} (k_x \bm{\hat{x}} + k_y \bm{\hat{y}}) - k_{\|}^2 \bm{\hat{z}}}{k_{\|} (\omega_0/c)} \,,
\end{align}
\noindent with $\bm{k}_{\|} = k_x \bm{\hat x} + k_y \bm{\hat y}$ and $k_{0z} = \sqrt{(\omega_0/c)^2 - k_{\|}^2}$.
For the sake of simplicity, we analyze emitters with both transition dipole moments being oriented along the $z$-axis (and perpendicular to the graphene sheet), such that Eq. (\ref{RETRate}) reduces to
\begin{equation}
\frac{\Gamma}{\Gamma^{(0)}} = \frac{\big\vert \mathds{G}_{zz} (\bm{r}_B, \bm{r}_A, \omega_0) \big\vert^2}{\big\vert \mathds{G}^{(0)}_{zz} (\bm{r}_B, \bm{r}_A, \omega_0) \big\vert^2} \,.
\label{RETRateSimpl}
\end{equation}
\noindent More explicitly, we can write \cite{NovotnyNanoOptics}
\begin{equation}
\mathds{G}^{(0)}_{zz} = \frac{e^{i \omega_0 r/c}}{4 \pi r} \left[ 1- \left( \frac{c}{\omega_0 r} \right)^2 + \frac{i c}{\omega_0 r} \right]
\label{G0zz}
\end{equation}
\noindent and $\mathds{G}^{(\textrm{S})}_{zz} = \bm{\hat z} \cdot \mathds{G}^{(\textrm{S})} \cdot \bm{\hat z}$ is the only contribution of the scattered Green function that needs to be considered, given by
\begin{equation}
\mathds{G}^{(\textrm{S})}_{zz} = \frac{i c^2}{8 \pi^2 \omega_0^2} \int d\bm{k}_{\|} \frac{k_{\|}^2 \, r^{\textrm{TM,TM}} \, e^{i\left[ \bm{k}_{\|} \cdot \left( \bm{r}_B - \bm{r}_A \right) + k_{0z} \left( z_B + z_A \right) \right]} }{k_{0z}} \,.
\end{equation}
\noindent Writing this equation in polar coordinates, performing the angular integration and identifying $z_A = z_B = z$, we get
\begin{equation}
\mathds{G}^{(\textrm{S})}_{zz} = \frac{i c^2}{4\pi \omega_0^2} \int_0^\infty \!\!\! dk_{\|} \frac{k_{\|}^3 \, J_0 (k_{\|} r) \,r^{\textrm{TM,TM}} \, e^{2i k_{0z} z}}{k_{0z}} \,,
\label{GSzz}
\end{equation}
\noindent where $J_0$ is the cylindrical Bessel function of zeroth order. It is worth mentioning that all information about the influence of the environment is only encoded in $r^{\textrm{TM,TM}}$, which denotes the reflection coefficient of an incoming TM-polarized wave that is reflected with the same TM-polarization \cite{NovotnyNanoOptics}. This arises as a direct consequence of our choice for the direction of the transition dipole moments as being perpendicular to the medium, so they do not couple to TE waves.
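Once $r^{\textrm{TM,TM}}(k_{\|})$ is known (see the next subsection), Eqs.~(\ref{G0zz}), (\ref{GSzz}) and (\ref{RETRateSimpl}) can be evaluated by direct numerical quadrature. The sketch below is schematic: the upper cutoff in $k_{\|}$ and the number of quadrature points are illustrative and must be increased until the result converges, and in practice the integrable singularity at $k_{\|}=\omega_0/c$ is better handled by a change of variables or a small deformation of the integration contour.
\begin{verbatim}
import numpy as np
from scipy.special import j0

c = 299792458.0

def g0_zz(r, omega0):
    # free-space G^(0)_zz, Eq. (G0zz)
    k = omega0 / c
    return np.exp(1j*k*r)/(4*np.pi*r) * (1.0 - 1.0/(k*r)**2 + 1j/(k*r))

def gs_zz(r, z, omega0, r_tm, kmax=400.0, n_k=200000):
    # scattered G^(S)_zz, Eq. (GSzz), by brute-force quadrature;
    # r_tm(k_par) must return the TM reflection coefficient of the sheet
    k0 = omega0 / c
    kpar = np.linspace(1e-3*k0, kmax*k0, n_k)
    k0z = np.sqrt((k0**2 - kpar**2).astype(complex))
    k0z = np.where(k0z.imag < 0.0, -k0z, k0z)      # decaying evanescent waves
    f = kpar**3 * j0(kpar*r) * r_tm(kpar) * np.exp(2j*k0z*z) / k0z
    return 1j*c**2/(4*np.pi*omega0**2) * np.trapz(f, kpar)

def normalized_ret(r, z, omega0, r_tm):
    # Eq. (RETRateSimpl) for both dipoles along z
    g0 = g0_zz(r, omega0)
    return np.abs(g0 + gs_zz(r, z, omega0, r_tm))**2 / np.abs(g0)**2
\end{verbatim}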
\subsection{\label{SubsecGraphProp}Reflection coefficient and conductivities of graphene in a magnetic field}
According to Eq. (\ref{GSzz}), in order to evaluate the scattered Green function, the reflection coefficient $r^{\textrm{TM,TM}}$ is required. It is well known that graphene is a magneto-optical material, in the sense that, under the influence of a perpendicular external magnetic field, its conductivity becomes a tensor with nonzero diagonal and nondiagonal elements, and we need to take into account a transverse conductivity ($\sigma_{xy}$) in addition to the standard longitudinal one ($\sigma_{xx}$). The existence of the former contribution makes the TM reflection coefficient slightly more complicated than usual, to wit \cite{Tse2012, KortKamp2014}
\begin{equation}
r^{\textrm{TM,TM}} = \frac{2 Z^{\textrm{E}} \sigma_{xx} + \eta_0^2 (\sigma_{xx}^2 + \sigma_{xy}^2)}{(2 + Z^{\textrm{H}} \sigma_{xx}) (2 + Z^{\textrm{E}} \sigma_{xx}) + \eta_0^2 \sigma_{xy}^2} \,,
\label{rTMTM}
\end{equation}
\noindent where $Z^{\textrm{E}} = k_{0z}/(\omega_0 \epsilon_0)$, $Z^{\textrm{H}} = \omega_0 \mu_0/k_{0z}$, $\eta_0^2 = \mu_0/\epsilon_0$ and $\mu_0$ is the magnetic permeability of vacuum. Here, we shall neglect spatial dispersion; the expressions to be used for the longitudinal and transverse conductivities were obtained in Ref.~\cite{Gusynin2007} from a quantum-mechanical calculation based on the Kubo formula, yielding
\begin{widetext}
\begin{align}
\sigma_{xx} (\omega, B) = \frac{e^3 v_F^2 B \hbar (\omega + i \tau^{-1})}{i \pi} \sum_{n = 0}^{\infty} \bigg\{ &\frac{n_F (M_n) - n_F (M_{n+1}) + n_F (- M_{n+1}) - n_F (- M_n)}{(M_{n+1} - M_n) \left[ (M_{n+1} - M_n)^2 - \hbar^2 (\omega + i \tau^{-1})^2 \right]} \nonumber \\
+ &\frac{n_F (- M_n) - n_F (M_{n+1}) + n_F (- M_{n+1}) - n_F (M_n)}{(M_{n+1} + M_n) \left[ (M_{n+1} + M_n)^2 - \hbar^2 (\omega + i \tau^{-1})^2 \right]} \bigg\} \,,
\label{sigmaxx}
\end{align}
\vspace{-0.5cm}
\begin{align}
\sigma_{xy} (\omega, B) = - \frac{e^3 v_F^2 B}{\pi} \sum_{n = 0}^{\infty} &\left[ n_F (M_n) - n_F (M_{n+1}) - n_F (- M_{n+1}) + n_F (- M_n) \right] \nonumber \\
\times &\left[ \frac{1}{(M_{n+1} - M_n)^2 - \hbar^2 (\omega + i \tau^{-1})^2} + \frac{1}{(M_{n+1} + M_n)^2 - \hbar^2 (\omega + i \tau^{-1})^2} \right] \,.
\label{sigmaxy}
\end{align}
\end{widetext}
\noindent Due to the magnetic field, the graphene energy spectrum is quantized into nonequidistant Landau levels (LLs), with energies given by $M_n = \textrm{sign}(n)\sqrt{2 |n| \hbar v_F^2 e B}$, where $n = 0, \pm 1, \pm 2, ...$, $v_F = 10^6$~m/s is the Fermi velocity and $- e$ is the electron charge \cite{Gusynin2007}. Also, $n_F (E) = [1 + e^{(E - \mu_c)/k_B T}]^{-1}$ is the Fermi-Dirac distribution, $\mu_c$ is the chemical potential and $\tau^{-1}$ is a phenomenological scattering rate which causes a small broadening in the LLs (throughout this paper we shall take $\tau = 1$~ps).
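Equations (\ref{sigmaxx}) and (\ref{sigmaxy}) are straightforward to evaluate numerically once the sum over LLs is truncated at a sufficiently large $n_{\max}$; a schematic implementation in SI units is given below, where the truncation $n_{\max}$ is an illustrative choice that has to be checked for convergence (overflow in the exponential of the Fermi-Dirac function at low $T$ merely yields $n_F=0$).
\begin{verbatim}
import numpy as np

hbar, e, kB = 1.0545718e-34, 1.602176634e-19, 1.380649e-23
vF, tau = 1.0e6, 1.0e-12

def M(n, B):
    # Landau-level energies M_n = sqrt(2 n hbar vF^2 e B), n >= 0
    return np.sqrt(2.0 * n * hbar * vF**2 * e * B)

def nF(E, mu_c, T):
    return 1.0 / (1.0 + np.exp((E - mu_c) / (kB * T)))

def sigma_xx(omega, B, mu_c, T, n_max=2000):
    # longitudinal conductivity, Eq. (sigmaxx), truncated at n_max
    w = omega + 1j / tau
    n = np.arange(n_max)
    Mn, Mn1 = M(n, B), M(n + 1, B)
    t1 = (nF(Mn, mu_c, T) - nF(Mn1, mu_c, T)
          + nF(-Mn1, mu_c, T) - nF(-Mn, mu_c, T)) \
         / ((Mn1 - Mn) * ((Mn1 - Mn)**2 - (hbar * w)**2))
    t2 = (nF(-Mn, mu_c, T) - nF(Mn1, mu_c, T)
          + nF(-Mn1, mu_c, T) - nF(Mn, mu_c, T)) \
         / ((Mn1 + Mn) * ((Mn1 + Mn)**2 - (hbar * w)**2))
    return e**3 * vF**2 * B * hbar * w / (1j * np.pi) * np.sum(t1 + t2)

def sigma_xy(omega, B, mu_c, T, n_max=2000):
    # transverse (Hall) conductivity, Eq. (sigmaxy), truncated at n_max
    w = omega + 1j / tau
    n = np.arange(n_max)
    Mn, Mn1 = M(n, B), M(n + 1, B)
    occ = (nF(Mn, mu_c, T) - nF(Mn1, mu_c, T)
           - nF(-Mn1, mu_c, T) + nF(-Mn, mu_c, T))
    res = 1.0/((Mn1 - Mn)**2 - (hbar*w)**2) + 1.0/((Mn1 + Mn)**2 - (hbar*w)**2)
    return -e**3 * vF**2 * B / np.pi * np.sum(occ * res)
\end{verbatim}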
\begin{figure*}
\begin{center}
\includegraphics[width=17.8cm]{condmuTver.pdf}
\end{center}
\vskip -0.5cm
\caption{Real and imaginary parts of the longitudinal and transverse conductivities of graphene as functions of the external magnetic field for $\omega_0 = 6 \pi \times 10^{13}$~rad/s, $v_F = 10^6$~m/s and $\tau = 1$~ps. The first, second and third rows were obtained using $\mu_c = 0$~eV, $\mu_c = 0.1$~eV and $\mu_c = 0.2$~eV, respectively. Also, the first column was evaluated with $T = 4$~K while the second one, with $T = 300$~K.}
\label{condmuT}
\end{figure*}
From Eqs. (\ref{sigmaxx}) and (\ref{sigmaxy}), one can see that these conductivities are quite sensitive to variations in some parameters. In particular, the density of the charge carriers depends heavily on the temperature of the medium, so that, in order to explore its effect on the RET rate, we analyze the conductivities at low and room temperatures. Figure \ref{condmuT} portrays the real and imaginary parts of the longitudinal and transverse conductivities as functions of the external magnetic field $B$. Each row shows the behavior for a different value of chemical potential $\mu_c$ ($0$~eV, $0.1$~eV and $0.2$~eV, respectively). Panels (a)-(c) illustrate the behavior for temperature $T~=~4$~K, whilst (d)-(f) are results for $T~=~300$~K. In all of them, we consider $\omega_0 = 6 \pi \times 10^{13}$~rad/s ($\lambda_0 = 2 \pi c/ \omega_0 = 10$~$\mu$m) and intensities of $B < 16$~T. The dependence with $B$ is not simple, so let us begin with Fig. \ref{condmuT}(a). The sharp peaks appear whenever $\hbar \omega_0$ equals the difference in energy between two LLs whose intraband or interband transition is allowed by selection rules and the Fermi-Dirac distribution (which, in this case of low temperature, resembles a step function). For instance, the largest peak around $B \approx 11.6$~T is due to the resonance of $\hbar \omega_0$ with the first intraband transition ($0 \rightarrow 1$), while the others are due to interband transitions ($- n \rightarrow n + 1$, $- n - 1 \rightarrow n$). Despite being vanishingly small, as expected from Eq. (\ref{sigmaxy}), we plotted the transverse conductivity for $\mu_c = 0$~eV for consistency. In Figs. \ref{condmuT}(b) and \ref{condmuT}(c), we have $\mu_c \neq 0$ and a feature that stands out are the discontinuities in the plots. As $B$ increases, the LLs also increase in energy and these discontinuities show up each time a given LL crosses the chemical potential value. They occur whenever $M_n = \mu_c$, so that the corresponding value of the magnetic field is obtained from
\begin{equation}
B = \frac{\mu_c^2}{2 n \hbar e v_F^2} \,,
\end{equation}
\noindent valid for $n > 0$. In the case of Fig. \ref{condmuT}(b) ($\mu_c = 0.1$~eV), the crossing of the last LL ($n = 1$) occurs for $B \approx 7.6$~T. This explains why we can still see the sharp peak around $B \approx 11.6$~T that is generated from the resonance of $\hbar \omega_0$ with the intraband transition $0 \rightarrow 1$, since $M_0 < \mu_c < M_1$ for such region of field intensities. On the other hand, resonances with smaller $B$ do not appear in this plot because these interband transitions are never allowed by the Fermi-Dirac distribution. The most extreme case is seen in Fig. \ref{condmuT}(c), in which no transition between LLs contributes and only discontinuities take place.
We now switch to the results at room temperature (the second column of panels in Fig. \ref{condmuT}). In short, the mathematical outcome of increasing the temperature is to provide longer decay tails to the Fermi-Dirac distribution of graphene. As an immediate consequence, more LLs are allowed to have a non zero occupation probability and, hence, new contributions from multiple transitions between LLs can emerge because of the thermal fluctuations. So where there were solely effects of the discontinuities, we now notice the two intertwined key features previously reported: {\it (i)} the sharp peaks due to the resonances of $\hbar \omega_0$ and {\it (ii)} the discontinuities arising from the crossings, but smoothed by the higher temperature and appearing as small steps as shown in the bottom inset of Fig. \ref{condmuT}(f). They can also be seen in the curves of (e) if we zoom in enough. However they do not exist in (d) as it is the case of zero chemical potential and, consequently, there are no crossings of the LLs [this result is very similar to the one obtained in (a)]. The positions of the peaks mentioned in {\it (i)} are independent of $\mu_c$ and $T$, so that they always manifest themselves at the same values of $B$ in all the curves of Fig. \ref{condmuT}. In the case of larger values of the chemical potential, combined with the smooth profile of the Fermi-Dirac distribution at $T = 300$~K, even higher peaks for a few of the subsequent intraband transitions ($1 \rightarrow 2$, $2 \rightarrow 3$) are allowed, but they happen at somewhat unrealistic values of the magnetic field around $68.2$~T and $115.8$~T, and therefore are not shown in the plots. Incidentally, this explains why the curves in Fig. \ref{condmuT}(f) do not go to zero after the peak, that is the effect of the $1 \rightarrow 2$ transition kicking in.
\section{\label{SecResults}Results and discussions}
\begin{figure*}
\begin{center}
\includegraphics[width=17.8cm]{RETmuz0005Tver.pdf}
\end{center}
\vskip -0.5cm
\caption{Normalized RET rate as functions of the external magnetic field. Each color represents a separation $r$ between the emitters with dominant transition wavelength $\lambda_0 = 10$~$\mu$m, both at a distance $z = 50$~nm from the graphene sheet. The first, second and third row panels were obtained using $\mu_c = 0$~eV, $\mu_c = 0.1$~eV and $\mu_c = 0.2$~eV, respectively. Also, the first column was evaluated with $T = 4$~K while the second shows results for $T = 300$~K.}
\label{RETmuTz0005}
\end{figure*}
The results for the resonance energy transfer were evaluated using the same parameters presented in the analysis of the conductivities. Figure \ref{RETmuTz0005} depicts the normalized RET rate calculated according to Eq. (\ref{RETRateSimpl}) as a function of the applied magnetic field for four different configurations of distance $r$ between the emitters. Panels (a)-(c) and (d)-(f) refer to temperatures $T~=~4$~K and $T~=~300$~K, respectively, and each row refers to a chemical potential value exactly as in Fig. \ref{condmuT}. We chose to work in the near-field region ($z~=~50$~nm~$\ll~\lambda_0$) in order to explore the interaction of the emitters with the graphene's surface magnetoplasmon polaritons (MPPs), as we shall elaborate later on. One could expect a Zeeman splitting for the values of $B$ considered here, as well as a $z$-dependent Casimir shift of the emitters' transition energy. However, in the unlikely event that such effects do significantly shift the ``bare'' frequency $\omega_0$ (the electric and/or magnetic polarizabilities of the emitters would have to be abnormally large), it would be just a matter of replacing the shifted frequency in our calculations.
It should be noticed that the results for the normalized RET rate in Fig. \ref{RETmuTz0005} are naturally correlated with the response of graphene to the external field, expressed in terms of its longitudinal and transverse conductivities. In this sense, when the magnetic field gets close to a value for which the conductivities present a discontinuity (whose reason was discussed in Sec. \ref{SubsecGraphProp}), this effect is directly reflected in the RET rate. Analogously, whenever there is a contribution coming from permitted transitions between LLs, the normalized RET rate is drastically reduced and then increases again while there are still magnetic field values to which other permitted transitions may contribute.
From the plots of Fig. \ref{RETmuTz0005}, a key fact that stands out is a striking non-monotonic dependence on $r$. When the emitters are very close to each other, the excitation transfer is dominated by the free-space channel, and the graphene impact is not so significant. By increasing $r$ (and keeping $z$ fixed), the environment starts to play a more important role and the relative RET rate shoots up by orders of magnitude. Finally, by increasing $r$ even more, the maximum of $\Gamma/\Gamma^{(0)}$ shrinks about 2 orders of magnitude for $\mu_c = 0, 0.1$~eV and drops about a factor of 10 for $\mu_c = 0.2$~eV. Such effect occurs in a similar way for both temperatures studied.
In order to explain such an impressive variation of the RET rate, it is necessary to make a small digression about the graphene mode structure and, in particular, of its MPPs. The MPPs are surface waves allowed by Maxwell equations under certain boundary conditions. Such surface waves are characterized by the decaying behavior in the $z$-direction in both sides of the graphene sheet and they must be associated with a pole in the reflection coefficients \cite{NunoBook}. Therefore, from Eq. (\ref{rTMTM}) we have
\begin{equation}
(2 + Z^{\textrm{H}} \sigma_{xx}) (2 + Z^{\textrm{E}} \sigma_{xx}) + \eta_0^2 \sigma_{xy}^2 = 0 \,.
\end{equation}
\noindent This condition enforces a relation between $k_{\|}$ and $\omega_0$, from which we arrive at the general dispersion relation for the MPPs \cite{NunoBook}. A straightforward manipulation gives
\begin{widetext}
\begin{equation}
k_{\|}^4 + \frac{4 \omega_0^2}{c^2} \left\{ \frac{1}{\eta_0^2 \sigma_{xx}^2} \left[1 + \frac{\eta_0^2}{4} \left( \sigma_{xx}^2 + \sigma_{xy}^2 \right) \right]^2 - 1 \right\} k_{\|}^2 - \frac{4 \omega_0^4}{c^4} \left\{ \frac{1}{\eta_0^2 \sigma_{xx}^2} \left[1 + \frac{\eta_0^2}{4} \left( \sigma_{xx}^2 + \sigma_{xy}^2 \right) \right]^2 - 1 \right\} = 0 \,,
\end{equation}
\noindent leading to
\begin{equation}
k_{\|}^2 = \frac{2 \omega_0^2}{c^2} \left\{ 1 - \frac{1}{\eta_0^2 \sigma_{xx}^2}\left[1 + \frac{\eta_0^2}{4} \left( \sigma_{xx}^2 + \sigma_{xy}^2 \right) \right]^2 \right\} \left[1 \mp \sqrt{1 + \frac{\eta_0^2 \sigma_{xx}^2}{1 - \dfrac{\eta_0^2}{2} \left( \sigma_{xx}^2 - \sigma_{xy}^2 \right) + \dfrac{\eta_0^4}{16} \left( \sigma_{xx}^2 + \sigma_{xy}^2 \right)^2} } \right] \,.
\end{equation}
\end{widetext}
\noindent The solutions that interest us are those whose real part of $k_{\|}$ is positive \cite{NunoBook}. In order to handle the previous relation, we can use the fact that, away from the intense variation around $B = 11.6$~T, we have $\eta_0^2 \sigma_{xx}^2 \ll 1$ and also $\eta_0^2 \sigma_{xy}^2 \ll 1$. Hence, it is reasonable to expand this formula and retain only its first terms, yielding
\begin{align}
k_{\|}^{(+)} &= k_{\textrm{MPP}} \approx \frac{2 i \epsilon_0 \omega_0}{\sigma_{xx}} \,,
\label{kpMPP} \\
k_{\|}^{(-)} &= k_{\textrm{QTE}} \approx \frac{\omega_0}{c} \sqrt{1 - \frac{\eta_0^4}{4} \left (\sigma_{xx}^2 - \sigma_{xy}^2 \right)^2} \,.
\label{kpQTE}
\end{align}
\noindent The so-called quasi-transverse-electric (QTE) modes \cite{Ferreira2012} given by (\ref{kpQTE}) play virtually no role in the RET, while the MPP branch (\ref{kpMPP}) is the main focus of this work. The fact that $k_{\textrm{MPP}}$ is not purely real indicates that such surface modes have a dissipative character and therefore a finite propagation length {\it parallel} to the graphene's surface \cite{NunoBook}, given roughly by
\begin{equation}
L_{\textrm{MPP}} \approx \frac{1}{{\rm Im}(k_{\textrm{MPP}})} = \frac{1}{2 \epsilon_0 \omega_0} \frac{|\sigma_{xx}|^2}{{\rm Re} \, \sigma_{xx}} \,.
\label{LMPP}
\end{equation}
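For orientation, Eqs. (\ref{kpMPP}) and (\ref{LMPP}) are straightforward to evaluate once the conductivities are known. The short Python sketch below illustrates this; the numerical values of $\sigma_{xx}$ and $\lambda_0$ are placeholders chosen for illustration only and are not the magneto-optical conductivities computed for our figures.
\begin{verbatim}
import numpy as np

eps0, c  = 8.854e-12, 2.998e8       # SI units
lam0     = 1e-5                     # assumed transition wavelength [m] (placeholder)
omega0   = 2*np.pi*c/lam0
sigma_xx = 1e-5 + 6e-4j             # placeholder longitudinal conductivity [S]

# Eq. (kpMPP): k_MPP ~ 2 i eps0 omega0 / sigma_xx
k_mpp = 2j*eps0*omega0/sigma_xx

# Eq. (LMPP): L_MPP ~ |sigma_xx|^2 / (2 eps0 omega0 Re sigma_xx)
L_mpp = abs(sigma_xx)**2/(2*eps0*omega0*sigma_xx.real)

print("Im(k_MPP) [1/m]:", k_mpp.imag)
print("L_MPP [m]:", L_mpp, "= 1/Im(k_MPP):", 1.0/k_mpp.imag)
\end{verbatim}
As expected, the two printed lengths coincide, since Eq. (\ref{LMPP}) is just the inverse of the imaginary part of Eq. (\ref{kpMPP}).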
\begin{figure}
\begin{center}
\includegraphics[width=7.8cm]{LMPPmu00102Tver.pdf}
\end{center}
\vskip -0.5cm
\caption{MPP propagation length as a function of the magnetic field for different values of the chemical potential and {\bf (a)} $T = 4$~K and {\bf (b)} $T = 300$~K. The same parameters used in the analysis of the conductivities were also employed here.}
\label{LMPPmu00102T}
\end{figure}
In Fig. \ref{LMPPmu00102T}, the propagation length of the MPPs is plotted as a function of the external magnetic field for the three values of chemical potential considered before. The upper and lower plots correspond to calculations using $T~=~4$~K and $T~=~300$~K, respectively, and, in broad strokes, their main features can be traced back to the longitudinal conductivity. For $\mu_c = 0.2$~eV, the role of the magnetoplasmons is quite evident: we see that the two emitters are within the MPP range for $r \lesssim 5$ $\mu$m $\approx 0.5 \lambda_0$, which explains the consistent dominance of the green curve in Figs. \ref{RETmuTz0005}(c) and \ref{RETmuTz0005}(f). It also explains the characteristic discontinuities for temperature $T = 4$~K and why there are such precipitous drops at the resonances in the case of $T = 300$~K (both clearly correlated with the results of $L_{\rm MPP}$). A similar reasoning can be extended to the set of parameters $\mu_c = 0.1$~eV and $T = 4$~K, especially for low fields, where we can also note that the two emitters are within the MPP range for $r \lesssim 1$ $\mu$m $\approx 0.1 \lambda_0$, in agreement with the enhanced normalized RET rate obtained in Fig. \ref{RETmuTz0005} in this same configuration. This explanation is less evident for the other results of Fig. \ref{LMPPmu00102T}, but it is clear that, at least in the $B = 3$--$11$~T range, the steady rise in $L_{\rm MPP}$ corresponds to the ``great hill'' profile in the RET plots centered at $B \approx 8$~T. In addition, let us note that the lower $L_{\rm MPP}$ values for $\mu_c = 0, 0.1$~eV also explain the fact that the maximum relative RET occurs for shorter distances in these cases (the green ``hill'' is well below the red one in panels (a), (b), (d) and (e) of Fig. \ref{RETmuTz0005}). Finally, as the distance between the emitters gets too large, they evade the propagation range of the MPPs, explaining the downward trend for $r \gtrsim L_{\rm MPP}$ in all curves of Fig. \ref{RETmuTz0005}.
A remarkable feature present in Fig. \ref{RETmuTz0005} that is still to be discussed is the extreme sensitivity of the normalized RET rate with respect to variations in the magnetic field. Indeed, we see that for $T = 300$~K, $\mu_c = 0.2$~eV and $r =\lambda_0$, the relative RET rate can change by an impressive five to six orders of magnitude, even for tiny variations of the magnetic field around $1$~T. We see that, by using the magnetic field as a ``dial'' to tune the transition frequency to a possible LL transition, one could essentially ``turn off'' the graphene sheet, at least with respect to the RET process. Such remarkable sensitivity may also be traced to the fact that the MPPs depend critically upon Re $\sigma_{xx}$, so small variations in the conductivity can generate large changes in $L_{\rm MPP}$ and huge modifications in the RET rate. As an aside, we should point out that the normalized RET rate inherits the small steps that are present in the conductivities, as shown in the inset of Fig. \ref{RETmuTz0005}(f).
Another interesting feature of the normalized RET rate is its oscillatory character (quite intense for some parameters) as a function of the magnetic field. Although the previous formulas hold in all distance regimes, from now on we shall be concerned with the analysis solely in the near-field region ($\omega_0 z/c \ll 1$) in order to understand this intriguing behavior. Splitting the contribution of the propagating and evanescent modes in (\ref{GSzz}), we can write
\begin{align}
\mathds{G}^{(\textrm{S})}_{zz} &= \frac{i c^2}{4\pi \omega_0^2} \left\{ \int_0^{\omega_0/c} \!\!\! dk_{\|} \frac{k_{\|}^3 \, J_0 (k_{\|} r) \,r^{\textrm{TM,TM}} \, e^{2i k_{0z} z}}{k_{0z}} \right.\nonumber \\
&\left. + \int_{\omega_0/c}^\infty \!\!\! dk_{\|} \frac{k_{\|}^3 \, J_0 (k_{\|} r) \,r^{\textrm{TM,TM}} \, e^{- 2 \kappa_{0z} z}}{i \kappa_{0z}} \right\} \,,
\label{GSzzPropEva}
\end{align}
\noindent with $\kappa_{0z} = i k_{0z} = \sqrt{k_{\|}^2 - \omega_0^2/c^2}$. The evanescent part largely dominates the propagating one in the near-field regime, so Eq. (\ref{GSzzPropEva}) can be approximated by
\begin{equation}
\mathds{G}^{(\textrm{S})}_{zz} \approx \frac{c^2}{4\pi \omega_0^2} \int_0^\infty dk_{\|} k_{\|}^2 \, J_0 (k_{\|} r) \,r^{\textrm{TM,TM}} \, e^{- 2 k_{\|} z} \,,
\label{GSzzNF}
\end{equation}
\noindent where we used $\kappa_{0z} \approx k_{\|}$. Applying the same considerations to the reflection coefficient (\ref{rTMTM}), we get
\begin{equation}
r^{\textrm{TM,TM}} \approx \frac{k_{\|} - \dfrac{i \eta_0 \sigma_{xx}}{2} \! \left[ 1 + \dfrac{\sigma_{xy}^2}{\sigma_{xx}^2} \right] \! \dfrac{\omega_0}{c} }{k_{\|} - i \left\{ \dfrac{2 \epsilon_0 \omega_0}{\sigma_{xx}} + \dfrac{ \eta_0 \sigma_{xx}}{2} \! \left[ 1 + \dfrac{\sigma_{xy}^2}{\sigma_{xx}^2} \right] \! \dfrac{\omega_0}{c} \right\} } \,.
\label{rTMTMNFfull}
\end{equation}
\noindent Moreover, away from $B \approx 11.6$~T we may retain only the leading contribution in $\eta_0 \sigma_{xx}$, yielding
\begin{align}
r^{\textrm{TM,TM}} &\approx \frac{k_{\|}}{k_{\|} - \dfrac{2 i \epsilon_0 \omega_0}{\sigma_{xx}} } \,,
\label{rTMTMNF}
\end{align}
\noindent from which one immediately identifies the magnetoplasmon polariton at the pole $k_{\textrm{MPP}}~=~2 i \epsilon_0 \omega_0 / \sigma_{xx}$ in accordance with the result obtained in Eq. (\ref{kpMPP}). The substitution of Eq. (\ref{rTMTMNF}) in Eq. (\ref{GSzzNF}) leads us to a simpler expression for the scattering Green function, to wit
\begin{equation}
\mathds{G}^{(\textrm{S})}_{zz} \approx \frac{c^2}{4\pi \omega_0^2} \int_0^\infty dk_{\|} \frac{k_{\|}^3 \, J_0 (k_{\|} r)}{k_{\|} - \dfrac{2 i \epsilon_0 \omega_0}{\sigma_{xx}}} \, e^{- 2 k_{\|} z} \,.
\label{GSzzNF1}
\end{equation}
\noindent Despite its relative simplicity, we could not solve (\ref{GSzzNF1}) in terms of well-known functions. We are, however, particularly interested in the $|2 i \epsilon_0 \omega_0 /\sigma_{xx}| \gg 1/z$ regime, corresponding to low magnetic fields (away from the abrupt changes at the LL transitions). In this regime, an analytical approximation to Eq. (\ref{GSzzNF1}) becomes available and, also taking into account that $\textrm{Im}(\sigma_{xx}) \gg \textrm{Re}(\sigma_{xx})$, we get
\begin{align}
\mathds{G}^{(\textrm{S})}_{zz} &\approx \frac{c^2 \, \textrm{Im}(\sigma_{xx})}{4 \pi \epsilon_0 \omega_0^3} \frac{3 z (3 r^2 - 8z^2)}{(r^2 + 4z^2)^{7/2}} \,.
\label{GSzzNF2}
\end{align}
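The accuracy of this low-field approximation is easy to probe numerically. The following Python sketch (with illustrative placeholder values of $\sigma_{xx}$, $r$ and $z$ chosen such that $|2 i \epsilon_0 \omega_0 /\sigma_{xx}| \gg 1/z$ and $\textrm{Im}(\sigma_{xx}) \gg \textrm{Re}(\sigma_{xx})$; these are not the parameters of Fig. \ref{RET-NF-Ima}) evaluates the integral (\ref{GSzzNF1}) by quadrature and compares it with the closed form (\ref{GSzzNF2}).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

eps0, c  = 8.854e-12, 2.998e8
lam0     = 1e-5                        # assumed transition wavelength [m]
omega0   = 2*np.pi*c/lam0
z, r     = 50e-9, 0.02*lam0            # emitter-graphene and emitter-emitter distances
sigma_xx = 1e-7 + 1e-5j                # placeholder conductivity [S], Im >> Re

k_mpp = 2j*eps0*omega0/sigma_xx        # pole of the near-field reflection coefficient
pref  = c**2/(4*np.pi*omega0**2)

def integrand(k, part):                # integrand of Eq. (GSzzNF1)
    val = pref*k**3*j0(k*r)*np.exp(-2*k*z)/(k - k_mpp)
    return val.real if part == 're' else val.imag

kmax  = 50.0/z                         # exp(-2 k z) suppresses the tail
G_num = (quad(integrand, 0, kmax, args=('re',), limit=400)[0]
         + 1j*quad(integrand, 0, kmax, args=('im',), limit=400)[0])

G_app = (c**2*sigma_xx.imag/(4*np.pi*eps0*omega0**3)   # Eq. (GSzzNF2)
         * 3*z*(3*r**2 - 8*z**2)/(r**2 + 4*z**2)**3.5)

print("quadrature :", G_num)
print("closed form:", G_app)
\end{verbatim}
In this regime the two results are close; once the pole of the integrand (the MPP) is no longer negligible, the two expressions depart, which is the origin of the differences discussed next.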
In Fig. \ref{RET-NF-Ima} we compare the RET rate computed using (\ref{GSzzNF1}) and (\ref{GSzzNF2}). It is clearly seen that the low field approximation captures a sort of average behavior, but fails to show the marked oscillations present in (\ref{GSzzNF1}). At this point, we recall that the denominator in Eq. (\ref{GSzzNF1}) comes from $r^{\textrm{TM,TM}}$, whose pole provides us with the dispersion relation of the MPPs. To derive Eq. (\ref{GSzzNF2}) we effectively disregarded this pole and, consequently, the information on the contribution of the interaction with the surface plasmons. That led us to a result with a clear interpretation in terms of images (as $\sqrt{r^2+(2z)^2}$ is the distance between an emitter and the image of the other), but it should be noted that such an interpretation was not obviously to be expected: we are in the low conductivity regime, so these dressed images probably owe their appearance more to the plane symmetry than to the (short) distance regime.
\begin{figure}
\begin{center}
\includegraphics[width=8.6cm]{RETmu01z0005r002T300NFIm.pdf}
\end{center}
\vskip -0.5cm
\caption{Normalized RET rate as a function of the magnetic field for the case previously shown with $T = 300$~K, $\mu_c = 0.1$~eV and $r~=~0.02\lambda_0$. The plots are comparisons between results obtained with $\mathds{G}^{(\textrm{S})}_{zz}$ calculated using Eqs. (\ref{GSzzNF1}) (blue curve) and (\ref{GSzzNF2}) (red curve).}
\label{RET-NF-Ima}
\end{figure}
\section{\label{SecConclusions}Final remarks and conclusions}
In summary, we have investigated the resonance energy transfer between two emitters near a graphene sheet in the presence of a constant, uniform and perpendicular magnetic field. The fundamental motivation was to take advantage of the remarkable magneto-optical properties of graphene in order to tailor and control the RET rate between the emitters. From our findings, we conclude that, in addition to providing us with a promising platform to manipulate atomic interaction through an external agent, the RET is particularly suitable for active manipulation due to its extreme sensitivity to variations of the magnetic field. We have demonstrated that the strongly confined magnetoplasmon polaritons supported by the graphene monolayer play a key role in the excitation transfer between the emitters. We stress that the RET rate can be enormously altered, suffering abrupt variations of up to six orders of magnitude with respect to the free space value. Moreover, especially in the case of room temperature, these huge variations occur for feasible values of the magnetic field (of the order of $1$~T for appropriate choices of the system parameters), being within the scope of experimental realization. As a matter of fact, the RET modulation is so large and so sharp that magnetoactive materials could be thought of as an energy transfer switch that can be turned on and off with no physical contact. Altogether, we expect that these results will not only allow for an alternative way to control the resonance energy transfer but also pave the way for the development of new devices in plasmonics and nanophotonics.
\begin{acknowledgments}
P. P. A. and C. F. thank L. Martín-Moreno for enlightening discussions. C.F. and F.S.S.R. acknowledge Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) for financial support (grant numbers 310365/2018-0 and 309622/2018-2). F.S.S.R. (grant number E26/203.300/2017) and P.P.A. acknowledge Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ).
\end{acknowledgments}
\section{Introduction}
The training of a machine learning model is often represented through an optimization problem. The goal is to calibrate the model's parameters to optimize its goodness-of-fit with respect to training data. The goodness-of-fit is usually quantified through a loss function that sums up the losses from the misrepresentation of every single training data set, see, e.g., \citet{Goodfellow2016} or \citet{Hansen2010,LeCam90} for similar optimization problems in imaging and statistics.
In `big data' settings, the sum of these loss functions consists of thousands or millions of terms, making classical optimization methods computationally infeasible.
Thus, efficiently solving optimization problems of this form has been a focus of machine learning and optimization research in the past decades. Here, methods often build upon the popular stochastic gradient descent method.
Originally, stochastic gradient descent was proposed by \citet{RobbinsMonro} to optimize not only sums of loss functions, but also expectations of randomized functions.\footnote{Actually, the `stochastic approximation method' of \citet{RobbinsMonro} aims at finding roots of functions that are given as expectations of randomized functions. The method they construct resembles stochastic gradient descent for a least squares loss function.} Of course, a normalized sum is just a special case of an expected value, making stochastic gradient descent available for the kind of training problem described above. Based on stochastic gradient descent ideas, improved algorithms have been proposed for optimizing sums of loss functions, such as \citet{Chambolle2018,Defazio2014,Duchi2011}. Unfortunately, these methods often specifically target sums of loss functions and are often infeasible for optimizing general expected values of loss functions.
The optimization of expected values of loss functions appears in the presence of countably infinite and continuous data in functional data analysis and non-parametric statistics (e.g., \citet{Sinova2018}), physics-informed deep learning (e.g., \citet{PINN}), inverse problems (e.g., \citet{Bredies2018}), and continuous data augmentation/adversarial robustness (e.g., \citet{cohen,Shorten2019,AdversarialRobustRL}). Some of these problems are usually studied after discretising the data. Algorithms for discrete data sometimes deteriorate at the continuum limit, i.e. as the number of data sets goes to infinity. Thus, we prefer studying the continuum case immediately. Finally, `continuous data' can also refer to general noise models. Here, expected values are minimized in robust optimization (e.g., \citet{Nemirovski2009}), variational Bayesian inference (e.g., \citet{Cherief2019}), and optimal control (e.g., \citet{May2013}). Overall, the optimization of general expected values is a very important task in modern data science, machine learning, and related fields.
In this work, we study stochastic gradient descent for general expected values in a continuous-time framework. We now proceed with the formal introduction of the optimization problem, the stochastic gradient descent algorithm, its continuous-time limits, and current research in this area.
\subsection{Problem setting and state of the art} \label{subsec_problemsett}
We study optimization problems of the form
\begin{equation} \label{Eq:OptProb}
\min_{\theta \in X} \Phi(\theta) := \int_S f(\theta, y) \pi(\mathrm{d}y),
\end{equation}
where $X := \mathbb{R}^K$, $S$ is a Polish space, $f: X \times S \rightarrow \mathbb{R}$ is a measurable function that is continuously differentiable in the first variable, and $\pi$ is a probability measure on $S$. Moreover, we assume that the integral above always exists.
We refer to $X$ as \emph{parameter space}, $S$ as \emph{index set}, $\Phi$ as \emph{full target function}, and $f$ as \emph{subsampled target function}. In these optimization problems, it is usually impossible or intractable to evaluate the integral $\Phi(\theta)$ or its gradient $\nabla \Phi(\theta)$ for $\theta \in X$.
Hence, traditional optimization algorithms, such as steepest gradient descent or Newton methods are not applicable.
As mentioned above, it is possible to employ stochastic optimization methods, such as the \emph{stochastic gradient descent (SGD)} method, see \citet{KushnerYin,RobbinsMonro}. The stochastic gradient descent method for \eqref{Eq:OptProb} proceeds through the following discrete-time dynamic that iterates over $n \in \mathbb{N}:= \{1,2,\ldots \}$:
\begin{align} \label{Eq:SGD_discrete_time}
\theta_{n} = \theta_{n-1} - \eta_n \nabla_{\theta}f(\theta_{n-1}, y_n),
\end{align}
where $y_1, y_2, \ldots \sim \pi$ are independent and identically distributed (i.i.d.), $(\eta_n)_{n=1}^\infty \in (0, \infty)^\mathbb{N}$ is a non-increasing sequence of \emph{learning rates}, and $\theta_0 \in X$ is an appropriate initial value.
Hence, SGD is an iterative method that employs only the gradient of the integrand $f$, but not $\Phi$.
SGD converges to the minimizer of $\Phi$ if $\eta_n \rightarrow 0$ sufficiently slowly as $n \rightarrow \infty$ and if $f(\cdot, y)$ is strongly convex for every $y \in S$; see, e.g., \citet{Bubeck}. In practice, SGD is also used for non-convex optimization problems and with a constant learning rate. The constant learning rate setting is popular especially due to its regularizing properties; see \citet{Ali20a,smith2021on}.
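As a simple illustration (ours, not taken from the cited literature), the following Python sketch runs the iteration \eqref{Eq:SGD_discrete_time} for the toy problem $f(\theta, y) := \frac{1}{2}(\theta - y^2)^2$ with $\pi := \mathrm{Unif}[-1,1]$ (an example we return to below) and learning rates $\eta_n = 1/n$; the iterates approach the minimizer $\theta_* = \int_S y^2 \, \pi(\mathrm{d}y) = 1/3$ of $\Phi$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def grad_f(theta, y):
    # gradient in theta of f(theta, y) = 0.5*(theta - y**2)**2
    return theta - y**2

theta = 2.0                        # initial value theta_0
for n in range(1, 10001):
    y   = rng.uniform(-1.0, 1.0)   # y_n ~ pi = Unif[-1, 1], i.i.d.
    eta = 1.0/n                    # decreasing learning rate eta_n
    theta -= eta*grad_f(theta, y)

print(theta)                       # close to the minimizer 1/3
\end{verbatim}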
To understand, improve, and study discrete-time dynamical systems, it is sometimes advantageous to represent them in continuous time, see, e.g. the works by \citet{deWiljes2018,Kovachki21,Trillos20}. Continuous-time models allow us to concentrate on the underlying dynamics and omit certain numerical considerations. Moreover, they give us natural ways to construct new, efficient algorithms.
The discrete-time dynamic in \eqref{Eq:SGD_discrete_time} is sometimes represented through a continuous-time diffusion process, see \citet{Ali20a,Li2019, Weinan2, Mandt2016, Mandt2017,wojtowytsch2021stochastic}:
$$
\mathrm{d} \theta_t = -\nabla \Phi(\theta_t) \mathrm{d}t + \sqrt{\eta(t)} \Sigma(\theta_t)^{1/2} \mathrm{d}W_t,
$$
where $\Sigma(\theta) = \int (\nabla_\theta f(\theta; y)- \nabla_\theta\Phi(\theta)) \otimes (\nabla_\theta f(\theta; y)- \nabla_\theta \Phi(\theta)) \pi(\mathrm{d}y)$, $(W_t)_{t \geq 0}$ is a $K$-dimen\-sional Brownian motion, and $(\eta(t))_{t \geq 0}$ is an interpolation of the learning rate sequence. While this diffusion approach is suitable to describe the dynamic of the moments of SGD, it does not immediately allow us to construct new stochastic optimization algorithms, as the system depends on the inaccessible $\nabla \Phi$.
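For illustration, consider again the toy problem $f(\theta, y) = \frac{1}{2}(\theta - y^2)^2$ with $\pi = \mathrm{Unif}[-1,1]$ and $K = 1$ (our example; it reappears in Figure~\ref{fig:cartoon_continuous} below). In this case, the drift and diffusion coefficients of the approximating diffusion are available in closed form,
$$
\nabla \Phi(\theta) = \int_{-1}^{1} (\theta - y^2)\,\frac{\mathrm{d}y}{2} = \theta - \frac{1}{3}, \qquad
\Sigma(\theta) = \int_{-1}^{1} \Big(\frac{1}{3} - y^2\Big)^2\,\frac{\mathrm{d}y}{2} = \frac{1}{5} - \frac{1}{9} = \frac{4}{45},
$$
but in general these are precisely the integrals that stochastic gradient methods are designed to avoid evaluating.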
\begin{figure}
\centering
\input{cartoon_discrete.tex}
\caption{Cartoon of the stochastic gradient process $(\theta^\dagger_t)_{t \geq 0}$ with index process $(\mathbf{i}(t))_{t \geq 0}$ on the discrete index set $S:= \{-1, -0.6,\ldots,1\}$. The index process is a Markov pure jump process on $S$. The process $(\theta^\dagger_t)_{t \geq 0}$ aims at optimizing the $\mathrm{Unif}(S)$-integral of the subsampled target function $f(\theta,y) := \frac{1}{2}(\theta - y^2)^2$ $(\theta \in X:=\mathbb{R}, y \in S)$.}
\label{fig:cartoon_discrete}
\end{figure}
A continuous-time representation of stochastic gradient descent that does not depend on $\Phi$ has recently been proposed by \citet{Latz}. This work only considers the discrete data case, i.e., $S$ is finite and $\pi := \mathrm{Unif}(S)$. SGD is represented by the \emph{stochastic gradient process} $(\theta^\dagger_t)_{t \geq 0}$. It is defined through the coupled dynamical system
\begin{equation}\label{eq_SGPC_discrete}
\mathrm{d} \theta_t^\dagger = - \nabla_{\theta} f(\theta_t^\dagger; \mathbf{i}(t)) \mathrm{d}t,
\end{equation}
where $(\mathbf{i}(t))_{t \geq 0}$ is a suitable continuous-time Markov process on $S$, which we call \emph{index process}. Hence, the process $(\theta_t^\dagger)_{t \geq 0}$ represents gradient flows with respect to the subsampled target functions that are switched after random waiting times. The random waiting times are controlled by the continuous-time Markov process $(\mathbf{i}(t))_{t \geq 0}$.
We show an example of the coupling of the processes $(\mathbf{i}(t))_{t \geq 0}$ and $(\theta_t^\dagger)_{t \geq 0}$ in Figure~\ref{fig:cartoon_discrete}. The setting is $S := \{-1, -0.6,\ldots,1\}$, $\pi := \mathrm{Unif}(S)$, $X:= \mathbb{R}$, and $f(\theta, y) := \frac{1}{2}(\theta - y^2)^2$ $(\theta \in X, y \in S)$. There, we see that the sample path of $(\theta_t^\dagger)_{t \geq 0}$ is piecewise smooth, with non-smooth behavior at the jump times of $(\mathbf{i}(t))_{t \geq 0}$.
If the process $(\mathbf{i}(t))_{t \geq 0}$ is homogeneous-in-time, the dynamical system represents a constant learning rate. Inhomogeneous $(\mathbf{i}(t))_{t \geq 0}$ with decreasing mean waiting times, on the other hand, model a decreasing learning rate. Under certain assumptions, the process $(\theta_t^\dagger)_{t \geq 0}$ converges to a unique stationary measure when the learning rate is constant or to the minimizer of $\Phi$ when the learning rate decreases.
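A minimal simulation of \eqref{eq_SGPC_discrete} in the setting of Figure~\ref{fig:cartoon_discrete} is sketched below in Python (the switching rate, time horizon and initial value are illustrative choices of ours). Between jumps of the index process, the gradient flow for $f(\theta, y) = \frac{1}{2}(\theta - y^2)^2$ can be integrated exactly, $\theta_{t+s} = y^2 + (\theta_t - y^2)e^{-s}$, so no ODE solver is needed.
\begin{verbatim}
import numpy as np

rng  = np.random.default_rng(1)
S    = np.linspace(-1.0, 1.0, 6)   # index set {-1, -0.6, ..., 1}
rate = 5.0                         # switching rate of the index process (illustrative)

theta, t, T = 2.0, 0.0, 10.0
i = rng.integers(len(S))           # initial index
while t < T:
    tau = min(rng.exponential(1.0/rate), T - t)   # waiting time until the next jump
    y   = S[i]
    theta = y**2 + (theta - y**2)*np.exp(-tau)    # exact gradient flow on [t, t + tau]
    t  += tau
    i   = rng.integers(len(S))     # resample the index uniformly at the jump time

print(theta)   # for fast switching, theta stays close to the minimizer of Phi
\end{verbatim}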
\subsection{This work.} \label{Subsec_Intro_thisWork}
We now briefly introduce the continuous-time stochastic gradient descent methods that we study throughout this work. Then, we summarize our main contributions and give a short paper outline.
In the present work, we aim to generalize the dynamical system \eqref{eq_SGPC_discrete} to include more general spaces $S$ and probability measures $\pi$ -- studying the more general optimization problems of type \eqref{Eq:OptProb}.
We proceed as follows: We define a stationary continuous-time Markov process $(V_t)_{t \geq 0}$ on $S$ that is geometrically ergodic and has $\pi$ as its stationary measure. This process $(V_t)_{t \geq 0}$ is now our \emph{index process}. Similarly to \eqref{eq_SGPC_discrete}, we then couple $(V_t)_{t \geq 0}$ with the following gradient flow:
\begin{equation} \label{Eq_SGPC}
\mathrm{d} \theta_t = - \nabla_{\theta} f(\theta_t, V_t) \mathrm{d}t.
\end{equation}
Note that the index process $(V_t)_{t \geq 0}$ can be considerably more general than the Markov jump processes studied by \citet{Latz}; we discuss examples below and in Section~\ref{mainprocess}.
As the dynamical system \eqref{Eq_SGPC} contains the discrete version \eqref{eq_SGPC_discrete} as a special case, we refer to $(\theta_t)_{t \geq 0}$ also as \emph{stochastic gradient process}.
We give an example for $(\theta_t, V_t)_{t \geq 0}$ in Figure~\ref{fig:cartoon_continuous}. There, we consider $S :=[-1,1]$, $\pi := \mathrm{Unif}[-1, 1]$, $X:= \mathbb{R}$, and $f(\theta,y) := \frac{1}{2}(\theta - y^2)^2$ $(\theta \in X, y \in S)$. A suitable choice for $(V_t)_{t \geq 0}$ is a reflected Brownian motion on $[-1, 1]$. Although it is coupled with $(V_t)_{t \geq 0}$, the process $(\theta_t)_{t \geq 0}$ appears to be relatively smooth. This may be due to the smoothness of the subsampled target function $f$.
Moreover, we note that the example in Figure~\ref{fig:cartoon_discrete} is a discretized data version of the example here in Figure~\ref{fig:cartoon_continuous}.
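The continuous-data example of Figure~\ref{fig:cartoon_continuous} can be reproduced with a simple Euler discretization, sketched below in Python (step size, horizon and initial values are arbitrary choices of ours); the reflected Brownian motion on $[-1, 1]$ is simulated by folding the unconstrained increments back into the interval.
\begin{verbatim}
import numpy as np

rng   = np.random.default_rng(2)
dt, T = 1e-3, 10.0

def reflect(x, lo=-1.0, hi=1.0):
    # fold x back into [lo, hi] (two-sided reflection)
    w = hi - lo
    x = np.mod(x - lo, 2*w)
    return lo + (x if x <= w else 2*w - x)

theta, V = 2.0, 0.0                 # initial parameter and index value
for _ in range(int(T/dt)):
    V = reflect(V + np.sqrt(dt)*rng.standard_normal())   # reflected BM on [-1, 1]
    theta -= dt*(theta - V**2)      # Euler step for d theta = -grad_theta f dt

print(theta, V)                     # one sample path, evaluated at time T
\end{verbatim}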
\begin{figure}
\centering
\input{cartoon_cont.tex}
\caption{Cartoon of the stochastic gradient process $(\theta_t)_{t \geq 0}$ with index process $(V_t)_{t \geq 0}$, with continuous index set $S:= [-1,1]$. The index process is a reflected Brownian motion on $S$.}
\label{fig:cartoon_continuous}
\end{figure}
More similarly to the discrete data case \eqref{eq_SGPC_discrete}, one could also choose $(V_t)_{t \geq 0}$ to be a Markov pure jump process on $S$ that has $\pi$ as a stationary measure.
Indeed, the reflected Brownian motion was constructed rather artificially. Sampling from $\mathrm{Unif}[-1, 1]$ is not actually difficult in practice and we just needed a way to find a continuous-time Markov process that is stationary with respect to $\mathrm{Unif}[-1, 1]$. However, there are cases, where one may not be able to sample independently from $\pi$. For instance, $\pi$ could be the measure of interest in a statistical physics simulation or Bayesian inference. In those cases, Markov chain Monte Carlo methods are used to approximate $\pi$ through a Markov chain stationary with respect to it, see, e.g., \citet{Robert2004}. In other cases, the data might be time series data that is streamed at runtime of the algorithm -- a related problem has been studied by \citet{Sirignano2017SIFIN}.
Hence, in this work, we also discuss stochastic optimization in those cases or -- more generally -- stochastic optimization with respect to data from arbitrary sources.
As the index process $(V_t)_{t \geq 0}$ is stationary, the stochastic gradient process as defined above would, again, represent the situation of a constant learning rate $(\eta_n)_{n =1}^\infty$.
However, as before we usually cannot hope for convergence to a stationary point if there is not a sense of a decreasing learning rate.
Hence, we need to introduce an inhomogeneous variant of $(V_t)_{t \geq 0}$ that represents a decreasing learning rate.
We start with the stochastic process $(V_t^{\rm dc})_{t \geq 0}$ that represents the index process associated to the discrete-time stochastic gradient descent dynamic \eqref{Eq:SGD_discrete_time} with constant learning rate parameter $\eta = 1$. This index process is given by
$$
V_t^{\rm dc} = \sum_{n=1}^\infty y_n\mathbf{1}[t \in [n-1,n) ] = \sum_{n=1}^\infty y_n\mathbf{1}[t - n +1 \in [0,1) ] \qquad (t \geq 0),
$$
where $y_1, y_2, \ldots \sim \pi$ i.i.d.\ and $\mathbf{1}[\cdot]$ is the indicator function: $\mathbf{1}[{\rm true}] = 1$ and $\mathbf{1}[{\rm false}] = 0$.
We now want to turn the process $(V_t^{\rm dc})_{t \geq 0}$ into the index process $(V_t^{\rm dd})_{t \geq 0}$ that represents a decreasing learning rate $(\eta_n)_{n=1}^\infty$. It is defined through:
$$V_t^{\rm dd} = \sum_{n=1}^\infty y_n\mathbf{1}\left[t \in \left[H_{n-1}, H_n\right) \right] =\sum_{n=1}^\infty y_n\mathbf{1}\left[ \frac{t- H_{n-1}}{\eta_n} \in \left[0, 1\right) \right] \qquad (t \geq 0), $$
where we denote $H_n := \sum_{m = 1}^n \eta_m$.
Hence, we can represent $(V_t^{\rm dd})_{t \geq 0} := (V_{\beta(t)}^{\rm dc})_{t \geq 0}$, where $\beta: [0, \infty) \rightarrow [0, \infty)$ is given by
\begin{equation} \label{eq_beta_and_eta}
\beta(t) = \sum_{n= 1}^\infty \left( n-1 + \frac{t-H_{n-1}}{\eta_n}\right)\mathbf{1}\left[t \in \left[H_{n-1}, H_n\right) \right] \qquad (t \geq 0)
\end{equation}
is a piecewise linear, non-decreasing function with $\beta(t) \rightarrow \infty,$ as $t \rightarrow \infty$.
Following this idea, we turn our homogeneous index process $(V_t)_{t \geq 0}$ that represents a constant learning rate into an inhomogeneous process with decreasing learning rate using a suitable rescaling function $\beta$.
In that case, we obtain a stochastic gradient process of type
\begin{equation*}
\mathrm{d} \xi_t = - \nabla_{\xi} f(\xi_t, V_{\beta(t)}) \mathrm{d}t,
\end{equation*}
which we will use to represent the stochastic gradient descent algorithm with decreasing learning rate.
Note that while we require $\beta$ to satisfy certain conditions that ensure the well-definedness of the dynamical system, it is not strictly necessary for it to be of the form \eqref{eq_beta_and_eta}. Actually, we later assume that $\beta$ is smooth.
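To make the construction concrete, the following Python sketch (illustrative) builds $H_n$ and the time change $\beta$ of \eqref{eq_beta_and_eta} from the learning rates $\eta_n = 1/n$ and checks the identity $V^{\rm dd}_t = V^{\rm dc}_{\beta(t)}$ at a few sample times; note that for a constant learning rate $\eta_n \equiv \eta$ one simply obtains $\beta(t) = t/\eta$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N   = 1000
eta = 1.0/np.arange(1, N + 1)                   # learning rates eta_n = 1/n
H   = np.concatenate(([0.0], np.cumsum(eta)))   # H_0 = 0, H_n = eta_1 + ... + eta_n
y   = rng.uniform(-1.0, 1.0, size=N)            # y_1, y_2, ... ~ pi, i.i.d.

def V_dc(s):          # constant-rate index process: equals y_n on [n-1, n)
    return y[int(s)]

def V_dd(t):          # decreasing-rate index process: equals y_n on [H_{n-1}, H_n)
    n = np.searchsorted(H, t, side='right')
    return y[n - 1]

def beta(t):          # piecewise linear time change
    n = np.searchsorted(H, t, side='right')
    return (n - 1) + (t - H[n - 1])/eta[n - 1]

for t in [0.3, 1.7, 4.2, 6.9]:
    assert V_dd(t) == V_dc(beta(t))
print("identity V^dd_t = V^dc_beta(t) verified at the sample times")
\end{verbatim}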
The main contributions of this work are the following:
\begin{itemize}
\item We study stochastic gradient processes for optimization problems of the form \eqref{Eq:OptProb} with finite, countably infinite, and continuous index sets $S$.
\item We give conditions under which the stochastic gradient process with constant learning rate is well-defined and that it can approximate the full gradient flow $\mathrm{d} \zeta_t = - \nabla \Phi(\zeta_t) \mathrm{d}t$ at any accuracy. In addition, we study the geometric ergodicity of the stochastic gradient process and properties of its stationary measure.
\item We study the well-definedness of the stochastic gradient process with decreasing learning rate and give conditions under which the process converges to the minimizer of $\Phi$ in the optimization problem \eqref{Eq:OptProb}.
\item In numerical experiments, we show the suitability of our stochastic gradient process for (convex) polynomial regression with continuous data and the (non-convex) training of physics-informed neural networks with continuous sampling of function-valued data.
\end{itemize}
This work is organized as follows. In Section~\ref{mainprocess}, we study the index process $(V_t)_{t \geq 0}$ and give examples for various combinations of index spaces $S$ and probability measures $\pi$. Then, in Sections~\ref{Sec_SGPC} and \ref{Sec_SGPD}, we analyze the stochastic gradient process with constant and decreasing learning rate, respectively. In Section~\ref{Sec_PracticalOptimisation}, we review discretization techniques that allow us to turn the continuous dynamical systems into practical optimization algorithms. We employ these techniques in Section~\ref{Sec_NumExp}, where we present numerical experiments regarding polynomial regression and the training of physics-informed neural networks. We end with conclusions and outlook in Section~\ref{Sec_conclusions}.
\section{The index process: Feller processes and geometric ergodicity}\label{mainprocess}
Before we define the stochastic gradient flow, we introduce and study the class of stochastic processes $(V_t)_{t\ge 0}$ that can be used for the data switching in (\ref{Eq_SGPC}). Moreover, we give an overview of appropriate processes for various measures $\pi$. For more background material on (continuous-time) stochastic processes, we refer the reader to the books by \citet{RY} and \citet{Liggett}, and to other standard literature.
Let $\mathcal{S}=(S,m)$ be a compact Polish space and
\begin{align*}
\Omega=\{\omega: [0,\infty)\to S\ |\ \omega\ \text{is right continuous with left limits}\}.
\end{align*}
We consider a filtered probability space $(\Omega, \mathcal{F},(\mathcal{F}_t)_{t\ge 0},(\mathbb{P}_x)_{x\in S})$, where $\mathcal{F}$ is the smallest $\sigma$-algebra on $\Omega$ such that the mapping $\omega \to \omega(t)$ is measurable for any $t\ge 0$ and the filtration $\mathcal{F}_t$ is right continuous. Let $(V_t)_{t\ge 0}$ be an $(\mathcal{F}_t)_{t\ge 0}$-adapted stochastic process from $\Omega$ to $S$. We assume that $(V_t)_{t\ge 0}$ is Feller with respect to $(\mathcal{F}_t)_{t\ge 0}.$ Here, $(\mathbb{P}_x)_{x\in S}$ is a collection of probability measures on $\Omega$ such that $\mathbb{P}_x(V_0=x)=1.$
For any probability measure $\mu$ on $S$, we define
$$
\mathbb{P}_\mu(\cdot):=\int_S \mathbb{P}_y(\cdot)\mu(\mathrm{d}y)
$$
and denote expectations with respect to $\mathbb{P}_x$ and $\mathbb{P}_\mu$ by $\mathbb{E}_x$ and $\mathbb{E}_\mu$, respectively.
Below we give a set of assumptions on the process $(V_t)_{t\ge 0}$. We need those to ensure that a certain coupling property holds. We comment on these assumptions after stating them.
\begin{assumption}\label{as1.0}
Let $(V_t)_{t\ge 0}$ be a Feller process on $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0},(\mathbb{P}_x)_{x\in S})$. We assume the following:
\begin{itemize}
\item[(i)] $(V_t)_{t\ge 0}$ admits a unique invariant measure $\pi$.
\item[(ii)] For any $x\in S$, there exist a family $(V^x_t)_{t\ge 0}$ and a stationary version $(V^\pi_t)_{t\ge 0}$ defined on the same probability space $(\tilde\Omega, \mathcal{\tilde F},\tilde\mathbb{P})$ such that $(V^x_t)_{t\ge 0}\stackrel{d}{=} (V_t)_{t\ge 0}$ in $\mathbb{P}_x$ and $(V^\pi_t)_{t\ge 0}\stackrel{d}{=} (V_t)_{t\ge 0}$ in $\mathbb{P}_\pi$, i.e. for any $0\leq t_1< \cdots< t_n$,
$$\tilde\mathbb{P}(V_{t_1}^x\in A_1, \cdots, V_{t_n}^x \in A_n) = \mathbb{P}_x(V_{t_1}\in A_1, \cdots, V_{t_n} \in A_n),$$
$$\tilde\mathbb{P}(V_{t_1}^\pi\in A_1, \cdots, V_{t_n}^\pi \in A_n) = \mathbb{P}_\pi(V_{t_1}\in A_1, \cdots, V_{t_n} \in A_n),$$
where $A_1, \cdots, A_n \in \mathcal{B}(S)$.
\item[(iii)] Let $T^x:=\inf{\{t\ge 0\ |\ V^x_t=V^\pi_t \}}$ be a stopping time. There exist constants $C, \delta>0$ such that for any $t\geq 0$,
$$
\sup_{x\in S}\tilde\mathbb{P}(T^x\ge t)\le C\exp({-\delta t}).
$$
\end{itemize}
\end{assumption}
First, we assume that $(V_t)_{t\geq 0}$ has a stationary measure $\pi$.
Second, we assume that for the process $(V_t)_{t\ge 0}$ that starts from $x$ with probability $1$, we can find a coupled process $(V^x_t)_{t\ge 0}$. Also, given that the process $(V_t)_{t\ge 0}$ starts with its invariant measure $\pi$, we can find a stationary version $(V^\pi_t)_{t\ge 0}$ of $(V_t)_{t\ge 0}$. Here, the processes $(V^x_t)_{t\ge 0}$ and $(V^\pi_t)_{t\ge 0}$ are defined on the same probability space.
Third, we assume that the processes $(V^x_t)_{t\ge 0}$ and $(V^\pi_t)_{t\ge 0}$ intersect exponentially fast. The exponential rate can be chosen uniformly in $x$ since $S$ is compact.
With Assumption \ref{as1.0}, we have the following lemma.
\begin{lemma}[Geometric Ergodicity]\label{lemma1}
Under Assumption \ref{as1.0}, there exist constants $C, \delta>0$ such that for any $x\in S$ and $t\geq 0$,
\begin{align*}
\sup_{A\in \mathcal{B}(S)} |\mathbb{P}_x(V_t\in A)-\pi(A)|\le C\exp({-\delta t}),
\end{align*}
where $\mathcal{B}(S)$ is the set of all Borel measurable sets of $S$.
\end{lemma}
\begin{proof}
For any given $x\in S$,
we construct the following process by coupling $(V^x_t)_{t\ge 0}$ and $(V^\pi_t)_{t\ge 0}$:
\begin{align*}
\tilde V^x_t=\left\{
\begin{aligned}
&V^x_t,\ \ \ 0\le t\le T^x,\\
&V^\pi_t,\ \ \ t> T^x.
\end{aligned}
\right.
\end{align*}
By the strong Markov property, $(\tilde V^x_t)_{t\ge 0}\stackrel{d}{=} (V^x_t)_{t\ge 0}$. For any $A\in \mathcal{B}(S)$, notice that
\begin{align*}
\abs{\mathbb{P}_x(V_t\in A)-\pi(A)}\\
=& |\tilde \Prb(V^x_t\in A)-\tilde \Prb(V^\pi_t\in A)|\\
=&|\tilde \Prb(\tilde V^x_t\in A)-\tilde \Prb(V^\pi_t\in A)|\\
=& |\tilde \Prb(\tilde V^x_t\in A, \tilde V^x_t\ne V^\pi_t) + \tilde \Prb(\tilde V^x_t\in A, \tilde V^x_t = V^\pi_t) \\
&\ \ \ -(\tilde \Prb(V^\pi_t\in A, \tilde V^x_t\ne V^\pi_t)+\tilde \Prb(V^\pi_t\in A, \tilde V^x_t= V^\pi_t))|\\
=& |\tilde \Prb(\tilde V^x_t\in A, \tilde V^x_t\ne V^\pi_t) -\tilde \Prb(V^\pi_t\in A, \tilde V^x_t\ne V^\pi_t)|\\
\le& 2 \tilde \Prb(\tilde V^x_t\ne V^\pi_t)\\
\le& 2 \tilde \Prb(T^x\ge t)\le C\exp({-\delta t}).
\end{align*}
From the third assumption in Assumption \ref{as1.0}, $C$ and $\delta$ are independent of $x$ and this completes the proof.
\end{proof}
In the lemma above, we have shown geometric ergodicity of $(V_t)_{t \geq 0}$ in the total variation distance. Next, we show that the same rate of convergence of $(V_t)_{t \geq 0}$ holds in the weak topology.
\begin{corollary}\label{corergodic}
Under Assumption \ref{as1.0}, there exist constants $C, \delta>0$ such that for any $h\in \mathcal{C}(S),$ i.e. the set of all continuous functions on $S$, we have
\begin{align*}
\sup_{x\in S}\abs {\mathbb{E}_x[h(V_t)]- \int_S h(y)\pi(\mathrm{d}y)} \le C\norm{h}_{\infty}\exp({-\delta t})
\end{align*}
where $\norm{h}_{\infty}:=\sup_{x\in S}|h(x)|$.
\end{corollary}
\begin{proof}
Rewrite $\mathbb{E}_x[h(V_t)]$ as
\begin{align*}
\mathbb{E}_x[h(V_t)]= \int_S h(y) \mathbb{P}_x(V_t\in \mathrm{d}y).
\end{align*}
Then we have
\begin{align*}
\abs {\mathbb{E}_x[h(V_t)]- \int_S h(y)\pi(\mathrm{d}y)}&=\abs { \int_S h(y) [\mathbb{P}_x(V_t\in \mathrm{d}y)-\pi(\mathrm{d}y)]}\\
&\le \norm{h}_\infty\abs { \mathbb{P}_x(V_t)-\pi}( S),
\end{align*}
where $\frac{1}{2}\abs { \mathbb{P}_x(V_t)-\pi}( S)$ is the total variation of the measure $\mathbb{P}_x(V_t)-\pi.$
Notice that $$\abs { \mathbb{P}_x(V_t)-\pi}( S)=2\sup_{A\in\mathcal{B} (S)} |\mathbb{P}_x(V_t\in A)-\pi(A)|.$$
By Lemma \ref{lemma1}, we have
\begin{align*}
\sup_{x\in S}\abs {\mathbb{E}_x[h(V_t)]- \int_S h(y)\pi(\mathrm{d}y)}\le& \norm{h}_\infty\sup_{x\in S}\abs { \mathbb{P}_x(V_t)-\pi}( S)\\
\le& 2\norm{h}_\infty\sup_{x\in S}\sup_{A\in\mathcal{B}(S)}|\mathbb{P}_x(V_t\in A)-\pi(A)|\\
\le& C\norm{h}_{\infty}\exp({-\delta t}),
\end{align*}
which completes the proof.
\end{proof}
We now study four examples for processes that satisfy our assumptions: L\'evy processes with two-sided reflections on a compact interval, continuous-time Markov processes on finite and countably infinite spaces, and processes on rectangular sets with independent coordinates.
\subsection{Example 1: L\'evy processes with two-sided reflection} \label{Subsec_Ex1_Levy_refle} For any $b>0,$ we say a triplet $((V_t)_{t\ge 0},(L_t)_{t\ge 0},(U_t)_{t\ge 0})$ is a solution to the Skorokhod problem for the L\'evy process $(X_t)_{t \geq 0}$ on the space $S := [0,b]$ if for all $t\geq0,$
\begin{align}\label{BMR}
V_t= X_t +L_t-U_t,
\end{align}
where $(L_t)_{t \geq 0},\ (U_t)_{t \geq 0}$ are non-decreasing right continuous processes such that
\begin{align*}
\int_0^\infty V_t\mathrm{d}L_t=\int_0^\infty (b-V_t)\mathrm{d}U_t=0.
\end{align*}
In other words, $(L_t)_{t \geq 0}$ and $(U_t)_{t \geq 0}$ can only increase when $(V_t)_{t \geq 0}$ is at the lower boundary $0$ or the upper boundary $b$. From \citet[Proposition~5.1]{Andersen1}, we immediately have that the process $(V_t)_{t \geq 0}$ in (\ref{BMR}) satisfies Assumption \ref{as1.0}. The geometric ergodicity follows from \citet[Remark~5.3]{Andersen1}.
As an example, the standard Brownian Motion (BM) reflected at 0 and 1 can be written as
$$V_t = B_t + \tilde{L}_t^0 - \tilde{L}_t^1,$$
where $(B_t)_{t \geq 0}$ is a standard BM and $(\tilde{L}^a_t)_{t \geq 0}$ is the symmetric local time of $(V_t)_{t \geq 0}$ at $a$. Intuitively, the local time measures the amount of time a continuous stochastic process spends in the vicinity of a given point. The formal definition of symmetric local time of continuous semimartingales can be found, for example, in \citet[Chapter~VI]{RY}. For the optimization problem (\ref{Eq:OptProb}) with $S = [0, 1]$ and $\pi$ being the uniform measure on $S$, the corresponding stochastic process in (\ref{Eq_SGPC}) can be chosen to be this Brownian Motion with two-sided reflection since its invariant measure is the uniform measure on $[0, 1]$.
To see this, for $x\in[0, 1]$, from \citet[Theorem~5.4]{Andersen1},
\begin{align*}
\pi([x,1])=\mathbb{P}(B_{\tau_x\land \tau_{x-1}}=x)=\mathbb{P}(\tau_x<\tau_{x-1})=1-x,
\end{align*}
where $\tau_a = \inf\{t\ge0| B_t = a\}$.
\subsection{Example 2: Continuous-time Markov processes}
We consider a continuous-time Markov process $V_t$ on state space $I=\{1,2,\ldots,N\}$ with transition rate matrix
$$\mathbf{A}_N=\mathbf{\Lambda}_N-N\lambda \mathbf{I}_{N},$$
where $\lambda>0$, $\mathbf{\Lambda}_N$ is an $N\times N$ matrix all of whose entries equal $\lambda$, and $\mathbf{I}_{N}$ is the identity matrix.
From \citet{Latz}, we know that the transition probability is given by
\begin{align*}
\mathbb{P}(V_{t+s}=i|V_s=j)=\frac{1-\exp(- \lambda Nt)}{N}+\exp(- \lambda Nt)\mathbf{1}[{i=j}].
\end{align*}
The invariant measure $\pi$ of $V_t$ is the uniform measure on $I$, i.e. $\pi(i)=1/N$ for $i\in \{1,...,N\}$. To see that $(V_t)_{t \geq 0}$ satisfies the rest of Assumption \ref{as1.0}, consider a stationary version $(\hat V_t)_{t\ge 0}$ that is independent of $(V_t)_{t\ge 0}$.
Let $V_0=1$. We define $T=\inf\{t\ge 0, V_t=\hat V_t\}$. Moreover, for $i,\ j\in \mathbb{N}$, we denote the $i$-th and $j$-th jump time of $(V_t)_{t\ge 0}$ and $(\hat V_t)_{t\ge 0}$ by $T_i$ and $\hat T_j$, respectively. Then we have
$$
\mathbb{P}(T=0)=\mathbb{P}(V_0=1,\hat V_0=1)=\mathbb{P}(V_0=1)\mathbb{P}(\hat V_0=1)=\frac{1}{N}.
$$
For any $i,\ j\in \mathbb{N}$, since $T_i$ and $\hat T_j$ are independent, $\mathbb{P}(T_i=\hat T_j)=0$. Let $Y_t=(V_t, \hat V_t)$ be a Markov process on $I\times I$ with transition probability:
\begin{align*}
\mathbb{P}(Y_{t+s}=(i,j)|Y_s=(i_0,j_0))&=\frac{1-\exp(- 2\lambda Nt)}{2N}\Big(\mathbf{1}[{i=i_0}]+\mathbf{1}[{j=j_0}]\Big) \\ &\qquad +\exp(- 2\lambda Nt)\mathbf{1}[{(i,j)=(i_0,j_0)}].
\end{align*}
Thus, $T$ is the first time when $Y_t$ hits $\{(i,i)\,|\,i=1,...,N\}.$ Let the $n$-th jump time of $Y_t$ be $\tau_n$. For $t>0$, we have
\begin{align*}
\mathbb{P}(T\ge t)=& \sum_{n\ge 1}\mathbb{P}(T=\tau_n,\tau_n\ge t)\\
=& \sum_{n\ge 1}\exp({-2(N-1)n\lambda t})\frac{N-1}{N}\frac{1}{N-1}\Big(\frac{N-2}{N-1}\Big)^{n-1}\\
\le& C\exp({-2(N-1)\lambda t}),
\end{align*}
where the second equality follows from $$\mathbb{P}(T=\tau_n)= \frac{N-1}{N}\frac{1}{N-1}\Big(\frac{N-2}{N-1}\Big)^{n-1}$$ since there are $2N-4$ states available for the next jump.
Thus, $(V_t)_{t \geq 0}$ satisfies Assumption~\ref{as1.0}.
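This transition probability is also easy to confirm by direct simulation (our sketch below, with illustrative parameters): due to the structure of $\mathbf{A}_N$, the process can be realized by resampling the state uniformly from $I$ at the event times of a Poisson process with rate $\lambda N$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N, lam, t = 5, 2.0, 0.13            # illustrative parameters
samples   = 100000

def V_at_t(start):
    # resample the state uniformly from {1, ..., N} at Poisson(lam*N) event times
    s, state = 0.0, start
    while True:
        s += rng.exponential(1.0/(lam*N))
        if s > t:
            return state
        state = rng.integers(1, N + 1)

p_mc = np.mean([V_at_t(1) == 1 for _ in range(samples)])
p_th = (1 - np.exp(-lam*N*t))/N + np.exp(-lam*N*t)
print(p_mc, p_th)                   # Monte Carlo estimate vs. closed-form probability
\end{verbatim}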
\subsection{Example 3: Continuous-time Markov processes with countable states}
We consider a continuous-time Markov process $(V_t)_{t \geq 0}$ on state space $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$ with exponential jump times. At time $t$, if $V_t\in\mathbb{N},$ it jumps to $0$ with probability $1$ at the next jump time. Otherwise, if $V_t = 0$, it jumps to $i$ with probability $1/2^i.$
It is easy to verify that the invariant measure $\pi$ of $V_t$ is $\pi(\{i\})=1/2^{i+1}$. One may consider $\mathbb{N}$ as one state and view $V_t$ as a Markov process with two states.
To verify that $(V_t)_{t \geq 0}$ satisfies the rest of Assumption \ref{as1.0}, similarly to the previous example, we consider a stationary version $(\hat V_t)_{t\ge 0}$ that is independent of $(V_t)_{t\ge 0}$.
Let $V_0=0$ and $T=\inf\{t\ge 0, V_t=\hat V_t=0\}$. For $i,\ j\in \mathbb{N}_0$, we denote the $i$-th and $j$-th jump time of $(V_t)_{t\ge 0}$ and $(\hat V_t)_{t\ge 0}$ by $T_i$ and $\hat T_j$, respectively. Then we have
$$
\mathbb{P}(T=0)=\mathbb{P}(V_0=0,\hat V_0=0)=\mathbb{P}(V_0=0)\mathbb{P}(\hat V_0=0)=\frac{1}{2}.
$$
For any $i,\ j\in \mathbb{N}_0$, since $T_i$ and $\hat T_j$ are independent, $\mathbb{P}(T_i=\hat T_j)=0$. Let $Y_t=(V_t, \hat V_t)$ be a Markov process on $\mathbb{N}_0\times \mathbb{N}_0$.
Notice that $T$ is the first time when $Y_t$ hits $(0,0).$ Let the $n$-th jump time of $Y_t$ be $\tau_n$. For $t>0$, we have
\begin{align*}
\mathbb{P}(T\ge t)=& \sum_{n\ge 1}\mathbb{P}(T=\tau_n,\tau_n\ge t)\\
=& \sum_{k\ge 0}\mathbb{P}(T=\tau_{2k+1},\tau_{2k+1}\ge t)\\
=& \sum_{k\ge 0}\exp({-2(2k+1) t})\frac{1}{2^{k+2}}
\le \exp({-t}),
\end{align*}
where the second and the third equality follows from $\mathbb{P}(T=\tau_{2k+1})= 2^{-k-1} $ and $\mathbb{P}(T=\tau_{2k})=0$ for $k\ge 1$. Since $\inf\{t\ge 0, V_t=\hat V_t\}$ is upper bounded by $T$, $(V_t)_{t \geq 0}$ satisfies Assumption~\ref{as1.0}.
\subsection{Example 4: Multidimensional processes}
For multidimensional processes, Assumption \ref{as1.0} is satisfied if each component satisfies Assumption \ref{as1.0} and all components are mutually independent. We illustrate this by discussing the 2-dimensional case -- higher-dimensional processes can be constructed inductively. Multidimensional processes arise, e.g., when the underlying space $S$ is multidimensional. They also arise when the $S$ is one-dimensional, but we run multiple processes in parallel to obtain a mini-batch SGD instead of single-draw SGD.
Let $(S^1,m^1)$ and $(S^2,m^2)$ be two compact Polish spaces. We consider the probability triples $(\Omega^1, (\mathcal{F}^1_t)_{t\ge 0},(\mathbb{P}^1_a)_{a\in S^1})$ and $(\Omega^2, (\mathcal{F}^2_t)_{t\ge 0}, (\mathbb{P}^2_b)_{b\in S^2})$ with $\mathbb{P}^1_a(V^1_0=a)=\mathbb{P}^2_b(V^2_0=b)=1$. Let $(V^1_t)_{t\ge 0}$ and $(V^2_t)_{t\ge 0}$ be $(\mathcal{F}^1_t)_{t\ge 0}$- and $(\mathcal{F}^2_t)_{t\ge 0}$-adapted and map from $\Omega^1$ to $S^1$ and from $\Omega^2$ to $S^2$, respectively.
In the following proposition, we construct a 2-dimensional process $(V^1_t,V^2_t)_{t\ge 0}$ from $\Omega^1\times \Omega^2$ to $(S^1\times S^2,m^1+m^2)$ with a family of probability measures $(\mathbb{P}_{(a,b)})_{(a,b)\in S^1\times S^2}$ such that $\mathbb{P}_{(a,b)}(A\times B)=\mathbb{P}^1_a(A)\mathbb{P}^2_b(B)$ for $A\in \mathcal{F}^1$ and $B\in \mathcal{F}^2$.
We now show that the joint process $(V^1_t,V^2_t)_{t\ge 0}$ is Feller and satisfies Assumption \ref{as1.0}, if the marginals do.
\begin{prop}
Let $(V^1_t)_{t\ge 0}$ and $(V^2_t)_{t\ge 0}$ be c\`adl\`ag and Feller with respect to $(\mathcal{F}^1_t)_{t\ge 0}$ and $(\mathcal{F}^2_t)_{t\ge 0},$ respectively, and satisfy Assumption \ref{as1.0} with probability measures $(\mathbb{P}^1_a)_{a\in S^1}$ and $(\mathbb{P}^2_b)_{b\in S^2}$, respectively. Then $(V^1_t,V^2_t)_{t\ge 0}$ is also c\`adl\`ag and Feller with respect to $\sigma(\mathcal{F}^1_t\times\mathcal{F}^2_t)_{t\ge 0}$ and satisfies Assumption \ref{as1.0} with $(\mathbb{P}_{(a,b)})_{(a,b)\in S^1\times S^2}$.
\end{prop}
\begin{proof}
It is obvious that the process $(V^1_t,V^2_t)_{t\ge 0}$ is c\`adl\`ag and Markovian. To verify the Feller property, we show that for any continuous function $F$ on $S^1\times S^2$, $\mathbb{E}_{(x,y)}[F(V^1_t,V^2_t)]$ is continuous in $(x, y)$. We shall prove this by showing this property for separable $F$ and approximate general continuous functions using this special case. Let $f$ and $g$ be continuous functions on $S^1$ and $S^2$ respectively, then we have
\begin{align}\label{app1}
\mathbb{E}_{(x,y)}[f(V^1_t)g(V^2_t)]=\mathbb{E}^1_x[f(V^1_t)]\mathbb{E}^2_y[g(V^2_t)],
\end{align}
which implies $\mathbb{E}_{(x,y)}[f(V^1_t)g(V^2_t)]$ is continuous in $(x,y)$ since $(V_t^1)_{t\ge 0}$ and $(V_t^2)_{t\ge 0}$ are Feller. By the Stone–Weierstrass theorem, for any $k\ge1$, any continuous function $F$ on $S^1\times S^2$ can be approximated as the following,
\begin{align*}
\sup_{(x,y)\in S^1\times S^2}\abs{F(x,y)-\sum_{i=1}^{n_k}f_i^k(x)g_i^k(y)}\le \frac{1}{k}
\end{align*}
where $f_i^k$ and $g_i^k$ are continuous.
From (\ref{app1}), this implies $\mathbb{E}_{(x,y)}[F(V^1_t,V^2_t)]$ is continuous on $S^1\times S^2$.
Next, we prove that $(V^1_t,V^2_t)_{t\ge 0}$ satisfies Assumption \ref{as1.0}.
Let $\pi^1$ and $\pi^2$ be the invariant measures of $(V^1_t)_{t\ge 0}$ and $(V_t^2)_{t\ge 0}$, respectively. Then $\pi^1\times \pi^2$ is the invariant measure of $(V^1_t,V^2_t)_{t\ge 0}$ since $(V^1_t)_{t\ge 0}$ and $(V^2_t)_{t\ge 0}$ are independent.
From Assumption \ref{as1.0}, we know there exist $(\tilde\Omega^1, \mathcal{\tilde F}^1,\tilde \Prb^1),$ $(\tilde\Omega^2, \mathcal{\tilde F}^2,\tilde \Prb^2),$ such that for any $a\in S^1$ and $b\in S^2$ , $(V^{1,a}_t)_{t\ge 0}\stackrel{d}{=} (V^1_t)_{t\ge 0}$ in $\mathbb{P}^1_a$ and $(V^{2,b}_t)_{t\ge 0}\stackrel{d}{=} (V^2_t)_{t\ge 0}$ in $\mathbb{P}^2_b$. We define $\tilde \Prb$ on $\tilde\Omega^1\times \tilde\Omega^2$ such that
$$
\tilde \Prb(A\times B)=\tilde \Prb^1(A)\tilde \Prb^2(B), \ \ (A\in \mathcal{\tilde F}^1, \ B\in \mathcal{\tilde F}^2).
$$
Then we have that $((V^{1,a}_t)_{t\ge 0})_{a\in S^1}$ and $(V^{1,\pi^1}_t)_{t\ge 0}$ are independent of $((V^{2,b}_t)_{t\ge 0})_{b\in S^2}$ and $(V^{2,\pi^2}_t)_{t\ge 0}$ under $\tilde \Prb.$
Similar to the proof of Lemma \ref{lemma1}, we construct the following processes by the coupling method:
\begin{align*}
\tilde V^{1,a}_t=\left\{
\begin{aligned}
&V^{1,a}_t,\ \ \ 0\le t\le T^{1,a},\\
&V^{1,\pi^1}_t,\ \ \ t> T^{1,a},
\end{aligned}
\right.
\end{align*}
and
\begin{align*}
\tilde V^{2,b}_t=\left\{
\begin{aligned}
&V^{2,b}_t,\ \ \ 0\le t\le T^{2,b},\\
&V^{2,\pi^2}_t,\ \ \ t> T^{2,b}.
\end{aligned}
\right.
\end{align*}
Then the distribution of $(\tilde V^{1,a}_t,\tilde V^{2,b}_t)_{t\ge 0}$ under $\tilde \Prb$ is the same as the distribution of $(V^1_t,V^2_t)_{t\ge 0}$ under $\mathbb{P}_{(a,b)};$
the distribution of $(V^{1,\pi^1}_t, V^{2,\pi^2}_t)_{t\ge 0}$ under $\tilde \Prb$ is the same as the distribution of $(V^1_t,V^2_t)_{t\ge 0}$ under $\mathbb{P}_{\pi^1\times\pi^2}.$ Moreover, $(\tilde V^{1,a}_t,\tilde V^{2,b}_t)_{t\ge 0}$ intersects the invariant state $(V^{1,\pi^1}_t, V^{2,\pi^2}_t)_{t\ge 0}$ at time $T^{1,a}\lor T^{2,b}.$ For any $(a,b)\in S^1\times S^2$,
$$
\tilde \Prb(T^{1,a}\lor T^{2,b}\ge t)\le\tilde \Prb(T^{1,a}\ge t)+ \tilde \Prb(T^{2,b}\ge t)\le C\exp({-\delta t}).
$$
\end{proof}
We have now discussed various properties of potential index processes and move on to study the stochastic gradient process.
\section{Stochastic gradient processes with constant learning rate} \label{Sec_SGPC}
We now define and study the stochastic gradient process with constant learning rate.
Here, the switching between data sets is performed in a homogeneous-in-time way. Hence, it models the discrete-time stochastic gradient descent algorithm when employed with a constant learning rate. Although one can usually not hope to converge to the minimizer of the target functional in this case, this setting is popular in practice.
To obtain the stochastic gradient process with constant learning rate, we will couple the gradient flow \eqref{Eq_SGPC} with an appropriate process $(V_{t/\varepsilon})_{t\ge0}$. Here, $(V_t)_{t\ge0}$ is a Feller process introduced in Section \ref{mainprocess} and $\varepsilon > 0$ is a scaling parameter that allows us to control the switching speed of the index process. To define the stochastic process associated with this stochastic gradient descent problem, we first introduce the following assumptions that guarantee the existence and uniqueness of the solution of the associated stochastic differential equation. After its formal definition and the proof of well-definedness, we move on to the analysis of the process. Indeed, we show that the process approximates the full gradient flow \eqref{eq:AS:th}, as $\varepsilon \downarrow 0$. Moreover, we show that the process has a unique stationary measure to which it converges in the long-time limit at geometric speed.
We commence with regularity properties of the subsampled target function $f$ that are necessary to show the well-definedness of the stochastic gradient process.
\begin{assumption}\label{asSGPf}
Let $f \in \mathcal{C}^2(\mathbb{R}^K\times S,\mathbb{R})$. We assume the following:
\begin{enumerate}
\item $\nabla_{\theta} f$ and $H_\theta f$ are continuous.
\item $\nabla_{\theta} f(\theta,y)$ is Lipschitz in $\theta$, with a Lipschitz constant that is uniform in $y\in S$.
\item For $\theta\in \mathbb{R}^K$, the functions $f(\theta,\cdot)$ and $\nabla_{\theta} f(\theta,\cdot)$ are integrable with respect to the probability measure $\pi$.
\end{enumerate}
\end{assumption}
Now, we move on to the formal definition of the stochastic gradient process.
\begin{defi}
For $\varepsilon>0$, the \emph{stochastic gradient process with} \emph{constant learning rate} \emph{(SGPC)} is a solution of the following stochastic differential equation,
\begin{equation}\label{eq:AS:theta}
\left\{ \begin{array}{l}
\mathrm{d}\theta^\varepsilon_t = - \nabla_{\theta} f(\theta^\varepsilon_t, V_{ t/\varepsilon})\mathrm{d}t, \\
\theta_0^\varepsilon = \theta_0,
\end{array} \right.
\end{equation}
where $f$ satisfies Assumption \ref{asSGPf} and $(V_t)_{t\ge0}$ is a Feller process that satisfies Assumption \ref{as1.0}.
\end{defi}
Given these two assumptions, we can indeed show that the SGPC is well-defined. Moreover, we show that the coupled process $(\theta_t^\varepsilon,V_{t/\varepsilon})_{t\ge 0}$ is Markovian, a property it shares with the discrete-time stochastic gradient descent method.
\begin{prop}\label{thwk30} Let Assumptions \ref{as1.0} and \ref{asSGPf} hold. Then, equation (\ref{eq:AS:theta}) has a unique strong solution, i.e. the solution $(\theta^\varepsilon_t)_{t \geq 0}$ is measurable with respect to $\mathcal{F}^\varepsilon_t:=\mathcal{F}_{t/\varepsilon}$ for any $t\ge 0$. For $y\in S$, $(\theta_t^\varepsilon,V_{t/\varepsilon})_{t\ge 0} $ is a Markov process under $\mathbb{P}_y$ with respect to $(\mathcal{F}^\varepsilon_t)_{t\ge 0}$.
\end{prop}
\begin{proof}
The existence and the uniqueness of the strong solution to the equation (\ref{eq:AS:theta}) can be found in \citet[Chapter 2, Theorem 4.1]{Kushner1}.
To prove the Markov property, we define the operator $(Q^\varepsilon_t)_{t\ge 0}$ such that
$$
Q^\varepsilon_t h(x,y):=\mathbb{E}_y[h(\theta^\varepsilon_t,V_{t/\varepsilon})|\theta^\varepsilon_0=x ],
$$
for any function $h$ bounded and measurable on $\mathbb{R}^K\times S$. For any $ s,t\ge 0$, we want to show
\begin{align*}
\mathbb{E}[h(\theta^\varepsilon_{t+s},V_{(t+s)/\varepsilon})|\mathcal{F}^\varepsilon_s]=Q^\varepsilon_t h(\theta^\varepsilon_s,V_{s/\varepsilon}).
\end{align*}
We set
$ \hat \theta^\varepsilon_t:=\theta^\varepsilon_{t+s},\ \mathcal{\hat F}_t:=\mathcal{F}^\varepsilon_{t+s},\ \hat V_{t/\varepsilon}:=V_{(t+s)/\varepsilon}.$
Since
\begin{align*}
\theta^\varepsilon_{t+s}=\theta^\varepsilon_s-\int_s^{t+s}\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon}) \mathrm{d}m,
\end{align*}
we have
\begin{align*}
\hat\theta^\varepsilon_t=\hat\theta^\varepsilon_0-\int_0^t\nabla_{\theta} f(\hat\theta^\varepsilon_m, \hat V_{m/\varepsilon}) \mathrm{d}m.
\end{align*}
Hence $\hat\theta^\varepsilon_t$ is the solution of equation (\ref{eq:AS:theta}) with $\hat\theta^\varepsilon_0=\theta^\varepsilon_s$ and $ \hat V_0=V_{ s/ \varepsilon}.$
Moreover,
\begin{align*}
\mathbb{E}[h(\theta^\varepsilon_{t+s},V_{(t+s)/\varepsilon})|\mathcal{F}^\varepsilon_s]&=\mathbb{E}[h(\hat \theta^\varepsilon_t,\hat V_{t/\varepsilon})|\hat \theta^\varepsilon_0=\theta^\varepsilon_s,\hat V_0=V_{s/\varepsilon}]\\
&=\mathbb{E}_{\hat V_0}[h(\hat \theta^\varepsilon_t,\hat V_{t/\varepsilon})|\hat \theta^\varepsilon_0=\theta^\varepsilon_s]\\
&= Q^\varepsilon_t h(\theta^\varepsilon_s,V_{s/\varepsilon}),
\end{align*}
where the second and third equalities follow from the homogeneous Markov property of $(V_{t/\varepsilon})_{t\ge 0}$.
\end{proof}
\subsection{Approximation of the full gradient flow} \label{Subsec:ApproxFullGradientFlow}
We now let $\varepsilon\to0$ and study the limiting behavior of SGPC. Indeed, we aim to show that here the SGPC converges to the \emph{full gradient flow}
\begin{equation}\label{eq:AS:th}
\mathrm{d}\zeta_t = -\Big[\int_{S} \nabla_{\zeta} f(\zeta_t, v)\pi(\mathrm{d}v)\Big]\mathrm{d}t.
\end{equation}
We study this topic for two reasons: First, we aim to understand the interdependence of $(V_t)_{t\ge0}$ and $(\theta_t^{\varepsilon})_{t\ge0}$. Second, we understand SGPC as an approximation to the full gradient flow \eqref{eq:AS:th}, as motivated in the introduction. Hence, we should show that SGPC can approximate the full gradient flow at any accuracy.
We now denote $g(\cdot):=\int_{S} \nabla_{\zeta} f(\cdot, v)\pi(\mathrm{d}v)\in \mathcal{C}^1(\mathbb{R}^K,\mathbb{R}^K).$ Then, we can define $(\zeta_t)_{t\ge 0}$ through the dynamical system $\mathrm{d}\zeta_t = - g(\zeta_t)\mathrm{d}t$. Moreover, let $\mathcal{C}([0,\infty):\mathbb{R}^K)$ be the space of continuous functions from $[0,\infty)$ to $\mathbb{R}^K$ equipped with the distance
$$
\rho\Big((\varphi_t)_{t\ge 0},(\varphi'_t)_{t\ge 0}\Big):= \int_0^\infty \exp({-t}) (1\land\sup_{0\le s\le t}\norm{\varphi_s-\varphi_s'})\mathrm{d}t,
$$
where $(\varphi_t)_{t\ge 0},(\varphi'_t)_{t\ge 0} \in \mathcal{C}([0,\infty):\mathbb{R}^K)$. We study the weak limit of the system (\ref{eq:AS:theta}) as $\varepsilon\to0$. Similar problems have been discussed in, for example, \citet{Kushner1} and \citet{Kushner2}.
\begin{theorem}\label{wcovtheta}
Let $\theta^\varepsilon_0=\theta_0$ and $\zeta_0=\theta_0$. Moreover, let $(\theta^\varepsilon_t)_{t\ge 0}$ and $(\zeta_t)_{t\ge 0}$ solve (\ref{eq:AS:theta}) and (\ref{eq:AS:th}), respectively. Then $(\theta^\varepsilon_t)_{t\ge 0}$ under $\mathbb{P}_\pi$ converges weakly to $(\zeta_t)_{t\ge 0}$ in $\mathcal{C}([0,\infty):\mathbb{R}^K)$ as $\varepsilon\to 0$, i.e.
for any bounded continuous function $F$ on $\mathcal{C}([0,\infty):\mathbb{R}^K)$,
$$\mathbb{E}_\pi [F\big((\theta^\varepsilon_t)_{t\ge 0}\big)] \to \mathbb{E}_\pi [F\big((\zeta_t)_{t\ge 0}\big)]=F\big((\zeta_t)_{t\ge 0}\big).$$
\end{theorem}
\begin{proof}
We shall verify that $(\theta^\varepsilon_t)_{t\ge 0}$ is tight by checking:
\begin{align*}
1.&\ \sup_{0<\varepsilon<1}\norm{\theta^\varepsilon_0}<+\infty;\\
2.&\ \textit{For any fixed } T>0,\ \lim_{\delta\to 0} \sup_{0<\varepsilon<1}\sup_{s,t\in[0,T],|s-t|\le \delta}\norm{\theta^\varepsilon_t-\theta^\varepsilon_s}= 0.
\end{align*}
The first condition follows from $\theta^\varepsilon_0=\theta_0$. For the second condition, by Assumption \ref{asSGPf}, let $C_0=\sup_{y\in S}\norm{\nabla_{\theta} f(0, y)}$ and $L_f$ be the Lipschitz constant of $\nabla_{\theta} f(\cdot, y)$, we have
\begin{align*}
\frac{\mathrm{d}\norm{\theta^\varepsilon_t}^2}{\mathrm{d}t}
&=-2 \ip{\theta^\varepsilon_t,\nabla_{\theta} f(\theta^\varepsilon_t, V_{t/\varepsilon})}\\
&=-2 \ip{\theta^\varepsilon_t,\nabla_{\theta} f(\theta^\varepsilon_t, V_{t/\varepsilon})-\nabla_{\theta} f(0, V_{t/\varepsilon})}-2\ip{\theta^\varepsilon_t,\nabla_{\theta} f(0, V_{t/\varepsilon})}\\
&\le 2L_f \norm{\theta^\varepsilon_t}^2+ 2C_0\norm{\theta^\varepsilon_t}\\
&\le 2L_f \norm{\theta^\varepsilon_t}^2+ \norm{\theta^\varepsilon_t}^2+C^2_0\\
&= (2L_f+1) \norm{\theta^\varepsilon_t}^2+C^2_0.
\end{align*}
By Gr\"onwall's inequality,
\begin{align}
\norm{\theta^\varepsilon_t}^2\le (\norm{\theta_0}^2+C^2_0)e^{(2L_f+1) t}.
\end{align}
Therefore, $\theta^\varepsilon_t$ is bounded on any finite time interval.
For any fixed $T>0,$ let $$C_{T,f,\theta_0}=\sup_{\norm{x}\le (\norm{\theta_0}^2+C^2_0)e^{(2L_f+1) T},\ y\in S }\norm{\nabla_{\theta} f(x, y)}.$$
Then for any $s,t\in[0,T]$,
\begin{align*}
\norm{\theta^\varepsilon_t-\theta^\varepsilon_s}\le \int_s^t\norm{\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})} \mathrm{d}m \le C_{T,f,\theta_0} |t-s|.
\end{align*}
Hence, $(\theta^\varepsilon_t)_{t\ge 0}$ is tight in $\mathcal{C}([0,\infty):\mathbb{R}^K)$. By Prokhorov's theorem, let $(\theta_t)_{t\ge 0}$ be a weak limit of $(\theta^\varepsilon_t)_{t\ge 0}$. We shall verify that $(\theta_t)_{t\ge 0}$ satisfies equation (\ref{eq:AS:th}), which is equivalent to showing that for any bounded differentiable functions $\varphi$ and $h$,
\begin{align*}
\mathbb{E}_\pi\Big[\Big(\varphi(\theta_t)- \varphi(\theta_s)+\int_s^t\ip{\nabla_{\theta}\varphi(\theta_m),g(\theta_m)}\mathrm{d}m\Big)h\Big((\theta_{t_i})_{i=1,...,n}\Big)\Big]=0,
\end{align*}
$\forall\ 0\leq t_1 < \cdots < t_n\leq s.$ The case $t=0$ is obvious. Since $(\theta^\varepsilon_t)_{t\ge 0}$ is a strong solution to equation (\ref{eq:AS:theta}), for any $0\le s<t$,
\begin{align}\label{net}
\varphi(\theta^\varepsilon_t)= \varphi(\theta^\varepsilon_s)-\int_s^t\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})}\mathrm{d}m.
\end{align}
Hence, we have
\begin{align*}
\mathbb{E}_\pi\Big[\Big(\varphi(\theta^\varepsilon_t)- \varphi(\theta^\varepsilon_s)+\int_s^t\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})}\mathrm{d}m\Big)h\Big((\theta^\varepsilon_{t_i})_{i=1,...,n}\Big)\Big]=0.
\end{align*}
Moreover, when $\varepsilon\to 0,$
\begin{align*}
\mathbb{E}_\pi\Big[\Big(\varphi(\theta^\varepsilon_t)- \varphi(\theta^\varepsilon_s)\Big)h\Big((\theta^\varepsilon_{t_i})_{i=1,...,n}\Big)\Big]\to \mathbb{E}_\pi\Big[\Big(\varphi(\theta_t)- \varphi(\theta_s)\Big)h\Big((\theta_{t_i})_{i=1,...,n}\Big)\Big].
\end{align*}
Hence, all we need to show is the following
\begin{align}
\mathbb{E}_\pi\Big[\Big(\int_s^t\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})}\mathrm{d}m-\int_s^t\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),g(\theta^\varepsilon_m)}\mathrm{d}m\Big)h\Big((\theta^\varepsilon_{t_i})_{i=1,...,n}\Big)\Big]\to 0,
\end{align}
which is equivalent to proving that
\begin{align}
\mathbb{E}_\pi\Big[\int_s^t\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})}\mathrm{d}m-\int_s^t\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),g(\theta^\varepsilon_m)}\mathrm{d}m\Big|\mathcal{F}^\varepsilon_s\Big]\to 0. \label{important}
\end{align}
Let $\tilde \e:= 1/[1/\sqrt{\varepsilon}]$, where $[x]$ is the greatest integer less than or equal to $x$. Then we have the following decomposition
\begin{align*}
&\mathbb{E}_\pi\Big[\int_s^t\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})}\mathrm{d}m-\int_s^t\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),g(\theta^\varepsilon_m)}\mathrm{d}m\Big|\mathcal{F}^\varepsilon_s\Big]\\
=& \tilde \e\sum^{1/\tilde \e-1}_{i=0}\tilde \e^{-1}\mathbb{E}_\pi\Big[\int_{s+i(t-s)\tilde \e}^{s+(i+1)(t-s)\tilde \e}\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})-g(\theta^\varepsilon_m)}\mathrm{d}m\Big|\mathcal{F}^\varepsilon_s\Big]\\
=&\tilde \e\sum^{1/\tilde \e-1}_{i=0}\mathbb{E}_\pi\Big[\tilde \e^{-1}\mathbb{E}_\pi\Big[\int_{s+i(t-s)\tilde \e}^{s+(i+1)(t-s)\tilde \e}\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_m),\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})-g(\theta^\varepsilon_m)}\mathrm{d}m\Big|\mathcal{F}^\varepsilon_{s+i(t-s)\tilde \e}\Big]\Big|\mathcal{F}^\varepsilon_s\Big].
\end{align*}
We claim that as $\varepsilon\to 0$,
\begin{align}\label{correct}
\sup_{0\le r<t} \tilde \e^{-1}\mathbb{E}_\pi\Big[\int_r^{r+(t-s)\tilde \e}G(\theta^\varepsilon_m,V_{m/\varepsilon})\mathrm{d}m\Big|\mathcal{F}^\varepsilon_r\Big]\to 0,
\end{align}
where $G(x,y):=\ip{\nabla_{\theta}\varphi(x),\nabla_{\theta} f(x, y)-g(x)}$.
Notice that for any fixed $t>0$, $(\theta^\varepsilon_s)_{ 0\le s\le t}$ is uniformly equicontinuous. Hence, we have
\begin{align*}
\sup_{0\le r\le t} \sup_{r\le m\le r+(t-s)\tilde \e}\norm{\theta^\varepsilon_m-\theta^\varepsilon_r}&\le \sup_{0\le r\le t} \int_r^{r+(t-s)\tilde \e}\norm{\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})} \mathrm{d}m\\
&\le (t-s)\tilde \e \sup_{0\le m\le t} \norm{\nabla_{\theta} f(\theta^\varepsilon_m, V_{m/\varepsilon})}.
\end{align*}
Therefore, as $\varepsilon \to 0,$
\begin{align*}
\sup_{0\le r<t} \abs{ \tilde \e^{-1}\mathbb{E}_\pi\Big[\int_r^{r+(t-s)\tilde \e}G(\theta^\varepsilon_m,V_{m/\varepsilon})\mathrm{d}m\Big|\mathcal{F}^\varepsilon_r\Big]-\tilde \e^{-1}\mathbb{E}_\pi\Big[\int_r^{r+(t-s)\tilde \e}G(\theta^\varepsilon_r,V_{m/\varepsilon})\mathrm{d}m\Big|\mathcal{F}^\varepsilon_r\Big]}\to 0.
\end{align*}
Hence, (\ref{correct}) is equivalent to
\begin{align}\label{alsocorrect}
\sup_{0\le r<t} \abs{ \tilde \e^{-1}\mathbb{E}_\pi\Big[\int_r^{r+(t-s)\tilde \e}G(\theta^\varepsilon_r,V_{m/\varepsilon})\mathrm{d}m\Big|\mathcal{F}^\varepsilon_r\Big]}\to 0.
\end{align}
By Corollary \ref{corergodic},
\begin{align*}
&\sup_{0\le r<t} \abs{ \tilde \e^{-1}\mathbb{E}_\pi\Big[\int_r^{r+(t-s)\tilde \e}G(\theta^\varepsilon_r,V_{m/\varepsilon})\mathrm{d}m\Big|\mathcal{F}^\varepsilon_r\Big]}\\
=& \sup_{0\le r<t} \abs{ \tilde \e^{-1}\mathbb{E}_{V_{r/\varepsilon},x=\theta^\varepsilon_r}\Big[\int_r^{r+(t-s)\tilde \e}G(x,V_{(m-r)/\varepsilon})\mathrm{d}m\Big]}\\
=& \sup_{0\le r<t} \abs{ \tilde \e^{-1}\mathbb{E}_{V_{r/\varepsilon},x=\theta^\varepsilon_r}\Big[\int_r^{r+(t-s)\tilde \e}\ip{\nabla_{\theta}\varphi(x),\nabla_{\theta} f(x, V_{(m-r)/\varepsilon})-g(x)}\mathrm{d}m\Big]}\\
=& \sup_{0\le r<t} \abs{ \tilde \e^{-1}\int_r^{r+(t-s)\tilde \e}\mathbb{E}_{V_{r/\varepsilon},x=\theta^\varepsilon_r}\Big[\ip{\nabla_{\theta}\varphi(x),\nabla_{\theta} f(x, V_{(m-r)/\varepsilon})-g(x)}\Big]\mathrm{d}m}\\
\le& \sup_{0\le r<t} \tilde \e^{-1}\int_r^{r+(t-s)\tilde \e}\abs{\mathbb{E}_{V_{r/\varepsilon},x=\theta^\varepsilon_r}\Big[\ip{\nabla_{\theta}\varphi(x),\nabla_{\theta} f(x, V_{(m-r)/\varepsilon})-g(x)}\Big]}\mathrm{d}m\\
\le & \sup_{0\le r<t}\tilde \e^{-1}\norm{\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_r),\nabla_{\theta} f(\theta^\varepsilon_r, \cdot)}}_\infty \int_r^{r+(t-s)\tilde \e} e^{-\delta (m-r)/\varepsilon}\mathrm{d}m\\
=& \sup_{0\le r<t}\tilde \e^{-1}\norm{\ip{\nabla_{\theta}\varphi(\theta^\varepsilon_r),\nabla_{\theta} f(\theta^\varepsilon_r, \cdot)}}_\infty \int_0^{(t-s)\tilde \e} e^{-\delta k/\varepsilon}\mathrm{d}k\\
\le & C_{t,\varphi,f}\frac{\varepsilon}{\delta\tilde \e}\le C_{t,\varphi,f}\frac{\sqrt{\varepsilon}}{\delta}\to0.
\end{align*}
This completes the proof of (\ref{important}).
Hence, any weak limit of $(\theta^\varepsilon_t)_{t\ge 0}$ is a martingale solution to equation (\ref{eq:AS:th}). Since equation (\ref{eq:AS:th}) is a deterministic ordinary differential equation with locally Lipschitz right-hand side (recall that $g\in\mathcal{C}^1$), its solution with initial value $\theta_0$ is unique. As $\theta^\varepsilon_0=\theta_0$ is independent of $\varepsilon$, we conclude that
$(\theta^\varepsilon_t)_{t\ge 0}$ converges weakly to $(\zeta_t)_{t\ge 0}$ as $\varepsilon\to 0$.
\end{proof}
Instead of looking at the full trajectories of the processes, we can also study their distributions and show convergence in the Wasserstein distance. We first need to introduce some notation.
Let $\nu$ and $\nu'$ be two probability measures on $(\mathbb{R}^K, \mathcal{B}(\mathbb{R}^K) )$. We define the Wasserstein distance between those measures by
\[
{{\mathcal W}}_d(\nu,\nu') = \inf_{\Gamma\in\mathcal{H}(\nu,\nu') }\int_{\mathbb{R}^K\times\mathbb{R}^K}d(y,y')\Gamma(\mathrm{d}y,\mathrm{d}y'),
\]
where $d(y,y'):=1\land\norm{y-y'}$ and $\mathcal{H}(\nu,\nu')$ is the set of couplings between $\nu$ and $\nu'$, i.e.
$$\mathcal{H}(\nu,\nu') = \{
\Gamma \in \text{Pr} (\mathbb{R}^K\times\mathbb{R}^K) :
\Gamma(A \times \mathbb{R}^K) = \nu(A),
\Gamma(\mathbb{R}^K \times B) = \nu'(B), \forall
A, B \in \mathcal{B}(\mathbb{R}^K)
\}.$$
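As a side remark, for empirical measures with the same number of atoms and uniform weights, ${{\mathcal W}}_d$ reduces to an optimal assignment problem (by Birkhoff's theorem, the optimal coupling can then be chosen as a permutation). The following Python sketch, which is only meant as an illustration under these simplifying assumptions, estimates ${{\mathcal W}}_d$ from two samples of equal size.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def wasserstein_d(X, Y):
    """Estimate W_d between two uniform empirical measures.

    X, Y: arrays of shape (n, K), interpreted as n atoms of weight 1/n.
    The ground metric is the truncated distance d(y, y') = min(1, ||y - y'||);
    the optimal coupling is computed exactly as an optimal assignment.
    """
    cost = np.minimum(1.0, cdist(X, Y))          # matrix of d(X_i, Y_j)
    row, col = linear_sum_assignment(cost)
    return float(cost[row, col].mean())
\end{verbatim}
For large sample sizes, an entropically regularized solver would scale better; the exact assignment suffices for illustration purposes.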
To simplify the notation, for $B\in \mathcal{B}(\mathbb{R}^K)$, $\theta\in \mathbb{R}^K$, and $y\in S$, we denote
$$C^\varepsilon_t(B|\theta, y):=\mathbb{P}_y(\theta^\varepsilon_t\in B|\theta^\varepsilon_0=\theta),$$ $$C^\varepsilon_t(B|\theta,\pi):=\mathbb{P}_\pi(\theta^\varepsilon_t\in B|\theta^\varepsilon_0=\theta),$$
where $\pi$ is the invariant measure of $(V_t)_{t \geq 0}$.
Now we study the approximation property of SGPC in the Wasserstein distance. Indeed, the following corollary follows immediately from Theorem \ref{wcovtheta}.
\begin {corollary}\label{deuxthe}
There exists a function $\alpha: (0,1) \to [0,1],$ such that
$$
{{\mathcal W}}_d(C^\varepsilon_t(\cdot|\theta_0,\pi),\delta(\cdot-\zeta_t))\le (\exp(t)\alpha(\varepsilon))\land 1.
$$
Moreover, $\lim_{\varepsilon\to 0}\alpha(\varepsilon)=0$.
\end {corollary}
\begin{proof}
By Theorem \ref{wcovtheta}, we have $(\theta^\varepsilon_t)_{t\ge 0}\Rightarrow(\zeta_t)_{t\ge 0}$. By Skorokhod's representation theorem, there exists a sequence $(\tilde\theta^\varepsilon_t)_{t\ge 0}$ such that
\begin{align*}
&(\tilde\theta^\varepsilon_t)_{t\ge 0}\overset{d}{=}(\theta^\varepsilon_t)_{t\ge 0}\ \text{under}\ \mathbb{P}_\pi,\\
&\rho\Big((\tilde\theta^\varepsilon_t-\zeta_t)_{t\ge 0},0\Big)\to 0\ \text{almost\ surely\ in}\ \mathbb{P}_\pi.
\end{align*}
This implies
\begin{align*}
\mathbb{E}_\pi[F((\tilde\theta^\varepsilon_t-\zeta_t)_{t\ge 0})]\to F(0),
\end{align*}
for any bounded continuous function $F$ on $\mathcal{C}([0,\infty):\mathbb{R}^K)$. By taking
$$F\big((\tilde\theta^\varepsilon_t-\zeta_t)_{t\ge 0}\big)= \sup_{t\ge 0} \exp({-t}) \left(1\land\sup_{0\le s\le t}\norm{\tilde\theta^\varepsilon_s-\zeta_s}\right)$$
and
\begin{align*}
\alpha(\varepsilon):=\mathbb{E}_\pi\Big[\sup_{t\ge 0} \exp({-t}) \left(1\land\sup_{0\le s\le t}\norm{\tilde\theta^\varepsilon_s-\zeta_s}\right)\Big]\to 0,
\end{align*}
we have, for all $t\ge 0$,
\begin{align*}
\mathbb{E}_\pi\Big[1\land\norm{\tilde\theta^\varepsilon_t-\zeta_t}\Big]\le \exp(t) \alpha(\varepsilon).
\end{align*}
Since $1\land\norm{\tilde\theta^\varepsilon_t-\zeta_t}\le 1$ and $\tilde\theta^\varepsilon_t \overset{d}{=} \theta^\varepsilon_t$, denoting the distribution of $\tilde\theta^\varepsilon_t$ as $F_{\tilde\theta^\varepsilon_t}$,
\begin{align*}
{{\mathcal W}}_d(C^\varepsilon_t(\cdot|\theta_0,\pi),\delta(\cdot-\zeta_t))\le& {{\mathcal W}}_d(F_{\tilde\theta^\varepsilon_t}\ ,C^\varepsilon_t(\cdot|\theta_0,\pi))+\mathbb{E}_\pi\Big[1\land\norm{\tilde\theta^\varepsilon_t-\zeta_t}\Big] \\
\le& (\exp(t)\alpha(\varepsilon))\land 1.
\end{align*}
\end{proof}
Finally in this section, we look at a technical result concerning the asymptotic behavior of the full gradient flow $(\zeta_t)_{t \geq 0}.$ First, we will additionally assume that the subsampled target function $f(\cdot, y)$ in the optimization problem is strongly convex, with a convexity parameter that does not depend on $y \in S$. We state this assumption below.
\begin{assumption}[Strong Convexity]\label{as1.2}
For any $x_1,x_2\in \mathbb{R}^K,$
$$
\ip{x_1-x_2,\nabla_{\theta} f(x_1, y)-\nabla_{\theta} f(x_2, y)}\ge \kappa\norm{x_1-x_2}^2
$$
where $\kappa>0$ and $\kappa$ is independent of $y\in S$.
\end{assumption}
Strong convexity implies, of course, that the full target function $\Phi$ has a unique minimizer $\theta_*$, which is the unique zero of its gradient $g = \int_S \nabla_{\theta}f(\cdot, y) \pi(\mathrm{d}y)$. It also implies that the associated full gradient flow $(\zeta_t)_{t \geq 0}$ converges at exponential speed to this unique minimizer. We give a short proof of this statement below.
\begin{lemma}\label{norandomthe}
Let $(\zeta_t)_{t\ge 0}$ be the process that solves (\ref{eq:AS:th}) with initial data $\theta_0$. Under Assumption \ref{as1.2}, we have
\begin{align*}
\norm{\zeta_t-\theta_*}^2\le \norm{\theta_0-\theta_*}^2\exp({-\kappa t}),
\end{align*}
where $\theta_*$ is a stationary solution of (\ref{eq:AS:th}).
\end{lemma}
\begin{proof}
Since $\theta_*$ is a stationary solution, $$g(\theta_*)=0\ \ \text{and}\ \ \mathrm{d}(\zeta_t-\theta_*) = -(g(\zeta_t)-g(\theta_*))\mathrm{d}t.$$
Therefore,
\begin{align*}
\frac{\mathrm{d}\norm{\zeta_t-\theta_*}^2}{\mathrm{d}t} = 2\ip{\zeta_t-\theta_*, \frac{\mathrm{d}(\zeta_t-\theta_*)}{\mathrm{d}t}} =-2 \ip{\zeta_t-\theta_*,g(\zeta_t)-g(\theta_*)}\le -2\kappa \norm{\zeta_t-\theta_*}^2.
\end{align*}
By Gr\"onwall's inequality,
\begin{align*}
\norm{\zeta_t-\theta_*}^2\le \norm{\theta_0-\theta_*}^2\exp({-2\kappa t})\le \norm{\theta_0-\theta_*}^2\exp({-\kappa t}).
\end{align*}
\end{proof}
\subsection{Longtime behavior and ergodicity} We now study the longtime behavior of SGPC, i.e. the behavior and distribution of $(\theta_t^\varepsilon, V_{t/\varepsilon})$ for $t \gg 0$ large.
Indeed, the main result of this section will be the geometric ergodicity of this coupled process and a study of its stationary measure.
Initially, we study stability of the stochastic gradient process $(\theta^{\varepsilon}_t)_{t \geq 0}.$
\begin{lemma}\label{boundedtheta}
Under Assumption \ref{as1.2}, we have
$$
\norm{\theta^\varepsilon_t}^2\le \norm{\theta^\varepsilon_0}^2 \exp({-\kappa t})+\frac{8K_f^2 }{\kappa^2 },
$$
where $K_f:= \sup_{y\in S} \norm{\nabla_{\theta} f(0,y)}$.
\end{lemma}
\begin{proof}
By It\^o's formula, we have
\begin{align}\label{gronvxi}
\frac{\mathrm{d}\norm{\theta^\varepsilon_t}^2}{\mathrm{d}t} =
2\ip{\theta^\varepsilon_t, d\theta^\varepsilon_t/dt}
=-2 \ip{\theta^\varepsilon_t,\nabla_{\theta} f(\theta^\varepsilon_t, V_{t/\varepsilon})} .
\end{align}
By Assumption \ref{as1.2},
\begin{align*}
\ip{\theta^\varepsilon_t,\nabla_{\theta} f(\theta^\varepsilon_t, V_{t/\varepsilon})}=& \ip{\theta^\varepsilon_t-0,\nabla_{\theta} f(\theta^\varepsilon_t, V_{t/\varepsilon})-\nabla_{\theta} f(0, V_{t/\varepsilon})}+\ip{\theta^\varepsilon_t,\nabla_{\theta} f(0, V_{t/\varepsilon})}\\
\ge& \kappa \norm{\theta^\varepsilon_t}^2-\norm{\theta^\varepsilon_t}\norm{\nabla_{\theta} f(0,V_{t/\varepsilon})}\\
\ge& \frac{\kappa}{2} \norm{\theta^\varepsilon_t}^2- \frac{4}{\kappa }\norm{\nabla_{\theta} f(0, V_{t/\varepsilon})}^2\\
\ge& \frac{\kappa}{2} \norm{\theta^\varepsilon_t}^2- \frac{4K_f^2}{\kappa }.
\end{align*}
Hence (\ref{gronvxi}) implies
\begin{align}\label{gronvxi1}
\frac{\mathrm{d}\norm{\theta^\varepsilon_t}^2}{\mathrm{d}t} \le -\kappa \norm{\theta^\varepsilon_t}^2+ \frac{8K_f^2}{\kappa }.
\end{align}
Multiplying $\exp({\kappa t})$ on both sides of (\ref{gronvxi1}), we get
\begin{align*}
\frac{\mathrm{d}(\norm{\theta^\varepsilon_t}^2\exp({\kappa t}))}{\mathrm{d}t} \le \frac{8K_f^2 \exp({\kappa t})}{\kappa },
\end{align*}
that is
\begin{align*}
\norm{\theta^\varepsilon_t}^2\exp({\kappa t})-\norm{\theta^\varepsilon_0}^2\le \frac{8K_f^2 (\exp({\kappa t})-1)}{\kappa^2 }\le \frac{8K_f^2 \exp({\kappa t})}{\kappa^2 }.
\end{align*}
Therefore,
$$
\norm{\theta^\varepsilon_t}^2\le \norm{\theta^\varepsilon_0}^2 \exp({-\kappa t})+\frac{8K_f^2 }{\kappa^2 }.
$$
\end{proof}
Using this lemma, we are now able to prove the first main result of this section, showing geometric ergodicity of $(\theta^\varepsilon_t,V_{t/\varepsilon})_{t \geq 0}.$ First, we introduce a Wasserstein distance on the space on which $(\theta^\varepsilon_t,V_{t/\varepsilon})_{t \geq 0}$ lives.
Let $\Pi$ and $\Pi'$ be two probability measures on $(\mathbb{R}^K\times S, \mathcal{B}(\mathbb{R}^K\times S) )$. We define the Wasserstein distance between those measures by
\[
\widetilde{{\mathcal W}}_{\tilde d}(\Pi,\Pi') = \inf_{\widetilde{\Gamma}\in\mathcal{H}(\Pi,\Pi') }\int_{(\mathbb{R}^K\times S)\times(\mathbb{R}^K\times S)}\tilde d((u,v),(u',v'))\widetilde{\Gamma}(\mathrm{d}u\mathrm{d}v,\mathrm{d}u'\mathrm{d}v'),
\]
where $\tilde d((u,v),(u',v')):=\boldsymbol{1}_{v\ne v'}+(1\land\norm{u-u'})\boldsymbol{1}_{v=v'}$. For $a\in S$ and $m\in\mathbb{R}^K$, let $H^\varepsilon_t(\cdot|m,a)$ be the distribution of $(\theta^\varepsilon_t,V_{t/\varepsilon})$ under $\mathbb{P}_a$ with $\theta^\varepsilon_0=m$.
Moreover, recall that $(V_t)_{t\ge 0}$ is a Feller process that satisfies Assumption \ref{as1.0}. More specifically, it satisfies Assumption \ref{as1.0} (iii) with a constant $\delta$:
$$
\sup_{x\in S}\tilde\mathbb{P}(T^x\ge t)\le C\exp({-\delta t}),
$$
where $T^x:=\inf{\{t\ge 0\ |\ V^x_t=V^\pi_t \}}$. With the constant $\delta$ defined this way, we have the following theorem.
\begin{theorem}\label{dispitheta}
Under Assumption \ref{as1.2}, for any $0< \varepsilon\le 1\land ({\delta}/{2\kappa})$, the (coupled) process $(\theta_t^\varepsilon,V_{t/\varepsilon})_{t\ge 0} $ admits a unique stationary
measure $\Pi^\varepsilon$ on $(\mathbb{R}^K\times S, \mathcal{B}(\mathbb{R}^K\times S)).$ Moreover,
\begin{align}\label{ineq2.6}
\widetilde{{\mathcal W}}^2_{\tilde d}(H^\varepsilon_t(\cdot|m,a),\Pi^\varepsilon)\le C_f\exp({-\kappa t})\int_{\mathbb{R}^K}(1+ \norm{x-m}^2)\Pi^\varepsilon(\mathrm{d}x,S),
\end{align}
\begin{align}\label{ineq:w_pi}
\widetilde{{\mathcal W}}^2_{\tilde d}(H^\varepsilon_t(\cdot|m,\pi),\Pi^\varepsilon)\le C_f\exp({-\kappa t})\int_{\mathbb{R}^K} \norm{x-m}^2\Pi^\varepsilon(\mathrm{d}x,S),
\end{align}
where the constant $C_f$ only depends on $f$.
\end{theorem}
\begin{proof} To obtain the existence of the invariant measure, we apply the weak form of Harris' Theorem in \citet[Theorem 3.7]{Martin} by verifying the Lyapunov condition, the $\tilde{d}$-contracting condition, and the $\tilde{d}$-small condition.
\noindent(i) {\bf Lyapunov condition:} Let $V(x,y)=\norm{x}^2$. To verify that it satisfies (2.1) in \citet[Definition 2.1]{Martin}, we take $x=\theta^\varepsilon_0$ and $(P_t)_{t \geq 0}$ to be the semi-group associated with the coupled process $(\theta^\varepsilon_t, V_{t/\varepsilon})_{t \geq 0}$. Then Lemma \ref{boundedtheta} yields that $V(x,y)$ is a Lyapunov function. The existence of a Lyapunov function ensures that the coupled process does not escape to infinity.
\noindent(ii) {\bf$\tilde d$-contracting condition:} The $\tilde d$-contracting condition states that there exists $t^*>0$ such that, for any $t>t^*$, there exists some $\alpha<1$ such that
$$ \widetilde{{\mathcal W}}_{\tilde d}(\delta_{(m,a)}P^\varepsilon_t, \delta_{(n,b)}P^\varepsilon_t)\leq \alpha \tilde d ((m,a), (n,b))$$
for any $(m, a), (n, b)\in \mathbb{R}^K\times S$ such that $\tilde d ((m,a), (n,b))<1$. Here, $(P^\varepsilon_t)_{t \geq 0}$ is the semi-group operator associated with (\ref{eq:AS:theta}). Notice that $\tilde d((m,a),(n,b))<1$ implies $a=b$. Let $ (\theta^{(m,a)}_t)_{t \geq 0},\ (\theta^{(n,a)}_t)_{t \geq 0}$ solve the following equations:
\begin{align*}
\theta^{(m,a)}_t= m-\int_0^t \nabla_{\theta} f(\theta^{(m,a)}_s, \tilde V^a_{ s/\varepsilon}) \mathrm{d}s,\\
\theta^{(n,a)}_t= n-\int_0^t \nabla_{\theta} f(\theta^{(n,a)}_s, \tilde V^a_{s/\varepsilon}) \mathrm{d}s,
\end{align*}
where $\tilde V^a_t= V^a_t$ for $t\le T^a$ and $\tilde V^a_t= V^\pi_t$ for $t>T^a.$
Then by It\^o's formula and Assumption \ref{as1.2},
\begin{align*}
\mathrm{d}\norm{\theta^{(m,a)}_t-\theta^{(n,a)}_t}^2/\mathrm{d}t=&-2\ip{\theta^{(m,a)}_t-\theta^{(n,a)}_t,\nabla_{\theta} f(\theta^{(m,a)}_t, \tilde V^a_{ t/\varepsilon})-\nabla_{\theta} f(\theta^{(n,a)}_t, \tilde V^a_{t/\varepsilon})}\\
\le& -\kappa\norm{\theta^{(m,a)}_t-\theta^{(n,a)}_t}^2.
\end{align*}
By Gr\"onwall's inequality,
\begin{equation}\label{eq:aa}
\norm{\theta^{(m,a)}_t-\theta^{(n,a)}_t}^2\le \exp({-\kappa t})\norm{m-n}^2.
\end{equation}
Noticing that $\tilde d((m,a),(n,b))<1$ implies $\norm{m-n}^2<1$, by choosing $t\ge \frac{1}{\kappa},$ we obtain
\begin{align*}
\norm{\theta^{(m,a)}_t-\theta^{(n,a)}_t}^2 &\le \exp({-1})\norm{m-n}^2 \\ &= \exp({-1})(\norm{m-n}^2\land 1)=\exp({-1})\tilde{d}^2((m,a),(n,a)).
\end{align*}
Therefore, with $t^* = 1/\kappa$,
\begin{align*}
\widetilde{{\mathcal W}}_{\tilde d}(H^\varepsilon_t(\cdot|m,a),H^\varepsilon_t(\cdot|n,b))\le \tilde \E\Big[\norm{\theta^{(m,a)}_t-\theta^{(n,a)}_t}\Big]\le \exp({-1/2})\tilde d((m,a),(n,b)).
\end{align*}
\noindent(iii) {\bf $\tilde d$-small condition:} We shall verify that there exists $t_*>0$ such that for any $t>t_*$, the sublevel set $\mathscr{V}:=\{(x, y)\in \mathbb{R}^K\times S\ |\ V(x, y)\leq {32K_f^2 }/{\kappa^2 }\}$ is $\tilde d$-small for $(P^\varepsilon_t)_{t \geq 0}$, meaning that there exists a constant $\zeta$ such that
$$ \widetilde{{\mathcal W}}_{\tilde d}(\delta_{(m,a)}P^\varepsilon_t, \delta_{(n,b)}P^\varepsilon_t)\leq 1-\zeta,$$
for all $(m, a), (n, b)\in \mathscr{V}$. Let $ (\theta^{(m,a)}_t)_{t \geq 0},\ (\theta^{(n,b)}_t)_{t \geq 0}$ solve the following equations:
\begin{align*}
\theta^{(m,a)}_t= m-\int_0^t \nabla_{\theta} f(\theta^{(m,a)}_s, \tilde V^a_{ s/\varepsilon}) \mathrm{d}s,\\
\theta^{(n,b)}_t= n-\int_0^t \nabla_{\theta} f(\theta^{(n,b)}_s, \tilde V^b_{s/\varepsilon}) \mathrm{d}s,
\end{align*}
where $\tilde V^a_t= V^a_t$ for $t\le T^a$ and $\tilde V^a_t= V^\pi_t$ for $t>T^a$; $\tilde V^b_t= V^b_t$ for $t\le T^b$ and $\tilde V^b_t= V^\pi_t$ for $t>T^b$.
By It\^o's formula, Assumption \ref{as1.2}, and the $\varepsilon$-Young inequality,
\begin{align*}
&\mathrm{d}\norm{\theta^{(m,a)}_t-\theta^{(n,b)}_t}^2/\mathrm{d}t\\
=&-2\ip{\theta^{(m,a)}_t-\theta^{(n,b)}_t,\nabla_{\theta} f(\theta^{(m,a)}_t, \tilde V^a_{t/\varepsilon})-\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^b_{ t/\varepsilon})}\\
=& -2\ip{\theta^{(m,a)}_t-\theta^{(n,b)}_t,\nabla_{\theta} f(\theta^{(m,a)}_t, \tilde V^a_{t/\varepsilon})-\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^a_{ t/\varepsilon})}\\
&\ \ -2\ip{\theta^{(m,a)}_t-\theta^{(n,b)}_t,\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^a_{ t/\varepsilon})-\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^b_{ t/\varepsilon})}\\
\le& -2\kappa \norm{\theta^{(m,a)}_t-\theta^{(n,b)}_t}^2+2\abs{\ip{\theta^{(m,a)}_t-\theta^{(n,b)}_t,\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^a_{ t/\varepsilon})-\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^b_{t/\varepsilon})}}\\
\le& -\kappa \norm{\theta^{(m,a)}_t-\theta^{(n,b)}_t}^2+\frac{4}{\kappa}\norm{\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^a_{ t/\varepsilon})-\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^b_{ t/\varepsilon})}^2.
\end{align*}
Multiplying $\exp({\kappa t})$ on both sides, we obtain
\begin{align*}
\mathrm{d}\Big(\exp({\kappa t})\norm{\theta^{(m,a)}_t-\theta^{(n,b)}_t}^2\Big)/\mathrm{d}t
\le \frac{4\exp({\kappa t})}{\kappa}\norm{\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^a_{ t/\varepsilon})-\nabla_{\theta} f(\theta^{(n,b)}_t, \tilde V^b_{t/\varepsilon})}^2,
\end{align*}
that is
\begin{align*}
\exp({\kappa t})\norm{\theta^{(m,a)}_t-\theta^{(n,b)}_t}^2 &\le \norm{m-n}^2 \\&\qquad+ \frac{4}{\kappa}\int_0^t \exp({\kappa s})\norm{\nabla_{\theta} f(\theta^{(n,b)}_s, \tilde V^a_{ s/\varepsilon})-\nabla_{\theta} f(\theta^{(n,b)}_s, \tilde V^b_{ s/\varepsilon})}^2\mathrm{d}s.
\end{align*}
Notice that $\tilde V^a_t= \tilde V^b_t$ if $t>T^a\lor T^b$. Hence, we have
\begin{align*}
&\exp({\kappa t})\norm{\theta^{(m,a)}_t-\theta^{(n,b)}_t}^2 \\ & \qquad \le \norm{m-n}^2+\frac{4}{\kappa}\int_0^{t\land \varepsilon (T^a\lor T^b)} \exp({\kappa s})\norm{\nabla_{\theta} f(\theta^{(n,b)}_s, \tilde V^a_{ s/\varepsilon})-\nabla_{\theta} f(\theta^{(n,b)}_s, \tilde V^b_{ s/\varepsilon})}^2\mathrm{d}s.
\end{align*}
For $(m,a), (n,b)\in \mathscr{V}$, by Lemma \ref{boundedtheta}, we have that $$\norm{\nabla_{\theta} f(\theta^{(m,a)}_t,\tilde V^a_{ t/\varepsilon})} \text{ and }\norm{\nabla_{\theta} f(\theta^{(n,b)}_t,\tilde V^b_{ t/\varepsilon})}$$ are bounded by some constant $C_f.$
This implies
\begin{align}\label{eqprop1}
\norm{\theta^{(m,a)}_t-\theta^{(n,b)}_t}^2\le \exp({-\kappa t})\norm{m-n}^2+4 C_f \exp({-\kappa t}) \exp({\kappa \varepsilon (T^a\lor T^b)}).
\end{align}
Recall that $0< \varepsilon\le 1\land \frac{\delta}{2\kappa}$. By Assumption \ref{as1.0} (iii),
\begin{align*}
\tilde \E [\exp({\kappa \varepsilon (T^a\lor T^b)})]=&1+\kappa \varepsilon\int_0^\infty \exp({\kappa \varepsilon x})\tilde \Prb(T^a\lor T^b\ge x)\mathrm{d}x\\
\le& 1+2C\kappa \varepsilon\int_0^\infty \exp({\kappa \varepsilon x})\exp({-\delta x})\mathrm{d}x\\
\le& 1+C\delta\int_0^\infty \exp\left({-\frac{\delta x}{2} }\right)\mathrm{d}x= 1+2C.
\end{align*}
Therefore,
\begin{align*}
\tilde \E \Big[\norm{\theta^{(m,a)}_t-\theta^{(n,b)}_t}^2\Big]\le \exp({-\kappa t})\norm{m-n}^2+ C_f \exp({-\kappa t}).
\end{align*}
Moreover,
$$
\tilde \Prb(\tilde V^a_{t/\varepsilon}\ne \tilde V^b_{t/\varepsilon})\le\tilde \Prb(T^a\lor T^b\ge {t}/{\varepsilon})\le 2C\exp({-\delta {t}/{\varepsilon}}) \le 2C\exp({-2\kappa t}).
$$
Hence for any $(m,a),(n,b)\in\mathscr{V}$, by taking $t\ge \frac{1}{\kappa}[\log(8C_f)+\log({512K_f^2 }/{\kappa^2 })+\frac{1}{2}\log(8C)],$ we have
$$
\widetilde{{\mathcal W}}^2_{\tilde d}(H^\varepsilon_t(\cdot|m,a),H^\varepsilon_t(\cdot|n,b))\le \tilde \Prb(\tilde V^a_{t/\varepsilon}\ne \tilde V^b_{t/\varepsilon})+\tilde \E \Big[\norm{\theta^{(m,a)}_t-\theta^{(n,b)}_t}^2\Big]\le \frac{1}{2}.
$$
We have verified all three conditions from \citet[Theorem 3.7]{Martin} and hence conclude the existence and uniqueness of the invariant measure of $(\theta_t^\varepsilon,V_{t/\varepsilon})_{t\ge 0}$ denoted as $\Pi^\varepsilon$.
Next, we are going to prove (\ref{ineq2.6}) and \eqref{ineq:w_pi}. Define $\Theta^\varepsilon$ such that $( \Theta^\varepsilon, \tilde V^\pi_0)\sim \Pi^\varepsilon$ in $\tilde \Prb.$ Let $\theta^{\Pi^\varepsilon}_t$ and $\theta^{(m,\pi)}_t$ solve the following equations
\begin{align*}
\theta^{\Pi^\varepsilon}_t= \Theta^\varepsilon-\int_0^t \nabla_{\theta} f(\theta^{\Pi^\varepsilon}_s, \tilde V^\pi_{s/\varepsilon}) \mathrm{d}s,\\
\theta^{(m,\pi)}_t= m-\int_0^t \nabla_{\theta} f(\theta^{(m,\pi)}_s, \tilde V^\pi_{s/\varepsilon}) \mathrm{d}s.
\end{align*}
Recall that from (\ref{eqprop1}) and (\ref{eq:aa}), we have
\begin{align*}
\norm{\theta^{(m,a)}_t-\theta^{\Pi^\varepsilon}_t}^2\le& \exp({-\kappa t})\norm{m-\Theta^\varepsilon}^2+4 C_f \exp({-\kappa t}) \exp({\kappa \varepsilon T^a}),\\
\norm{\theta^{(m,\pi)}_t-\theta^{\Pi^\varepsilon}_t}^2\le& \exp({-\kappa t})\norm{m-\Theta^\varepsilon}^2.
\end{align*}
Therefore, by Assumption \ref{as1.0} (iii) and $0< \varepsilon\le 1\land ({\delta}/{2\kappa})$,
\begin{align*}
\widetilde{{\mathcal W}}^2_{\tilde d}(H^\varepsilon_t(\cdot|m,a),\Pi^\varepsilon)\le& \tilde \Prb(\tilde V^a_{t/\varepsilon}\ne \tilde V^\pi_{t/\varepsilon})+\tilde \E \Big[\norm{\theta^{(m,a)}_t-\theta^{\Pi^\varepsilon}_t}^2\Big]\\
\le& \tilde \Prb(T^a\ge \frac{t}{\varepsilon})+ \exp({-\kappa t})\tilde \E[\norm{m-\Theta^\varepsilon}^2]+4 C_f \exp({-\kappa t}) \tilde \E[\exp({\kappa \varepsilon T^a})]\\
\le& C_f\exp({-\kappa t})\int_{\mathbb{R}^K}(1+ \norm{x-m}^2)\Pi^\varepsilon(\mathrm{d}x,S),
\end{align*}
\begin{align*}
\widetilde{{\mathcal W}}^2_{\tilde d}(H^\varepsilon_t(\cdot|m,\pi),\Pi^\varepsilon)\le& \tilde \E \Big[\norm{\theta^{(m,\pi)}_t-\theta^{\Pi^\varepsilon}_t}^2\Big]\\
\le& \exp({-\kappa t})\tilde \E[\norm{m-\Theta^\varepsilon}^2]\\
\le& C_f\exp({-\kappa t})\int_{\mathbb{R}^K} \norm{x-m}^2\Pi^\varepsilon(\mathrm{d}x,S).
\end{align*}
\end{proof}
In the following corollaries, we study the integrals on the right-hand side of the inequalities in Theorem~\ref{dispitheta}. Moreover, we show that the result above immediately implies not only geometric ergodicity of the coupled process $(\theta^{\varepsilon}_t, V_{t/\varepsilon})_{t \geq 0}$, but also of its marginal, the stochastic gradient process $(\theta^{\varepsilon}_t)_{t \geq 0}$.
\begin{corollary}\label{cor:w_conv}
Under the same assumptions as Theorem \ref{dispitheta}, there exists a constant
$C_{f,m}$ that depends only on $f$ and the initial value $m=\theta_0^\varepsilon$, such that
\begin{align}\label{disthepi}
\widetilde{{\mathcal W}}_{\tilde d}(H^\varepsilon_t(\cdot|m,a),\Pi^\varepsilon)\le C_{f,m}\exp\left({-\frac{\kappa t}{2} }\right),
\end{align}
\begin{align}\label{disthepi2}
\widetilde{{\mathcal W}}_{\tilde d}(H^\varepsilon_t(\cdot|m,\pi),\Pi^\varepsilon)\le C_{f,m}\exp\left({-\frac{\kappa t}{2} }\right),
\end{align}
\begin{align}\label{distheta}
{{\mathcal W}}_d(C^\varepsilon_t(\cdot|m,a),\Pi^\varepsilon(\cdot,S))\le C_{f,m}\exp\left({-\frac{\kappa t}{2} }\right),
\end{align}
\begin{align}\label{distheta1}
{{\mathcal W}}_d(C^\varepsilon_t(\cdot|m,\pi),\Pi^\varepsilon(\cdot,S))\le C_{f,m}\exp\left({-\frac{\kappa t}{2} }\right).
\end{align}
\end{corollary}
\begin{proof}
By Lemma \ref{boundedtheta},
$$
\int_{\norm{x}^2\ge\frac{8K_f^2 }{\kappa^2 }+\norm{m}^2+1 }\Pi^\varepsilon(\mathrm{d}x,S)\le\lim_{t\to \infty} \tilde \Prb\Big(\norm{\theta^{(m,a)}_t}^2\ge \frac{8K_f^2 }{\kappa^2 }+\norm{m}^2+1 \Big)=0.
$$
Let $C_{f,m}=C_f(2\norm{m}+\frac{4K_f }{\kappa }+2)$. From (\ref{ineq2.6}), we have
\begin{align*}
\widetilde{{\mathcal W}}_{\tilde d}(H^\varepsilon_t(\cdot|m,a),\Pi^\varepsilon)\le& C_f\exp\left({\frac{-\kappa t}{2}}\right)\Big(\int_{\mathbb{R}^K}(1+ \norm{x-m}^2)\Pi^\varepsilon(\mathrm{d}x,S)\Big)^{1/2}\\
=&C_f\exp\left({\frac{-\kappa t}{2}}\right)\Big(\int_{\norm{x}^2\le\frac{8K_f^2 }{\kappa^2 }+\norm{m}^2+1} (1+\norm{x-m}^2)\Pi^\varepsilon(\mathrm{d}x,S)\Big)^{1/2}\\
\le& C_{f,m}\exp\left({\frac{-\kappa t}{2}}\right).
\end{align*}
(\ref{disthepi2}) can be derived similarly from (\ref{ineq:w_pi}). Moreover, notice that
\begin{align}\label{eqs:api}
{{\mathcal W}}_d(C^\varepsilon_t(\cdot|m,a),\Pi^\varepsilon(\cdot,S))\le& \tilde \E \Big[\norm{\theta^{(m,a)}_t-\theta^{\Pi^\varepsilon}_t}\Big],
\end{align}
\begin{align}\label{eqs:pipi}
{{\mathcal W}}_d(C^\varepsilon_t(\cdot|m,\pi),\Pi^\varepsilon(\cdot,S))\le& \tilde \E \Big[\norm{\theta^{(m,\pi)}_t-\theta^{\Pi^\varepsilon}_t}\Big].
\end{align}
(\ref{distheta}) and (\ref{distheta1}) can be derived similarly from (\ref{eqs:api}) and (\ref{eqs:pipi}).
\end{proof}
Combining (\ref{distheta}) and (\ref{distheta1}), we immediately have:
\begin{corollary}\label{cor3}
Under the same assumptions as Theorem \ref{dispitheta},
\begin{align*}
{{\mathcal W}}_d(C^\varepsilon_t(\cdot|m,a),C^\varepsilon_t(\cdot|m,\pi))\le C_{f,m}\exp\left({-\frac{\kappa t}{2} }\right).
\end{align*}
\end{corollary}
So far, we have shown that the stochastic gradient process $(\theta^\varepsilon_t)_{t \geq 0}$ converges to a unique stationary measure $\Pi^\varepsilon(\cdot, S)$. It is often not possible to determine this stationary measure. However, we can comment on its asymptotic behavior as $\varepsilon \rightarrow 0$. Indeed, we will show that $\Pi^\varepsilon(\cdot, S)$ concentrates around the minimizer $\theta_*$ of the full target function.
\begin{prop}\label{dispixing}
Under Assumption \ref{as1.2}, the
measure $\Pi^\varepsilon(\cdot,S)$ on $(\mathbb{R}^K, \mathcal{B}(\mathbb{R}^K))$ approximates $\delta(\cdot-\theta_*).$ In other words, we have
\begin{align*}
{{\mathcal W}}_d(\Pi^\varepsilon(\cdot,S),\delta(\cdot-\theta_*))\le \rho(\varepsilon)
\end{align*}
where $\rho: (0,1) \rightarrow [0,1]$ and $\lim_{\varepsilon\to 0}\rho(\varepsilon)=0.$
\end{prop}
\begin{proof}
By the triangle inequality,
\begin{align*}
&{{\mathcal W}}_d(\Pi^\varepsilon(\cdot,S),\delta(\cdot-\theta_*))\nonumber\\
\le& {{\mathcal W}}_d(\Pi^\varepsilon(\cdot,S),C^\varepsilon_t(\cdot|m,\pi))
+{{\mathcal W}}_d(C^\varepsilon_t(\cdot|m,\pi),\delta(\cdot-\zeta_t))+ {{\mathcal W}}_d(\delta(\cdot-\zeta_t),\delta(\cdot-\theta_*)).
\end{align*}
Let $\theta_0=\theta^\varepsilon_0=\theta_*.$ Then by Lemma \ref{norandomthe}, the last term satisfies
$$
{{\mathcal W}}_d(\delta(\cdot-\zeta_t),\delta(\cdot-\theta_*))\le \norm{\theta_*-\theta_*}\exp({-\kappa t/2})=0.
$$
By (\ref{distheta1}) and Corollary \ref{deuxthe}, for any $t\ge 0,$
\begin{align*}
{{\mathcal W}}_d(\Pi^\varepsilon(\cdot,S),C^\varepsilon_t(\cdot|m,\pi))+{{\mathcal W}}_d(C^\varepsilon_t(\cdot|m,\pi),\delta(\cdot-\zeta_t))
\le C_{f,\theta_*}\exp\left({-\frac{\kappa t}{2}}\right)+(\exp(t)\alpha(\varepsilon))\land 1.
\end{align*}
By choosing $t=-\log({{1\land\alpha(\varepsilon)}})/2,$ we get
\begin{align*}
{{\mathcal W}}_d(\Pi^\varepsilon(\cdot,S),\delta(\cdot-\theta_*))\le& C_{f,\theta_*}\exp\left({-\frac{\kappa t}{2}}\right)+(\exp(t)\alpha(\varepsilon))\land 1\\
\le& C_{f,\theta_*}(\alpha(\varepsilon))^\frac{\kappa}{4}+(\alpha(\varepsilon))^{1/2}.
\end{align*}
Taking $\rho(\varepsilon):=\big(C_{f,\theta_*}(\alpha(\varepsilon))^\frac{\kappa}{4}+(\alpha(\varepsilon))^{1/2}\big)\land 1$ completes the proof.
\end{proof}
\section{Stochastic gradient processes with decreasing learning rate} \label{Sec_SGPD}
Constant learning rates are popular in some practical situations, but the associated stochastic gradient process usually does not converge to the minimizer of $\Phi$. This is also true for the discrete-time stochastic gradient descent algorithm. However, SGD can converge to the minimizer if the learning rate is decreased over time. In the following, we discuss a decreasing-learning-rate version of the stochastic gradient process and show that this dynamical system indeed converges to the minimizer of $\Phi$.
As discussed in Section~\ref{Subsec_Intro_thisWork}, we obtain the stochastic gradient process with decreasing learning rate by non-linearly rescaling the time in the constant-learning-rate index process $(V_t)_{t \geq 0}$. Indeed, we choose a function $\beta:[0, \infty) \rightarrow [0,\infty)$ and then define the decreasing learning rate index process by $(V_{\beta(t)})_{t \geq 0}$. We have discussed an intuitive way to construct a rescaling function $\beta$ also in Section~\ref{Subsec_Intro_thisWork}.
In the following, we define $\beta$ through an integral $\beta(t) = \int_0^t \mu(s) \mathrm{d}s,$ $t \geq 0$. We commence this section with necessary growth conditions on $\mu$ which allow us to then give the formal definition of the stochastic gradient process with decreasing learning rate. Then, we study the longtime behavior of this process.
\begin{assumption}\label{asmu}
Let $\mu:[0,\infty)\to (0,\infty)$ be a non-decreasing continuously differentiable function with $\lim_{t\to\infty} \mu(t)=\infty$ and
\begin{align*}
\lim_{t\to\infty}\frac{\mu'(t)t}{\mu(t)}=0.
\end{align*}
\end{assumption}
Assumption \ref{asmu} implies that $\mu$ goes to infinity, but at a very slow pace. Indeed, it implies that $\lim_{t\to\infty}\mu(t)/t^\gamma=0$ for every $\gamma>0$, that is, $\mu(t)$ grows more slowly than any polynomial.
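A prototypical example is a logarithmically growing rate: for $\mu(t) = \log(e+t)$ we have
\begin{align*}
\frac{\mu'(t)t}{\mu(t)} = \frac{t}{(e+t)\log(e+t)} \to 0, \qquad \beta(t) = \int_0^t \log(e+s)\,\mathrm{d}s = (e+t)\log(e+t)-t-e,
\end{align*}
so Assumption \ref{asmu} is satisfied, and the associated rate $1/\mu(t) = 1/\log(e+t)$, which will play the role of the learning rate below, decays to zero only logarithmically.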
\begin{defi}
The \emph{stochastic gradient process with decreasing learning rate (SGPD)} is a solution of the following stochastic differential equation,
\begin{equation}\label{eq:AS:xi}
\left\{ \begin{array}{l}
\mathrm{d}\xi_t = - \nabla_\xi f(\xi_t, V_{\beta{(t)}})\mathrm{d}t, \\
\xi_0 = \theta_0,
\end{array} \right.
\end{equation}
where $f$ satisfies Assumption \ref{asSGPf}, $(V_t)_{t\ge0}$ is a Feller process that satisfies Assumption \ref{as1.0}, and $\beta(t)=\int_0^t \mu(s)\mathrm{d}s$ with $\mu$ satisfying Assumption \ref{asmu}.
\end{defi}
To see that $(\xi_t)_{t \geq 0}$ is well-defined, consider the following: $t \mapsto \beta(t)$ is an increasing continuous function. Thus, $(V_{\beta(t)})_{t \geq 0}$ is c\`adl\`ag and Feller with respect to $(\mathcal{F}_{\beta(t)})_{t \geq 0}$. We then obtain well-definedness of $(\xi_t)_{t \geq 0}$ by replacing $(V_{t/\varepsilon})_{t \geq 0}$ by $(V_{\beta(t)})_{t \geq 0}$ in the proof of Proposition \ref{thwk30}.
We now move on to studying the longtime behavior of the SGPD $(\xi_t)_{t \geq 0}$. In a first technical result, we establish a connection between SGPD $(\xi_t)_{t \geq 0}$ and a time-rescaled version of SGPC $(\theta_t^\varepsilon)_{t \geq 0}$. To this end, note that
since $\dot{\beta}(t) = \mu(t) >0$, the function $\beta$ is strictly increasing. Hence, the inverse function of $\beta$ exists and $$(\beta)^{-1}(t)=\int_0^t \frac{1}{\mu( (\beta)^{-1}(s))}\mathrm{d}s.$$
This gives us the following inequality.
\begin{prop}\label{hardprop} For any $0<\varepsilon<1$,
\begin{align*}
\norm{\xi_{t}-\theta^\varepsilon_{\varepsilon \beta(t)}}^2\le C_{f,\theta_0,\mu}\left[\frac{\exp({-2\varepsilon\kappa (\beta(t)-\beta(\frac{t}{2}))})}{\varepsilon}+\frac{1}{\varepsilon}\Big(\abs{\frac{1}{\mu(t)}-\varepsilon}+\abs{\frac{1}{\mu(\frac{t}{2})}-\varepsilon}\Big)\right]
\end{align*}
almost surely,
where the constant $C_{f,\theta_0,\mu}$ depends only on $f$, the initial data $\theta_0$, and $\mu$.
\end{prop}
\begin{proof}
From (\ref{eq:AS:xi}) and (\ref{eq:AS:theta}),
\begin{align*}
\xi_{t} =& \theta_0-\int_0^{\beta(t)} \nabla_{\xi} f(\xi_{(\beta)^{-1}(s)}, V_{s}) \mathrm{d}(\beta)^{-1}(s),\\
\theta^\varepsilon_{ t}=& \theta_0-\varepsilon\int_0^{\frac{t}{\varepsilon}} \nabla_\theta f(\theta^\varepsilon_{\varepsilon s}, V_{s}) \mathrm{d}s.
\end{align*}
Setting $b_t:=\mathrm{d}(\beta)^{-1}(t)/\mathrm{d}t=1/\mu( (\beta)^{-1}(t))>0,$ we have
\begin{align*}
\xi_{(\beta)^{-1}(t)} =& \theta_0-\int_0^{t} \nabla_{\xi} f(\xi_{(\beta)^{-1}(s)}, V_{s}) b_s \mathrm{d}s\\
\theta^\varepsilon_{\varepsilon t}=& \theta_{0}-\varepsilon\int_0^{t} \nabla_\theta f(\theta^\varepsilon_{\varepsilon s}, V_{s}) \mathrm{d}s
\end{align*}
Therefore, by It\^o's formula and Assumption \ref{as1.2},
\begin{align*}
\mathrm{d}\norm{\theta^\varepsilon_{\varepsilon t}-\xi_{(\beta)^{-1}(t)}}^2/\mathrm{d}t=& -2\ip{\theta^\varepsilon_{\varepsilon t}-\xi_{(\beta)^{-1}(t)},\varepsilon\nabla_\theta f(\theta^\varepsilon_{\varepsilon t}, V_t)-\varepsilon\nabla_{\xi} f(\xi_{(\beta)^{-1}(t)}, V_t)} \\
&-2(\varepsilon-b_t) \ip{\theta^\varepsilon_{\varepsilon t}-\xi_{(\beta)^{-1}(t)},\nabla_{\xi} f(\xi_{(\beta)^{-1}(t)}, V_t)}\\
\le& -2\varepsilon\kappa\norm{\theta^\varepsilon_{\varepsilon t}-\xi_{(\beta)^{-1}(t)}}^2+C_{f,\theta_0}\abs{b_t-\varepsilon} ,
\end{align*}
where the last step follows from the boundedness of $\theta^\varepsilon_{\varepsilon t}$, $\xi_{(\beta)^{-1}(t)}$, and $\nabla_{\xi} f(\xi_{(\beta)^{-1}(t)}, V_t)$. The boundedness of $\xi_{(\beta)^{-1}(t)}$ can be shown similarly to Lemma \ref{boundedtheta}. Multiplying $\exp({2\varepsilon\kappa t})$ on both sides, we obtain
\begin{align*}
\mathrm{d}\Big(\exp({2\varepsilon\kappa t})\norm{\theta^\varepsilon_{\varepsilon t}-\xi_{(\beta)^{-1}(t)}}^2\Big)/\mathrm{d}t
\le& C_{f,\theta_0}\abs{b_t-\varepsilon}\exp({2\varepsilon\kappa t}),
\end{align*}
which implies
\begin{align*}
\norm{\theta^\varepsilon_{\varepsilon t}-\xi_{(\beta)^{-1}(t)}}^2\le& C_{f,\theta_0}\exp({-2\varepsilon\kappa t})\int_0^t\abs{b_s-\varepsilon}\exp({2\varepsilon\kappa s})\mathrm{d}s.
\end{align*}
Notice that $b_s$ is bounded and non-increasing, hence we have
\begin{align*}
\norm{\xi_{t}-\theta^\varepsilon_{\varepsilon \beta(t)}}^2\le& C_{f,\theta_0}\exp({-2\varepsilon\kappa \beta(t)})\int_0^{\beta(t)}\abs{b_s-\varepsilon}\exp({2\varepsilon\kappa s})\mathrm{d}s\\
=& C_{f,\theta_0}\exp({-2\varepsilon\kappa \beta(t)})\Big(\int_0^{\beta(\frac{t}{2})}+\int^{\beta(t)}_{\beta(\frac{t}{2})}\Big)\abs{b_s-\varepsilon}\exp({2\varepsilon\kappa s})\mathrm{d}s\\
\le& C_{f,\theta_0,\mu}\frac{\exp({-2\varepsilon\kappa (\beta(t)-\beta(\frac{t}{2}))})}{\varepsilon} \\ &\qquad+C_{f,\theta_0}\exp({-2\varepsilon\kappa \beta(t)})\int^{\beta(t)}_{\beta(\frac{t}{2})}\abs{b_s-\varepsilon}\exp({2\varepsilon\kappa s})\mathrm{d}s\\
\le& C_{f,\theta_0,\mu}\left[\frac{\exp({-2\varepsilon\kappa (\beta(t)-\beta(\frac{t}{2}))})}{\varepsilon}+\frac{1}{\varepsilon}\Big(\abs{\frac{1}{\mu(t)}-\varepsilon}+\abs{\frac{1}{\mu(\frac{t}{2})}-\varepsilon}\Big)\right].
\end{align*}
\end{proof}
Now, we get to the main result of this section, where we show the convergence of $(\xi_t)_{t \geq 0}$ to the minimizer $\theta_*$ of $\Phi$. In the following, we denote
\begin{align*}
D_t(B|\theta_0, a)&:=\mathbb{P}_a(\xi_t\in B|\xi_0=\theta_0), \\ D_t(B|\theta_0,\pi)&:=\mathbb{P}_\pi(\xi_t\in B|\xi_0=\theta_0) \qquad \qquad (B\in \mathcal{B}(\mathbb{R}^K), \theta_0\in \mathbb{R}^K),
\end{align*}
where $a \in S$ and $\pi$ is the invariant measure of $(V_t)_{t \geq 0}$, respectively.
\begin{theorem}
Under Assumption \ref{as1.2}, given $\theta_0\in \mathbb{R}^K$ and $a\in S$, there exists $T>0$ such that for any $t>T$,
\begin{align}\label{th2.1}
{{\mathcal W}}_d(D_t(\cdot|\theta_0,\pi),\delta(\cdot-\theta_*))\le C_{f,\theta_0,\mu}A(t),
\end{align}
\begin{align}\label{th2.2}
{{\mathcal W}}_d(D_t(\cdot|\theta_0,a),\delta(\cdot-\theta_*))\le C_{f,\theta_0,\mu}A(t),
\end{align}
where $$A(t):=\exp\left({\frac{-\kappa t}{8}}\right)+\left[\frac{\mu(t)-\mu(\frac{t}{2})}{\mu(t)}\right]^{1/2}+\rho\left(\frac{1}{\mu(\frac{t}{2})}\right)$$
and $\lim_{t\to\infty}A(t)=0.$
\end{theorem}
\begin{proof}
To prove (\ref{th2.1}), by the triangle inequality,
\begin{align*}
&{{\mathcal W}}_d(D_t(\cdot|\theta_0,\pi),\delta(\cdot-\theta_*))\\
\le& {{\mathcal W}}_d(D_t(\cdot|\theta_0,\pi),C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,\pi))+{{\mathcal W}}_d(C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,\pi),\Pi^\varepsilon(\cdot,S))
+{{\mathcal W}}_d(\Pi^\varepsilon(\cdot,S),\delta(\cdot-\theta_*)).
\end{align*}
For the last two terms, by (\ref{distheta1}) and Proposition \ref{dispixing},
\begin{align*}
{{\mathcal W}}_d(C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,\pi),\Pi^\varepsilon(\cdot,S))+{{\mathcal W}}_d(\Pi^\varepsilon(\cdot,S),\delta(\cdot-\theta_*))\le C_{f,m}\exp({{-\kappa \varepsilon\beta(t)}/{2}})+\rho(\varepsilon).
\end{align*}
For the first term, by Proposition \ref{hardprop},
\begin{align*}
&{{\mathcal W}}_d(D_t(\cdot|\theta_0,\pi),C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,\pi)) \\&\qquad \le C_{f,\theta_0,\mu}\left[\frac{\exp({-2\varepsilon\kappa (\beta(t)-\beta(\frac{t}{2}))})}{\varepsilon}+\frac{1}{\varepsilon}\Big(\abs{\frac{1}{\mu(t)}-\varepsilon}+\abs{\frac{1}{\mu(\frac{t}{2})}-\varepsilon}\Big)\right]^{1/2}.
\end{align*}
Since $\lim_{t\to\infty}\mu(t)=\infty$, there exists $T>0$ such that $1/\mu(\frac{T}{2}) < 1\land\frac{\delta}{2\kappa}$.
Setting $\varepsilon=1/\mu(\frac{t}{2})$ for $t>T$, we have
\begin{align*}
\exp\left({\frac{-\kappa \varepsilon\beta(t)}{2}}\right)=\exp\left({\frac{-\kappa \int_0^t\mu(s)\mathrm{d}s}{2\mu(\frac{t}{2})}}\right)\le \exp\left({\frac{-\kappa t}{8}}\right)
\end{align*}
and
\begin{align*}
\frac{\exp\left({-2\varepsilon\kappa (\beta(t)-\beta(\frac{t}{2}))}\right)}{\varepsilon}&=\mu\left(\frac{t}{2}\right)\exp\left({\frac{-\kappa \int_{\frac{t}{2}}^t\mu(s)\mathrm{d}s}{\mu(\frac{t}{2})}}\right) \\ &\le \mu\left(\frac{t}{2}\right)\exp\left({\frac{-\kappa t}{2}}\right)\le C \exp\left({\frac{-\kappa t}{8}}\right).
\end{align*}
Therefore,
\begin{align*}
{{\mathcal W}}_d(D_t(\cdot|\theta_0,\pi),\delta(\cdot-\theta_*))\le C_{f,\theta_0,\mu}\Big[\exp\left({\frac{-\kappa t}{8}}\right)+\left(\frac{\mu(t)-\mu(\frac{t}{2})}{\mu(t)}\right)^{1/2}+\rho\left(\frac{1}{\mu\left(\frac{t}{2}\right)}\right)\Big]
\end{align*}
From Assumption \ref{asmu}, by the mean value theorem,
\begin{align*}
\frac{\mu(t)-\mu(\frac{t}{2})}{\mu(t)}=\frac{t\mu'(\tau_t)}{2\mu(t)}=\frac{\tau_t\mu'(\tau_t)}{\mu(\tau_t)}\frac{t}{2\tau_t}\frac{\mu(\tau_t)}{\mu(t)}\le \frac{\tau_t\mu'(\tau_t)}{\mu(\tau_t)}\to 0
\end{align*}
where $\tau_t\in[\frac{t}{2},t].$ Thus (\ref{th2.1}) is obtained by taking
$$A(t):=\exp\left({\frac{-\kappa t}{8}}\right)+\left[\frac{\mu(t)-\mu(\frac{t}{2})}{\mu(t)}\right]^{1/2}+\rho\left(\frac{1}{\mu(\frac{t}{2})}\right).$$
To prove (\ref{th2.2}), by the triangle inequality,
\begin{align*}
{{\mathcal W}}_d(D_t(\cdot|\theta_0,a),\delta(\cdot-\theta_*))&\le {{\mathcal W}}_d(D_t(\cdot|\theta_0,a),C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,a)) \\ &\ \ \ \ \ +{{\mathcal W}}_d(C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,a),C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,\pi))\\ &\ \ \ \ \ +{{\mathcal W}}_d(C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,\pi),\Pi^\varepsilon(\cdot,S)) \\ &\ \ \ \ \
+{{\mathcal W}}_d(\Pi^\varepsilon(\cdot,S),\delta(\cdot-\theta_*)).
\end{align*}
By Corollary \ref{cor3}, we have
$${{\mathcal W}}_d(C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,a),C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,\pi))\le C_{f,m}\exp\left({-\frac{\kappa \varepsilon \beta(t)}{2} }\right).$$
Notice that $C_{f,m}\exp({-\frac{\kappa t}{4} })\le C_{f,m}A(t)$
when $\varepsilon=1/\mu(\frac{t}{2}).$
Similar to the proof of (\ref{th2.1}), we have
\begin{align*}
{{\mathcal W}}_d(D_t(\cdot|\theta_0,a),C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,a))+{{\mathcal W}}_d(C^\varepsilon_{\varepsilon\beta(t)}(\cdot|\theta_0,\pi),\Pi^\varepsilon(\cdot,S))
+&{{\mathcal W}}_d(\Pi^\varepsilon(\cdot,S),\delta(\cdot-\theta_*))\\
& \qquad \qquad \qquad \le C_{f,\theta_0,\mu}A(t),
\end{align*}
which completes the proof.
\end{proof}
Thus, we have shown that the distribution of $(\xi_t)_{t \geq 0}$ converges in Wasserstein distance to the Dirac measure concentrated in the minimizer $\theta_*$ of $\Phi$. This result is independent of whether we initialize the index process $(V_{\beta(t)})_{t \geq 0}$ with its stationary measure or with any deterministic value.
\section{From continuous dynamics to practical optimization.} \label{Sec_PracticalOptimisation}
So far, we have discussed the stochastic gradient process as a continuous-time coupling of an ODE and a stochastic process. In order to apply the stochastic gradient process in practice, we need to discretize the ODE and the stochastic process with appropriate time-stepping schemes. That means, for a given increasing sequence $(t(k))_{k=0}^\infty$, with $t(0) := 0$ and $\lim_{k \rightarrow \infty} t(k) = \infty,$ we seek discrete-time stochastic processes $(\widehat{V}_k, \widehat{\theta}_k)_{k=0}^\infty$, such that $(\widehat{V}_k, \widehat{\theta}_k)_{k=0}^\infty \approx (V_{t(k)}, \theta_{t(k)}^\varepsilon)_{k=0}^\infty$ and analogous discretizations for $(V_{\beta(t)}, \xi_t)_{t \geq 0}$.
In the following, we propose and discuss time stepping strategies and the algorithms arising from them. We discuss the index process and gradient flow separately, which we consider sufficient as the coupling is only one-sided.
\subsection{Discretization of the index process}
We have defined the stochastic gradient process for a wide range of potential index processes $(V_t)_{t \geq 0}$. The discretization of such processes has been the topic of several works, see, e.g., \citet{Gillespie1977,lord_powell_shardlow_2014}. In the following, we focus on one case and refer to those previous works for other settings and details.
Indeed, we study the setting $S := [-1, 1]$ and $\pi := \mathrm{Unif}[-1, 1]$ and discuss the discretization of $(V_t)_{t \geq 0}$ as a Markov pure jump process and as a reflected Brownian motion.
\subsubsection*{Markov pure jump process.} A suitable Markov pure jump process is a piecewise constant c\`adl\`ag process $(V_t)_{t \geq 0}$ with Markov transition kernel
\begin{align*}
\mathbb{P}_x(V_t \in \cdot) = \exp(-\lambda t)\delta(\cdot - x) + (1-\exp(-\lambda t))\mathrm{Unif}[-1,1] \qquad \qquad (t \geq 0),
\end{align*}
where $\lambda > 0$ is a rate parameter. We can now discretize the process $(V_t)_{t \geq 0}$ just through sampling from this Markov kernel for our discrete time points. We describe this in Algorithm~\ref{alg:MPJ}.
\begin{algorithm}[hptb]\caption{Discretized Markov pure jump process}
\begin{algorithmic}[1]
\STATE initialize $\widehat{V}_0$, $\lambda > 0$, and a sequence of points $(t(k))_{k=0}^\infty$
\FOR{$k = 1, 2,\ldots$}
\STATE sample $U \sim \mathrm{Unif}[0,1]$
\IF{$U \leq \exp(-\lambda (t(k)-t(k-1)))$}
\STATE $\widehat{V}_k \leftarrow \widehat{V}_{k-1}$ \COMMENT{process stays at its current position}
\ELSE
\STATE sample $\widehat{V}_k \sim \mathrm{Unif}[-1,1]$ \COMMENT{process jumps to a new position}
\ENDIF
\ENDFOR
\RETURN $(\widehat{V}_k)_{k=0}^\infty$
\end{algorithmic}
\label{alg:MPJ}
\end{algorithm}
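For concreteness, a minimal Python sketch of Algorithm~\ref{alg:MPJ} (for illustration only; function and variable names are our own choice) could read as follows.
\begin{verbatim}
import numpy as np

def sample_mpj(n_steps, dt, lam, v0=0.0, rng=None):
    """Discretized Markov pure jump process on S = [-1, 1] (Algorithm 1).

    n_steps: number of discrete time steps, dt: constant stepsize
    t(k) - t(k-1), lam: jump rate lambda, v0: initial state.
    """
    rng = np.random.default_rng() if rng is None else rng
    V = np.empty(n_steps + 1)
    V[0] = v0
    for k in range(1, n_steps + 1):
        if rng.uniform() <= np.exp(-lam * dt):
            V[k] = V[k - 1]                  # process stays at its position
        else:
            V[k] = rng.uniform(-1.0, 1.0)    # process jumps to a new position
    return V
\end{verbatim}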
\subsubsection*{Reflected Brownian motion.} We have defined the reflected Brownian Motion on a non-empty compact interval through the Skorohod problem in Subsection~\ref{Subsec_Ex1_Levy_refle}.
Let $\sigma >0$ and $(W_t)_{t \geq 0}$ be a standard Brownian motion. Probably the easiest way to sample a reflected Brownian motion is by discretizing the rescaled Brownian motion $(\sigma \cdot W_t)_{t \geq 0}$ using the Euler--Maruyama scheme and projecting back to $S$, whenever the sequence leaves $S$. This scheme has been studied by \citet{Petterson}. We describe the full scheme in Algorithm~\ref{alg:RBM}.
\citet{Petterson} shows that this scheme converges at a rather slow rate. As we usually assume that the domain on which we move is rather low-dimensional and the sampling is rather cheap, we can afford small discretization stepsizes $t(k)-t(k-1)$, for $k \in \mathbb{N}$. Thus, the slow rate of convergence is manageable. Other schemes for the discretization of reflected Brownian motions have been discussed by, e.g., \citet{blanchet_murthy_2018, LIU1995}.
\begin{algorithm}[hptb]\caption{Discretized Reflected Brownian motion on $S$}
\begin{algorithmic}[1]
\STATE initialize $\widehat{V}_0$, $\sigma > 0$, a sequence of points $(t(k))_{k=0}^\infty$, and the projection operator $\mathrm{proj}_S$ mapping onto $S$
\FOR{$k = 1, 2,\ldots$}
\STATE $V' \leftarrow \widehat{V}_{k-1} + \sigma \sqrt{t(k)-t(k-1)} \psi $, \quad $\psi \sim \mathrm{N}(0,1^2)$ \COMMENT{Euler--Maruyama update}
\IF{$V' \not\in S$}
\STATE $\widehat{V}_k \leftarrow \mathrm{proj}_S V'$ \COMMENT{project back}
\ELSE
\STATE $\widehat{V}_k \leftarrow V'$ \COMMENT{accept Euler--Maruyama update}
\ENDIF
\ENDFOR
\RETURN $(\widehat{V}_k)_{k=0}^\infty$
\end{algorithmic}
\label{alg:RBM}
\end{algorithm}
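Analogously, a short Python sketch of Algorithm~\ref{alg:RBM} is given below (again for illustration only); on $S=[-1,1]$ the projection is simply a clipping of the Euler--Maruyama proposal.
\begin{verbatim}
import numpy as np

def sample_rbm(n_steps, dt, sigma, v0=0.0, rng=None):
    """Discretized reflected Brownian motion on S = [-1, 1] (Algorithm 2).

    Euler--Maruyama update for sigma * W_t, projected back onto S
    whenever the proposal leaves the interval.
    """
    rng = np.random.default_rng() if rng is None else rng
    V = np.empty(n_steps + 1)
    V[0] = v0
    for k in range(1, n_steps + 1):
        prop = V[k - 1] + sigma * np.sqrt(dt) * rng.standard_normal()
        V[k] = np.clip(prop, -1.0, 1.0)      # projection; no-op inside S
    return V
\end{verbatim}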
\subsection{Discretization of the gradient flow}
We now briefly discuss the discretization of the gradient flow in the stochastic gradient process. Based on these ideas, we will conduct numerical experiments in Section~\ref{Sec_NumExp}.
\subsubsection*{Stochastic gradient descent} In stochastic gradient descent, the gradient flow is discretized with a forward Euler method. This method leads to an accurate discretization of the respective gradient flow if the stepsize/learning rates are sufficiently small. In the presence of rather large stepsizes and stiff vector fields, however, the forward Euler method may be inaccurate and unstable, see, e.g., \citet{Quarteroni2007}.
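In our setting, the forward Euler discretization of (\ref{eq:AS:theta}) simply evaluates the subsampled gradient at the current state of the discretized index process. A hedged sketch is given below; the callable \texttt{grad\_f} and the discretized index path \texttt{V\_hat} are placeholders, e.g.\ produced by the samplers above.
\begin{verbatim}
import numpy as np

def sgd_forward_euler(theta0, V_hat, h, grad_f):
    """Forward Euler discretization of the stochastic gradient process.

    theta0: initial parameter (array of shape (K,)),
    V_hat : discretized index process (e.g. from sample_mpj or sample_rbm),
    h     : stepsize / learning rate,
    grad_f: callable (theta, v) -> gradient of the subsampled potential.
    """
    theta = np.asarray(theta0, dtype=float)
    trajectory = [theta.copy()]
    for v in V_hat[:-1]:
        theta = theta - h * grad_f(theta, v)   # explicit Euler update
        trajectory.append(theta.copy())
    return np.array(trajectory)
\end{verbatim}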
\subsubsection*{Stability} Several ideas have been proposed to mitigate this problem. The stochastic proximal point method, for instance, uses the backward Euler method to discretize the gradient flow; see \citet{Bianchi2015}.
Unfortunately, such implicit ODE integrators require us to invert a possibly highly non-linear and complex vector field. In convex stochastic optimization this inversion can be replaced by evaluating a proximal operator. For strongly convex optimization, on the other hand, \citet{eftekhari} proposes stable explicit methods.
\subsubsection*{Efficient optimizers} Plenty of highly efficient methods for stochastic optimization are nowadays available, especially in machine learning. Those have often been proposed without necessarily thinking of the stable and accurate discretization of a gradient flow: examples are adaptive methods (e.g., \citet{Adam}), variance-reduced methods (e.g., \citet{Defazio2014}), or momentum methods (see, e.g., \citet{Kovachki21} for an overview), all of which have been shown in multiple works to be highly efficient, partially also in non-convex optimization. We could understand those methods also as certain discretizations of the gradient flow. Thus, we may also consider the combination of a feasible index process $(V_t)_{t \geq 0}$ with the discrete dynamical system in, e.g., the Adam method (\citet{Adam}).
\section{Applications} \label{Sec_NumExp}
We now study two fields of application of the stochastic gradient process for continuous data. In the first example, we consider regularized polynomial regression with noisy functional data. In this case, we can easily show that the necessary assumptions for our analysis hold. Thus, we use it to illustrate our analytical results and especially to learn about the implicit regularization that is induced by different index processes.
In the second example, we study so-called physics-informed neural networks. In these continuous-data machine learning problems, a deep neural network is used to approximate the solution of a partial differential equation. The associated optimization problem is usually non-convex. Our analysis does not apply in this case; we study it to gain more insight into the behavior of the stochastic gradient process in state-of-the-art deep learning problems.
\begin{figure}
\centering
\includegraphics[width = 0.8\textwidth]{polynomial_regression/data_truth_SGPDIFF.pdf}
\caption{True function $\Theta$ (red) and noisy observation $g$ (grey) in the polynomial regression example.}
\label{fig:truth_polyn}
\end{figure}
\subsection{Polynomial regression with functional data}
We begin with a simple polynomial regression problem with noisy functional data. We observe the function $g:[-1,1] \rightarrow \mathbb{R}$ which is given through
\begin{equation*}
g(y) := \Theta(y) + \Xi(y) \qquad (y \in [-1,1]),
\end{equation*}
where $\Theta: [-1, 1] \rightarrow \mathbb{R}$ is a smooth function and $\Xi$ is a Gaussian process with highly oscillating, continuous realizations. We aim at identifying the unknown function $\Theta$ subject to the observational noise $\Xi$. Here, we represent the function $\Theta$ on a basis consisting of a finite number of Legendre polynomials on $[-1,1]$. We denote this basis of Legendre polynomials by $(\ell_k)_{k=1}^K$.
To estimate the prefactors of the polynomials, we minimize the potential
\begin{equation} \label{eq:polyn_full_opt}
\Phi(\theta) := \frac{1}{2} \int_{[-1,1]} \left(g(y)-\sum_{k = 1}^K \theta_k\ell_k(y)\right)^2 \mathrm{d}y + \frac{\alpha}{2} \| \theta \|^2_2 \qquad \qquad (\theta \in X),
\end{equation}
where $\alpha > 0$ is a regularization parameter. This can be understood as a maximum-a-posteriori estimation of the unknown $\theta$ with Gaussian prior under the (misspecified) assumption that the data is perturbed with Gaussian white noise.
We employ the following associated subsampled potentials:
\begin{equation}
f(\theta, y) := \frac{1}{2} \left(g(y)-\sum_{k = 1}^K \theta_k\ell_k(y)\right)^2 + \frac{\alpha}{2} \| \theta \|^2_2 \qquad \qquad (\theta \in X, y \in [-1, 1]).
\end{equation}
Those subsampled potentials satisfy the strong convexity assumption, i.e., Assumption~\ref{as1.2}: indeed, the Hessian of $f(\cdot, y)$ is $\ell(y)\ell(y)^\top + \alpha I \succeq \alpha I$ for every $y \in [-1,1]$, where $\ell(y) := (\ell_1(y), \ldots, \ell_K(y))^\top$, so the assumption holds with $\kappa = \alpha$.
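For this example, the subsampled gradient has the closed form $\nabla_\theta f(\theta, y) = -\big(g(y) - \sum_{k = 1}^K \theta_k\ell_k(y)\big)\,\ell(y) + \alpha\theta$. A short Python sketch of its evaluation is given below (for illustration only; we use the Legendre basis of degrees $0,\ldots,K-1$ and treat the observation $g$ as a callable).
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legvander

def grad_f(theta, y, g, alpha):
    """Gradient of the subsampled potential f(theta, y) for the
    polynomial regression example.

    theta: coefficients of the K Legendre polynomials (degrees 0..K-1),
    y    : data location in [-1, 1],
    g    : callable returning the noisy observation g(y),
    alpha: regularization parameter.
    """
    theta = np.asarray(theta, dtype=float)
    K = len(theta)
    ell = legvander(np.atleast_1d(y), K - 1)[0]   # (l_1(y), ..., l_K(y))
    residual = g(y) - ell @ theta
    return -residual * ell + alpha * theta
\end{verbatim}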
\begin{figure}
\centering
\includegraphics[width=0.29\textwidth]{polynomial_regression/error_trajectory_SGD0_lambd0.1_alpha0.0001_Nruns100.pdf}
\includegraphics[width=0.29\textwidth]{polynomial_regression/estimation_SGD0_lambd0.1_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/measure_error_SGD0_lambd0.1_alpha0.0001_Nruns100.pdf}
\includegraphics[width=0.29\textwidth]{polynomial_regression/error_trajectory_SGD2_lambd0.1_alpha0.0001_Nruns100.pdf}
\includegraphics[width=0.29\textwidth]{polynomial_regression/estimation_SGD2_lambd0.1_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/measure_error_SGD2_lambd0.1_alpha0.0001_Nruns100.pdf}
\caption{Estimation results of the polynomial regression problem using stochastic gradient descent with constant learning rate $\eta = 0.1$ (top row) and a version of stochastic gradient descent that uses the implicit midpoint rule (bottom row). The figures depict the mean over 100 runs (black solid line), mean $\pm$ standard deviation (black dotted line). Left column: trajectory of the rel\_err over time; centre column: comparison of $\Theta$ (solid red line) and estimated polynomial; right column: estimation error in terms of abs\_err.}
\label{fig:sgd_polyn}
\end{figure}
\subsubsection*{Setup}
In particular, we have produced artificial data $g$ by setting $\Theta := \sin(\pi \cdot)$ and choosing
$$
\Xi(x) = \sum_{j=1}^{200}\frac{10}{1000 + (\pi j)^{3/2}}\sin(2\pi j(x-0.5))\Xi_j \qquad \qquad (x \in [-1,1])
$$
and i.i.d.\ random variables $\Xi_1,\ldots, \Xi_{200} \sim \mathrm{N}(0,1^2)$. Note that $\Xi$ is a Gaussian random field given through the truncated Karhunen-Lo\`eve expansion of a covariance operator that is related to the Mat\'ern family, see, e.g., \citet{Lindgren2011}.
We show $\Theta$ and $g$ in Figure \ref{fig:truth_polyn}. For our estimation, we set $\alpha := 10^{-4}$ and use the $K=9$ Legendre polynomials with degrees $0,\ldots,8$. We employ the stochastic gradient process with constant learning rate, using either a reflected diffusion process or a pure Markov jump process for the index process $(V_t)_{t \geq 0}$. We discretize the gradient flow using the implicit midpoint rule: an ODE $z' = q(z), z(0) = z_0$ is then discretized with stepsize $h > 0$ by successively solving the implicit formula $$z_k = z_{k-1} + \frac{h}{2}q(z_k) + \frac{h}{2}q(z_{k-1}) \qquad (k \in \mathbb{N}).$$ In our experiments, we choose $h = 0.1$. We use Algorithms~\ref{alg:MPJ} and \ref{alg:RBM} to discretize the index processes with constant stepsize $t(\cdot)-t(\cdot -1) = 10^{-2}.$
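For completeness, one step of the implicit update displayed above can be implemented by a simple fixed-point iteration; this is our own implementation choice and converges for small $h$ because $\nabla_\theta f(\cdot, y)$ is Lipschitz.
\begin{verbatim}
import numpy as np

def midpoint_step(theta, v, h, grad_f, max_iter=50, tol=1e-12):
    """One step of z_k = z_{k-1} + (h/2) q(z_k) + (h/2) q(z_{k-1})
    with q(z) = -grad_f(z, v), solved by fixed-point iteration."""
    q_old = -grad_f(theta, v)
    z = theta + h * q_old                    # explicit Euler predictor
    for _ in range(max_iter):
        z_new = theta + 0.5 * h * (q_old - grad_f(z, v))
        if np.linalg.norm(z_new - z) < tol:
            return z_new
        z = z_new
    return z
\end{verbatim}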
We perform $J := 100$ repeated runs for each of the considered settings for $N := 5\cdot 10^4$ time steps and thus, obtain a family of trajectories $(\theta^{(j,n)})_{n = 1,\ldots,N, j=1,\ldots,J}$. In each case, we choose the initial values $V(0) := 0$ and the $\theta^{(j,0)} := (0.5,\ldots, 0.5).$
We study the distance of the estimated polynomial to the true function $\Theta$
by the relative error: $$\mathrm{rel\_err}_{n,j} := \frac{\sum_{l = 1}^{L}\left(\Theta(x_l)- \sum_{k = 1}^K \theta_k^{(j,n)}\ell_k(x_l) \right)^2}{\sum_{l' = 1}^{L}\Theta(x_{l'})^2},$$ for trajectory $j \in \{1,\ldots, J\}$ and time step $n \in \{1,\ldots,N\}$. Here $(x_l)_{l=1}^{L}$ are $L := 10^3$ equispaced points in $[-1,1]$. Moreover, we compare the estimated polynomial to the true function $\Theta$ by
$$\mathrm{abs\_err}_{j,x} := \left\lvert\Theta(x)- \sum_{k = 1}^K \theta_k^{(j,N)}\ell_k(x) \right\rvert $$ for trajectory $j \in \{1,\ldots, J\}$ at position $x \in [-1,1].$ In each case, we study mean and standard deviation (StD) computed over the $100$ runs.
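Both error measures can be computed directly from the coefficient vectors; a short Python sketch (the evaluation grid and function names are ours) reads as follows.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legvander

def errors(theta, Theta, L=1000):
    """rel_err and abs_err of the Legendre estimate against the truth Theta.

    theta: estimated coefficients (degrees 0..K-1), Theta: callable truth.
    """
    x = np.linspace(-1.0, 1.0, L)            # equispaced evaluation grid
    estimate = legvander(x, len(theta) - 1) @ np.asarray(theta, dtype=float)
    truth = Theta(x)
    rel_err = np.sum((truth - estimate) ** 2) / np.sum(truth ** 2)
    abs_err = np.abs(truth - estimate)
    return rel_err, abs_err
\end{verbatim}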
\begin{figure}[htb]
\centering
\includegraphics[width = 0.26\textwidth]{polynomial_regression/error_trajectory_Diff_method2_sigvar5_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/estimation_Diff_method2_sigvar5_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/measure_error_Diff_method2_sigvar5_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/error_trajectory_Diff_method2_sigvar0.5_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/estimation_Diff_method2_sigvar0.5_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/measure_error_Diff_method2_sigvar0.5_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/error_trajectory_Diff_method2_sigvar0.05_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/estimation_Diff_method2_sigvar0.05_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/measure_error_Diff_method2_sigvar0.05_alpha0.0001_Nruns100.pdf}
\caption{Estimation results of the polynomial regression problem using the stochastic gradient process with reflected Brownian motion process with $\sigma = 5$ (top row), $\sigma = 0.5$ (centre row), and $\sigma = 0.05$ (bottom row). The figures depict the mean over 100 runs (black solid line), mean $\pm$ standard deviation (black dotted line). Left column: trajectory of the rel\_err over time; centre column: comparison of $\Theta$ (solid red line) and estimated polynomial; right column: estimation error in terms of abs\_err.}
\label{fig:Diff_results}
\end{figure}
\subsubsection*{Results and discussion}
For the polynomial regression problem we now study:
\begin{itemize}
\item stochastic gradient descent, as given in \eqref{Eq:SGD_discrete_time}, with constant learning rate $\eta_{(\cdot)} = h = 0.1$ (Figure~\ref{fig:sgd_polyn} top row),
\item a variant of stochastic gradient descent in which the forward Euler update is replaced by an implicit midpoint rule update, with constant learning rate $\eta_{(\cdot)} = h = 0.1$ (Figure~\ref{fig:sgd_polyn} bottom row),
\item the stochastic gradient process with reflected Brownian motion as an index process with standard deviation $\sigma \in \{5, 0.5, 0.05\}$ (Figure~\ref{fig:Diff_results}), and
\item the stochastic gradient process with Markov pure jump process as an index process with rate parameter $\lambda \in \{10, 1, 0.1, 0.01\}$ (Figure~\ref{fig:MJP_results}).
\end{itemize}
In addition to those plots, we give means and standard deviations of the relative errors at the terminal state of the iterations in Table~\ref{Table_Results_polynomial}. To compare the convergence behavior of the different methods, we plot the rel\_err within the first 2000 discrete time steps in Figure~\ref{fig:error_comparison}.
\begin{figure}[htb]
\centering
\includegraphics[width = 0.26\textwidth]{polynomial_regression/error_trajectory_MJP_method2_lambd10_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/estimation_MJP_method2_lambd10_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/measure_error_MJP_method2_lambd10_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/error_trajectory_MJP_method2_lambd1_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/estimation_MJP_method2_lambd1_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/measure_error_MJP_method2_lambd1_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/error_trajectory_MJP_method2_lambd0.1_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/estimation_MJP_method2_lambd0.1_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/measure_error_MJP_method2_lambd0.1_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/error_trajectory_MJP_method2_lambd0.01_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/estimation_MJP_method2_lambd0.01_alpha0.0001_Nruns100.pdf}
\includegraphics[width = 0.26\textwidth]{polynomial_regression/measure_error_MJP_method2_lambd0.01_alpha0.0001_Nruns100.pdf}
\caption{Estimation results of the polynomial regression problem using the stochastic gradient process with pure jump index process with $\lambda = 10$ (first row), $\lambda = 1$ (second row), $\lambda = 0.1$ (third row), and $\lambda = 0.01$ (fourth row). The figures depict the mean over 100 runs (black solid line), mean $\pm$ standard deviation (black dotted line). Left column: trajectory of the rel\_err over time; centre column: comparison of $\Theta$ (solid red line) and estimated polynomial; right column: estimation error in terms of abs\_err.}
\label{fig:MJP_results}
\end{figure}
\begin{table}[]
\begin{tabular}{l|l|ll}
\textbf{Method} & \textbf{Parameters} & \textbf{Mean of $\mathrm{rel\_err}_{N,(\cdot)}$} & \textbf{$\pm$ StD} \\ \hline
{SGD} & $\eta_{(\cdot)} = 0.1$ & $1.844 \cdot 10^{-2}$ & $\pm 4.012 \cdot 10^{-3}$ \\ \hline
{SGD implicit} & $\eta_{(\cdot)} = 0.1$ & $1.719 \cdot 10^{-2}$ & $\pm 3.939 \cdot 10^{-3}$ \\ \hline
\multirow{3}{*}{{\begin{tabular}[c]{@{}l@{}}SGPC with \\ reflected diffusion \\ index process\end{tabular}}} & $\sigma = 5 $ & $1.586 \cdot 10^{-2}$ & $\pm 4.038 \cdot 10^{-3}$ \\
& $\sigma = 0.5 $ & $1.587 \cdot 10^{-2} $ & $\pm 2.979 \cdot 10^{-3}$ \\
& $\sigma = 0.05 $ & $ 4.637 \cdot 10^{-2} $ & $\pm 8.776 \cdot 10^{-2}$ \\ \hline
\multirow{4}{*}{{\begin{tabular}[c]{@{}l@{}}SGPC with \\ Markov pure jump\\ index process\end{tabular}}} & $ \lambda = 10$ & $2.100 \cdot 10^{-2}$ & $\pm 6.049 \cdot 10^{-3}$ \\
& $ \lambda = 1$ & $3.427 \cdot 10^{-2}$ & $\pm 1.105 \cdot 10^{-2}$ \\
& $ \lambda = 0.1$ & $3.866 \cdot 10^{-2}$ & $\pm 1.142 \cdot 10^{-2}$ \\
& $ \lambda = 0.01$ & $3.178 \cdot 10^{-1}$ & $\pm 2.124 \cdot 10^{-1}$
\end{tabular}
\caption{Accuracy of the estimation in the polynomial regression model. Mean and standard deviation of the relative error of the methods at the final point of their trajectory. In particular, sample mean and sample standard deviation of $ j \mapsto \mathrm{rel\_err}_{N,j}$, with $N = 5 \cdot 10^4$, computed over $100$ independent runs. } \label{Table_Results_polynomial}
\end{table}
We learn several things from these results. Unsurprisingly, the index processes with a strong autocorrelation $(\lambda = 0.01, \sigma = 0.05)$ lead to larger errors in the reconstruction: the processes move too slowly to explore the index space appropriately. In the other cases, we can assume that the processes have reached their stationary regime.
Thus, the figures and the table should tell us about the implicit regularization induced by the different subsampling schemes, see \citet{Ali20a, smith2021on}. In particular, we see that the mean errors are reduced as $\sigma$ and $\lambda$, respectively, increase, which illustrates the approximation of the full gradient flow as shown in Theorem~\ref{wcovtheta}. We should note, however, that we compute the error with respect to the truth $\Theta$, which is likely not the minimizer of the full optimization problem \eqref{eq:polyn_full_opt}.
\begin{figure}
\centering
\includegraphics[scale = 0.45]{polynomial_regression/error_plot_comparison.pdf}
\caption{Comparison of the mean rel\_err of the stochastic methods for time $t \leq 200$.}
\label{fig:error_comparison}
\end{figure}
It appears that the stochastic gradient process with reflected diffusion index process and $\sigma \in \{0.5, 5\}$ returns the best results. Looking at the error plots in the right column of Figure~\ref{fig:Diff_results}, we see that SGPC outperforms the other algorithms especially close to the boundary. For $\sigma = 5$ this could be seen as a numerical artefact due to the time step $t(\cdot)-t(\cdot-1)$ being too large. However, this is likely not the case for $\sigma = 0.5$, where we see a similar, albeit slightly weaker, effect.
In the convergence plot, Figure~\ref{fig:error_comparison}, we see that the different methods converge at different speeds to their respective stationary regimes. These speeds again depend on the autocorrelation of the processes. Interestingly, the SGPC with reflected diffusion index process and $\sigma = 5$ appears to be the best among the algorithms.
\subsection{Solving partial differential equations using neural networks (NN)}
Partial differential equations (PDEs) are used in science and engineering to model systems and processes such as turbulent flow, biological growth, or elasticity. Due to the implicit nature and complexity of a PDE, the model it represents usually needs to be approximated (`solved') numerically. Finite difference, finite element, and finite volume methods have been the state of the art for solving PDEs for decades. Recently, deep learning approaches have gained popularity for the approximation of PDE solutions. Deep learning is particularly successful in high-dimensional settings, where classical methods suffer from the curse of dimensionality. See for example \citet{PINN, DeepXDE} for physics-informed neural networks (PINNs). Integrated PyTorch-based packages are also available; see, for example, \citet{NeuroDiffEq, nangs}. More recently, \citet{FNO} report state-of-the-art performance based on the Fourier neural operator.
Physics-informed neural networks are a very natural field of application of deep learning with continuous data. Below we introduce PINNs, the associated continuous-data optimization problem, and the state-of-the-art in the training of PINNs. Then we consider a particular PDE, showcase the applicability of SGP, and compare its performance with the standard SGD-type algorithm.
The basic idea of PINNs is to represent the PDE solution by a deep neural network whose parameters are chosen such that the PDE is optimally satisfied. Thus, the problem is reduced to an optimization problem whose loss function is formulated from the differential equation, the boundary conditions, and the initial conditions. More precisely, for PDE problems of Dirichlet type, we aim to solve a system of equations of the type
\begin{equation}\label{eq:PDE}
\left\{ \begin{array}{rlll}
\mathcal{L}(u(t, x)) &= &s(t, x) \qquad &(t\in[0, \infty),\ x\in D) \\
u(0,x) &= &u_0(x) \qquad &(x\in D) \,\\
u(t, x) &= &b(t, x) \qquad &(t\in[0, \infty),\ x\in \partial D)
\end{array} \right.
\end{equation}
where $D\subset\mathbb{R}^d$ is an open, connected, and bounded set and $\mathcal{L}$ is a differential operator defined on a function space $V$ (e.g. $H^1(D)$). The unknown is $u:[0,\infty)\times\bar{D}\to \mathbb{R}^n$. Functions $s(t, x)$, $b(t, x)$, and $u_0(x)$ are given. In numerical practice, we need to replace the infinite-dimensional space $V$ by a -- in some sense -- discrete representation. Traditionally, one employs a finite-dimensional subspace of $V$, say $\mathrm{span}\{\psi_1,\ldots,\psi_K\}$, where $\psi_1,\ldots,\psi_K$ are basis functions in a finite element method. To take advantage of the recent developments in machine learning, one could solve the problem on a set of deep neural networks contained in $V$, say
\begin{align*}
{\Big\lbrace} \psi(\cdot; \theta):\ &\psi(x; \theta) = (W^{(K)} \sigma(\cdot) + b^{(K)}) \circ \cdots \circ (W^{(1)} \sigma(x) + b^{(1)}), x \in [0, \infty) \times D, \\
&\theta = \left((W^{(K)}, b^{(K)}), \ldots, (W^{(1)}, b^{(1)})\right)\in \prod_{k=1}^K \left(\mathbb{R}^{n_{k} \times n_{k-1}} \times \mathbb{R}^{n_{k}}\right) =: X
{\Big\rbrace},
\end{align*}
where $\sigma: \mathbb{R} \rightarrow \mathbb{R}$ is an activation function, applied component-wise, $n_0 = d+1$ and $n_K = 1$ to match input and output of the PDE's solution space, and $n_1, \ldots, n_{K-1}$ determine the network's architecture.
In simpler terms, let $u(\cdot; \theta) \in V$ be the output of a feedforward neural network (FNN) with parameters (biases/weights) denoted by $\theta \in X$. The parameters can be learned by minimizing the mean squared error (MSE) loss
\begin{align*}
\Phi(\theta; \mathcal{L}, s, u_0, b) :=& \int_0^\infty w(t) \int_D \left(\mathcal{L}(u(t, x;\theta)) - s(t, x)\right)^2 \mathrm{d}x\mathrm{d}t+\int_{D} \left(u(0, x;\theta) - u_0(x)\right)^2 \mathrm{d}x\\
&\ +\int_0^\infty w(t) \int_{\partial D} \left(u(t, x;\theta) - b(t, x)\right)^2 \mathrm{d}x\mathrm{d}t,
\end{align*}
where the first term is the $L^2$ norm of the PDE residual, the second term is the $L^2$ norm of the residual for the initial condition, the third term is the $L^2$ norm of the residual for the boundary conditions, and $w: [0, \infty) \rightarrow [0, \infty)$ is an appropriate weight function. The FNN then represents the solution via solving the following minimization problem
\begin{equation} \label{EQ_Opt_PINNs_cont}
\min_{\theta \in X} \Phi(\theta; \mathcal{L}, s, u_0, b).
\end{equation}
Note that in physics-informed neural networks, differential operators w.r.t.\ the input $(t, x)$ and the gradient w.r.t.\ the parameter $\theta$ are both obtained using automatic differentiation.
\subsubsection*{Training of physics-informed neural networks}
In practice, the optimization problem \eqref{EQ_Opt_PINNs_cont} is often replaced by an optimization problem with discrete potential
\begin{align*}
\widehat{\Phi}(\theta; \mathcal{L}, s, u_0, b) :=& \sum_{k=1}^K \left(\mathcal{L}(u(t_k, x_k;\theta)) - s(t_k, x_k)\right)^2 + \sum_{k'=1}^{K'} \left(u(0, x'_{k'};\theta) - u_0(x'_{k'})\right)^2 \\
&\ + \sum_{k''=1}^{K''} \left(u(t''_{k''}, x''_{k''};\theta) - b(t_{k''}'' , x_{k''}'')\right)^2,
\end{align*}
for appropriate continuous indices $$(t_k, x_{k})_{k=1}^K \in [0, \infty)^K \times D^K, \quad (x_{k'}')_{k'=1}^{K'} \in D^{K'}, \quad (t_{k''}'', x_{k''}'')_{k''=1}^{K''} \in [0, \infty)^{K''} \times \partial D^{K''}$$ that may be chosen deterministically or randomly, see for example \citet{nangs, DeepXDE}.
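To make the structure of $\widehat{\Phi}$ concrete, the following sketch assembles the three sums in PyTorch, with the PDE residual obtained via automatic differentiation. The function and argument names are our own illustrative choices, and the residual shown is merely an example for a first-order operator (anticipating the transport equation considered below); it is not the implementation used in our experiments.
\begin{verbatim}
import torch

def transport_residual(u, t, x):
    """Example residual L(u) - s for the operator u_t + u_x (s = 0)."""
    ones = torch.ones_like(u)
    u_t = torch.autograd.grad(u, t, ones, create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, ones, create_graph=True)[0]
    return u_t + u_x

def discrete_potential(u_net, residual, u0, b,
                       t_int, x_int, x_init, t_bnd, x_bnd):
    """Discrete PINN potential: PDE residual, initial and boundary terms.
    All point tensors are assumed to have shape (n, 1)."""
    t = t_int.requires_grad_(True)
    x = x_int.requires_grad_(True)
    u = u_net(torch.cat([t, x], dim=1))
    loss_pde = (residual(u, t, x) ** 2).sum()

    u_init = u_net(torch.cat([torch.zeros_like(x_init), x_init], dim=1))
    loss_init = ((u_init - u0(x_init)) ** 2).sum()

    u_bnd = u_net(torch.cat([t_bnd, x_bnd], dim=1))
    loss_bnd = ((u_bnd - b(t_bnd, x_bnd)) ** 2).sum()
    return loss_pde + loss_init + loss_bnd
\end{verbatim}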
Focusing the training on a fixed set of samples can be problematic: fixing a set of random samples might be unreliable; a reliable cover of the domain will likely only be reached through tight meshing, which scales badly. \citet{Sirignano} propose to use SGD on the continuous data space. They employ the discrete dynamic in \eqref{Eq:SGD_discrete_time}. Naturally, we would like to follow \citet{Sirignano} and employ the SGP dynamic on the continuous index set.
To train the PINNs with SGP, we again choose the reflected Brownian motion as an index process, which we discretize with the Euler--Maruyama scheme in Algorithm~\ref{alg:RBM}. In addition, we employ mini-batching to reduce the variance in the estimator: We sample $M \in \mathbb{N}$ independent index processes $(V_t^{(1)})_{t \geq 0},\ldots, (V_t^{(M)})_{t \geq 0}$ and then employ the dynamical system
$$
\mathrm{d}\theta_t = - \frac{1}{M}\sum_{m=1}^M\nabla_\theta f(\theta_t, V_t^{(m)})\mathrm{d} t.
$$
Hence, rather than optimizing with respect to a single data set, we optimize with respect to $M$ different data sets in each iteration. While we only briefly mention the mini-batching throughout our analysis, one can easily see that it is fully contained in our framework.
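For illustration, a single explicit Euler step of this mini-batched dynamic could look as follows (the names are ours; in practice a different time integrator or optimizer may be used, as described below):
\begin{verbatim}
import numpy as np

def sgp_minibatch_step(theta, grad_f, v_samples, h):
    """One explicit Euler step of the mini-batched gradient flow:
    theta <- theta - h * (1/M) * sum_m grad_f(theta, v_m)."""
    g = np.mean([grad_f(theta, v) for v in v_samples], axis=0)
    return theta - h * g
\end{verbatim}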
In preliminary experiments, we noticed that the Brownian motion is not very effective for the sampling on the boundary, possibly due to its localizing effect. Hence, we obtain training data on the boundary by sampling uniformly, which we consider justified since a mesh on the boundary scales more slowly than a mesh in the interior and since the boundary behavior of the considered PDE is rather predictable.
\subsubsection*{PDE and results} We now describe the partial differential equation that we aim to solve with our PINN model. After introducing the PDE, we outline the PINN architecture and present our estimation results.
We train the networks on Google Colab Pro using GPUs (often T4 and P100, sometimes K80). We are certain that a more efficient PDE solution could be obtained by classical methods, e.g., the finite element method. We do not compare the deep learning methods with classical methods, as we are mainly interested in SGP and SGD in non-convex continuous-data settings. Other methods that could approximate the PDE solution are not our focus.
The PDE we study is a transport equation, which is a linear, first-order, time-dependent model. One of the main advantages of studying this particular model is that we know an analytical solution, which allows us to compute a precise test error.
\begin{example}[1D Transport equation] We solve the one-dimensional transport equation on the space $[0, 1]$ with periodic boundary condition:
\begin{equation}\label{eq:transport}
\left\{ \begin{array}{l}
u_t + u_x = 0, \ \ t\in[0, \infty),\ \ x\in [0, 1] \\
u(0, x) = \sin(2\pi x),\\
u(t,0) = u(t, 1).
\end{array} \right.
\end{equation}
\end{example}
The neural network approximation of this PDE has already been studied by \citet{nangs}; our experiments partially use the code associated with that work.
The network architecture is a three-layer deep neural network with 128 neurons per layer and a Rectified Linear Unit (ReLU) activation function. While theoretically the solution exists globally in time, we restrict $t$ to a compact domain and, w.l.o.g., assume $t\in[0, 1]$.
From the interior of the time-space domain, i.e.\ $(0, 1)\times(0, 1)$, we use Algorithm~\ref{alg:RBM} with $\sigma=0.5$ to sample a training set of size $3 \cdot 10^4$ for SGPC and SGPD, and we uniformly sample $600$ points for the training set of SGD. In addition, as part of the training set for all three methods, we uniformly sample $20$ and $60$ points for the initial condition and the periodic boundary condition, respectively.
The learning rate for SGD and SGPC is $0.01$. The learning rate for SGPD is defined as
$$\eta(t) = \frac{0.01}{\log(t+2)^{0.3}},$$
which is chosen such that the associated $\mu := 1/ \eta$ satisfies Assumption \ref{asmu}.
For all three methods, we use Adam \citep[see][]{Adam} as the optimizer to speed up the convergence; we use an $L^2$ regularizer with weight $0.1$ to avoid overfitting.
Each model is trained over $600$ iterations with batch size $50$. The training process for SGPC and SGPD contains only one epoch, while we train $50$ epochs in the SGD case. We evaluate the models by testing on a uniformly sampled test set of size $2 \cdot 10^3$ and compare the predicted values with the theoretical solution
$$u(t, x) = \sin(2\pi (x-t)).$$
We obtain the losses, the predicted solutions, and the test errors by averaging over $30$ random experiments, i.e.\ $30$ independent runs of SGD, SGPC, and SGPD, respectively. We give the results in Figures \ref{fig:transport_loss}, \ref{fig:transport_sol}, and \ref{fig:transport_error}. Note that the timings are very similar for each of the algorithms; the overhead of SGPC and SGPD having to first sample reflected Brownian motions is negligible.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\textwidth]{1Dtransport/loss.png}
\includegraphics[width=0.4\textwidth]{1Dtransport/log_loss.png}
\caption{The plots of the loss vs iteration and its log scale for SGD, SGPC, and SGPD. The losses are obtained by averaging over $30$ random experiments.}\label{fig:transport_loss}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{1Dtransport/t1.png}
\includegraphics[width=0.3\textwidth]{1Dtransport/t5.png}
\includegraphics[width=0.3\textwidth]{1Dtransport/t9.png}
\caption{The plots of the solutions at time $t=0.1,0.5,0.9$. We evaluate the models at $30$ uniformly sampled points. For each method, the predicted values are taken by averaging over the predicted values from the best models (the model that achieves the lowest training loss within the 600 iteration steps) in $30$ random experiments. The black curve is the theoretical solution.}\label{fig:transport_sol}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.3\textwidth]{1Dtransport/error_t1.png}
\includegraphics[width=0.3\textwidth]{1Dtransport/error_t5.png}
\includegraphics[width=0.3\textwidth]{1Dtransport/error_t9.png}
\caption{The plots of the test error at time $t=0.1,0.5,0.9$. We evaluate the models at $2000$ uniformly sampled points. For each method, the predicted values are taken by averaging over the predicted values from the best models (the model that achieves the lowest training loss within the 600 iteration steps) in $30$ random experiments. At each point $x$, the error is calculated by taking the absolute value of the difference between the predicted value and the true solution.}\label{fig:transport_error}
\end{figure}
From Figure \ref{fig:transport_loss}, we notice that while SGD and SGPC behave similarly, SGPD converges faster. Here, Assumption \ref{asmu} provides a way of designing a non-constant learning rate in practice. On the test set, the mean squared errors for SGD, SGPC, and SGPD are $4.5\cdot10^{-4}$, $3.5\cdot10^{-4}$, and $2.8\cdot10^{-4}$, respectively. These test errors refer to the averaged model output of the 30 models from independent experiments. Combined with Figure \ref{fig:transport_sol} and Figure \ref{fig:transport_error}, we observe that SGPC and SGPD generalize at least slightly better on the test set. This improved generalization might be due to the additional training data generated by the Brownian motion, as compared to the fixed training set used in PINNs. The combination with the decreasing learning rate in SGPD appears to be especially effective.
\section{Conclusions and outlook} \label{Sec_conclusions}
In this work we have proposed and analyzed a continuous-time stochastic gradient descent method for optimization with respect to continuous data. Our framework is very flexible: it allows for a whole range of random sampling patterns on the continuous data space, which is particularly useful when the data is streamed or simulated. Our analysis shows ergodicity of the dynamical system under convexity assumptions -- converging to a stationary measure when the learning rate is constant and to the minimizer when the learning rate decreases. In experiments we see the suitability of the method and the effect of different sampling patterns on its implicit regularization.
We end this work by briefly listing some interesting problems for future research in this area.
First, we would like to learn how the SGP sampling patterns perform in large-scale (adversarially) robust machine learning and in other applications we have mentioned but not studied here. Moreover, from both a practical and an analytical perspective, it would be interesting to also consider non-compact index spaces $S$. Such spaces appear especially in robust optimal control and variational Bayes. Finally, we consider the following generalization of the optimization problem \eqref{Eq:OptProb} to be of high interest:
\begin{equation*}
\min_{\theta \in X} \int_S f(\theta, y) \Pi(\mathrm{d}y|\theta),
\end{equation*}
where $\Pi$ is now a Markov kernel from $X$ to $S$. Hence, in this case the probability distribution and the sampling pattern itself depend on the parameter $\theta$. Optimization problems of this form appear in the optimal control of random systems (e.g., \cite{Deqing}) and empirical Bayes (e.g., \cite{Casella}) but also in reinforcement learning (e.g., \cite{Sutton}).
\acks{JL and CBS acknowledge support from the EPSRC through grant EP/S026045/1 “PET++: Improving Localisation, Diagnosis and Quantification in Clinical and Medical PET Imaging with Randomised Optimisation”.}
\vskip 0.2in
\section{Introduction}
\label{intro}
Network analysis consists of numerous tasks including community detection~\cite{fortunato2010community}, role discovery~\cite{rossi2015role}, link prediction~\cite{liben2007link}, etc. Since the relations between nodes violate the \textit{i.i.d.}\ assumption, it is non-trivial to apply traditional data mining techniques to networks directly. Network embedding (NE) fills this gap by mapping the nodes of a network into a low-dimensional space according to their structural information in the network. It has been reported that using embedded node representations can achieve promising performance on many network analysis tasks~\cite{perozzi2014deepwalk,grover2016node2vec,cao2015grarep,ribeiro2017struc2vec}.
Previous NE techniques mainly relied on eigendecomposition~\cite{shaw2009structure,tenenbaum2000global}, but the high computational complexity of eigendecomposition makes it difficult to apply in real-world networks. With the fast development of neural network techniques, unsupervised embedding algorithms have been widely used in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors in the learned embedding space, e.g., word2vec~\cite{mikolov2013efficient,mikolov2013distributed} and GloVe~\cite{pennington2014glove}. By drawing an analogy between random walks on networks and word sequences in text, DeepWalk~\cite{perozzi2014deepwalk} learns node representations based on random walks using the same mechanism of word2vec. Afterwards, a sequence of studies have been conducted to improve DeepWalk either by extending the definition of neighborhood to higher-order proximity~\cite{cao2015grarep,tang2015line,grover2016node2vec} or incorporating more information for node representations such as attributes~\cite{li2017attributed,wang2017attributed} and heterogeneity~\cite{chang2015heterogeneous,tang2015pte}.
\begin{figure}
\centering
\includegraphics[width=3.0in]{graph.pdf}
\caption{An example of ten nodes belonging to (1) three groups (different colors indicate different groups) based on global structural information, i.e., regular equivalence, and (2) two groups (shown by the dashed ellipses) based on local structural information, i.e., communities. For example, nodes 0, 1, 4, 5 and 8 belong to the same group, Community 1, from the local structural perspective because they have more internal connections. Nodes 0 and 2 are far from each other, but they are in the same group from the global structural perspective.}
\label{fig:exp}
\end{figure}
Although a variety of NE methods have been proposed, two major limitations exist in previous NE studies: (1) \textbf{Structure preservation}. Previous studies applied random walks to learn representations. However, random walk based embedding strategies can only capture local structural information, i.e., first-order and higher-order proximity within the neighborhood of the target node~\cite{lyu2017enhancing}, and fail to capture global structural information, e.g., structural or regular equivalence~\cite{wasserman1994social}. An example of global and local structural information is shown in Fig.~\ref{fig:exp}, and empirical evidence based on this example illustrating this limitation will be presented in Section~\ref{case}. (2) \textbf{Uncertainty modeling}. Previous methods represent a node as a point vector in the learned embedding space. However, real-world networks may be noisy and imbalanced. Point vector representations are deterministic~\cite{dos2016multilabel} and are not capable of modeling the uncertainties of node representations.
There are only a few studies in the literature that try to address these limitations. For instance, \textit{struc2vec}~\cite{ribeiro2017struc2vec} builds a hierarchy to measure similarity at different scales, and constructs a multilayer graph to encode the structural similarities. \textit{SNS}~\cite{lyu2017enhancing} discovers graphlets as a pre-processing step to obtain structurally similar nodes. However, both studies aim only to solve the problem of \textbf{structure preservation}, and only to some extent. Thus the limitation of \textbf{uncertainty modeling} remains a challenge. \cite{dos2016multilabel} and \cite{bojchevski2017deep} put effort into improving classification tasks by embedding nodes into Gaussian distributions, but both methods only capture the neighborhood information based on random walk techniques. Therefore, the problem of \textbf{structure preservation} has not been solved in these studies.
In this paper, we propose \textit{struc2gauss}, a new structure preserving network embedding framework. \textit{struc2gauss} learns node representations in the space of Gaussian distributions and performs NE based on global structural information so that it can address both limitations simultaneously. On the one hand, \textit{struc2gauss} generates node context based on structural similarity measures to learn node representations so that global structural information can be taken into consideration. On the other hand, \textit{struc2gauss} learns node representations via Gaussian embedding, and each node is represented as a Gaussian distribution where the mean indicates the position of the node in the embedding space and the covariance represents its uncertainty. Furthermore, we analyze and compare three different structural similarity measures for networks, i.e., RoleSim, MatchSim and SimRank, and two different energy functions for Gaussian embedding to calculate the closeness of two embedded Gaussian distributions, i.e., expected likelihood and KL divergence.
We summarize the contributions of this paper as follows:
\begin{itemize}
\item We propose a flexible structure preserving network embedding framework, \textit{struc2gauss}, which learns node representations in the space of Gaussian distributions based on global structural information.
\item We investigate the influence of different energy functions and different structural similarity measures on NE to preserve global structural information of networks.
\item We conduct extensive experiments which demonstrate the effectiveness of \textit{struc2gauss} in capturing the global structural information of networks and modeling the uncertainty of learned node representations.
\end{itemize}
The rest of the paper is organized as follows. Section~\ref{related} provides an overview of the related work. We present the problem statement in Section~\ref{notations}. Section~\ref{s2g} explains the technical details of \textit{struc2gauss}. In Section \ref{exp} we then discuss our experimental study. Finally, in Section \ref{conc} we draw conclusions and outline directions for future work.
\section{Related Work}
\label{related}
\subsection{Network Embedding}
Network embedding (NE) fills the gap by mapping nodes in a network into a low-dimensional space according to their structural information in the network. The learned node representations can boost the performance in many network analysis tasks, e.g., community detection and link prediction. Previous methods mainly focused on matrix factorization and eigendecomposition~\cite{shaw2009structure,tenenbaum2000global} to reduce the dimension of network data.
With increasing attention attracted by neural network research, unsupervised neural network techniques have opened up a new world for embedding. \textit{word2vec} as well as Skip-Gram and CBOW~\cite{mikolov2013efficient,mikolov2013distributed} learn low-rank representations of words in text based on word context and show promising results of different NLP tasks. Based on \textit{word2vec}, DeepWalk~\cite{perozzi2014deepwalk} first introduces such embedding mechanism to networks by treating nodes as words and random walks as sentences. Afterwards, a sequence of studies have been conducted to improve DeepWalk either by extending the definition of neighborhood to higher-order proximity~\cite{cao2015grarep,tang2015line,grover2016node2vec} or incorporating more information for node representations such as attributes~\cite{li2017attributed,wang2017attributed} and heterogeneity~\cite{chang2015heterogeneous,tang2015pte}. We refer the reader to~\cite{hamilton2017repre} for more details.
However, almost all these state-of-the-art methods only concern the local structural information represented by random walks and fail to capture global structural information. SNS~\cite{lyu2017enhancing} and \textit{struc2vec} are two exceptions which take global structural information into consideration. SNS uses graphlet information for structural similarity calculation as a pre-processing step, and \textit{struc2vec} applies dynamic time warping to measure the similarity between two nodes' degree sequences and builds a new multilayer graph based on this similarity. Then a mechanism similar to that of DeepWalk is used to learn node representations.
\subsection{Structural Similarity}
\label{strucsim}
Structure-based network analysis tasks can be categorized into two types: structural similarity calculation and network clustering.
Calculating structural similarities between nodes is a hot topic in recent years and different methods have been proposed. SimRank~\cite{jeh2002simrank} is one of the most representative notions to calculate structural similarity. It implements a recursive definition of node similarity based on the assumption that two objects are similar if they relate to similar objects. SimRank++~\cite{antonellis2008simrank++} adds an
evidence weight which partially compensates for the neighbor matching cardinality problem. P-Rank~\cite{zhao2009p} extends SimRank by jointly encoding both in- and out-link relationships into structural similarity computation. MatchSim~\cite{lin2009matchsim} uses maximal matching of neighbors to calculate the structural similarity. RoleSim~\cite{jin2011axiomatic} is the only similarity measure which can satisfy the automorphic equivalence properties.
\begin{comment}
\begin{table*}
\centering
\caption{Comparison between global and local structural information.}
\label{tb:diff}
\begin{tabular}{|l|l|l|}
\hline
& Global structural information & Local structural information \\ \hline
Descriptions & \begin{tabular}[c]{@{}l@{}}Two nodes are structurally similar if they\\ (1) belong to the same role;\\ (2) are structurally/automorphically/regularly equivalent.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Two nodes are structurally similar if they\\ (1) directly connect to each other;\\ (2) share common neighbors.\end{tabular} \\ \hline
Applications & Role discovery~\cite{rossi2015role} & Community Detection~\cite{fortunato2010community} \\ \hline
\end{tabular}
\end{table*}
\end{comment}
Network clustering can be based on either global or local structural information. Graph clustering based on global structural information is the problem of role discovery~\cite{rossi2015role}. In social science research, roles are represented as concepts of equivalence~\cite{wasserman1994social}. Graph-based methods and feature-based methods have been proposed for this task. Graph-based methods take nodes and edges as input and directly partition nodes into groups based on their structural patterns. For example, the Mixed Membership Stochastic Blockmodel~\cite{airoldi2008mixed} infers the role distribution of each node using a Bayesian generative model. Feature-based methods first transform the original network into feature vectors and then use clustering methods to group nodes. For example, RolX~\cite{henderson2012rolx} employs ReFeX~\cite{henderson2011s} to extract features of networks and then uses non-negative matrix factorization to cluster nodes. Local structural information based clustering corresponds to the problem of community detection~\cite{fortunato2010community}. A community is a group of nodes that interact with each other more frequently than with those outside the group. Thus, it captures only local connections between nodes.
\section{Problem Statement}
\label{notations}
We illustrated local and global structural information in Section~\ref{intro} using the example in Fig.~\ref{fig:exp}. In this study, we only consider the global structural information, so without mentioning it explicitly, structural information indicates the global one. We formally define the problem of structure preserving network embedding.
\begin{mydef}
\textbf{Structure Preserving Network Embedding}. Given a network $G = (V, E)$, where $V$ is a set of nodes and $E$ is a set of edges between the nodes, the problem of \textbf{Structure Preserving Network Embedding} aims to represent each node $v\in V$ as a Gaussian distribution with mean $\mu$ and covariance $\Sigma$ in a low-dimensional space $\mathbb{R}^d$, i.e., to learn a function $f: V\to \mathcal{N}(x;\mu,\Sigma)$, where $\mu\in \mathbb{R}^d$ is the mean, $\Sigma\in\mathbb{R}^{d\times d}$ is the covariance and $d\ll |V|$. In the space $\mathbb{R}^d$, the global structural information of nodes can be preserved and the uncertainty of node representations can be captured.
\end{mydef}
\section{\textit{struc2gauss}}
An overview of our proposed \textit{struc2gauss} framework is shown in Fig.~\ref{fig:frame}. Given a network, a similarity measure is employed to calculate the similarity matrix; then the training set, which consists of positive and negative pairs, is sampled based on the similarity matrix. Finally, Gaussian embedding techniques are applied to the training set and generate the embedded Gaussian distributions as the node representations.
\label{s2g}
\begin{figure*}
\centering
\includegraphics[width=4.7in]{GaussEmb.pdf}
\caption{Overview of the \textit{struc2gauss} framework.}
\label{fig:frame}
\end{figure*}
\subsection{Structural Similarity Calculation}
It has been theoretically proved that random walk sampling based NE methods are not capable of capturing structural equivalence~\cite{lyu2017enhancing}. Thus, to capture global structural information, we calculate the structural similarity as a pre-processing step similar to \cite{lyu2017enhancing,ribeiro2017struc2vec}.
In this paper, we use RoleSim~\cite{jin2011axiomatic} for the structural similarity since it satisfies all the requirements of the Axiomatic Role Similarity Properties for modeling equivalence~\cite{jin2011axiomatic}. RoleSim also generalizes the Jaccard coefficient and corresponds linearly to the maximal weighted matching. The RoleSim metric between two nodes $u$ and $v$ is defined as:
\begin{equation}
RoleSim(u,v)=(1-\beta)\max_{M(u,v)}\frac{\sum_{(x,y)\in M(u,v)}RoleSim(x,y)}{|N(u)|+|N(v)|-|M(u,v)|}+\beta
\end{equation}
where $N(u)$ and $N(v)$ are the neighbor sets of nodes $u$ and $v$, respectively, and $M(u,v)$ is a matching between
$N(u)$ and $N(v)$, i.e., $M(u, v)\subseteq N(u)\times N(v)$ such that no two pairs in $M(u,v)$ share a node. The parameter $\beta$ is a decay factor with $0 < \beta < 1$. RoleSim values can be computed iteratively and are guaranteed to converge. The procedure for computing RoleSim consists of three steps:
\begin{itemize}
\item Step 1: Initialize matrix of RoleSim scores $R^0$;
\item Step 2: Compute the $k^{th}$ iteration scores $R^k$ from the $(k-1)^{th}$ iteration's values $R^{k-1}$ using:
\begin{equation}
R^{k}(u,v)=(1-\beta)\max_{M(u,v)}\frac{\sum_{(x,y)\in M(u,v)}R^{k-1}(x,y)}{|N(u)|+|N(v)|-|M(u,v)|}+\beta
\end{equation}
\item Step 3: Repeat Step 2 until the $R$ values converge for each pair of nodes.
\end{itemize}
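As an illustration, this iteration can be implemented compactly as follows; we use SciPy's assignment solver for the maximal weighted matching, and the all-ones initialization and fixed iteration count are simplifications of the scheme above:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def rolesim(adj, beta=0.15, n_iter=10):
    """Iterative RoleSim; adj is a list of neighbor index lists."""
    n = len(adj)
    R = np.ones((n, n))                      # Step 1: initialize R^0
    for _ in range(n_iter):                  # Steps 2-3: iterate
        R_new = np.empty_like(R)
        for u in range(n):
            for v in range(n):
                Nu, Nv = adj[u], adj[v]
                if len(Nu) == 0 or len(Nv) == 0:
                    R_new[u, v] = beta       # isolated-node convention
                    continue
                w = R[np.ix_(Nu, Nv)]        # neighbor-pair scores
                rows, cols = linear_sum_assignment(w, maximize=True)
                match = w[rows, cols].sum()  # maximal weighted matching
                denom = len(Nu) + len(Nv) - len(rows)
                R_new[u, v] = (1.0 - beta) * match / denom + beta
        R = R_new
    return R
\end{verbatim}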
Note that other ways to capture global structural information will be discussed in Section~\ref{dis} and other structural similarity methods will be compared in Section~\ref{sim} empirically.
\subsection{Training Set Sampling}
To learn node representations using Gaussian embedding, we have to sample a training set based on the similarity matrix. For a node $v$, we rank its similarity values towards the other nodes and then select the top-$k$ most similar nodes $u_i,i=1,...,k$ as its positive set $\Gamma_{+}=\{(v,u_i)|i=1,...,k\}$. For the negative set, we randomly select the same number of nodes $\{u'_i,i=1,...,k\}$, following~\cite{vilnis2014word}, i.e., $\Gamma_{-}=\{(v,u'_i)|i=1,...,k\}$. Therefore, $k$ is a parameter indicating the \textit{number of positive/negative nodes per node}. We generate $r$ positive and negative sets for each node, where $r$ is a parameter indicating the \textit{number of samples per node}.
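A minimal sketch of this sampling step, assuming the similarity matrix $S$ is dense (the function name is ours):
\begin{verbatim}
import numpy as np

def sample_training_pairs(S, k, seed=None):
    """One positive and one negative set per node from similarity S."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    pos, neg = [], []
    for v in range(n):
        s = S[v].copy()
        s[v] = -np.inf                        # exclude the node itself
        top_k = np.argpartition(-s, k)[:k]    # k most similar nodes
        pos += [(v, int(u)) for u in top_k]
        rand_k = rng.choice(n, size=k, replace=False)
        neg += [(v, int(u)) for u in rand_k]
    return pos, neg
\end{verbatim}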
\subsection{Gaussian Embedding}
\label{gaussemb}
\subsubsection{Overview}
\label{over}
Recently, language modeling techniques such as \textit{word2vec} have been extensively used to learn word representations, and almost all NE studies are based on these word embedding techniques. However, these NE studies map each entity to a fixed point vector in a low-dimensional space so that the uncertainties of the learned embeddings are ignored. Gaussian embedding aims to solve this problem by learning density-based distributed embeddings in the space of Gaussian distributions~\cite{vilnis2014word}. Gaussian embedding has been utilized in different graph mining tasks including triplet classification on knowledge graphs~\cite{he2015learning}, multi-label classification on heterogeneous graphs~\cite{dos2016multilabel}, and link prediction and node classification on attributed graphs~\cite{bojchevski2017deep}.
Gaussian embedding trains with a ranking-based loss based on the ranks of positive and negative samples. Following~\cite{vilnis2014word}, we choose the max-margin ranking objective which can push scores of positive pairs above negatives by a margin defined as:
\begin{equation}
\mathcal{L}=\sum_{(v,u)\in \Gamma_{+}}\sum_{(v',u')\in \Gamma_{-}}\max(0, m-\mathcal{E}(v, u)+\mathcal{E}(v', u'))
\end{equation}
where $\Gamma_{+}$ and $\Gamma_{-}$ are the positive and negative pairs, respectively. $\mathcal{E}(\cdot,\cdot)$ is the energy function, $v$ and $u$ are the learned Gaussian distributions for two nodes and $m$ is the margin separating positive and negative pairs. In this paper, we present two different energy functions to measure the similarity of two distributions for node representation learning.
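A direct transcription of this objective reads as follows (with \texttt{energy} being either of the two energy functions defined below; the function name is ours):
\begin{verbatim}
def ranking_loss(energy, pos_pairs, neg_pairs, m=1.0):
    """Max-margin loss pushing positive-pair energies above
    negative-pair energies by at least the margin m."""
    return sum(max(0.0, m - energy(zv, zu) + energy(zvp, zup))
               for (zv, zu) in pos_pairs
               for (zvp, zup) in neg_pairs)
\end{verbatim}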
\subsubsection{Expected Likelihood based Energy}
Although the dot product between the means could be used to measure the similarity of two distributions, it does not incorporate the covariances. Thus, we use the inner product between the distributions themselves to measure the similarity. Formally, the inner product between two Gaussian distributions $z_i$ and $z_j$ (the learned Gaussian embeddings for nodes $i$ and $j$, respectively), a.k.a.\ the expected likelihood, is defined as:
\begin{align}
\label{eq:el}
E(z_i,z_j)&=\int_{x\in \mathbb{R}^d}\mathcal{N}(x;\mu_i,\Sigma_i)\mathcal{N}(x;\mu_j,\Sigma_j)dx=\mathcal{N}(0;\mu_i-\mu_j,\Sigma_i+\Sigma_j).
\end{align}
For simplicity in computation and comparison, we use the logarithm of Eq.~(\ref{eq:el}) as the final energy function:
\begin{align}
\label{eq:logel}
&\mathcal{E}_{EL}(z_i,z_j)=\log E(z_i,z_j)=\log \mathcal{N}(0;\mu_i-\mu_j,\Sigma_i+\Sigma_j)\\\nonumber
=&-\frac{1}{2}\Big\{(\mu_i-\mu_j)^T(\Sigma_i+\Sigma_j)^{-1}(\mu_i-\mu_j)+\log\det(\Sigma_i+\Sigma_j)+d\log(2\pi)\Big\}
\end{align}
where $d$ is the number of dimensions. The gradient of this energy function with respect to the means $\mu$ and covariances $\Sigma$ can be calculated in a closed form as:
\begin{align}
\label{el}
\frac{\partial\mathcal{E}_{EL}(z_i,z_j)}{\partial\mu_i}&=-\frac{\partial\mathcal{E}_{EL}(z_i,z_j)}{\partial\mu_j}=-\Delta_{ij}\\\nonumber
\frac{\partial\mathcal{E}_{EL}(z_i,z_j)}{\partial\Sigma_i}&=\frac{\partial\mathcal{E}_{EL}(z_i,z_j)}{\partial\Sigma_j}=\frac{1}{2}(\Delta_{ij}\Delta_{ij}^T-(\Sigma_i+\Sigma_j)^{-1})
\end{align}
where $\Delta_{ij}=(\Sigma_i+\Sigma_j)^{-1}(\mu_i-\mu_j)$~\cite{he2015learning,vilnis2014word}.
Note that expected likelihood is a symmetric similarity measure, i.e., $\mathcal{E}_{EL}(z_i,z_j)=\mathcal{E}_{EL}(z_j,z_i)$.
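For diagonal covariances, the energy above reduces to a few vectorized operations, as in the following sketch (names are ours; \texttt{sig\_i} and \texttt{sig\_j} hold the diagonals of $\Sigma_i$ and $\Sigma_j$):
\begin{verbatim}
import numpy as np

def energy_el(mu_i, sig_i, mu_j, sig_j):
    """Log expected likelihood of two diagonal Gaussians."""
    d = mu_i.shape[0]
    s = sig_i + sig_j                 # diagonal of Sigma_i + Sigma_j
    diff = mu_i - mu_j
    return -0.5 * (np.sum(diff ** 2 / s)
                   + np.sum(np.log(s))
                   + d * np.log(2.0 * np.pi))
\end{verbatim}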
\subsubsection{KL Divergence based Energy}
KL divergence is another straightforward way to measure the closeness of two distributions, so we also utilize an energy function $\mathcal{E}_{KL}(z_i,z_j)$ based on the (negative) KL divergence between the Gaussian distributions $z_i$ and $z_j$ (the learned Gaussian embeddings for nodes $i$ and $j$, respectively):
\begin{align}
&\mathcal{E}_{KL}(z_i,z_j)=-D_{KL}(z_j\,\|\,z_i)=\int_{x\in \mathbb{R}^d}\mathcal{N}(x;\mu_j,\Sigma_j)\log\frac{\mathcal{N}(x;\mu_i,\Sigma_i)}{\mathcal{N}(x;\mu_j,\Sigma_j)}dx\\\nonumber
=&-\frac{1}{2}\Big\{\mathrm{tr}(\Sigma_i^{-1}\Sigma_j)+(\mu_i-\mu_j)^T\Sigma_i^{-1}(\mu_i-\mu_j)-\log\frac{\det(\Sigma_j)}{\det(\Sigma_i)}-d\Big\}
\end{align}
where $d$ is the number of dimensions. Similarly, we can compute the gradients of this energy function with respect to the means $\mu$ and covariances $\Sigma$:
\begin{align}
\label{kl}
\frac{\partial\mathcal{E}_{KL}(z_i,z_j)}{\partial\mu_i}&=-\frac{\partial\mathcal{E}_{KL}(z_i,z_j)}{\partial\mu_j}=-\Delta_{ij}^{\prime}\\\nonumber
\frac{\partial\mathcal{E}_{KL}(z_i,z_j)}{\partial\Sigma_i}&=\frac{1}{2}(\Sigma_i^{-1}\Sigma_j\Sigma_i^{-1}+\Delta_{ij}^{\prime}\Delta_{ij}^{\prime T}-\Sigma_i^{-1})\\\nonumber
\frac{\partial\mathcal{E}_{KL}(z_i,z_j)}{\partial\Sigma_j}&=\frac{1}{2}(\Sigma_j^{-1}-\Sigma_i^{-1})
\end{align}
where $\Delta_{ij}^{\prime}=\Sigma_i^{-1}(\mu_i-\mu_j)$.
Note that the KL divergence based energy is asymmetric, but we can easily extend it to a symmetric similarity measure as follows:
\begin{equation}
\mathcal{E}(z_i,z_j)=-\frac{1}{2}\big(D_{KL}(z_i\,\|\,z_j)+D_{KL}(z_j\,\|\,z_i)\big).
\end{equation}
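For diagonal covariances, this energy and its symmetrized variant can be computed as in the following sketch (names are ours; \texttt{sig\_i} and \texttt{sig\_j} again hold the covariance diagonals):
\begin{verbatim}
import numpy as np

def energy_kl(mu_i, sig_i, mu_j, sig_j):
    """Negative KL divergence -D_KL(z_j || z_i) for diagonal Gaussians."""
    d = mu_i.shape[0]
    diff = mu_i - mu_j
    kl = 0.5 * (np.sum(sig_j / sig_i) + np.sum(diff ** 2 / sig_i)
                - np.sum(np.log(sig_j)) + np.sum(np.log(sig_i)) - d)
    return -kl

def energy_kl_sym(mu_i, sig_i, mu_j, sig_j):
    """Symmetrized variant of the KL-based energy."""
    return 0.5 * (energy_kl(mu_i, sig_i, mu_j, sig_j)
                  + energy_kl(mu_j, sig_j, mu_i, sig_i))
\end{verbatim}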
\subsection{Learning}
To avoid overfitting, we regularize the means and covariances to learn the embedding. Due to the different geometric characteristics, two different hard constraint strategies have been used for means and covariances, respectively. In particular, we have
\begin{equation}
\label{mean}
\|\mu_i\|\leq C,~\forall i
\end{equation}
\begin{equation}
\label{covar}
c_{min}I\prec \Sigma_i \prec c_{max}I,~\forall i.
\end{equation}
The constraint on means guarantees them to be sufficiently small and constraint on covariances ensures that they are positive definite and of appropriate size. For example, $\Sigma_{ii}\gets \max(c_{min},\min(c_{max},\Sigma_{ii}))$ can be used to regularize diagonal covariances.
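In the diagonal case, these constraints amount to a simple projection after each update, for example as follows (the constants are illustrative):
\begin{verbatim}
import numpy as np

def regularize(mu, sig, C=1.0, c_min=0.05, c_max=5.0):
    """Project the mean onto the ball of radius C and clip the
    diagonal covariance entries to [c_min, c_max]."""
    norm = np.linalg.norm(mu)
    if norm > C:
        mu = mu * (C / norm)
    sig = np.clip(sig, c_min, c_max)
    return mu, sig
\end{verbatim}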
We use AdaGrad~\cite{duchi2011adaptive} to optimize the parameters. The learning procedure is described in Algorithm~\ref{alg}. Initialization phase is from line 1 to 4, context generation is shown in line 7, and Gaussian embeddings are learned from line 8 to 14.
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\begin{algorithm}
\caption{The Learning Algorithm of \textit{struc2gauss}}
\begin{algorithmic}[1]
\label{alg}
\REQUIRE An energy function $\mathcal{E}(z_i,z_j)$, a graph $G=(V,E)$, embedding dimension $d$, constraint values $c_{max}$ and $c_{min}$ for covariance, learning rate $\alpha$, and maximum epochs $n$.
\ENSURE Gaussian embeddings (mean vector $\mu$ and covariance matrix $\Sigma$) for nodes $v\in V$
\FORALL{$v\in V$}
\STATE Initialize mean $\mu$ for $v$
\STATE Initialize covariance $\Sigma$ for $v$
\STATE Regularize $\mu$ and $\Sigma$ with constraint in Eq.~(\ref{mean}) and (\ref{covar})
\ENDFOR
\WHILE{not reach the maximum epochs $n$}
\STATE Generate positive and negative sets $\Gamma_{+}$ and $\Gamma_{-}$ for each node
\IF{use expected likelihood based energy}
\STATE Update means and covariances based on Eq.~(\ref{el})
\ENDIF
\IF{use KL divergence based energy}
\STATE Update means and covariances based on Eq.~(\ref{kl})
\ENDIF
\STATE Regularize $\mu$ and $\Sigma$ with constraint in Eq.~(\ref{mean}) and (\ref{covar})
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\subsection{Computational Complexity}
The complexity of different components of \textit{struc2gauss} are analyzed as follows:
\begin{itemize}
\item[1] For structural similarity calculation using RoleSim, the computational complexity is $O(kn^2d)$, where $n$ is the number of nodes, $k$ is the number of iterations and $d$ is the average of $y\log y$ over all node-pair bipartite graphs in $G$~\cite{jin2011axiomatic}.
\item[2] To generate the training set based on similarity matrix, we need to sample from the most similar nodes for each node, i.e., to select $k$ largest numbers from an unsorted array. Using heap, the complexity is $O(n\log k)$.
\item[3] For Gaussian embedding, the operations include matrix addition, multiplication and inversion. In practice, as stated above, we only consider two types of covariance matrices, i.e., diagonal and spherical, so all these operations have the complexity of $O(n)$.
\end{itemize}
Overall, the similarity calculation component is the bottleneck of the framework. One possible and effective way to optimize this part is to set the similarity to 0 if two nodes have a large difference in degrees. The reason is: (1) we generate the context only based on the most similar nodes; and (2) two nodes are less likely to be structurally similar if their degrees are very different.
\subsection{Discussion}
\label{dis}
The proposed \textit{struc2gauss} is a flexible framework for node representations. As shown in Fig.~\ref{fig:frame}, different similarity measures can be incorporated into this framework and empirical studies will be presented in Section~\ref{sim}. Furthermore, other types of methods which model structural information can be utilized in \textit{struc2gauss} as well.
To illustrate the potential to incorporate different methods, we categorize different methods for capturing structural information into three types:
\begin{itemize}
\item \textbf{Similarity-based methods}. Similarity-based methods calculate pairwise similarity based on the structural information of a given network. Related work has been reviewed in Section~\ref{strucsim}.
\item \textbf{Ranking-based methods}. PageRank~\cite{page1999pagerank} and HITS~\cite{kleinberg1999authoritative} are the two most representative ranking-based methods that learn structural information. PageRank has been used for NE in~\cite{ma2017preserving}.
\item \textbf{Partition-based methods}. This type of methods, e.g., role discovery, aims to partition nodes into disjoint or overlapping groups, e.g., REGE~\cite{borgatti1993two} and RolX~\cite{henderson2012rolx}.
\end{itemize}
In this paper, we focus on \textbf{similarity-based methods}. For \textbf{ranking-based methods}, we can use a fixed sliding window on the ranking list; given a node, the nodes within the window can then be viewed as its context. In fact, this mechanism is similar to DeepWalk. For \textbf{partition-based methods}, we can consider the nodes in the same group as the context for each other.
\section{Experiments}
\label{exp}
We evaluate \textit{struc2gauss} in different scenarios in order to understand its effectiveness in capturing structural information, its capability of modeling the uncertainties of embeddings, and its stability with respect to parameters. We also empirically study the influence of different similarity measures.
\subsection{Case Study: Visualization in 2-D space}
\label{case}
We use the toy example shown in Fig.~\ref{fig:exp} to demonstrate the effectiveness of \textit{struc2gauss} in capturing the global structural information and the failure of other state-of-the-art techniques in this task. The toy network consists of ten nodes that can be clustered in two ways: (1) based on global structural information they belong to three groups, i.e., $\{0,1,2,3\}$ (yellow color), $\{4,5,6,7\}$ (blue color) and $\{8,9\}$ (red color), and (2) based on local structural information they belong to two groups, i.e., $\{0,1,4,5,8\}$ and $\{2,3,6,7,9\}$. In this study, we only consider the global structural information. Note that from the perspective of role discovery, these three groups of nodes play the roles of \textit{periphery}, \textit{star} and \textit{bridge}, respectively.
Fig.~\ref{fig:toy} shows the node representations learned by the different methods. For parameters shared by all methods, we use the same default settings: representation dimension: 2, number of walks per node: 20, walk length: 80, skipgram window size: 5. For \textit{node2vec}, we set $p = 1$ and $q = 2$. For \textit{struc2gauss}, the number of samples per node is 20 and the number of positive/negative nodes per node is 5. It can be observed that DeepWalk, LINE and GraRep fail to capture the global structural information. However, DeepWalk is capable of capturing the local structural information, since nodes are separated into two parts corresponding to the two communities shown in Fig.~\ref{fig:exp}. It has been stated that \textit{node2vec} can capture structural equivalence, but the visualization shows that it still captures the local structural information, similarly to DeepWalk. \textit{struc2vec} can solve this problem to some extent. However, there is overlap between nodes 6 and 9. Our proposed \textit{struc2gauss} outperforms all other methods. Both diagonal and spherical covariances can separate nodes based on global structural information, and \textit{struc2gauss} with spherical covariances performs better than with diagonal covariances since it can recognize the \textit{star} and \textit{bridge} nodes better.
\begin{figure*}
\centering
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{DeepWalk.pdf}
\caption[]%
{{DeepWalk}}
\label{fig:a}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{LINE.pdf}
\caption[]%
{{LINE}}
\label{fig:b}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{grarrep.pdf}
\caption[]%
{{GraRep}}
\label{fig:c}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{node2vec.pdf}
\caption[]%
{{\textit{node2vec}}}
\label{fig:d}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{struc2vec.pdf}
\caption[]%
{{\textit{struc2vec}}}
\label{fig:e}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{struc2gaussd.pdf}
\caption[]%
{{\textit{struc2gauss} KL + diag}}
\label{fig:f}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{struc2gausss.pdf}
\caption[]%
{{\textit{struc2gauss} KL + spher}}
\label{fig:g}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{struc2gaussd-el.pdf}
\caption[]%
{{\textit{struc2gauss} EL + diag}}
\label{fig:h}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{struc2gausss-el.pdf}
\caption[]%
{{\textit{struc2gauss} EL + spher}}
\label{fig:i}
\end{subfigure}
\caption[]
{Latent representations in $\mathbb{R}^2$ learned by (a) DeepWalk, (b) LINE, (c) GraRep, (d) \textit{node2vec}, (e) \textit{struc2vec}, (f) \textit{struc2gauss} using KL divergence with diagonal covariance, (g) \textit{struc2gauss} using KL divergence with spherical covariance, (h) \textit{struc2gauss} using expected likelihood with diagonal covariance, and (i) \textit{struc2gauss} using expected likelihood with spherical covariance.}
\label{fig:toy}
\end{figure*}
\subsection{Node Clustering}
\label{cluster}
The most common network mining application based on global structural information is the problem of \textit{role discovery}, and role discovery is essentially a clustering task. Thus, we consider the node clustering task to illustrate the potential of the node representations learned by \textit{struc2gauss}. We use the latent representations learned by the different methods (for \textit{struc2gauss}, the means of the learned Gaussian distributions) as features and K-means as the clustering algorithm to cluster nodes.
\begin{table}
\small
\centering
\caption{A brief introduction to data sets.}
\label{tb:data}
\begin{tabular}{|l|l|c|c|c|}
\hline
Type & Dataset & \# nodes & \# edges & \# groups \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}with\\ labels\end{tabular}} & Brazilian-air & 131 & 1038 & 4 \\ \cline{2-5}
& European-air & 399 & 5995 & 4 \\ \cline{2-5}
& USA-air & 1190 & 13599 & 4 \\ \hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}without\\ labels\end{tabular}} & Arxiv GR-QC & 5242 & 28980 & 8 \\ \cline{2-5}
& Advogato & 6551 & 51332 & 11 \\ \cline{2-5}
& Hamsterster & 2426 & 16630 & 10 \\ \hline
\end{tabular}
\end{table}
\begin{table}
\small
\centering
\caption{NMI for node clustering in air-traffic networks using different NE methods. In \textit{struc2gauss}, EL and KL mean expected likelihood and KL divergence, respectively. D and S mean diagonal and spherical covariances, respectively. The highest value is in bold.}
\label{tb:air}
\begin{tabular}{|l|c|c|c|}
\hline
Method & Brazil & Europe & USA \\ \hline
DeepWalk & 0.1303 & 0.0458 & 0.0766 \\ \hline
LINE & 0.0684 & 0.0410 & 0.1088 \\ \hline
\textit{node2vec} & 0.0727 & 0.1722 & 0.0945 \\ \hline
GraRep & 0.2097 & 0.1986 & 0.1811 \\ \hline
\textit{struc2vec} & 0.3758 & 0.2729 & 0.2486 \\ \hline
\textit{struc2gauss}-EL-D & 0.5615 & 0.3234 & 0.3188 \\ \hline
\textit{struc2gauss}-EL-S & 0.3796 & 0.2774 & 0.2967 \\ \hline
\textit{struc2gauss}-KL-D & 0.5527 & 0.3145 & 0.3212 \\ \hline
\textit{struc2gauss}-KL-S & \textbf{0.5675} & \textbf{0.3280} & \textbf{0.3217} \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=2.8in]{goodness.pdf}
\caption{Goodness-of-fit of \textit{struc2vec} and \textit{struc2gauss} with different strategies and covariances on three real-world networks. Lower value means better performance.}
\label{fig:gof}
\end{figure}
\textbf{Datasets}. We use two types of network data sets: networks with and without ground-truth clustering labels. For data with labels, to compare with the state of the art, we use air-traffic networks from~\cite{ribeiro2017struc2vec}, where the networks are undirected, nodes are airports, edges indicate the existence of commercial flights and labels correspond to levels of activity. For data without labels, we select several real-world networks from different domains from Network Repository\footnote{\url{http://networkrepository.com/index.php}}. A brief introduction to these data sets is given in Table~\ref{tb:data}. Note that the numbers of groups for networks without labels are determined by MDL~\cite{henderson2012rolx}.
\textbf{Baselines}. We select several state-of-the-art NE algorithms as baselines, i.e., DeepWalk, LINE, GraRep, \textit{node2vec}, and \textit{struc2vec}. For our proposed \textit{struc2gauss}, we test both diagonal and spherical covariances. For these baselines, we use the same settings as in the literature: representation dimension: 128, number of walks per node: 20, walk length: 80, skip-gram window size: 10. For LINE, the order of the proximity is 2. For \textit{node2vec}, we set $p = 1$ and $q = 2$. For GraRep, the maximum matrix transition step is 3 and the number of positive/negative nodes per node is 120.
\textbf{Evaluation Metric}. To quantitatively evaluate clustering performance on labeled networks, we use \textit{Normalized Mutual Information (NMI)} as the evaluation metric. NMI is obtained by dividing the mutual information by the arithmetic average of the entropies of the obtained clustering $\mathcal{C}$ and the ground-truth clustering $\mathcal{D}$:
\begin{equation}
\label{equation:NMI}
\text{NMI}(\mathcal{C,D})=\frac{2*\mathcal{I(C,D)}}{\mathcal{H(C)+H(D)}},
\end{equation}
where the mutual information $\mathcal{I}(\mathcal{C},\mathcal{D})$ is defined as $\mathcal{\mathcal{I}}(\mathcal{C},\mathcal{D})=\mathcal{H}(\mathcal{C})-\mathcal{H(C|D)}$ and $\mathcal{H}(\cdot)$ is the entropy.
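For concreteness, a small self-contained sketch that computes NMI with the arithmetic-mean normalization of Eq.~(\ref{equation:NMI}) is given below; it only assumes two integer label arrays of equal length.
\begin{verbatim}
# Sketch: NMI with arithmetic-mean normalization, as in the equation above.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def mutual_information(c, d):
    n = len(c)
    mi = 0.0
    for ci, n_c in zip(*np.unique(c, return_counts=True)):
        for di, n_d in zip(*np.unique(d, return_counts=True)):
            n_cd = np.sum((c == ci) & (d == di))
            if n_cd > 0:
                mi += (n_cd / n) * np.log(n * n_cd / (n_c * n_d))
    return mi

def nmi(c, d):
    c, d = np.asarray(c), np.asarray(d)
    return 2.0 * mutual_information(c, d) / (entropy(c) + entropy(d))

print(nmi([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0: identical clusterings up to relabeling
\end{verbatim}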
For unlabeled networks, we use the normalized \textit{goodness-of-fit} as the evaluation metric. \textit{Goodness-of-fit indices} assume that the output of a role discovery method is an optimal model, in which nodes belonging to the same role are predicted to be perfectly structurally equivalent. In real-world networks, nodes belonging to the same role are only approximately structurally equivalent, and the essence of \textit{goodness-of-fit indices} is to measure how good these approximate structural equivalences are. If the optimal model holds, then all nodes belonging to the same role are exactly structurally equivalent. \textit{Goodness-of-fit} measures how well the representation of roles and the relations among these roles fit a given network, so this measure has been widely used in role discovery~\cite{wasserman1994social,pei2018dynmf}. To keep the evaluation metric in the range $[0,1]$, we normalize \textit{goodness-of-fit} by dividing by $r^2$, where $r$ is the number of groups/roles. For more details about \textit{goodness-of-fit indices}, please refer to~\cite{wasserman1994social}.
The NMI values for node clustering on networks with labels are shown in Table~\ref{tb:air} and the normalized \textit{goodness-of-fit} values for networks without labels are shown in Fig.~\ref{fig:gof}. From these results, some conclusions can be drawn:
\begin{itemize}
\item For both types of networks, with and without clustering labels, \textit{struc2gauss} outperforms all other methods on the respective evaluation metrics. This indicates the effectiveness of \textit{struc2gauss} in capturing global structural information.
\item Comparing \textit{struc2gauss} with diagonal and spherical covariances, it can be observed that spherical covariance can achieve better performance in node clustering. This finding is similar to the results of word embedding in~\cite{vilnis2014word}.
\item Among the baselines, \textit{struc2vec} can capture the structural information to some extent, since its performance is much better than that of DeepWalk and \textit{node2vec}, both of which fail to capture the global structural information needed for node clustering.
\end{itemize}
Note that among the four different combinations of strategies, \textit{struc2gauss} using KL divergence with spherical covariance performs best on all networks. In the following sections, we only test the combination of KL divergence and spherical covariance in \textit{struc2gauss} unless explicitly stated otherwise.
\subsection{Influence of Similarity Measures}
\label{sim}
To analyze the influence of different similarity measures on learning node representations, we compare two other measures of global structural similarity, i.e., SimRank~\cite{jeh2002simrank} and MatchSim~\cite{lin2009matchsim}, to RoleSim, which is used by default in our framework. The data sets and evaluation metrics used in this experiment are the same as in Section~\ref{cluster}.
\begin{table}
\small
\centering
\caption{NMI for node clustering in air-traffic networks of Brazil, Europe and USA using \textit{struc2gauss} with different similarity measures.}
\label{tb:sim}
\begin{tabular}{|l|c|c|c|}
\hline
& Brazil-airport & Europe-airport & USA-airport \\ \hline
SimRank & 0.1695 & 0.0524 & 0.0887 \\ \hline
MatchSim & 0.3534 & 0.2389 & 0.0913 \\ \hline
RoleSim & \textbf{0.5675} & \textbf{0.3280} & \textbf{0.3217} \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[width=2.8in]{similarity.pdf}
\caption{Goodness-of-fit of \textit{struc2gauss} with different similarity measures. Lower values are better.}
\label{fig:sim}
\end{figure}
The NMI values for networks with labels are shown in Table~\ref{tb:sim} and the \textit{goodness-of-fit} values are shown in Fig.~\ref{fig:sim}. We can come to the following conclusions:
\begin{itemize}
\item RoleSim outperforms the other two similarity measures on both types of networks, with and without clustering labels. This indicates that RoleSim can better capture the global structural information. The performance of MatchSim varies across networks and is similar to that of \textit{struc2vec}; thus, it can capture the global structural information only to some extent.
\item SimRank performs worse than the other similarity measures as well as \textit{struc2vec} (Table~\ref{tb:air}). Considering the basic assumption of SimRank that ``two objects are similar if they relate to similar objects'', it computes similarity via the relations between nodes, so its mechanism is similar to that of random-walk based methods, which have been shown to be incapable of capturing the global structural information~\cite{lyu2017enhancing}.
\end{itemize}
\subsection{Uncertainty Modeling}
We use stochastic blockmodels~\cite{karrer2011stochastic} to generate synthetic networks. Specifically, we generate a network with 200 nodes and 4 blocks, such that the original network can be clustered into these 4 blocks perfectly. Then we randomly add different numbers of edges to the network, from 100 to 1000 in steps of 100. In total, we have one original network and 10 evolved networks with different levels of noise. We learn node representations and the corresponding uncertainties, reflected by the covariances, using \textit{struc2gauss}. Since we use spherical and diagonal covariances in our experiments, we compute the traces of the covariance matrices to compare the uncertainties of different embeddings.
The comparison is shown in Fig.~\ref{fig:unc}, where it can be observed that as more noise is added to the network, the traces of the covariance matrices become larger. When there is little noise (fewer than 400 added edges), the differences between the original network and the evolved networks are not obvious, but as more noise is introduced (more than 400 added edges), the differences become significant. This demonstrates that our proposed \textit{struc2gauss} can capture the uncertainties of the learned node representations.
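A sketch of this noise-injection setup is given below; the stochastic blockmodel parameters are illustrative, and the embedding call is left as a placeholder for whichever \textit{struc2gauss} implementation is used.
\begin{verbatim}
# Sketch: generate an SBM network and add increasing amounts of random edge noise.
import random
import networkx as nx

sizes = [50, 50, 50, 50]                   # 200 nodes in 4 blocks, as in the text
probs = [[0.3 if i == j else 0.01 for j in range(4)] for i in range(4)]
base = nx.stochastic_block_model(sizes, probs, seed=0)

def add_noise(graph, n_extra):
    noisy = graph.copy()
    nodes = list(noisy.nodes())
    target = graph.number_of_edges() + n_extra
    while noisy.number_of_edges() < target:
        u, v = random.sample(nodes, 2)
        noisy.add_edge(u, v)               # duplicate edges are ignored by networkx
    return noisy

for n_extra in range(100, 1001, 100):
    noisy = add_noise(base, n_extra)
    # means, covs = struc2gauss(noisy)     # placeholder: learn embeddings here
    # trace = sum(cov.sum() for cov in covs) / len(covs)  # mean trace (diag. covs)
\end{verbatim}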
\subsection{Parameter Sensitivity}
We consider three major types of parameters in \textit{struc2gauss}, i.e., \textit{latent dimensions}, \textit{number of samples per node} and \textit{number of positive/negative nodes per node}. In order to evaluate how changes to these parameters affect performance, we conducted the same node clustering experiment on the labeled USA air-traffic network introduced in Section~\ref{cluster}.
In the interest of brevity, we fix two of the parameters and vary the third one. Specifically, the number of latent dimensions varies from 10 to 200, the number of samples varies from 5 to 15 and the number of positive/negative nodes varies from 40 to 190. The results of the parameter sensitivity study are shown in Fig.~\ref{fig:param}. It can be observed from Fig.~\ref{fig:param}~(a) and \ref{fig:param}~(b) that the trends are relatively stable, i.e., the performance is insensitive to changes in the representation dimension and the number of samples. The clustering performance improves with an increasing number of positive/negative nodes, as shown in Fig.~\ref{fig:param}~(c). Therefore, we can conclude that \textit{struc2gauss} is more stable than other methods. It has been reported that other methods, e.g., DeepWalk~\cite{perozzi2014deepwalk}, LINE~\cite{tang2015line} and \textit{node2vec}~\cite{grover2016node2vec}, are sensitive to many parameters: in general, more dimensions, more walks and more context achieve better performance. However, it is difficult to search for the best combination of parameters in practice, and doing so may also lead to overfitting.
Note that we observed the same trend in other networks so only results on USA-airport network are shown here.
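A compact sketch of this sensitivity sweep (here for the number of positive/negative nodes, with the other two parameters fixed) is shown below; the embedding and evaluation calls are placeholders for the actual pipeline described above.
\begin{verbatim}
# Sketch: sweep one hyper-parameter while keeping the other two fixed.
dim, n_samples = 128, 10                      # fixed parameters (illustrative values)

for n_pos_neg in range(40, 200, 30):          # 40, 70, ..., 190
    pass
    # means, covs = struc2gauss(graph, dim=dim, n_samples=n_samples,
    #                           n_pos_neg=n_pos_neg)   # placeholder call
    # score = cluster_and_score(means, labels)         # K-means + NMI as above
    # print(n_pos_neg, score)
\end{verbatim}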
\begin{figure}
\centering
\includegraphics[width=3.6in]{uncertainty.pdf}
\caption{Uncertainties of embeddings with different levels of noise.}
\label{fig:unc}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.8\textwidth}
\centering
\includegraphics[width=\textwidth]{dim.pdf}
\caption[]%
{{Representation dimensions vs. NMI.}}
\label{fig:dim}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.8\textwidth}
\centering
\includegraphics[width=\textwidth]{sample.pdf}
\caption[]%
{{Number of samples per node vs. NMI.}}
\label{fig:sam}
\end{subfigure}
\vskip\baselineskip
\begin{subfigure}[b]{0.8\textwidth}
\centering
\includegraphics[width=\textwidth]{walk.pdf}
\caption[]%
{{Number of positive/negative nodes per node vs. NMI.}}
\label{fig:walk}
\end{subfigure}
\caption[]
{Parameter Sensitivity Study.}
\label{fig:param}
\end{figure}
\begin{comment}
\subsection{Link Prediction}
To evaluate the effectiveness of \textit{DNGE} in modeling dynamic information, we conduct experiments on several real-world dynamic networks from different domains. As there are no ground-truth labels in real-world networks, we use link prediction task to test our method, following \cite{grover2016node2vec,wang2016structural}. Since our aim is to validate dynamic information modeling, we learn node emebddings on the first $T-1$ network snapshots and predict the links on the last snapshot. Similar to \cite{grover2016node2vec}, we regard link prediction as a classification task. We
randomly sample an equal number of node pairs from each snapshot which have no edge connecting them as the negative examples. Without loss of generality, learned node embeddings are used as the features and logistic regression is used as the classifier. We use AUC as the evaluation metric to compare the results.
\textbf{Baselines}. We compare three different types of methods for link prediction (LP) experiments: traditional LP methods, point embedding methods and Gaussian embedding methods. Traditional LP methods use heuristic scores to predict links given a pair of nodes. Following~\cite{grover2016node2vec}, we choose three widely used methods and their definitions are shown in Table~\ref{tb:lp}. Point embedding methods map nodes to deterministic vectors, and state-of-the-art approaches are utilized as baselines, i.e., DeepWalk~\cite{perozzi2014deepwalk}, LINE~\cite{tang2015line} and \textit{node2vec}~\cite{grover2016node2vec}.
For Gaussian embedding methods, we compare \textit{graph2gauss} and our proposed \textit{DNGE} using two dynamics modeling strategies, i.e., $DNGE_{Mean}$ and $DNGE_{Dist}$. For all embedding methods, the latent dimension of Enron, Message, Reality and Facebook are set to be 32, 64, 100 and 100, respectively. For DeepWalk and \textit{node2vec}, the number of walks is 10, walk length is 20 and window size is 10. For LINE, the order of the proximity is 2 and the negative samples is 5. For \textit{node2vec}, both $p$ and $q$ are set to be 1.
\begin{table}
\centering
\caption{Traditional link prediction methods and definitions where $N(u)$ and $N(v)$ denote the neighbor sets of node $u$ and $v$ respectively.}
\label{tb:lp}
\begin{tabular}{l|c}
\hline
method & definition \\ \hline
Jaccard Coefficient (JC) & $|N(u)\cap N(v)|/|N(u)\cup N(v)|$ \\ \hline
Adamic-Adar (AA) & $\sum_{t\in N(u)\cap N(v)}1/\log |N(t)|$ \\ \hline
Preferential Attachment (PA) & $|N(u)|\cdot|N(v)|$ \\ \hline
\end{tabular}
\end{table}
\begin{table*}[]
\centering
\caption{My caption}
\label{my-label}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{1}{|c|}{\multirow{2}{*}{Data}} & \multicolumn{3}{c|}{traditional LP} & \multicolumn{4}{c|}{point embedding} & \multicolumn{4}{c|}{Gaussian embedding} \\ \cline{2-12}
\multicolumn{1}{|c|}{} & JC & AA & PA & DeepWalk & LINE & node2vec & struc2vec & s2g+EL\_D & s2g+EL\_D & s2g+EL\_D & s2g+EL\_D \\ \hline
& & & & & & & & & & & \\ \hline
& & & & & & & & & & & \\ \hline
& & & & & & & & & & & \\ \hline
& & & & & & & & & & & \\ \hline
& & & & & & & & & & & \\ \hline
\end{tabular}
\end{table*}
\end{comment}
\section{Conclusions and Future Work}
\label{conc}
Two major limitations exist in previous NE studies: \textbf{structure preservation} and \textbf{uncertainty modeling}. Random-walk based NE methods fail to capture global structural information, and representing a node as a single point vector cannot model the uncertainty of the node representation.
We proposed a flexible structure preserving network embedding framework, \textit{struc2gauss}, to tackle these limitations. On the one hand, \textit{struc2gauss} learns node representations based on structural similarity measures so that global structural information can be taken into consideration. On the other hand, \textit{struc2gauss} utilizes Gaussian embedding to represent each node as a Gaussian distribution where the mean indicates the position of this node in the embedding space and the covariance represents its uncertainty.
We experimentally compared three different structural similarity measures for networks and two different energy functions for Gaussian embedding. By conducting experiments from different perspectives, we demonstrated that \textit{struc2gauss} excels in capturing global structural
information, compared to state-of-the-art NE techniques such as DeepWalk, \textit{node2vec} and \textit{struc2vec}. It outperforms the competing methods in the graph clustering task, i.e., role discovery, on both synthetic and real-world networks. It also overcomes the limitation of uncertainty modeling and is capable of capturing different levels of uncertainty. Additionally, \textit{struc2gauss} is less sensitive to its parameters, which makes it more stable in practice without requiring extensive parameter tuning.
We conclude by indicating promising directions for further study. A first area for improvement is to study faster and more scalable methods for structural similarity calculation. Since we care mostly about the most similar nodes given a query node, an approximate method to calculate the structural similarity may also be a promising direction. Second, it is interesting to extend our method to different scenarios, e.g., dynamic networks and heterogeneous networks. Many real-world networks are dynamic with evolving structures, or heterogeneous with different types of nodes and edges. How to learn representations in such scenarios based on \textit{struc2gauss} is a challenging and meaningful problem. A third area for future exploration is to exploit other methods that can calculate structural similarity or capture structural information.
\bibliographystyle{spmpsci}
|
1,108,101,564,269 | arxiv | \section{\label{sec:introduction}Introduction}
Two-phase xenon detectors have become a leading detector technology in searches for WIMP (weakly interacting massive particle) dark matter \cite{Akerib:2016,PandaX,XENON100}, searches for axions and axion-like particles \cite{Aprile:2014,Akerib:2017}, and detection of coherent elastic neutrino-nucleus scattering from nuclear reactor, spallation source, and supernova neutrinos \cite{Santos:2011,Akimov:2012, Horowitz:2003, Chakraborty:2014}. They have other potential applications in radiation/particle detection such as searches for neutrinoless double beta decay~\cite{DARWIN,LZTDR} and Compton imaging of gamma-rays \cite{Wahl:2012}. The understanding of charge and light production in liquid xenon has developed substantially over the past few decades, and the mechanisms which generate detectable signals from energy deposition in the liquid xenon are reviewed in~\cite{henriqueReview, aprileReview}.
The scintillation signal produced by initial atomic excitation and recombining ionization is known as S1, with the light generated from the remaining ionization known as S2. The generation of S2 signal requires that electrons are drifted upward from the interaction site in the liquid to the liquid surface, where they are extracted into the gas phase. No new light is produced while the electrons drift through the liquid. After extraction into the gas phase, a stronger electric field accelerates the electrons, causing them to produce a proportional electroluminescence signal.
The size of the S2 signal (in detected photoelectrons) is shown in equation~\ref{eq:S2size}, where $n_i$ is the initial number of ionizations created when the energy is deposited in the interaction, $(1-r)$ is the fraction of the charges not recombining, $\eta$ is the extraction efficiency at the liquid-gas phase boundary, $\xi$ is the electroluminescence yield (photons/electron), $\nu$ is the S2 geometrical light collection (fraction of photons produced that hit a photocathode), $Q$ is the quantum efficiency (fraction of photons hitting the photocathode that generate a photoelectron), and DPE is the double-photoelectron fraction for the electroluminescence \cite{Faham:2015}:
\begin{equation} \label{eq:S2size}
\textrm{S2 (phe)} = (1-r) \, n_i \, \eta \, \xi \, \, \nu \, Q (1+DPE).
\end{equation}
In equation~\ref{eq:S2size}, only $\eta$ and $\xi$ are dependent on the electric fields applied in the extraction/electroluminescence region. Therefore, the extraction efficiency can be determined by varying the extraction field and measuring both the S2 electroluminescent gain for single electrons and the absolute S2 signal size for events of constant energy and drift field. Understanding of the extraction efficiency and how it varies with extraction field is important for reaching optimal sensitivity in experiments using two-phase xenon detector technology.
The potential energy of a free electron in the liquid xenon is lower than in the gaseous phase by 0.67~eV \cite{Tauchert:1975}. This creates a potential barrier that electrons must cross if they are to be extracted from the liquid into the gas. There are two processes by which this can potentially happen: (A) thermal emission, where the tail of the velocity distribution of electrons in thermal equilibrium with the liquid is above the potential barrier, allowing some fraction to be extracted, and (B) emission of ``hot'' electrons, where electrons accelerated by an electric field are imparted with enough energy to overcome the barrier and be extracted. The potential barrier in liquid xenon is much higher than in other condensed noble gases (i.e. argon), and as a result, process (A) provides little contribution to the production of S2 signals from interactions in the liquid xenon, though it is hypothesized to cause delayed single-electron emission on a tens of millisecond time scale \cite{Sorensen:2017}.
As they drift through the liquid, the electrons are imparted with a non-Maxwellian energy distribution. The form of this distribution and the amount of energy is dependent on the electric field strength in the liquid \cite{Gushchin:1982,Doke:1981}. The onset of extraction will not occur until the electrons at the high-energy end of the distribution have enough energy to cross the barrier. As a result, there is a threshold effect, where an electric field of at least $\sim$~1.5~kV/cm is required to have a non-zero extraction efficiency. According to the model of \cite{Bolozdynya:1999}, the potential barrier is also reduced as the field is increased, resulting in a lower electron energy required to cross the barrier. The combination of these effects results in a strong electric field dependence of the electron extraction efficiency.
The extraction efficiency has been measured previously~\cite{Gushchin:1979, Aprile:2014b}. In practice, a given detector is limited in the range of electric fields it can apply to the extraction region. Here we report a study of relative extraction efficiency over a wide range of fields from 2.4 to 7.1~kV/cm (in the liquid xenon). It is anticipated that these results will be useful in the design optimization of future two-phase xenon experiments.
\section{\label{sec:pixeyDetector}PIXeY detector}
PIXeY (Particle Identification in Xenon at Yale) is a two-phase xenon detector holding approximately 12~kg of liquid xenon. The hexagonal time projection chamber (TPC) has an active xenon mass of about 3~kg and is 18.4\,cm in width at its widest point. Figure~\ref{fig:schematic} shows the layout of the PIXeY detector and TPC. The active target region is 5.1~cm deep, defined by the cathode and gate grids. The anode and gate grids are separated by about 8~mm.
The 175~nm light from S1 and S2 is detected by two arrays of seven Hamamatsu R8778 photomultiplier tubes (PMTs) each, one array above and one below the liquid xenon volume. The response of the PMTs was normalized using single photoelectron gain measurements. The PMT response was then confirmed using the data acquired for the study, with an average PMT response of 85~mV-ns per photoelectron, easily resolvable above baseline. For all data presented in this study, a radial cut of $\sim$5.0~cm is applied using the S2 signal strengths in the top PMT array.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth,clip]{PIXeYschematic_rough1.png}
\caption[]{\label{fig:schematic} Schematic of the PIXeY detector.}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.6\textwidth,clip]{PIXeY_waveform.pdf}
\caption[]{\label{fig:event_waveform} Example event waveform from the PIXeY detector.}
\end{center}
\end{figure}
The signals from the 14 PMTs underwent $\times8$~amplification before being digitized with a 12-bit ADC (CAEN V1720) waveform digitizer sampled at 250~MHz. The data acquisition was triggered by the sum signal of the top array PMTs passed through a filter optimized to pass signals with an S2-like timescale. An example event waveform from PIXeY, summed across all 14 PMTs, is shown in figure~\ref{fig:event_waveform}.
The upper portion of the TPC (shown in figure~\ref{fig:AnodeCut}) was specifically designed to hold high anode voltages, allowing the production of high extraction fields. The anode and gate wire grids are each composed of parallel wires of 80~$\mu$m diameter. Both grids are soldered at a 1~mm pitch. The anode grid frame (C) sits within a recessed well cut into a single block of Teflon called the bathtub (I). The bathtub sits on the gate grid frame (A) and contains a lip (B) that protrudes inward and blocks all direct paths between the anode and gate grid frames. A weir (not shown) integrated into the bathtub maintains the liquid xenon surface at a constant height (D) coincident with the bathtub lip. The PMT shield grid frame (E) is suspended above the anode frame by the upper PMT block (not shown). In this arrangement, the bathtub surfaces exposed to xenon gas contain only long and indirect paths between the anode grid frame and other conductors. The anode grid frame is secured to the bathtub by Teflon nuts threaded onto retaining posts (F). One corner of the hexagonal anode grid frame extends into a blind pocket (G) of the bathtub frame. The pocket contains an electrical receptacle that is accessed by the anode high voltage (HV) cable (H) from above. The anode HV cable is insulated by polyethylene to 5.8~mm diameter and supplied as model 2149 by Dielectric Sciences. The upper end of the HV cable terminates at a custom made socket that is sealed by epoxy to a $2\frac{3}{4}$~inch conflat flange. This allows connection to an external power supply. The anode grid wires discharged directly to the gate grid wires when this design was tested in argon gas at room temperature.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{Anode_cutaway_figure}
\caption{Cutaway rendering of the anode region of the TPC.
A - Gate grid frame.
B - Bathtub lip.
C - Anode grid frame.
D - Liquid xenon surface.
E - Top PMT shield grid frame.
F - Anode grid frame retaining posts.
G - Blind pocket with anode HV cable connection.
H - Anode HV cable.
I - Monolithic Teflon bathtub.
Note that features B, F and I are cut from a single contiguous block of Teflon.}
\label{fig:AnodeCut}
\end{figure}
\section{Results}
\subsection{Electric fields}
The electric fields in PIXeY are defined by the application of high voltage to a series of horizontal grids with parallel wires in the TPC (shown in figure~\ref{fig:schematic}). The ionization yield of drifted electrons (before extraction) is influenced by the electric field within the main drift region, which is set by the voltage applied to the cathode, kept at a constant -1~kV. This produces S2 signals with the same average number of primary electrons drifted from the interaction site, for a given monoenergetic source. The gate grid is always fixed at ground.
The size of the S2 signal seen by the PMTs is determined by a combination of the extraction efficiency (dependent on the field at the liquid surface) and the S2 electroluminescence process (dependent on the field in the gas gap) as defined in equation~\ref{eq:S2size}. Both these fields are set by the voltage applied to the anode grid.
A two-dimensional electric field model was developed in COMSOL Multiphysics v5.0\textsuperscript{\textregistered}~\cite{comsol}, a commercially available finite element simulation software, to calculate the electric fields in the different detector regions for each field configuration used. The DC dielectric constant of liquid xenon is assumed to be 1.85, which is typical of measured values in the scientific literature \cite{Amey:1964,Marcoux:1970,Schmidt:2001,Sawada:2003}. Table~\ref{tab:fields} summarizes the modeled voltage configurations, electric fields and systematic uncertainties. Figure~\ref{fig:COMSOLsims} shows the output of one simulation. The results of these simulations yield extraction fields roughly 12\% lower than would be calculated from a simple parallel-plate model.
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\textwidth]{field_paper_624_185}
\caption{Result of a COMSOL electric field simulation of the PIXeY detector. The region around the liquid-gas phase boundary is shown. This configuration has a 4.5~mm gas gap (liquid surface to anode) and 6.24~kV applied to the anode grid.}
\label{fig:COMSOLsims}
\end{figure}
The systematic uncertainty in the extraction field comes primarily from the uncertainty in the $z$ separation of the field grids and the location of the gas gap between anode and gate. The uncertainty in the high voltage applied to the grids is $\sim$ 100~V. The separation of gate and anode was measured to be $7.8 \pm 0.3$~mm, and the distance from the weir (which sets the liquid height) to the anode was measured as $5.5 \pm 0.3$~mm. There is some additional uncertainty in the height of the liquid level due to hydrodynamic effects, raising the actual liquid level above the rim of the weir. This effect has been assessed using the models described in~\cite{Pfister:2013} and estimated to raise the liquid level $1.03 \pm 0.3$~mm above the weir. Combining these uncertainties, the liquid level is found to be $4.47 \pm 0.52$~mm below the anode grid. The resulting systematic uncertainties in the electric field are shown in table~\ref{tab:fields}. The extraction field varies negligibly in the $xy$ plane, within the applied 5~cm radial cut. The change in drift field due to varying extraction field is determined through COMSOL simulations to be less than 18 V/cm for all extraction fields, and this is predicted by the Noble Element Simulation Technique (NEST) \cite{NEST} to produce a sub-1\% change in the number of primary electrons.
\begin{table*}[t]
\caption{Extraction region field configurations studied and their measured relative electron extraction efficiencies, showing the anode voltage applied and the electric fields calculated in liquid and gas phases from a 2D simulation in COMSOL~\cite{comsol}. The errors shown on the fields are systematic uncertainties dominated by the uncertainty in the $z$ separation of the field grids and $z$ location of the liquid surface.} \label{tab:fields}
\setlength{\extrarowheight}{2pt}
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
{} & \multicolumn{2}{c|}{Electric fields} & {}\\
Anode & Liquid extraction & Gas electroluminescence & Electron extraction \\
voltage [kV] & region [kV/cm] & region [kV/cm] & efficiency \\
\hline
3.01 & $ {2.41}\pm{0.12} $ & $ {4.45}\pm{0.22} $ & $ {0.207}\pm{0.006} $\\
3.23 & $ {2.58}\pm{0.12} $ & $ {4.78}\pm{0.23} $ & $ {0.257}\pm{0.008} $\\
3.66 & $ {2.93}\pm{0.13} $ & $ {5.42}\pm{0.25} $ & $ {0.361}\pm{0.011} $\\
4.09 & $ {3.28}\pm{0.14} $ & $ {6.06}\pm{0.27} $ & $ {0.493}\pm{0.015} $\\
4.30 & $ {3.44}\pm{0.15} $ & $ {6.37}\pm{0.27} $ & $ {0.538}\pm{0.016} $\\
4.52 & $ {3.62}\pm{0.15} $ & $ {6.70}\pm{0.38} $ & $ {0.568}\pm{0.017} $\\
4.95 & $ {3.97}\pm{0.16} $ & $ {7.34}\pm{0.30} $ & $ {0.663}\pm{0.020} $\\
5.38 & $ {4.31}\pm{0.18} $ & $ {7.98}\pm{0.33} $ & $ {0.721}\pm{0.022} $\\
5.81 & $ {4.66}\pm{0.19} $ & $ {8.62}\pm{0.35} $ & $ {0.790}\pm{0.024} $\\
6.24 & $ {5.00}\pm{0.20} $ & $ {9.25}\pm{0.37} $ & $ {0.848}\pm{0.025} $\\
6.45 & $ {5.17}\pm{0.20} $ & $ {9.57}\pm{0.38} $ & $ {0.880}\pm{0.026} $\\
6.67 & $ {5.35}\pm{0.21} $ & $ {9.89}\pm{0.39} $ & $ {0.883}\pm{0.027} $\\
6.88 & $ {5.52}\pm{0.22} $ & $ {10.21}\pm{0.40} $ & $ {0.908}\pm{0.027} $\\
7.10 & $ {5.69}\pm{0.22} $ & $ {10.53}\pm{0.41} $ & $ {0.925}\pm{0.028} $\\
7.53 & $ {6.04}\pm{0.23} $ & $ {11.17}\pm{0.43} $ & $ {0.954}\pm{0.029} $\\
8.17 & $ {6.55}\pm{0.25} $ & $ {12.12}\pm{0.46} $ & $ {0.973}\pm{0.029} $\\
8.82 & $ {7.08}\pm{0.27} $ & $ {13.09}\pm{0.50} $ & 1\\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Electroluminescence gain measurements}
The magnitude of the S2 gain from electroluminescence in the gas region depends on the strength of the electric field in that region, as the field provides the energy needed for the electrons to excite gaseous xenon atoms and produce secondary light.
In order to properly disentangle the variation of the S2 signal size from electroluminescence gain and electron extraction efficiency from the liquid to the gas, the electroluminescence gain must be measured for each electric field and used to convert the S2 signal size into a number of detected electrons.
To determine the electroluminescence gain, the amount of S2 light produced by individual electrons drifting from the liquid surface to the anode was measured. Single electrons are abundant in two-phase xenon detectors and have been studied previously in many different experiments~\cite{Edwards:2007, Santos:2011, Aprile:2014b}. An example single electron waveform is shown in figure~\ref{fig:singleElectronWaveform}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{SingleElectron_5.pdf}
\caption{Example signal waveform from a single extracted electron in the PIXeY detector.}
\label{fig:singleElectronWaveform}
\end{figure}
Figure~\ref{fig:singleElectrons} shows an example of a single electron spectrum for one electric field configuration. The single electron population is clearly defined in the pulse area against pulse width parameter space. The width of the single electron signal is the time taken by the electron to cross the gas region from the liquid surface to the anode. The single electron pulse area as a function of field is shown in figure~\ref{fig:singleElectronsField}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.6\textwidth]{PIXeY_single_e_1}
\caption{Representative detector S2 response to single electrons extracted from the liquid into the gas.}
\label{fig:singleElectrons}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\textwidth]{SE_area_v_gasField_comsol_dec2016_noComparisons}
\caption{Detected single photoelectrons per single electron as a function of electric field in the gas region, where the electric field is taken from the COMSOL field model. The red shaded region includes the uncertainty in electric field.}
\label{fig:singleElectronsField}
\end{figure}
\subsection{Extraction efficiency}
To measure the relative extraction efficiency, a source of consistent charge yield is required. The S2 yield of this source at a given extraction field may then be divided by the single electron S2 yield at the same field to determine the extraction efficiency.
For the purposes of this study, two different mono-energetic calibration sources were used, \isot{Kr}{83m} and $^{37}$Ar. As the drift field is kept constant throughout the experiment, the charge produced by each calibration source is also constant. The metastable \isot{Kr}{83m} atom decays through two transitions, the first releasing 32.1~keV, followed by a second releasing 9.4~keV with a decay time constant of 154~ns. Each transition results mostly in conversion electrons. Occasionally 12 or 9.4~keV x-rays are emitted, but these have absorption lengths in liquid xenon of less than 10~$\rm \mu m$, much less than the electron diffusion ($\sim$ 1~mm) as the charge signal is drifted through the TPC. In addition, the 154~ns decay time constant between the 32.1 and 9.4~keV pulses is much less than the $\sim$ $\rm \mu s$ variation in electron drift time due to diffusion. As a result, the two interactions occur at essentially the same location and time (effectively producing a single electron cloud) and the S2 signals appear as a single pulse in the PMTs. The \isot{Kr}{83m} decay may then be treated as a mono-energetic S2 source of 41.5~keV. $^{37}$Ar decays by electron capture through a number of transitions, with the dominant decay releasing 2.8~keV. These are both low energy sources, with the $^{37}$Ar peak in the energy range of interest for WIMP search experiments. A description of the $^{37}$Ar source production and measurement with PIXeY may be found in \cite{Boulton:2017}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.6\textwidth]{PIXeY_eee_185_185}
\caption{Relative extraction efficiency as a function of electric field in the liquid xenon just below the liquid-gas surface, measured in PIXeY with mono-energetic peaks from \isot{Kr}{83m} and $^{37}$Ar. The electric field ($x$-axis) is calculated from a COMSOL electric field model. We compare the results with the absolute measurement by Gushchin~\textit{et al.}~\cite{Gushchin:1979} and the relative measurement by Aprile~\textit{et al.}~\cite{Aprile:2014b}. The black line is a best fit to the data, and the light blue dotted line is the same function multiplied by a constant value of 1.11519. Extraction fields in~\cite{Aprile:2014b} are quoted as fields in the gaseous xenon; here they are divided by the dielectric constant of 1.85 and quoted as fields in the liquid xenon for direct comparison with the present result.}
\label{fig:extractionEfficiency}
\end{figure}
Using these two sources, the size of the S2 signal is tracked at two different energies, thus reducing systematic uncertainties related to PMT saturation (which should be signal size dependent). As with previous measurements in the literature, we normalize the maximum in the measured extraction (i.e. the number of S2 electrons at the highest fields) to 100\% electron emission from the liquid into the gas.
Figure~\ref{fig:extractionEfficiency} shows the measured relative extraction efficiency as a function of electric field in the liquid. A threshold is observed for the onset of extraction from the liquid phase into the gas, followed by an increase in extraction efficiency as the extraction field is increased. We assign full extraction to correspond to our highest extraction field of 7.1~kV/cm in the liquid xenon. Comparing the resulting efficiency to those previously reported in the literature, we measure a lower extraction efficiency over the full range of electric fields. We show results from PIXeY plotted against the electric fields calculated using the COMSOL model. The extraction efficiency is fit to a quadratic function, yielding a best fit of $y = - 0.03754x^2 + 0.52660x - 0.84645$, where $y$ is the extraction efficiency and $x$ is the electric field in the liquid xenon in kV/cm. The light blue line is the same function multiplied by a constant value of 1.11519. This suggests that previous studies, constrained to lower extraction fields, overestimated extraction efficiencies by approximately this factor.
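For reference, the fitted parameterization can be evaluated directly; the short script below (a minimal sketch, valid only within the fitted range of roughly 2.4--7.1~kV/cm) reproduces the curve and the scaled comparison curve shown in figure~\ref{fig:extractionEfficiency}.
\begin{verbatim}
# Evaluate the quadratic best fit quoted above (x = liquid field in kV/cm).
import numpy as np

def extraction_efficiency(x):
    x = np.asarray(x, dtype=float)
    return -0.03754 * x**2 + 0.52660 * x - 0.84645

fields = np.array([2.41, 3.97, 5.00, 7.08])  # example fields from the table above
print(extraction_efficiency(fields))         # ~[0.21, 0.65, 0.85, 1.00]
print(1.11519 * extraction_efficiency(fields))   # scaled comparison curve
\end{verbatim}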
The systematic error in the electric field applied, as described previously, is dominated by the geometry of the extraction region, with an overall systematic uncertainty of about 5\%. In the extraction efficiency, the dominant uncertainties arise from the uncertainty in the single electron signal size and that of the S2 peak position (for \isot{Kr}{83m} and $^{37}$Ar). Combining these gives an overall error of 3\%, as shown in the right-most column of table~\ref{tab:fields}. In addition, we believe any uncertainties due to PMT saturation are small, given the good agreement of the \isot{Kr}{83m} and $^{37}$Ar signals, which give consistent extraction efficiencies despite an order of magnitude difference in signal amplitude.
\section{Summary}
The PIXeY detector has been used to probe a large range of extraction fields, allowing mapping of the electron extraction efficiency curve with better precision and over a wider range of fields than in previous experiments. Due to its novel design features, PIXeY was able to apply electric fields across the liquid-gas interface up to 7.1~kV/cm (in the liquid). Systematic errors in this measurement are constrained by the use of two radioactive sources with energies more than an order of magnitude apart.
We observe extraction efficiencies that continue to increase at the highest extraction fields. This has the practical implication that additional charge signal may be attained through careful engineering of the gate-anode region, so as to enable a high extraction field. Such measurements of the liquid xenon physics underlying two-phase detectors are important for both experimental design and interpretation of data from future large scale liquid xenon experiments.
\begin{acknowledgments}
We acknowledge support from DHS grant 2011-DN-007-ARI056-02, NSF grant PHY-1312561, and DOE grant DE-FG02-94ER40870. This research was conducted using computational resources and services of the Yale Science Research Software Core. The $^{83}$Rb used in this research to produce \isot{Kr}{83m} was supplied by the United States Department of Energy Office of Science by the Isotope Program in the Office of Nuclear Physics.
\end{acknowledgments}
\bibliographystyle{JHEP}
|
1,108,101,564,270 | arxiv | \section{}
The Landau-Lifshitz equations for a double sublattice weak ferromagnet can be
represented in the form~\footnote[1]{For definiteness, we examine a crystal of rhombic symmetry.}
\begin{equation} \label{1}
\begin{aligned}
\dot{\bm{M} }_i &= \gamma \left [ \bm{M}_i ,\frac{\delta W }{\delta \bm{M}_i}\right ] + \alpha \left [ \bm{M}_i,\dot{\bm{M}_i} \right ], \ i= 1,2 \\
W &= \frac{a}{2}m^2 + \frac{b_1}{2}l_x^2 + \frac{b_3}{2}l_z^2 + d_1 m_z l_x - d_3 m_x l_z - \bm{m} \cdot \bm{H}+ A \left ( \nabla \bm{l} \right )^2 + A' \left ( \nabla \bm{m} \right )^2 \\
\bm{m } &= \frac{\bm{M}_1+\bm{M}_2}{2 M}, \ \bm{l } = \frac{\bm{M}_1-\bm{M}_2}{2 M}, \ \frac{\delta}{\delta q }\equiv \frac{\partial }{\partial q}-\nabla \frac{\partial }{\partial \nabla q}
\end{aligned}
\end{equation}
To describe the dynamics of the domain bound, let us go over to the angular variables
$\theta$, $\phi$, $\epsilon$, and $\beta$ in which ($\epsilon \ll 1$, $\beta \ll 1$):
\begin{equation} \label{2}
\begin{aligned}
l_x &= \sin \theta \cos \phi, \ l_y = \sin \theta \sin \phi, \ l_z = \cos \theta, \ m_z = - \epsilon \sin \theta\\
m_x &= \epsilon \cos \theta \cos \phi - \beta \sin \theta \sin \phi, \ m_y = \epsilon \cos\theta \sin\phi + \beta \sin \theta \cos \phi.
\end{aligned}
\end{equation}
To write Eqs. \eqref{1} we use the Lagrange formalism in the variables $\theta$, $\phi$, $\epsilon$, and $\beta$. The Lagrange function $L$, the dissipative function $F$, and the corresponding Euler equations are
\begin{equation} \label{3}
L = \frac{M}{\gamma} \left [ \dot{\phi} \epsilon \sin \theta - \dot{\beta} \cos \theta \right ] - W(\theta,\phi,\epsilon,\beta)
\end{equation}
\begin{equation} \label{4}
F = \frac{\alpha M}{2 \gamma}\left [ \dot{\theta}^2 + \sin ^2 \theta \left ( \dot{\phi}^2 + \dot{\beta}^2 \right ) + \dot{\epsilon}^2 + 4 \epsilon \sin \theta\cos \theta \dot{\phi} \dot {
\beta}\right ]
\end{equation}
\begin{equation} \label{5}
\frac{\partial }{\partial t}\frac{\partial L}{\partial \dot{\theta}} = \frac{\delta L}{ \delta \theta} - \frac{\partial F}{\partial \dot{\theta}}, \text{ etc.}
\end{equation}
\section{}
At $H = (0,0,H)$ we have a specific solution of the nonlinear equations \eqref{5}~\cite{walker}$^,$~\footnote[2]{It has the same meaning as the well-known Walker’s solution \cite{walker} for ferromagnets, although its equations are more complex.} :
$\theta = \pi/2$, $\beta = 0$, $\phi(r,t)$, and $\epsilon(r,t)$. As a result of substituting this solution in Eqs. \eqref{5},
two of the equations (obtained by varying $\theta$ and $\beta$) become identities and the other two have the form
\begin{subequations}
\begin{eqnarray}
&\dot{\epsilon} + \alpha \dot {\phi} = \frac{c^2}{\omega_E}\nabla^2 \phi + \omega_1 \sin \phi \cos \phi - \omega_d\epsilon \sin \phi,\label{6a}
\\
&\alpha \dot {\epsilon} - \dot {\phi} = \frac{{c'}^2}{\omega_E} \nabla^2 \epsilon - \omega_E \epsilon + \omega_d \cos \phi - \omega_H \label{6b}
\end{eqnarray}
\end{subequations}
\noindent where
\begin{eqnarray*}
\begin{aligned}
\omega_1 &= \frac{\gamma b_1}{M}, \ \omega_d = \frac{\gamma d_1}{M}\equiv \gamma H_d, \ \omega_H = \gamma H, \\
\omega_E &= \frac{\gamma a}{M} \equiv 2 \gamma H_E, \ c^2 = 4 \gamma^2 A H_E / M, \ {c'}^2 = 4 \gamma^2 A' H_E / M
\end{aligned}
\end{eqnarray*}
First, let us determine the approximate solution of Eqs. \eqref{6a} and \eqref{6b}. In Eq. \eqref{6b} the terms $\left({c'}^2/\omega_E \right) \nabla^2 \epsilon$ and $\alpha \dot \epsilon$ can be deleted in comparison to the term $\omega_E \epsilon$. The parameters of
smallness of the deleted terms are $\left( a_0 / \Delta \right) ^2$ and $\left( \alpha a_0 / \Delta \right) ^2$, where
$a_0 \equiv c / \omega_E = \left(2 A / a \right ) ^ {1/2} \approx 10^{-8} $ cm and $\Delta$ is the thickness of the moving domain
bound. Thus, we have from Eq. \eqref{6b}
\begin{equation} \label{7}
\epsilon = \frac{1}{\omega_E}\left ( - \omega_d \cos \phi + \omega_H + \dot{\phi} \right )
\end{equation}
Substituting it in Eq. \eqref{6a}, we obtain
\begin{equation} \label{8}
\ddot{\phi}-c^2 \nabla^2 \phi + \omega_A^2 \sin \phi \cos \phi = \dot{\omega}_H - \omega_d \omega_H \sin \phi - \alpha \omega_E \dot{\phi}
\end{equation}
where $\omega_A^2 = \omega_d^2 - \omega_E \omega_1$. At $H = 0$ and $\alpha = 0$ this equation becomes the well-known Sine-Gordon equation. Its one-dimensional solution, which satisfies the boundary conditions $\phi (x \rightarrow -\infty) = 0$, $\phi (x \rightarrow +\infty) = \pi$, has the form
\begin{equation} \label{9}
\phi(x,t) = 2 \arctan e^{\frac{x - v t}{\Delta}}, \ \Delta^{-1} = \frac{\omega_A/c}{\sqrt{1 - (v/c)^2}},
\end{equation}
where $v < c$. This function also satisfies Eq. \eqref{8} at $H = const \neq 0$ and $\alpha \neq 0$, but only for a specific value of $v ( H )$ that satisfies the equation $\omega_d \omega_H = \alpha \omega_E \Delta^{-1} v$, which can be easily verified
by substituting Eq. \eqref{9} in Eq. \eqref{8}. From the last equation, using Eq. \eqref{9}, we obtain \cite{gyorgy1968analysis}$^,$\footnote[3]{ A similar dependence $v(H )$ was obtained in Ref. [\onlinecite{gyorgy1968analysis}], where the authors assumed that the dynamics of the weak ferromagnetic moment are described by the same equations as the dynamics of the ferromagnet and the magnetization remains constant during the motion of the domain bounds.}:
\begin{equation} \label{10}
v(H) = c \frac{H H_d}{\alpha}\left [ 4 H_E^2 H_A^2 + \alpha^{-2} H^2 H_d^2 \right ]^{-1/2}.
\end{equation}
Here $H_A \equiv \omega_A/\gamma$. The physical nature of such a dependence $v(H )$ can be understood by using the mechanical analogy of the motion of the DB. If the dependence of $H$ and $v$ on $t$ is sufficiently slow
(the characteristic frequencies of their variation are much smaller than $\omega_d$), we can
obtain from Eq. \eqref{8} the following equation for the velocity of the DB
\begin{equation} \label{11}
\frac{\mathrm{d} }{\mathrm{d} t}(mv) + \frac{(mv)}{r} = 2 M_s H,
\end{equation}
\noindent where
\begin{equation*}
m = \frac{2 M_s}{H_d \gamma^2 \Delta(v)}=\frac{m_0}{\sqrt{1 - (v/c)^2}}, \ \tau = \frac{1}{\alpha \omega_E}.
\end{equation*}
All the terms in Eq. \eqref{11} have a clear mechanical meaning; $mv/\tau$ is the frictional force acting on the domain bounds, $2 M_s H$ is the pressure exerted on the domain bounds, etc. At $\left(d / d t \right) (m v) = 0 $ Eq. \eqref{11} gives Eq. \eqref{10}. Thus, the velocity of the domain bounds saturates as $H \rightarrow \infty$ because of the “relativistic” dependence of the mass $m$ of the DB on its velocity. Chetkin et al. \cite{chetkin1977velocity,chetkin1978maximum} observed experimentally and investigated the effect of saturation of the velocity of DB in YFeO$_3$; they \cite{chetkin1978maximum} as well as Bar’yakhtar et
al. \cite{baryakhtar1978limit} theoretically estimated the limiting velocity of the DB in orthoferrites.
\section{}
At $v \sim c$ the deleted terms in Eq. \eqref{6b} should be taken into account. Let us
analyze asymptotically Eqs. \eqref{6a} and \eqref{6b} by the method proposed and developed in
Refs. [\onlinecite{schlomann1971structure}] and [\onlinecite{eleonskiy}]. Let us linearize Eqs. \eqref{6a} and \eqref{6b} near the stationary points $\phi = 0$ and
$\pi$, which correspond to the domains, and let us find solutions of the linear equations in
the form $\mathrm{exp}\left[\pm \left(\omega_E/c \right) k (x - v_{\mp} t) \right] $ at $(x - v_{\mp}t) \rightarrow \mp \infty$. The conditions for the existence of nontrivial solutions have the form
\begin{equation} \label{12}
\left ( \frac{v}{c} \right )^2 \left ( 1 + a^2 \right ) - \frac{a v }{c} k\left [ 2 - k^{-2}\left ( 1 + \frac{\omega^2_A}{\omega^2_E} \mp \frac{\omega_H \omega_d}{\omega_E^2} \right ) \right ] - 1 + k^2 - \left ( 1 - k^{-2} \right ) \left ( \frac{\omega^2_A}{\omega^2_E} \mp \frac{\omega_H \omega_d}{\omega_E^2} \right ) = 0.
\end{equation}
Let us assume that there is a solution of the nonlinear equations \eqref{6a} and \eqref{6b} in the form $\phi (x - vt )$, $\epsilon (x - vt)$ and that the function $\phi (x - vt )$ is symmetric; thus the equality $v_{+} (k) = v_{-} (- k) = v$, where $v_{+} (k)$ and $v_{-} ( k)$ are determined by Eq. \eqref{12}, gives (in the linear approximation of $\alpha$ and $H/H_E$):
\begin{subequations}
\label{13}
\begin{eqnarray}
v&=&c \left ( 1 + p - pk^{-2} - k^2 \right )^{1/2},\label{13a}
\\
H&=&H_{MC} k \left (1-p k^{-2} \right )^{1/2} \left ( 1 - k^2 \right )^{-1/2} \left ( 1 + p - 2 k^2 \right ), \label{13b}
\end{eqnarray}
\end{subequations}
\noindent where
\begin{equation*}
H_{MC} = \alpha \frac{4 H_E^2}{H_d}, \ p = \left ( \frac{\omega_A}{\omega_E} \right )^2.
\end{equation*}
\begin{figure}[h!]
\includegraphics[width =0.6\columnwidth]{fig1.pdf}
\caption{A plot of the $v(H )$ function constructed according to Eq. \eqref{13} at $p = 10^{-4}$; the part of the $v(H )$ curve
with $(H / H_{MC}) > 0.2$ requires further study since there the condition $\epsilon \ll 1$ is violated.}\label{fig1}
\end{figure}
These equations determine the function $v(H )$ in parametric form. The characteristic shape of this curve is given in Fig. \ref{fig1}. The maximum of this curve, which has the coordinates \footnote[4]{This velocity coincides with that obtained in Ref. [\onlinecite{baryakhtar1978limit}].}: $v_m = c(1 - \sqrt{p})$, $H_m =H_{MC}p^{1/4}\left( 1- \sqrt{p}\right) ^2$, corresponds to
$k_m = p^{1/4}$ ($p \ll 1$), and the point $H = 0$, $v = 0$ corresponds to $k_0 = p^{1/2}$ ($p \ll 1$). The quantity $k_0 / k_m \approx p^{1/4}$ characterizes the thickness ratio of the DB at $v = v_m$ and $v = 0$. The
function \eqref{13} coincides with Eq. \eqref{10} at $k_0<k<k_m$, i.e., at $0<H<H_m$. The last inequality is the condition of applicability of Eqs. \eqref{8} and \eqref{10}. The motion of the DB in which $\bm l$ rotates in the $ac$ plane is determined by more complicated equations than \eqref{6a} and
\eqref{6b}, but the function $v(H )$, which is determined by Eqs. \eqref{10}, \eqref{13a}, and \eqref{13b}, remains valid in this case (if $d_1 = - d_3$); in them it must be assumed that $\omega_1 = \gamma \left( b_1 - b_3\right)/M $.
\section{}
We now give numerical estimates. In YFeO$_3$, $A \approx 4 \times 10^{-7}$ erg/cm, $H_E =
6.4 \times 10^6$ Oe, $H_d = 10^5$ Oe, and $p \approx 10^{-4}$. The ``scale'' of the field $H_{MC}$ can be expressed in terms of the mobility $\mu$ of the DB as $H \rightarrow 0$: $\mu = c/\left(H_{MC}\, p^{1/2}\right)$. Hence, $H_{MC} = c/\left(\mu \sqrt{p}\right)$. According to Ref. [\onlinecite{uait}], $\mu \simeq 5 \times 10^3 $ cm/sec$\cdot$Oe. Using these values, we obtain $c \approx 2 \times 10^6$ cm/sec, $H_{MC} \approx 4 \times 10^4$ Oe, $v_m \approx 0.99\,c$, and $H_m\approx 4 \times 10^3 $ Oe.
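These estimates are easily checked numerically; the sketch below evaluates $v_m$ and $H_m$ and traces the parametric curve of Eqs. \eqref{13a} and \eqref{13b}, assuming (as above) $p = 10^{-4}$, $c = 2 \times 10^6$ cm/sec, and $H_{MC} = 4 \times 10^4$ Oe.
\begin{verbatim}
# Sketch: maximum DB velocity and the parametric v(H) curve of Eqs. (13a)-(13b).
import numpy as np

p, c, H_MC = 1.0e-4, 2.0e6, 4.0e4            # values quoted above (cm/s, Oe)

v_m = c * (1.0 - np.sqrt(p))                       # ~ 0.99 c
H_m = H_MC * p**0.25 * (1.0 - np.sqrt(p))**2       # ~ 4e3 Oe
print(v_m, H_m)

k = np.linspace(np.sqrt(p), p**0.25, 200)          # from k_0 to k_m
v = c * np.sqrt(1.0 + p - p / k**2 - k**2)
H = H_MC * k * np.sqrt(1.0 - p / k**2) * (1.0 + p - 2.0 * k**2) / np.sqrt(1.0 - k**2)
# (H, v) traces the rising branch of the curve in the figure up to (H_m, v_m)
\end{verbatim}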
|
1,108,101,564,271 | arxiv | \section{Introduction}
The charm quark is the lightest of the heavy quarks.
Yet, the value of its mass $m_c$ is much larger than the scale $\Lambda_{\rm QCD}$
of Quantum Chromodynamics (QCD), i.e., $m_c \gg \Lambda_{\rm QCD}$.
Thus, scattering processes involving charm quarks are subject to QCD dynamics
at scales of the order of $m_c$, where perturbative QCD predictions apply.
This offers the opportunity to extract $m_c$ by comparing experimental data
for an appropriate observable to quark mass dependent theoretical predictions in perturbative QCD.
This procedure does require some care, though.
After all, quark masses are formal parameters of the QCD Lagrangian, but do not belong to the set
of observables in Quantum Field Theory. Quarks and gluons do not belong to the asymptotic states
at $t \rightarrow \pm \infty$ and already on grounds of the LSZ-theorem their mass is not the usual
mass of a stable elementary particle, like the electron. No free quarks are observed in nature. As
known from perturbation theory, the QCD corrections to the quark masses are renormalization scheme-dependent.
Any quantification of these formal parameters necessarily assumes a definite choice of scheme a priori.
In the past, high precision cross-section data from $e^+e^-$-collisions have been the basis for
such charm quark mass determinations.
The available data from $e^+e^-$-annihilation into hadrons span a large range of center-of-mass energies
and can be used in QCD sum rule analyses based on perturbative QCD predictions to high orders in the
coupling constant
resulting in precise $m_c$ values from scattering processes with time-like kinematics,
see, e.g.,~\cite{Beringer:2012}.
The recently available high precision data for charm quark production in
deep-inelastic scattering (DIS) at the HERA collider
now provide the attractive opportunity for a $m_c$ extraction from scattering processes with space-like kinematics.
This is interesting per se for consistency tests of the Standard Model.
Moreover, the precision now reached by the DIS measurements allows for an $m_c$
determination with an accuracy comparable to the one achieved in QCD sum rule analyses.
In the present paper we use the new data combination of charm production cross section
measurements in DIS at HERA~\cite{Abramowicz:1900rp} to determine $m_c$ in the $\overline{\text{MS}}\, $\ scheme
by comparing to QCD predictions at next-to-leading (NLO) and next-to-next-to-leading (NNLO) order.
We apply the formalism developed in Ref.~\cite{Alekhin:2010sv} and fit $m_c$ to the cross section data
together with all other non-perturbative parameters,
of which the gluon distribution function in the proton and the strong coupling constant $\alpha_s(M_Z)$
in particular exhibit a significant correlation with $m_c$.
For this purpose, we update the parton distribution function (PDF) analysis ABM11~\cite{Alekhin:2012ig}
with the new combined HERA data~\cite{Abramowicz:1900rp}
included.
Like ABM11, also the new variant of the present paper
uses the $\overline{\text{MS}}\, $ renormalization scheme for $\alpha_s(M_Z)$ and the heavy-quark masses.
It is performed in the so-called fixed-flavor number (FFN) scheme with $n_f=3$ light quarks
treated as massless.
The latter feature is rather important because in a global fit such as ABM11
already the data for completely inclusive DIS measurements from HERA put
significant constraints on the value of $m_c$ due to the correlations mentioned.
The FFN scheme allows for a well-defined description of open charm production in QCD,
and the radiative corrections, i.e., the Wilson coefficients of the
hard scattering process, are available exactly to NLO~\cite{Laenen:1992zk,Riemersma:1994hv,Bierenbaum:2009zt}
(see also Ref.~\cite{Harris:1995tu}) and to NNLO in an approximation
for the most important parts,
that is the gluon and quark pure-singlet Wilson coefficients~\cite{Kawamura:2012cr}.
The present study complements a previous determination of the $c$-quark mass~\cite{Alekhin:2012un}
in the $\overline{\text{MS}}\, $\ scheme based on data from the H1 collaboration~\cite{Aaron:2009jy,Aaron:2009af}
for open charm production.
Those data are available in differential distributions
so that the effect of value of $m_c$ on the extrapolation to the unmeasured region for
the inclusive cross section has been carefully examined.
In this way, Ref.~\cite{Alekhin:2012un} has obtained the $\overline{\text{MS}}\, $\ mass $m_c(\mu_r=m_c) \equiv m_c(m_c)$
for the renormalization scale choice $\mu_r=m_c$ at NLO to
$m_c(m_c) = 1.27\pm 0.05 (\text{exp})^{+0.06}_{-0.01}(\text{scale})$ GeV
and at approximate NNLO to
$m_c(m_c) = 1.36\pm 0.04 (\text{exp})^{+0.04}_{-0.00}(\text{scale})\pm 0.1 (\text{theory})$ GeV,
respectively.
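For orientation, the difference between these $\overline{\text{MS}}\, $\ masses and the pole-mass values often quoted elsewhere can be estimated from the well-known one-loop relation $m_c^{\rm pole} = m_c(m_c)\left[1 + \tfrac{4}{3}\,\alpha_s(m_c)/\pi + {\cal O}(\alpha_s^2)\right]$; the short script below evaluates it for an assumed value of $\alpha_s(m_c)$ and serves only as an order-of-magnitude cross-check, not as part of the fit.
\begin{verbatim}
# One-loop MSbar -> pole mass conversion for the charm quark (illustrative only).
import math

m_msbar = 1.27    # m_c(m_c) in GeV, the NLO value quoted above
alpha_s = 0.38    # assumed alpha_s(m_c); the exact value depends on the fit

m_pole = m_msbar * (1.0 + 4.0 / 3.0 * alpha_s / math.pi)
print(f"m_c(pole) ~ {m_pole:.2f} GeV")    # ~ 1.47 GeV at this order
\end{verbatim}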
The present paper is organized as follows.
In Sec.~\ref{sec:data} we briefly recount the essential features of the data combination of Ref.~\cite{Abramowicz:1900rp}.
Sec.~\ref{sec:analysis} contains the analysis and the new result for $m_c(m_c)$
together with a detailed discussion on the impact of
the new data set on the fit and the correlations of $m_c$ with the gluon
distribution and the strong coupling $\alpha_s(M_Z)$.
We conclude in Sec.~\ref{sec:concl} emphasizing that the accuracy of the $m_c$
determination from DIS data becomes competitive with other methods, e.g., QCD sum rule analyses.
\section{Data}
\label{sec:data}
The $c$-quark mass determination is conducted within the framework of a global analyses
provided by the ABM11 fit~\cite{Alekhin:2012ig}.
The ABM11 analysis~\cite{Alekhin:2012ig} has evolved from the previous ABKM09 fit~\cite{Alekhin:2009ni}
and is based on world data for deep-inelastic scattering from
HERA and fixed-target experiments, and on the Tevatron results for the Drell-Yan process.
These data are supplemented by the recently published
combined charm production cross sections in DIS at HERA~\cite{Abramowicz:1900rp}
and are used as input for the QCD analysis at NLO and NNLO.
Reduced cross sections for charm production were measured in the kinematic
range of photon virtuality $2.5 \le Q^2 \le 2000\, {\rm GeV}^2$ and Bjorken scaling
variable $3 \cdot 10^{-5} \le x \le 5 \cdot 10^{-2}$.
The measurement was based on the combination of results obtained by using different charm
tagging techniques: the reconstruction of $D$ or $D^*$ mesons, the
identification of muons from semi-leptonic decays of charmed
hadrons, or the exploitation of the long lifetime of charmed hadrons in their decays.
The individual measurements were performed in different experimentally
accessible (visible) phase space regions, depending on the experimental
technique applied or on the different acceptances of the detector components
used. For $D$-meson and muon production, the visible cross section
measurements were extrapolated to the full phase space using predictions from
perturbative QCD to NLO in the FFN scheme~\cite{Laenen:1992zk,Riemersma:1994hv,Harris:1995tu}.
The quoted uncertainties in the extrapolation include those due to
the variations of the factorization and renormalization scales, $\mu_f$, $\mu_r$,
simultaneously by a factor of $1/2$ and $2$ around the nominal scale,
as well as of the charm quark mass in the range $1.35< m_c^{\rm pole}< 1.65\, {\rm GeV}$
for the pole mass $m_c^{\rm pole}$ used in Refs.~\cite{Laenen:1992zk,Riemersma:1994hv,Harris:1995tu}.
The correlated systematic uncertainties and the normalization of the different
measurements were accounted for in the combination procedure such that one
consistent data set has been obtained. Since different experimental techniques
of charm tagging were employed, the combination led to a significant reduction
of the statistical and systematic uncertainties. However, due to the combination
procedure the information about the extrapolation factors and their
uncertainties for the individual input data sets cannot be provided.
Therefore, a detailed analysis similar to the previous one performed in Ref.~\cite{Alekhin:2012un} taking
into account the dependence of the extrapolation factor on the assumption of
the charm mass in the underlying theory is not possible here.
Instead, the results on the combined reduced charm cross sections at
particular kinematical points ($x, Q^2$) are used in the current analysis,
taking into account the correlations of the uncertainties as provided by the
experiments~\cite{h1zeuscombo:2012}.
\section{Analysis}
\label{sec:analysis}
The theoretical framework applied in the present analysis of the combined HERA data~\cite{Abramowicz:1900rp}
essentially coincides with the one used earlier in the determination of the $c$-quark mass~\cite{Alekhin:2012un}
with the H1 data on open charm production~\cite{Aaron:2009jy,Aaron:2009af}.
We compute the heavy-quark contribution to the DIS cross section in the scheme
with $n_f=3$ massless flavors in the initial state.
The running-mass definition is employed for the heavy-quark Wilson coefficients, which comprise
the NLO terms~\cite{Alekhin:2010sv} derived from the calculations performed
with the pole mass definition~\cite{Laenen:1992zk} and the NNLO terms~\cite{Kawamura:2012cr}.
The latter are denoted by NNLO$_\text{approx}$ in the following,
obtained by interpolation between existing soft-gluon threshold resummation results and
approximate relations for the Wilson coefficients at $Q^2\gg m^2$
taking advantage of selected Mellin moments for the massive operator-matrix elements at NNLO
given in
Refs.~\cite{Buza:1995ie,Bierenbaum:2007qe,Bierenbaum:2008yu,Bierenbaum:2009mv,Ablinger:2010ty}
and of the massless 3-loop Wilson coefficients~\cite{Vermaseren:2005qc}.
The residual interpolation uncertainty which appears due to the finite number of Mellin moments
being known
is quantified by two options, $c_{2}^{\,(2),A}$ and $c_{2}^{\,(2),B}$,
for the constant terms in the Wilson coefficients at NNLO~\cite{Kawamura:2012cr}.
In the present analysis the shape of the NNLO correction is defined as
a linear interpolation between these options using the ansatz
\begin{equation}
\label{eq:inter}
c_{2}^{\,(2)} \,=\,
(1-d_N) c_{2}^{\,(2),A} + d_N c_{2}^{\,(2),B}
\, .
\end{equation}
The {\tt Fortran} code {\tt OPENQCDRAD} for the numerical computation of all cross sections
in the present analysis is publicly available~\cite{openqcdrad:2012}.
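For orientation, the following schematic {\tt Python} snippet illustrates how Eq.~(\ref{eq:inter}) enters the analysis: the NNLO constant term is formed as a linear combination of the two options of Ref.~\cite{Kawamura:2012cr}, and $d_N$ is then selected by the best description of the data. The numbers and the $\chi^2$ profile below are invented placeholders and do not represent the actual {\tt OPENQCDRAD} tables or fit output.
\begin{verbatim}
import numpy as np

def c2_nnlo(d_N, c2_A, c2_B):
    # Eq. (eq:inter): linear interpolation between options A and B
    return (1.0 - d_N) * c2_A + d_N * c2_B

def chi2_of(d_N):
    # placeholder chi^2 profile; the real profile is produced by the QCD fit
    return 61.0 + 50.0 * (d_N + 0.1)**2

d_scan = np.linspace(-1.0, 1.0, 201)
d_best = d_scan[np.argmin([chi2_of(d) for d in d_scan])]
print(d_best)   # close to -0.1 for this invented profile
\end{verbatim}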
Our determination of $m_c$ is based on the 3-flavor
ABM11 PDFs~\cite{Alekhin:2012ig}.
However, those PDFs were obtained at the fixed value of
$m_c(m_c)=1.27~{\rm GeV}$. In order to provide a consistent treatment of the
PDF dependence on $m_c$ we employ in the present analysis a set of $m_c$-dependent PDFs
produced by interpolating between the variants of the ABM11 fit with the value
of $m_c(m_c)$ scanned over the range of $0.9-1.35$~GeV.
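Schematically, such $m_c$-dependent PDFs can be thought of as follows (a minimal sketch with invented grid values, not the actual ABM11 grids): for each kinematic point the PDF values of the fit variants at the scanned masses are interpolated, so that $m_c(m_c)$ can be varied continuously during the fit.
\begin{verbatim}
import numpy as np

mc_grid = np.array([0.90, 1.00, 1.10, 1.20, 1.27, 1.35])  # scanned m_c(m_c) in GeV
g_grid  = np.array([3.10, 3.02, 2.95, 2.89, 2.85, 2.80])  # dummy gluon values g(x, mu_f)

def gluon_at(mc):
    # linear interpolation between the fit variants; splines work equally well
    return np.interp(mc, mc_grid, g_grid)

print(gluon_at(1.15))
\end{verbatim}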
By fitting to the combined HERA charm data in this way we obtain the following
$c$-quark mass values in the $\overline{\text{MS}}\, $\ scheme
\begin{eqnarray}
\label{eq:mcabm11-nlo}
m_c(m_c) \,\,=&
1.20\, \pm 0.05 (\text{exp})
\hspace*{30mm}
&{\rm NLO}
\, ,
\\
\label{eq:mcabm11-nnlo}
m_c(m_c) \,\,=&
1.30\, \pm 0.04 (\text{exp})
\hspace*{30mm}
&{\rm NNLO_\text{approx}}
\, .
\end{eqnarray}
Here the NNLO value corresponds to $d_N=-0.4$ which provides the
best agreement with the data
in line with the approach of Ref.~\cite{Alekhin:2012un}.
The experimental uncertainties in $m_c(m_c)$ are calculated by propagation of the
errors in the data, taking into account the systematic error correlations.
For the combined HERA data~\cite{Abramowicz:1900rp} they stem from 48 sources
including the extrapolation of the visible charm production cross
section to the full phase space\footnote{The combined HERA data on
open charm production with their systematic uncertainties used
in the present analysis are available from {\tt http://arxiv.org}
as an attachment to the arXiv version of the present paper.}.
This extrapolation is sensitive to
the calculation details, such as fragmentation-model parameters, the PDFs, the value of $m_c$, etc.
The corresponding systematic errors encode the impact
of the sensible variation of these parameters on the cross section values.
Ideally, the extrapolation correction
has to be calculated in the analysis iteratively, in parallel with fitting $m_c$ and the PDFs, as it has been
done in the earlier determination of $m_c$ in Ref.~\cite{Alekhin:2012un} based on the selected set of
the H1 open charm production data.
As discussed in Sec.~\ref{sec:data}, this approach is inapplicable in the present analysis
because the necessary information about the visible phase
space is lost in the combination of the H1 and ZEUS data.
The extracted value of $m_c$ thus faces a procedural bias
due to the fact that the extrapolation corrections are calculated for a fixed value of $m_c$.
However, the corresponding uncertainty was estimated in Ref.~\cite{Abramowicz:1900rp}
by a conservative variation of the input used in the extrapolation correction.
Therefore, the quoted experimental uncertainties in $m_c$ must exceed this bias.
The central values of $m_c$ in Eqs.~(\ref{eq:mcabm11-nlo}) and (\ref{eq:mcabm11-nnlo}) are lower
than those in our earlier determination in Ref.~\cite{Alekhin:2012un}.
In particular, this difference can be explained by a
shift of the data obtained by the H1 and ZEUS experiments
in the process of their combination, cf. Ref.~\cite{Abramowicz:1900rp} for details.
Besides, the NNLO correction employed in Ref.~\cite{Alekhin:2012un} corresponds to the
interpolation parameter $d_N=-0.6$, which is somewhat different from
the one obtained in the present analysis; this causes an additional
shift of $m_c(m_c)$ at NNLO. However, in any case the values of $m_c(m_c)$
in Eqs.~(\ref{eq:mcabm11-nlo}) and (\ref{eq:mcabm11-nnlo}) are compatible with the
results of Ref.~\cite{Alekhin:2012un} within the uncertainties.
To study the sensitivity of the $m_c$ determination to the particular choice
of PDFs we repeat our analysis considering other 3-flavor PDFs.
For this purpose, we take in all cases the nominal PDFs obtained with the fixed values of the $c$-quark mass.
The NNLO values of $m_c$ obtained in this way demonstrate good agreement, cf. Tab.~\ref{tab:comp}.
At NLO only the ABM11~\cite{Alekhin:2012ig} and GJR~\cite{Gluck:2007ck,JimenezDelgado:2008hf}
results coincide, while lower values are obtained in case of
MSTW08~\cite{Martin:2009iq} and NN21~\cite{Ball:2011mu}.
This difference may partly be due to the spread in the $c$-quark mass values used in the different PDF fits.
However, the difference between the ABM11 results obtained with and without
taking into account the $m_c$-dependence of PDFs is of ${\cal O}(10)~{\rm MeV}$,
cf. Tab.~\ref{tab:comp} and Eqs.~(\ref{eq:mcabm11-nlo}) and (\ref{eq:mcabm11-nnlo}).
This may point to other reasons for this difference.
In fact, it is also correlated with the scheme used in the PDF fits.
While the ABM11 and JR PDFs are based on the 3-flavor scheme,
the MSTW and NNPDF analyses are performed with different versions of a
general-mass variable-flavor-number (GMVFN) scheme.
In particular, this explains the difference at NLO between the MSTW and ABM/GJR results since
the GMVFN scheme commonly deviates at NLO from the 3-flavor one to a larger extent than at NNLO.
Recall also that all PDF fits except ABM11 refer to the on-shell scheme for heavy quarks
and compare to theoretical predictions using the pole mass $m_c^{\rm pole}$.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|} \hline
& ABM11~\cite{Alekhin:2012ig} & JR(GJR)~\cite{Gluck:2007ck,JimenezDelgado:2008hf} & MSTW08~\cite{Martin:2009iq} & NN21~\cite{Ball:2011mu} \\ \hline
NLO & 1.21 & 1.21 & 1.12 & 1.01 \\ \hline
NNLO & 1.28 & 1.27 & 1.29 & -- \\ \hline
\end{tabular}
\end{center}
\caption{
\label{tab:comp}
\small The value of $m_c(m_c)$ in GeV obtained from the
analysis of the combined HERA data on open charm
production~\cite{Abramowicz:1900rp} with different 3-flavor PDFs in NLO and
NNLO. Note that the ABM11 values are different from the ones in
Eqs.~(\ref{eq:mcabm11-nlo}) and (\ref{eq:mcabm11-nnlo}) since the latter were obtained within the
$m_c$-dependent variant of the ABM11 PDFs.}
\end{table}
Although Eqs.~(\ref{eq:mcabm11-nlo}) and (\ref{eq:mcabm11-nnlo}) for $m_c(m_c)$
are based on a consistent treatment of the $c$-quark mass dependence of the PDFs,
the constraints on the variation of those PDFs with $m_c$
imposed by the data included in the ABM11 fit are not yet taken into account
in the determination of those numbers.
To take advantage of the sensitivity of charm production in neutrino-nucleon scattering \cite{Bazarko:1994tt,Goncharov:2001qe}
and of inclusive DIS to the charm mass, we also perform NLO and NNLO variants of the ABM11 fit,
which include those data together with the HERA charm data of Ref.~\cite{Abramowicz:1900rp}
and treat the value of $m_c(m_c)$ as a fitted
parameter\footnote{To allow for a variation of the
factorization scale in the present
analysis a cut of $Q^2>2~{\rm GeV}^2$ is imposed
on the data for dimuon production in neutrino-nucleon
DIS~\cite{Bazarko:1994tt,Goncharov:2001qe}, while in the
analysis of Ref.~\cite{Alekhin:2012ig} these data with
$Q^2\simeq 1~{\rm GeV}^2$ were used.}.
From these versions of the fit we obtain the values of
\begin{eqnarray}
\label{eq:mcres-nlo}
m_c(m_c) \,\,=&
1.15\, \pm 0.04 (\text{exp})\,^{+0.04}_{-0.00} (\text{scale})
\hspace*{30mm}
&{\rm NLO}
\, ,
\end{eqnarray}
\begin{eqnarray}
\label{eq:mcres-nnlo}
m_c(m_c) \,\,=&
1.24\, \pm 0.03 (\text{exp})\,^{+0.03}_{-0.02} (\text{scale})\,^{+0.00}_{-0.07} (\text{th}),
\hspace*{14mm}
&{\rm NNLO_\text{approx}}
\, ,
\end{eqnarray}
where the NNLO value corresponds to $d_N=-0.1$. This
provides the best description of the data, with $\chi^2$ normalized by the number
of data points ($NDP$) given by
$\chi^2/NDP = 3459/3080$ for the whole data set and
$\chi^2/NDP = 61/52$ for the combined HERA charm data, cf. also Fig.~\ref{fig:hera}.
At the same time, the option B of the massive NNLO correction of
Ref.~\cite{Kawamura:2012cr} corresponding to $d_N=1$
is clearly disfavored by the data giving
$\chi^2/NDP=115/52$ for the HERA charm data and $\chi^2/NDP=3547/3080$ for the whole data set.
Therefore we estimate the uncertainty due to the choice of the massive NNLO correction
as the variation between the values of $m_c(m_c)$ obtained with $d_N=-0.1$ and $d_N=0.5$ in Eq.~(\ref{eq:inter}).
This yields the value of $0.07~{\rm GeV}$ quoted in
Eq.~(\ref{eq:mcres-nnlo}) as an estimate of the theoretical uncertainty.
The scale uncertainty in $m_c(m_c)$ is calculated as a variation
due to a change in the factorization scale by a factor of $1/2$ and $2$ around the nominal value of $\sqrt{m_c^2+\kappa Q^2}$,
where $\kappa=4$ for neutral-current and $\kappa=1$ for charge-current heavy-quark production, respectively.
For the NLO case both these variations lead to
an increase in $m_c(m_c)$ and we select the bigger shift as the
uncertainty due to the scale variation.
The NNLO scale uncertainty in $m_c(m_c)$ is asymmetric and smaller than
the NLO one, in line with the estimates of Ref.~\cite{Kawamura:2012cr}.
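The procedure can be summarized by the following schematic snippet; the nominal scale choice is the one quoted above, while the refit results are invented placeholder numbers and not output of the actual analysis.
\begin{verbatim}
import numpy as np

def mu_f(Q2, m_c, kappa):
    # nominal factorization scale: kappa = 4 (neutral current), kappa = 1 (charged current)
    return np.sqrt(m_c**2 + kappa * Q2)

def refit_mc(scale_factor):
    # placeholder: each entry would require a full refit with mu_f -> scale_factor * mu_f
    return {0.5: 1.19, 1.0: 1.15, 2.0: 1.17}[scale_factor]   # invented NLO-like values

print(mu_f(10.0, 1.15, 4))                     # nominal NC scale for Q^2 = 10 GeV^2
shifts = [refit_mc(f) - refit_mc(1.0) for f in (0.5, 2.0)]
print(max(shifts))                             # at NLO the larger (positive) shift is quoted
\end{verbatim}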
The experimental error of $m_c$ is reduced due to the constraints on the PDFs
by the inclusive DIS data.
The theoretical error due to missing higher order corrections is the
dominant source of uncertainty in $m_c$.
The central value of $m_c(m_c)$ obtained at NLO in Ref.~\cite{Abramowicz:1900rp}
for the combined HERA data including the data on charm production, i.e.,
$m_c(m_c) = 1.26\, \pm 0.05 (\text{exp})$~GeV,
turns out to be bigger than our NLO result. It is important to
note that this value is obtained from a scan of
$m_c(m_c)$ and not in a simultaneous fit of the PDFs and the charm quark mass.
Also, the difference to our result can partially be explained by the different cuts on $Q^2$ imposed in the analysis of Ref.~\cite{Abramowicz:1900rp} and in ours.
By changing our cut of $Q^2>2.5~\text{GeV}^2$ to the cut of
$Q^2>3.5~\text{GeV}^2$ used in~\cite{Abramowicz:1900rp}
we get a shift of $+0.03~\text{GeV}$ in our NLO value of $m_c(m_c)$ in Eq.~(\ref{eq:mcres-nlo}).
Another source for the difference is the data on dimuon production
in neutrino-nucleon DIS~\cite{Bazarko:1994tt,Goncharov:2001qe} included in ABM11.
By excluding this data set, we obtain a shift of $+0.04~\text{GeV}$ for
the $c$-quark mass in Eq.~(\ref{eq:mcres-nlo}).
Note that the value of $m_c(m_c)$ of Ref.~\cite{Abramowicz:1900rp} is also systematically
larger than the NLO entries in Tab.~\ref{tab:comp} which are obtained with fixed PDFs.
Therefore the remaining difference between Eq.~(\ref{eq:mcres-nlo}) and $m_c(m_c)$ of Ref.~\cite{Abramowicz:1900rp}
is evidently also related to particularities of the shape of HERA PDFs
used in Ref.~\cite{Abramowicz:1900rp}.
Let us finally discuss a number of cross checks.
Operating in the framework of a global analysis of the proton structure as provided by ABM11
offers the possibility to account consistently for all correlations of the
$c$-quark mass with the non-perturbative parameters of the fit, among which
the gluon distribution function and the strong coupling constant $\alpha_s(M_Z)$
exhibit the strongest correlation with $m_c$.
We observe that the shape of the gluon distribution obtained in the present fit is
somewhat modified with respect to the ABM11 PDFs, cf. Fig.~\ref{fig:pdfs}.
However, the changes are essentially within the PDF uncertainties.
The sea distribution is affected to a lesser extent and the other PDFs are practically unchanged.
The correlation of the fitted value of $m_c$ with the
strong coupling constant $\alpha_s(M_Z)$ is shown in Fig.~\ref{fig:alpha}
for a variation of the value of $\alpha_s(M_Z)$ in the range $\alpha_s(M_Z)=0.110 - 0.122$.
Recall that the analysis of ABM11~\cite{Alekhin:2012ig} has obtained
$\alpha_s(M_Z) = 0.1180 \pm 0.0012$ at NLO and
$\alpha_s(M_Z) = 0.1134 \pm 0.0011$ at NNLO as best fits.
Fig.~\ref{fig:alpha} demonstrates a remarkable stability of the $c$-quark mass both at NLO and NNLO.
For a variation of $0.115 \le \alpha_s(M_Z) \le 0.119$
the shift $\Delta m_c(m_c)$ is confined to an interval of 20~MeV
in the NLO case, and
for a range of $0.110 \le \alpha_s(M_Z) \le 0.114$ at NNLO
to an interval of only 10~MeV.
This is to be compared with the $\alpha_s(M_Z)$ dependence inherent in QCD sum rule analyses.
For example, for a variation of $0.113 \le \alpha_s(M_Z) \le 0.119$
Ref.~\cite{Dehnadi:2011gc} observes a linear growth of the value of $m_c(m_c)$
with a maximal shift of $\Delta m_c(m_c) = 25~{\rm MeV}$ (cf. Fig.~11a in~\cite{Dehnadi:2011gc}).
In contrast, the numbers for $m_c(m_c)$ determined in Eqs.~(\ref{eq:mcres-nlo}) and (\ref{eq:mcres-nnlo})
do not carry such a bias with respect to the value of the strong coupling constant.
To conclude the discussion we also convert the values of $m_c(m_c)$ in Eqs.~(\ref{eq:mcres-nlo}) and (\ref{eq:mcres-nnlo})
to the on-shell scheme. Using the well-known relations for the scheme
transformation as encoded in~\cite{Chetyrkin:2000yt} and the values for $\alpha_s(M_Z)$ of ABM11 at NLO and NNLO,
we obtain
\begin{eqnarray}
\label{eq:mcpole-nlo}
m_c^{\rm pole} \,\,=&
1.35\, \pm 0.05 (\text{exp})\,^{+0.05}_{-0.00} (\text{scale})
\hspace*{30mm}
&{\rm NLO}
\, ,
\\
\label{eq:mcpole-nnlo}
m_c^{\rm pole} \,\,=&
1.59\, \pm 0.04 (\text{exp})\,^{+0.04}_{-0.03} (\text{scale})\,^{+0.00}_{-0.09} (\text{th}),
\hspace*{14mm}
&{\rm NNLO_\text{approx}}
\, .
\end{eqnarray}
As expected, the numerical values for $m_c^{\rm pole}$ are larger than the
values given in Eqs.~(\ref{eq:mcres-nlo}) and (\ref{eq:mcres-nnlo}),
and these positive corrections grow in size, i.e., the shifts of the central values amount to
$\Delta m_c(m_c) = 200~{\rm MeV}$ at NLO and $\Delta m_c(m_c) = 350~{\rm MeV}$ at NNLO.
The increasing spread between the numbers in Eqs.~(\ref{eq:mcpole-nlo}) and (\ref{eq:mcpole-nnlo})
illustrates the poor perturbative convergence of the pole-mass scheme, which
is particularly pronounced at the low scales relevant for DIS charm production.
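For rough orientation, the size of these shifts can be reproduced with the truncated perturbative conversion formula, keeping only the well-known one- and two-loop coefficients; the analysis itself uses the complete relations of Ref.~\cite{Chetyrkin:2000yt}, and the value of $\alpha_s(m_c)$ in the example below is an assumed input rather than a fit result.
\begin{verbatim}
def msbar_to_pole(m_msbar, alpha_s_at_m, n_light=3, order=2):
    # truncated MSbar -> pole mass conversion; standard one- and two-loop coefficients,
    # n_light = number of massless flavors
    a = alpha_s_at_m / 3.141592653589793
    shift = 4.0 / 3.0 * a
    if order >= 2:
        shift += (13.4434 - 1.0414 * n_light) * a**2
    return m_msbar * (1.0 + shift)

# assumed alpha_s(m_c) ~ 0.38, for illustration only
print(msbar_to_pole(1.24, 0.38))   # roughly 1.6 GeV, of the size of Eq. (eq:mcpole-nnlo)
\end{verbatim}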
\begin{figure}[hhh]
\center
\includegraphics[width=0.9\textwidth]{pull.eps}
\setlength{\unitlength}{1cm}
\caption{\label{fig:hera}
The combined HERA data on the reduced cross section for the open
charm production~\cite{Abramowicz:1900rp} versus $x$ at different
values
of $Q^2$ in comparison with the result of the present analysis at
NLO (dashed line) and NNLO (solid line).
A variant of the fit based on the option (A+B)/2 of the NNLO Wilson
coefficients of Ref.~\cite{Kawamura:2012cr},
cf. Eq.~(\ref{eq:inter}), is displayed for comparison (dotted line).
}
\end{figure}
\begin{figure}[hhh]
\center
\includegraphics[width=0.9\textwidth]{pdfs.eps}
\setlength{\unitlength}{1cm}
\caption{\label{fig:pdfs}
The relative change in the NNLO gluon (left) and non-strange sea
(right) distributions obtained in the present analysis
with respect to the ABM11 PDFs (solid lines). The relative uncertainties
in the PDFs are displayed for comparison (shaded area: ABM11, dotted lines:
present analysis).
}
\end{figure}
\begin{figure}[hhh]
\center
\includegraphics[width=0.9\textwidth]{alpha.eps}
\setlength{\unitlength}{1cm}
\caption{\label{fig:alpha}
The values of $m_c(m_c)$ obtained in the NLO and NNLO variants of
the ABM11 fit with the combined HERA charm data~\cite{Abramowicz:1900rp}
included and the value of $\alpha_s(M_Z)$ fixed.
The position of the star displays the result with the value of $\alpha_s(M_Z)$ fitted~\cite{Alekhin:2012ig}.
}
\end{figure}
\section{Conclusions}
\label{sec:concl}
The new combined HERA data for charm production cross section measurements
in DIS allow for a precise determination of the charm-quark mass
in the $\overline{\text{MS}}\, $\ scheme by comparing to QCD theory predictions in the
FFN scheme at NLO and NNLO.
Embedding the data analysis in a global fit takes advantage of a
well-established theory framework and simultaneously accounts for all
correlations with other non-perturbative parameters, of which the gluon PDF in
the proton and the strong coupling constant $\alpha_s(M_Z)$ are most important and
have been studied in detail.
The effect of the HERA DIS charm data on the extraction of $m_c(m_c)$
has been demonstrated in Eqs.~(\ref{eq:mcabm11-nlo}), (\ref{eq:mcabm11-nnlo}).
Yet, the full potential for a precision determination of $m_c(m_c)$ unfolds
in a global fit due to the additional constraints imposed by the inclusive HERA data
and those from neutrino-nucleon DIS.
Thus, the best values for the $c$-quark mass are
$m_c(m_c) = 1.15\, \pm 0.04 (\text{exp})\,^{+0.04}_{-0.00} (\text{scale})$ GeV
at NLO and
$m_c(m_c) = 1.24\, \pm 0.03 (\text{exp})\,^{+0.03}_{-0.02} (\text{scale})\,^{+0.00}_{-0.07} (\text{theory})$ GeV
at approximate NNLO, cf. Eqs.~(\ref{eq:mcres-nlo}) and (\ref{eq:mcres-nnlo}),
although the accuracy of the latter determination still suffers from
missing information on the three-loop Wilson coefficients for neutral current
DIS heavy quark production at small-$x$ and small values of $Q^2$.
This implies an additional theoretical uncertainty on $m_c(m_c)$ estimated to be
in the range $- 70 \le \Delta m_c \le 0$ MeV.
The obtained values in Eqs.~(\ref{eq:mcres-nlo}) and (\ref{eq:mcres-nnlo})
are compatible with the previous analysis of Ref.~\cite{Alekhin:2012un} and
with the world average $m_c(m_c) = 1.275 \pm 0.025~\text{GeV}$
as summarized by the particle data group \cite{Beringer:2012}.
The accuracy of the determination is competitive with other approaches,
e.g., from scattering reactions in time-like kinematics.
\subsection*{Acknowledgments}
We acknowledge fruitful discussions with R.~Pla\v{c}akyt\.{e}.
This work has been supported in part by Helmholtz Gemeinschaft under contract
VH-HA-101 ({\it Alliance Physics at the Terascale}),
VH-NG-401 ({\it Young Investigator group "Physics of gluons and heavy quarks"}),
by the Deutsche Forschungsgemeinschaft in Sonderforschungs\-be\-reich/Trans\-regio~9 and
by the European Commission through contract PITN-GA-2010-264564 ({\it LHCPhenoNet}).
\section{Introduction}
The birth of coorbit theory dates back to the 1980s,
starting with a series of papers by Feichtinger and Gröchenig~\cite{FeGr86, Gr88, Gr91}.
The main intention was to characterize function spaces via an abstract
transform, the so-called voice transform.
In the original setup, this transform is determined by an integrable irreducible representation
of a locally compact group on a Hilbert space $\mathcal{H}$ unifying e.g.\ the continuous wavelet transform, the
short-time Fourier transform, and the recent shearlet transform, to mention just
a few. More recently, representations which are neither necessarily irreducible nor integrable have been considered~\cite{dadelasttevi14}. They make it possible
to treat, for instance, Paley-Wiener spaces and spaces related to Shannon wavelets and Schr\"odingerlets.
Classical examples of coorbit spaces associated to the continuous
wavelet transform on the $ax+b$-group are the homogeneous Besov-Lizorkin-Triebel
spaces \cite{Tr83,Tr88,Tr92}, identified rigorously as coorbits in Ullrich
\cite{T10}. Concerning further extensions of these spaces and their interpretation
as coorbits we refer to Liang et al.\ \cite{LiSaUlYaYu11, LiSaUlYaYu12}.
More general wavelet coorbit spaces associated to a semidirect product $G={{\re}^d}\rtimes H$, with a suitable subgroup $H$ of $GL({{\re}^d})$ as dilation group,
have been studied in \cite{fu13a,fu13b,furaitou15} and could recently be identified with certain decomposition spaces on the Fourier domain \cite{fuvoigt14}.
A specific example of this general setup is the shearlet transform, where $G$ is the shearlet group. The
associated shearlet spaces have first been studied in \cite{dakustte09}.
Other coorbit spaces, based
on a voice transform different from the wavelet transform, are e.g.\ modulation spaces \cite{gr01,fe83-4} and Bergman spaces \cite{FeGr86}.
Coorbit theory thus covers a great variety of different function spaces.
The underlying group structure, however, turns out to be a severe restriction for the theory, since the identification of, e.g., inhomogeneous spaces of the above
type was, however desirable, not possible for a long time. For that reason the theory
has evolved and several subsequent contributions have weakened among others the
assumption that the voice transform is supported on a locally compact
group. For instance, Dahlke, Steidl, and Teschke replaced it by a homogeneous
space, i.e.,\ a quotient of a group by a subgroup, with the aim of treating
functions on manifolds \cite{dastte04,dastte04-1,daforastte08}.
The starting point for the general coorbit space theory presented in this
paper is the approach used by Fornasier and Rauhut~\cite{fora05}, which was later revised and extended in~\cite{balhol10} and further expanded in~\cite{RaUl10}. There, the
group structure is abandoned completely and the voice transform is determined
solely by an abstract continuous frame $\mathcal{F} = \{\varphi_x\}_{x\in X}$ in $\mathcal{H}$
indexed by a locally compact Hausdorff space $X$ (not necessarily a group), i.e., $X$ is equipped with a Radon measure $\mu$
such that the map $x\mapsto\varphi_x$ is weakly measurable and that with constants $0<C_1,C_2<\infty$
\begin{equation}\label{eq:stab}
C_1\|f|\mathcal{H}\|^2 \leq \int_{X} |\langle f,\varphi_x \rangle|^2 d\mu(x) \leq C_2\|f|\mathcal{H}\|^2\quad \mbox{for all }
f\in \mathcal{H}\,.
\end{equation}
(Note that weak measurability of $x\mapsto\varphi_x$ in $\mathcal{H}$ implies that the integral in \eqref{eq:stab} is well-defined.)
We combine the approach in \cite{RaUl10} with ideas from \cite{ra05-3} to
define coorbits
$$
\mathsf{Co}({\mathcal F},Y) := \{f~:~\langle f, \varphi_x \rangle \in Y\}
$$
even of quasi-Banach spaces $Y$, using the general voice transform associated to
${\mathcal F}$. We thereby also recall the
relevant details of the existing theory, especially from \cite{fora05,RaUl10}
and fix some earlier inaccuracies. The developed theory yields noteworthy
generalizations even for the Banach case,
e.g.\
some assumptions made in \cite{fora05,RaUl10} can be weakened, such as the
uniform boundedness of the analyzing frame ${\mathcal F}$ or some technical restrictions
on the weights and the admissible coverings. Most notably however, we can
generalize the main results of the discretization theory, which is possible
since we take a different -- more direct -- route to establish them.
It turns out that the three essential Lemmas \ref{auxlem:mainanalysis},
\ref{auxlem:mainsynthesis2}, and \ref{auxlem:Uinvert} below constitute the
technical foundation for the proof of the general abstract discretization
results in Theorems \ref{thm:atomicdec} and \ref{thm:frameexp}. Putting these
lemmas at the center of the exposition simplifies many arguments and
allows for a systematic approach towards new abstract discretization results. In
fact, we obtain
discrete characterizations of coorbit spaces by ``sampling'' the function using
a sub-sampled discrete frame ${\mathcal F}_d = \{\varphi_{x_i}\}_{i\in I}$ on a
suitable index set $I$. Of course, as usual in coorbit space theory, there are
several technical assumptions to check. However, a great advantage of the
presented discretization machinery is
the fact that it provides a straight path towards discretization, where matters essentially reduce to checking properties (associated to $Y$) of
the analyzing frame ${\mathcal F}$. This is in contrast to the usual approach where atomic decompositions and wavelet
characterizations, useful to study embeddings, $s-$numbers, interpolation
properties etc., are often developed from scratch for different related function
spaces.
To demonstrate the potential of the theory presented here, we apply it to
identify spaces with variable smoothness and integrability, so-called variable
exponent spaces, as coorbits. Triebel-Lizorkin spaces of this kind are defined via the
quasi-norm
\begin{equation}\label{f000}
\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\| = \Big\|\Big(\sum\limits_{j=0}^{\infty}
|w_j(\cdot)(\Phi_j \ast f)(\cdot)|^{q(\cdot)}\Big)^{1/q(\cdot)}|L_{p(\cdot)}({{\re}^d})\Big\|\,,
\end{equation}
where the functions $w_j$ are weights and $\Phi_j$ are frequency filters corresponding to a dyadic decomposition of the frequency plane.
For the precise formulation see Definition \ref{inhom} below.
The functions $p(\cdot), q(\cdot)$ represent certain integrability
parameters, which may vary with the spatial variable $x$.
The
$2$-microlocal weight sequence $w_j(\cdot)$ determines the variable smoothness,
see \cite{KeVybDiff} for details. Function spaces with variable exponents are a
fast developing field thanks
to its many applications in stochastics, fluid dynamics and image processing,
see \cite{DieningHastoRoudenko2009} and \cite{DieningHastoBuch2011} and
references therein. The Lebesgue spaces $L_{p(\cdot)}({{\re}^d})$ with variable
integrability, see Definition \ref{Lppunkt} below, were already used by Orlicz
\cite{Orlicz31}. Recent contributions by Diening \cite{Diening2004} on the
boundedness of the Hardy-Littlewood maximal operator on $L_{p(\cdot)}({{\re}^d})$ make them
accessible for harmonic analysis issues.
Surprisingly, the spaces \eqref{f000} can be handled within the generalized
coorbit space theory presented in this paper. In fact, due to unbounded left
and right translation operators (within the $ax+b$-group) a coorbit
characterization of homogeneous spaces of the above type already seems to be
rather out of reach
at first glance. However, we are able to identify them as coorbits
$\mathsf{Co}({\mathcal F},Y)$ of, what we call, Peetre-Wiener type spaces $Y$ by using a
suitable continuous frame ${\mathcal F} = \{\varphi_x\}_{x\in X}$ with the index set $X =
{{\re}^d} \times [(0,1) \cup \{\infty\}]$. These spaces $Y$ are solid quasi-Banach
function spaces (QBF) defined on $X$, see Section \ref{ssec:PeetSp} below.
Peetre-Wiener type spaces can be
seen as a mixture of the Peetre type spaces introduced in \cite{T10}, and
certain Wiener amalgam spaces, see \cite{feGr89a}, \cite{ra05-4}. They appear
naturally when dealing with continuous local mean characterizations, a strategy
developed in \cite{T10} and \cite{LiSaUlYaYu11}. In fact, we show in Subsection
\ref{clm} below that with large enough $a>0$ the quantity
\begin{equation}\label{coorb001}
\begin{split}
\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_3 &= \|w(\cdot,\infty)\langle \Phi^{\ast}_0
f\rangle_{a}(\cdot)|L_{p(\cdot)}({{\re}^d})\|\\
&+ \Big\|\Big(\int_{0}^1
|w(\cdot,t)\langle\Phi^{\ast}_t f\rangle_{a}(\cdot)|^{q(\cdot)}\frac{dt}{t}\Big)^{1/q(\cdot)}
|L_{p(\cdot)}({{\re}^d})\Big\|\,
\end{split}
\end{equation}
represents an equivalent characterization for $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$. Here
$$
\langle\Phi^{\ast}_t f\rangle_{a}(x):= \sup_{\substack{z\in {{\re}^d}
\\t/2\le\tau\le 2t, \tau<1}} \frac{|(\Phi_{\tau} \ast f)(x+z)|}
{(1+|z|/\tau)^{a}}
$$
denotes the corresponding maximal function, which is essentially a
modification of the widely used Peetre maximal function, see
\eqref{Peemax} below, and is used in the definition of the Peetre-Wiener type
spaces, see Definition \ref{PeetreWiener}. Now the representation
\eqref{coorb001} is actually the identification of $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$
as a coorbit space of a Peetre-Wiener type space. Applying the abstract theory, in
particular Theorem \ref{thm:frameexp}, we obtain biorthogonal wavelet
expansions \cite{CoDaFe92} of the respective coorbit spaces. We describe the
application of the machinery for the rather simple (orthogonal) Meyer
wavelets, see Appendix \ref{sect:OWT}. Due to its generality, a straightforward
modification of Theorem \ref{thm:frameexp} leads to general (biorthogonal)
wavelet expansions and other tight discrete wavelet frames.
Let us mention that the continuous local mean characterizations
\eqref{coorb001} of spaces with variable exponents, see also Theorem
\ref{thm:contchar}, are new and interesting in their own
right. In fact, one has to deal with additional difficulties
since a version of the classical Fefferman-Stein maximal
inequality, a crucial tool in this respect, is in general not true in
$L_{{p(\cdot)}}(\ell_{{q(\cdot)}})$ if $q(\cdot)$ is non-constant.
Finally, the provided discretizations of such spaces are not
entirely new. In
\cite{Ke11} the author used a different technique in order to obtain
discretizations with Meyer and Daubechies wavelets. However, let us mention that
the abstract Theorems \ref{thm:atomicdec}, \ref{thm:frameexp} below neither
restrict to orthonormal wavelets nor compactly supported atoms.
\subsection{Outline}
The paper is structured as follows. The abstract theory is established in
Section~\ref{sec:abstrth}. It generalizes earlier contributions, especially
\cite{fora05,RaUl10}, and in particular now includes the quasi-Banach case. In
Section~\ref{sec:varint} we give a short introduction to variable exponent
spaces, which will serve as our demonstration object for a concrete application
of the theory. We will utilize a new continuous local means characterization
in Section~\ref{sec:appcoorbit} to identify them as coorbits of a new
scale of Peetre-Wiener type spaces.
The abstract theory then yields atomic decompositions as well as discrete characterizations via wavelet frames.
Some useful facts concerning the continuous and discrete (orthogonal) wavelet
transform are collected in the Appendix.
\subsection{Notation}
The symbols $\N$, $\N_0$, $\mathbb{Z}$, $\mathbb{R}$, $\mathbb{R}_+$, and $\C$ denote
the natural numbers, the natural numbers including $0$, the integers, the real
numbers, the non-negative real numbers, and the complex numbers. For a real number $t\in\mathbb{R}$ we put
$(t)_+=\max\{t,0\}$ and $(t)_-=\min\{t,0\}$. The conjugation of $z\in\C$ is
denoted by $\overline{z}$.
Let us emphasize that ${{\re}^d}$ has the usual meaning and $d\in\N_0$ is reserved
for its dimension. The symbol $|\cdot|$ denotes the Euclidean norm on ${{\re}^d}$
and $|\cdot|_1$ the $\ell_1$-norm.
The space of all sequences with entries in some set $M$ over some countable index set $I$
is denoted by $M^I$ and we write $\Lambda(i)$ for the $i$-th sequence element of
a sequence $\Lambda\in M^I$.
For topological vector spaces $Y$ and $Z$ the class of linear continuous mappings from
$Y$ to $Z$ is denoted by $\mathcal{L}(Y,Z)$. The notation $\Phi:
Y\hookrightarrow Z$ indicates that $Y$ is continuously embedded into $Z$, i.e.,
$\Phi$ is an injective continuous linear map from $Y$ into $Z$.
If the embedding is canonical we simply write $Y\hookrightarrow Z$.
If $Y$ is equipped with a quasi-norm we use $\|f|Y\|$ for the quasi-norm of
$f\in Y$. The operator quasi-norm of $A \in \mathcal{L}(Y,Z)$ is denoted by
$\|A|Y\to Z\|$.
We use the notation
$a\lesssim b$ if there exists a constant $c>0$ (independent of the
context dependent relevant parameters) such that $a \le c\,b$. If
$a\lesssim b$ and $b \lesssim a$ we write $a \asymp b$. Furthermore,
we write $Y\asymp Z$ for two quasi-normed spaces $Y,Z$ which coincide as sets and whose quasi-norms are equivalent.
\section{General coorbit space theory}
\label{sec:abstrth}
Let $\mathcal{H}$ be a separable Hilbert space
and $X$ a locally compact Hausdorff space endowed with a positive Radon measure $\mu$ with ${\rm supp \, } \mu = X$.
A family $\mathcal{F} = \{\varphi_x\}_{x\in X}$ of vectors in $\mathcal{H}$
is called a continuous frame (see \cite{alanga93}) if the assignment $x\mapsto\varphi_x$ is weakly measurable and if there exist constants $0<C_1,C_2<\infty$ such that \eqref{eq:stab} is satisfied.
Let us record an important property.
\begin{lemma}\label{lem:total}
Let $\mathcal{F}=\{\varphi_x \}_{x\in X}$ be a continuous frame in $\mathcal{H}$ and $N\subset X$ a set of measure zero.
Then $\{\varphi_x \}_{x\in X\backslash N}$ is total in $\mathcal{H}$.
\end{lemma}
\noindent {\bf Proof.}\,
Let us put $X^\ast:=X\backslash N$. We have to show that
$V:=\spn \{ \varphi_x:x\in X^\ast \}$ is dense in $\mathcal{H}$. Indeed, using the frame property of $\mathcal{F}$, we can deduce for every $f\perp V$
\[
\|f|\mathcal{H}\|^2 = \int_X |\langle f,\varphi_x \rangle|^2 \,d\mu(x) = \int_{X^*} |\langle f,\varphi_x \rangle|^2 \,d\mu(x) = 0.
\]
\hspace*{\fill} \rule{3mm}{3mm}
\noindent
To avoid technicalities, we assume throughout this paper that $X$ is $\sigma$-compact.
We further assume that the continuous frame is Parseval, i.e.\ $C_1=C_2=1$, and note that
-- apart from minor changes -- the theory presented here is valid also for general tight frames where $C_1=C_2$.
It is also possible to develop the theory in the setting
of non-tight frames, where the associated coorbit theory has been worked out
in \cite{fora05} -- at least to a significant extent.
For $0<p<\infty$ we define the Lebesgue space $L_p(X):=L_p(X,\mu)$ as usual by
\[
\|F| L_p(X,\mu) \| := \Big( \int_X |F(x)|^p \,d\mu(x) \Big)^{1/p}<\infty.
\]
A function $F$ belongs to $L_\infty(X):=L_\infty(X,\mu)$ if and only if $F$ is essentially bounded.
The corresponding sequence spaces $\ell_p(I)$ are obtained by choosing $X$ as a countable index set $I$, equipped with the discrete topology and counting measure $\mu$.
Associated to a continuous frame $\mathcal{F}$ is the voice transform $V_{{\mathcal F}}: \mathcal{H} \to L_2(X,\mu)$ defined by
$$
V_{{\mathcal F}}f(x) = \langle f,\varphi_x \rangle\,,\quad f \in {\mathcal H}, x \in X,
$$
and its adjoint $V^{\ast}_{{\mathcal F}}:L_2(X,\mu) \to \mathcal{H}$ given in a weak sense by the integral
\begin{align}\label{eq:adjoint}
V^{\ast}_{{\mathcal F}} F = \int_X
F(y)\varphi_y\,d\mu(y)\,.
\end{align}
Since we assume the frame ${\mathcal F}$ to be Parseval, $V_{\mathcal{F}}$ is an isometry and in particular injective.
The adjoint $V_{\mathcal{F}}^*$ is surjective with $\|V_{\mathcal{F}}^*| L_2\rightarrow\mathcal{H} \|=1$ and the
associated frame operator $S_{\mathcal{F}}:=V_{\mathcal{F}}^{\ast}V_{\mathcal{F}}$ is the identity.
Hence we have
\begin{equation*}
f = \int_{X} V_{{\mathcal F}}f(y)\varphi_y\,d\mu(y)\quad\mbox{and}\quad
V_{{\mathcal F}}f(x) = \int_{X} V_{{\mathcal F}}f(y)\langle \varphi_y,\varphi_x\rangle \,d\mu(y)\,.
\end{equation*}
The second identity
is the crucial reproducing formula $R_{\mathcal{F}}(V_{\mathcal{F}}f)=V_{\mathcal{F}}f$ for $f\in\mathcal{H}$, where
\begin{align}\label{eqdef:kernfunc}
R_{\mathcal{F}}(x,y) = \langle \varphi_y, \varphi_x \rangle\,,\quad x,y \in X,
\end{align}
is an integral kernel (operator), referred to as the \emph{frame kernel} associated to $\mathcal{F}$.
It acts as a self-adjoint bounded operator $R_{\mathcal{F}}=V_{\mathcal{F}}V_{\mathcal{F}}^*:L_2(X)\rightarrow L_2(X)$, which is
an orthogonal projection with $R_{\mathcal{F}}(L_2(X))=V_{\mathcal{F}}(\mathcal{H})$.
The converse of the reproducing formula is also true, i.e.,
if $F\in L_2(X)$ satisfies $R_{\mathcal{F}}(F)=F$ then there exists a unique element $f\in\mathcal{H}$ such that $V_{\mathcal{F}}f=F$.
We remark that we use the same notation for the function $R_{\mathcal{F}}:X\times X\to\mathbb{C}$ given in \eqref{eqdef:kernfunc} and the associated operator $R_{\mathcal{F}}:L_2(X)\rightarrow L_2(X)$.
It is important to note that the function $R_{\mathcal{F}}$ is measurable. Indeed, utilizing an orthonormal basis $(f_n)_{n\in\N}$ of $\mathcal{H}$ we can expand $R_{\mathcal{F}}(x,y)=\sum_{n\in\N} \langle \varphi_y,f_n \rangle \langle f_n,\varphi_x \rangle$ as a point-wise limit of measurable functions.
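The basic identities $S_{\mathcal{F}}=\mathrm{Id}$ and $R_{\mathcal{F}}^2=R_{\mathcal{F}}=R_{\mathcal{F}}^\ast$ can be checked numerically in the simplest conceivable setting, namely for a finite index set $X$ with counting measure, where the analysis operator $V_{\mathcal{F}}$ is just a matrix with orthonormal columns. The following {\tt Python} sketch (an illustration only, not part of the theory) constructs a random Parseval frame and verifies both identities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 9                        # dim(H), number of frame vectors
A = rng.standard_normal((n, d))    # rows play the role of the phi_x
U, _, Vt = np.linalg.svd(A, full_matrices=False)
V = U @ Vt                         # analysis operator of a Parseval frame: V^T V = Id
S = V.T @ V                        # frame operator S_F
R = V @ V.T                        # frame kernel R_F (Gramian of the phi_x)

print(np.allclose(S, np.eye(d)))                     # True: S_F = Id on H
print(np.allclose(R @ R, R), np.allclose(R, R.T))    # True: R_F is an orthogonal projection
\end{verbatim}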
The idea of coorbit theory is to measure ``smoothness'' of $f$ via properties of the transform $V_{{\mathcal F}}f$.
Loosely speaking, the coorbit of a function space on $X$ is its retract with respect to (a suitably extended version of) the voice transform.
The classical theory and its generalizations have been developed for the case of certain Banach function spaces on $X$.
In the classical setup, where $X$ is equipped with a group structure, the extension~\cite{ra05-3}
deals with the quasi-Banach case, and our aim is to extend the generalized theory from \cite{fora05,RaUl10} analogously.
\subsection{Function spaces on $X$}
\label{ssec:QBFspaces}
We consider \emph{(quasi)-Banach function spaces}, or shortly \emph{(Q)BF-spaces},
which are linear spaces of measurable functions on $X$, equipped with
a quasi-norm under which they are complete. Here, functions are identified when they are equal almost everywhere. Hence, when speaking of a function
one often actually refers to an equivalence class. In general, this inaccuracy of language does not pose a problem. Only when it comes to point evaluations
the precise meaning must be made clear in the context.
Recall that a quasi-norm on a linear space $Y$ generalizes the concept of a norm by replacing the triangle inequality with
the more general quasi-triangle inequality
\[
\| f + g \| \le C_Y ( \| f \| + \| g \|), \quad f,g\in Y,
\]
with associated quasi-norm constant $C_Y\ge1$. Many aspects of the theory of normed spaces carry over to the quasi-norm setting,
e.g.\ boundedness and continuity of linear operators coincide, all quasi-norms on a finite-dimensional space are equivalent, etc.
An important exception is the Hahn-Banach theory concerned with the dual spaces. Note that the (topological) dual $Y^\prime$ of a quasi-normed space $Y$, equipped with the usual operator norm,
is always a Banach space. Due to the possible non-convexity of the quasi-norm however, it may not be sufficiently large
for the Hahn-Banach theorems to hold. In fact, $Y^\prime$ may even be trivial as the example of the $L_p$-spaces in the range $0<p<1$ shows.
This fact poses a serious problem for the theory.
An important tool for dealing with quasi-norms is the
Aoki-Rolewicz theorem~\cite{Ao42,Ro57}, which states that in every quasi-normed space $Y$ there exists an equivalent $r$-norm --
in the sense of an equal topology -- where an $r$-norm, $0<r\le1$, satisfies the $r$-triangle inequality
\[
\| f + g \|^r \le \| f \|^r + \| g \|^r ,\quad f,g\in Y,
\]
and in particular is a quasi-norm with constant $C_Y=2^{1/r-1}$.
The exponent $r=1/(\log_2 C_Y + 1)$ of the equivalent $r$-norm is called the \emph{exponent of $Y$}.
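A standard example illustrating these notions is the scale of Lebesgue spaces: for $0<p<1$ the elementary pointwise inequality $|a+b|^p\le |a|^p+|b|^p$ yields
\[
\| f+g | L_p(X)\|^p \,\le\, \| f | L_p(X)\|^p + \| g | L_p(X)\|^p, \qquad f,g\in L_p(X),
\]
so $\|\cdot| L_p(X)\|$ is itself a $p$-norm. Consequently, one may take $C_{L_p}=2^{1/p-1}$ as quasi-norm constant, and the exponent of $L_p(X)$ equals $p$.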
For a viable theory we need to further restrict the class of function spaces.
A quasi-normed function space $Y$ on $X$ is called \emph{solid}, if the following condition is valid,
\begin{align*}
f\text{ $\mu$-measurable},\,g\in Y,\:|f(x)|\le|g(x)|\,a.e. \quad \Rightarrow \quad f\in Y \text{ and } \|f|Y\|\le\|g|Y\|.
\end{align*}
In a solid space $Y$ we have the equality ${\|}\,|f|\,|Y\|=\| f | Y\|$ for every $f\in Y$. Moreover, there is a useful criterion for a function $f$ to belong to $Y$,
\begin{align*}
f\in Y \quad \Leftrightarrow \quad |f|\in Y \text{ and } f \text{ $\mu$-measurable}.
\end{align*}
A function space shall be called \emph{rich}, if it contains the characteristic functions $\chi_K$ for all compact subsets $K\subset X$.
A rich solid quasi-normed function space on $X$ then contains the characteristic functions $\chi_U$
for all relatively compact, measurable subsets $U\subset X$.
We will subsequently develop coorbit theory mainly for rich solid QBF-spaces $Y$ that are continuously embedded into $L_1^{\rm loc}(X)$.
As usual, the spaces $L_p^{\rm loc}(X):=L_p^{\rm loc}(X,\mu)$, $0<p\le\infty$, consist of all functions $F$ where $\|F\chi_K | L_p(X)\|<\infty$ for every
compact subset $K\subset X$. The case where $Y\not\hookrightarrow L_1^{\rm loc}(X)$ is briefly commented on at the end of Subsection~\ref{ssec:coorbit}.
It is important to understand the relation between the quasi-norm convergence and the pointwise convergence
of a sequence of functions in $Y$. We have the following result.
\begin{lemma}\label{lem:FuncConv1}
Let $Y$ be a solid quasi-normed function space on $X$, and assume $f_n\rightarrow f$ in $Y$.
Then for arbitrary but fixed representing functions $\widetilde{f}_n,\widetilde{f}$
the following holds true. For a.e.\ $x\in X$ there is a subsequence $(f_{n_k})_{k\in\N}$, whose choice may depend on the particular $x\in X$,
such that $\widetilde{f}_{n_k}(x)\rightarrow \widetilde{f}(x)$ as $k\rightarrow \infty$.
\end{lemma}
\noindent {\bf Proof.}\,
Assume first that $f_n\rightarrow 0$ in the quasi-norm of $Y$, which implies $\|f_n|Y\|\rightarrow 0$.
As $\inf_{m\ge n} |f_m|$ is a measurable function with $\inf_{m\ge n} |f_m| \le |f_k|$ for all $k\ge n$
we have $\inf_{m\ge n} |f_m|\in Y$ with $\| \inf_{m\ge n} |f_m| | Y \| \le \| f_k |Y\|$ for all
$k\ge n$ by solidity. It follows
$
0\le \| \inf_{m\ge n} |f_m| | Y \| \le \inf_{m\ge n} \| f_m |Y\|=0,
$
and hence $\inf_{m\ge n} |\widetilde{f}_m|(x) =0 $ for a.e.\ $x\in X$. This implies that for these $x\in X$
there is a subsequence $(f_{n_k})_{k\in\N}$ such that $\widetilde{f}_{n_k}(x)\rightarrow 0$.
Now let $f_n\rightarrow f$. Then $(f_n-f)\rightarrow 0$ and by the previous argumentation
for a.e.\ $x\in X$ there is a subsequence $(f_{n_k})_{k\in\N}$ such that $\widetilde{f}_{n_k}(x)-\widetilde{f}(x)\rightarrow 0$, whence $\widetilde{f}_{n_k}(x)\rightarrow \widetilde{f}(x)$.
\hspace*{\fill} \rule{3mm}{3mm}
\begin{remark}
A more thorough investigation of pointwise convergence in solid quasi-normed function spaces is carried out in~\cite{Voigt15}.
It turns out that Lemma~\ref{lem:FuncConv1} can be strengthened using \cite[Cor.~2.2.9]{Voigt15} and the fact that $X$ is $\sigma$-finite (see Step~1 in the proof of Lemma~\ref{lem:Bochner}).
In fact, there is a subsequence $(f_{n_k})_{k\in\N}$, independent of $x\in X$, with $\widetilde{f}_{n_k}(x)\rightarrow \widetilde{f}(x)$ for a.e.\ $x\in X$.
\end{remark}
\subsection{Associated sequence spaces}
Let us take a look at sequence spaces associated with a function space $Y$ on $X$.
For this we recall the notion of an admissible covering introduced in \cite{fora05,RaUl10}.
We say that a covering $\mathcal{U}=\{U_i\}_{i\in I}$ of $X$ is \emph{locally finite} if every $x\in X$ possesses
a neighborhood which intersects only a finite number of the covering sets $U_i$.
\begin{definition}\label{def:admcov}
A covering $\mathcal{U}=\{U_i\}_{i\in I}$ of $X$ is called \emph{admissible}, if it is locally finite and if it satisfies the following conditions:
\begin{enumerate}
\item[(i)] Each $U_i$ is measurable, relatively compact and has non-void interior.
\item[(ii)] The \emph{intersection number} $\sigma(\mathcal{U}):= \sup_{i\in I} \sharp\{ j ~:~ U_i\cap U_j\neq\emptyset \} $ is finite.
\end{enumerate}
\end{definition}
A covering of a locally compact Hausdorff space is
locally finite if and only if every compact subset intersects only a finite number of the covering sets.
Hence, every locally finite covering of the $\sigma$-compact space $X$ is countable.
In particular, the following lemma holds true.
\begin{lemma}\label{lem:admindex}
Every admissible covering of the $\sigma$-compact space $X$ has a countable index set.
\end{lemma}
\noindent
Following \cite{fora05,RaUl10} we now define two types of sequence spaces associated to $Y$.
\begin{definition}
For a rich solid QBF-space $Y$ on $X$ and
an admissible
covering $\mathcal{U} = \{U_i\}_{i\in I}$ of $X$
the sequence spaces $Y^{\flat}$ and $Y^{\natural}$ associated to
$Y$ and $\mathcal{U}$ are defined by
\begin{equation}\nonumber
\begin{split}
Y^{\flat} = Y^{\flat}(\mathcal{U}) &:= \Big\{\{\lambda_i\}_{i\in I}~:~
\|\{\lambda_i\}_{i\in I}|Y^{\flat}\| := \Big\|\sum\limits_{i\in I}
|\lambda_i|\chi_{U_i}|Y\Big\|<\infty
\Big\}\,,\\
Y^{\natural} = Y^{\natural}(\mathcal{U}) &:= \Big\{\{\lambda_i\}_{i\in I}~:~
\|\{\lambda_i\}_{i\in I}|Y^{\natural}\| := \Big\|\sum\limits_{i\in I}
|\lambda_i|\mu(U_i)^{-1}\chi_{U_i}|Y\Big\|<\infty
\Big\}\,.
\end{split}
\end{equation}
\end{definition}
Note that due to Lemma~\ref{lem:admindex} the index set $I$ of these sequence spaces is necessarily countable.
Also observe that due to condition~(i) of Definition~\ref{def:admcov} and ${\rm supp \, } \mu=X$ we have $\mu(U_i)>0$ for every $i\in I$, and in turn $\|\chi_{U_i}|Y\|>0$.
Viewing a sequence as a function on the index set $I$, equipped with the counting measure,
we subsequently use the terminology introduced above for function spaces. For better distinction, we will speak of a quasi-Banach sequence space and use the abbreviation QBS-space.
\begin{proposition}\label{prop:ss_basic}
The sequence spaces
$Y^\flat(\mathcal{U})$ and $Y^\natural(\mathcal{U})$ are rich solid QBS-spaces with the same quasi-norm constant $C_Y$ as $Y$.
\end{proposition}
Before we give the proof of this proposition let us establish some useful embedding results.
First observe that the mapping
\begin{align}\label{eq:ss_isom}
I^\natural_\flat: Y^\flat\to Y^\natural, \lambda_i\mapsto \mu(U_i)\lambda_i
\end{align}
is an isometric isomorphism between $Y^\flat$ and $Y^\natural$, which allows to transfer statements from one space to the other.
Moreover, if $\inf_{i\in I}\mu(U_i)>0$ we have the embedding $Y^\flat\hookrightarrow Y^\natural$. Analogously, $\sup_{i\in I}\mu(U_i)<\infty$ implies $Y^\natural\hookrightarrow Y^\flat$.
Consequently, $Y^\flat\asymp Y^\natural$ if both conditions are fulfilled.
Let $\nu:I\to[0,\infty)$ be a discrete weight and define $\|\Lambda|\ell^\nu_p\|:=\| \Lambda\nu |\ell_p \|$ for $0<p\le\infty$ and $\Lambda\in\mathbb{C}^I$.
The space $\ell_p^{\nu}(I):=\{ \Lambda\in\mathbb{C}^I : \|\Lambda| \ell^\nu_p\|<\infty \}$
is a QBS-space with quasi-norm $\|\cdot|\ell^\nu_p\|$.
\begin{lemma}\label{lem:ss_embed}
Let $0<p\le1$ be the exponent of $Y$. We then have the continuous embeddings
\begin{align*}
\ell_p^{\omega^\flat}(I) \hookrightarrow Y^\flat(\mathcal{U})\hookrightarrow \ell_\infty^{\omega^\flat}(I) \quad\text{and}\quad
\ell_p^{\omega^\natural}(I) \hookrightarrow Y^\natural(\mathcal{U})\hookrightarrow \ell_\infty^{\omega^\natural}(I)
\end{align*}
with weights defined by $\omega^\flat(i):=\| \chi_{U_i} |Y\|$ and $ \omega^\natural(i):=\mu(U_i)^{-1} \|\chi_{U_i}|Y\| $ for $i\in I$.
\end{lemma}
\noindent {\bf Proof.}\,
We have
$
\| \{\lambda_i\}_{i\in I} | Y^\flat \|^p
=\big\| \sum_{i\in I} |\lambda_i|\chi_{U_i} \Big| Y \big\|^p
\lesssim \sum_{i\in I} |\lambda_i|^p \| \chi_{U_i}| Y \|^p
= \| \{\lambda_i\}_{i\in I} | \ell_p^{\omega^\flat} \|^p
$
for $\{\lambda_i\}_{i\in I}\in \ell_p^{\omega^\flat}$.
If $\{\lambda_i\}_{i\in I}\in Y^\flat$ we can estimate for every $j\in I$
\begin{align}\label{eq:ss_eval}
|\lambda_j|\omega^\flat(j) =|\lambda_j| \| \chi_{U_j} | Y \| =
\| |\lambda_j|\chi_{U_j} | Y \| \le \Big\| \sum_{i\in I} |\lambda_i|\chi_{U_i} \Big| Y \Big\|
= \| \{ \lambda_i\}_{i\in I} | Y^\flat \|.
\end{align}
The embeddings for $Y^\natural$ follow with the isometry~\eqref{eq:ss_isom}.
\hspace*{\fill} \rule{3mm}{3mm}
The weights $\omega^\flat$ and $\omega^\natural$ also occur in the following result.
\begin{corollary}
For every $j\in I$ the evaluation $E_j:\{\lambda_i\}_{i\in I}\mapsto \lambda_j$ is a bounded functional on $ Y^\flat$ and $Y^\natural$ with
$\| E_j | Y^\flat\rightarrow\C \| \le (\omega^\flat(j))^{-1}$ and $\| E_j | Y^\natural\rightarrow\C \| \le (\omega^\natural(j))^{-1}$.
\end{corollary}
\noindent {\bf Proof.}\,
For $Y^\flat$ this follows directly from \eqref{eq:ss_eval}. The argument for $Y^\natural$ is similar.
\hspace*{\fill} \rule{3mm}{3mm}
Now we are ready to give the proof of Proposition~\ref{prop:ss_basic}.
\noindent {\bf Proof.}\, [Proof of Proposition~\ref{prop:ss_basic}]
We prove the completeness of $Y^\flat$.
The result for $Y^\natural$ follows then with the isometry \eqref{eq:ss_isom}.
A Cauchy sequence $(\Lambda_n)_{n\in\N}$ in $Y^\flat$ is also
a Cauchy sequence in $\ell_\infty^{\omega^\flat}$ by Lemma~\ref{lem:ss_embed}.
Let $\Lambda$ be the limit in $\ell_\infty^{\omega^\flat}$.
We show that $\Lambda\in Y^\flat$ and $\Lambda=\lim_{n\rightarrow\infty} \Lambda_n$ in the quasi-norm
of $Y^\flat$. For this task let us introduce the auxiliary operator $A(\Lambda):=\sum_{i\in I} |\Lambda(i)| \chi_{U_i}$,
which maps $\Lambda\in\C^I$ to a nonnegative measurable function on $X$.
For $\alpha\in\C$ and $\Lambda,\Lambda_1,\Lambda_2 \in\C^I$ we have
$A( \alpha \Lambda )= |\alpha| A(\Lambda)$ and $A(\Lambda_1+\Lambda_2)\le A(\Lambda_1)+A(\Lambda_2)$.
We also have
\begin{align}\label{aux:rel1}
|A(\Lambda_1)-A(\Lambda_2)|\le \sum_{i\in I} \big||\Lambda_1(i)|- |\Lambda_2(i)|\big| \chi_{U_i}
\le \sum_{i\in I} |\Lambda_1(i)- \Lambda_2(i)| \chi_{U_i} = A(\Lambda_1-\Lambda_2).
\end{align}
A sequence $\Lambda\in\C^I$ belongs to $Y^\flat$ if and only if $A(\Lambda)\in Y$, and we have the identity
\begin{align}\label{aux:rel2}
\| \Lambda | Y^\flat \|= \| A(\Lambda) | Y\|.
\end{align}
Since $\Lambda$ is the limit of $(\Lambda_n)_{n\in\N}$ in $\ell_\infty^{\omega^\flat}$ it holds
$\lim_{n\rightarrow\infty} | \Lambda(i)- \Lambda_n(i)|=0$ for all $i\in I$.
Considering the local finiteness of the sum in the definition of $A$ it follows
that
\begin{align}\label{aux:rel3}
\lim_{n\rightarrow\infty} A(\Lambda-\Lambda_n)(x)=0 \quad\text{ for all }x\in X.
\end{align}
The rest of the proof relies solely on Properties \eqref{aux:rel1}-\eqref{aux:rel3} of the operator $A$ and the solidity and completeness
of $Y$.
First we show $A( \Lambda)\in Y$ which is equivalent to $\Lambda\in Y^\flat$ according to \eqref{aux:rel2}.
The sequence $(A(\Lambda_n))_{n\in\N}$ is a Cauchy sequence in $Y$ because with \eqref{aux:rel1} we can estimate
$
\| A(\Lambda_n) - A(\Lambda_m) | Y \|
\le \| A(\Lambda_n-\Lambda_m) | Y \| = \| \Lambda_n-\Lambda_m | Y^\flat \|.
$
Furthermore, from \eqref{aux:rel3} and \eqref{aux:rel1} it follows
$
\lim_{n\rightarrow\infty} A(\Lambda_n)(x) = A(\Lambda)(x)
$
for all $x\in X$.
Since $Y$ is complete we can conclude with Lemma~\ref{lem:FuncConv1} that
$A(\Lambda_n)\rightarrow A(\Lambda)$ in $Y$ and $A(\Lambda)\in Y$.
Finally we show $\Lambda=\lim_{n\rightarrow\infty} \Lambda_n$ in $Y^\flat$. The sequence $(A(\Lambda_n-\Lambda))_{n\in\N}$ is a Cauchy sequence in $Y$, because
with \eqref{aux:rel1} we get
\begin{gather*}
\| A(\Lambda_n-\Lambda)-A(\Lambda_m-\Lambda) | Y\|
\le \| A(\Lambda_n-\Lambda_m) | Y \| = \| \Lambda_n-\Lambda_m | Y^\flat \|.
\end{gather*}
Using \eqref{aux:rel3} and Lemma~\ref{lem:FuncConv1} we deduce
$
A(\Lambda_n-\Lambda)\rightarrow 0
$
in $Y$. In view of \eqref{aux:rel2} this finishes the proof.
\hspace*{\fill} \rule{3mm}{3mm}
We finally study sequence spaces where the finite sequences are a dense subset.
Since $Y^\flat$ and $Y^\natural$ are isometrically isomorphic via the isometry $I^\natural_\flat$ from \eqref{eq:ss_isom},
and since $I^\natural_\flat$ is a bijection on the sequences with finite support,
these are dense in $Y^\flat$ if and only if they are dense in $Y^\natural$.
The next result occurs in \cite[Thm.~5.2]{fora05} in the context of Banach spaces.
However, the boundedness of the functions required there is not necessary.
\begin{lemma}
If the functions with compact support are dense in $Y$ the finite sequences are dense
in $Y^\flat(\mathcal{U})$ and $Y^\natural(\mathcal{U})$.
\end{lemma}
\noindent {\bf Proof.}\,
Let $\Lambda=\{\lambda_i\}_{i\in I}\in Y^\flat$ and fix $\varepsilon>0$.
Then $F:=\sum_{i\in I} |\lambda_i|\chi_{U_i} \in Y$ and there exists a function $G\in Y$ with compact support
$K$ such that $\| F- G | Y \|<\varepsilon$. As the covering $\mathcal{U}=\{U_i\}_{i\in I}$ is locally finite, the index set $J:=\{ i\in I : U_i\cap K\neq\emptyset \}$ is finite.
Let $\tilde{\Lambda}$ be the sequence which coincides with $\Lambda$ on $J$ and vanishes elsewhere.
Then $\tilde{F}:=\sum_{i\in J} |\lambda_i|\chi_{U_i}\in Y$ and $|F-\tilde{F}| \le |F - G|$.
Using the solidity of $Y$ we conclude
$
\| \Lambda - \tilde{\Lambda} |Y^\flat \|
= \| F-\tilde{F} | Y \| \le \| F- G | Y \| <\varepsilon.
$
\hspace*{\fill} \rule{3mm}{3mm}
For a countably infinite sequence $\Lambda=\{\lambda_i\}_{i\in I}$, a bijection $\sigma:\N\rightarrow I$
and $n\in\N$ we define $\Lambda^\sigma_n$
as the sequence which coincides with $\Lambda$ on $\sigma(\{1,\ldots,n\})$ and is zero elsewhere.
\begin{lemma}\label{lem:ss_findens}
Let $\mathcal{U} = \{U_i\}_{i\in I}$ be an admissible covering and assume that there is a bijection $\sigma:\N\rightarrow I$.
The finite sequences are dense in $Y^\flat(\mathcal{U})$
if and only if for all $\Lambda\in Y^\flat(\mathcal{U})$
it holds $\Lambda^\sigma_n \rightarrow \Lambda$ in the quasi-norm of $Y^\flat(\mathcal{U})$ for $n\rightarrow\infty$.
\end{lemma}
\noindent {\bf Proof.}\,
Assume that the finite sequences are dense. For $n\in\N$ we can then choose a finite sequence $\Gamma_n\in Y^\flat$ with
$\| \Gamma_n - \Lambda | Y^\flat \|<2^{-n}$. By solidity of $Y$ we get for $N\ge 1 + \max \{ k\in\N : \Gamma_n (\sigma(k))\neq 0 \}$,
with the convention $\max \emptyset =0$, the estimate
$
\| \Lambda^\sigma_{N} - \Lambda | Y^\flat \| \le \| \Gamma_n - \Lambda | Y^\flat \| < 2^{-n}.
$
The other direction is trivial.
\hspace*{\fill} \rule{3mm}{3mm}
We end this paragraph with an illustration and examine the sequence spaces associated with the weighted Lebesgue space $L_p^\nu(X)$,
defined via the quasi-norm $\|F| L^\nu_p(X) \|:=\| F\nu | L_p(X) \|$,
where $\nu$ is a weight and $0<p\le\infty$. In this special case we have a stronger statement than Lemma~\ref{lem:ss_embed}.
\begin{proposition}
Let $\mathcal{U}=\{U_i\}_{i\in I}$ be an admissible covering of $X$, $\nu$ be a
weight and $0< p\le\infty$.
Then for $Y=L_p^\nu(X)$ we have $Y^\flat(\mathcal{U})\asymp\ell_p^{\nu^\flat_p}(I)$ and
$ Y^\natural(\mathcal{U})\asymp\ell_p^{\nu^\natural_p}(I)$ with weights
given by $\nu^\flat_p(i):= \| \chi_{U_i} | L_p^\nu(X) \|$ and $\nu^\natural_p(i):= \mu(U_i)^{-1}\nu^\flat_p(i)$ for $i\in I$.
\end{proposition}
\noindent {\bf Proof.}\,
We give the proof for $0<p<\infty$ and $Y=L_p^\nu(X)$. For $\{\lambda_i\}_{i\in I}\in\C^I$ we can estimate
\begin{gather*}
\| \{\lambda_i\}_{i\in I} | Y^\flat \|^p = \Big\| \sum_{i\in I} |\lambda_i|\chi_{U_i} \Big| Y \Big\|^p
= \int_X \Big| \sum_{i\in I} |\lambda_i|\chi_{U_i}(y)\nu(y) \Big|^p \,d\mu(y) \\
\asymp \int_X \sum_{i\in I} |\lambda_i|^p\chi_{U_i}(y)^p \nu(y)^p \,d\mu(y)
= \sum_{i\in I} |\lambda_i|^p \int_X \chi_{U_i}(y)^p \nu(y)^p \,d\mu(y)
= \sum_{i\in I} |\lambda_i|^p \nu^\flat_p(i)^p,
\end{gather*}
where we used that the intersection number $\sigma(\mathcal{U})$ is finite and the equivalence of the $p$-norm and the $1$-norm on $\C^{\sigma(\mathcal{U})}$.
Applying the isometry~\eqref{eq:ss_isom} yields the result for $Y^\natural$.
\hspace*{\fill} \rule{3mm}{3mm}
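As a brief illustration of this proposition (a simple special case, not needed in the sequel): if $\nu\equiv1$ and the covering is such that $\mu(U_i)\asymp 1$ uniformly in $i\in I$, then for $0<p<\infty$
\begin{align*}
\nu^\flat_p(i)=\mu(U_i)^{1/p}\asymp 1 \quad\text{and}\quad \nu^\natural_p(i)=\mu(U_i)^{1/p-1}\asymp 1,
\end{align*}
so that both $Y^\flat(\mathcal{U})$ and $Y^\natural(\mathcal{U})$ coincide with the unweighted space $\ell_p(I)$ up to equivalent quasi-norms.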
\subsection{Voice transform extension}
For the definition of the coorbit spaces, we need a sufficiently large reservoir for the voice transform. Hence we extend it in this paragraph
following~\cite{fora05}.
For a weight $\nu:X\rightarrow [1,\infty)$ we introduce the space
$
\mathcal{H}_1^\nu := \left\{ f\in\mathcal{H} ~:~ V_\mathcal{F}f\in L_1^\nu(X,\mu) \right\}.
$
Since $\mathcal{F}$ is total in $\mathcal{H}$ by Lemma~\ref{lem:total} it is easy to verify that
$\| f | \mathcal{H}_1^\nu \| :=\| V_{\mathcal{F}}f | L_1^\nu \|$ constitutes a norm on $\mathcal{H}_1^\nu$.
Further, we define the kernel algebra
\begin{equation*}
\mathcal{A}_1 := \{K:X \times X \to \C~:~ K \mbox{ is measurable and } \|K|\mathcal{A}_1\| < \infty\},
\end{equation*}
where
\hfill $ \|K|\mathcal{A}_1\| := \max\Big\{\esssup{x\in X}\int_{X}|K(x,y)|d\mu(y)~,~ \esssup{y\in X} \int_{X}|K(x,y)|d\mu(x)\Big\}.$ \hfill~
\vspace*{1ex}
\noindent
Associated to $\nu$ is a weight $m_\nu$ on $X \times X$ given by
\begin{equation*}
m_\nu(x,y) = \max\Big\{\frac{\nu(x)}{\nu(y)}, \frac{\nu(y)}{\nu(x)} \Big\}\,,\quad x,y\in X.
\end{equation*}
The corresponding subalgebra $\mathcal{A}_{m_\nu} \subset \mathcal{A}_1$ is defined as
\begin{align}\label{eqdef:Am}
\mathcal{A}_{m_\nu} := \{K:X\times X \to \mathbb{C}~:~Km_\nu \in \mathcal{A}_1\}
\end{align}
and endowed with the norm $\|K|\mathcal{A}_{m_\nu}\| := \|Km_\nu|\mathcal{A}_1\|$. Note that a kernel $K\in\mathcal{A}_{m_\nu}$ operates continuously
on $L_1^\nu(X)$ and $L_\infty^{1/\nu}(X)$ with $\|K |L_1^\nu(X)\rightarrow L_1^\nu(X) \|,\,\|K |L_\infty^{1/\nu}(X)\rightarrow L_\infty^{1/\nu}(X) \| \le \|K | \mathcal{A}_{m_\nu} \| $.
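For the reader's convenience, here is a minimal verification sketch of the first of these two bounds; it uses only Tonelli's theorem and the elementary inequality $\nu(x)\le m_\nu(x,y)\nu(y)$: for $F\in L_1^\nu(X)$,
\begin{align*}
\| K(F) | L_1^\nu \|
&\le \int_X \int_X |K(x,y)| |F(y)| \,d\mu(y)\, \nu(x) \,d\mu(x) \\
&\le \int_X |F(y)|\nu(y) \int_X |K(x,y)| m_\nu(x,y) \,d\mu(x) \,d\mu(y)
\le \|K|\mathcal{A}_{m_\nu}\| \, \|F | L_1^\nu\|.
\end{align*}
The bound on $L_\infty^{1/\nu}(X)$ follows in the same way from $\nu(x)^{-1}\le m_\nu(x,y)\nu(y)^{-1}$.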
Technically, the theory rests upon (mapping) properties of certain kernel functions.
A first example of a typical result is given by the following lemma.
\begin{lemma}\label{auxlem:crossG}
Assume that for a family $\mathcal{G}=\{\psi_x\}_{x\in X}\subset\mathcal{H}$ the Gramian kernel
\begin{align}\label{eq:crossGram}
G[\mathcal{G},\mathcal{F}](x,y):=\langle\varphi_y,\psi_x\rangle \qquad x,y\in X,
\end{align}
is contained in $\mathcal{A}_{m_\nu}$. Then
$\psi_x\in\mathcal{H}_1^\nu$ with
$\| \psi_x | \mathcal{H}_1^\nu \| \le \| G[\mathcal{G},\mathcal{F}] |\mathcal{A}_{m_\nu}\|\nu(x) $ for a.e.\ $x\in X$.
\end{lemma}
\noindent {\bf Proof.}\,
We have
$
\| G[\mathcal{G},\mathcal{F}] | \mathcal{A}_{m_\nu} \|
\ge \int_X |V_{\mathcal{F}}\psi_x(y)| \frac{\nu(y)}{\nu(x)} \,d\mu(y)
= \frac{\| \psi_x | \mathcal{H}_1^\nu \|}{\nu(x)}
$
for a.e.\ $x\in X$.
\hspace*{\fill} \rule{3mm}{3mm}
The theory in \cite{fora05,RaUl10} is developed under the global assumption that
$\mathcal{F}$ is uniformly bounded, i.e.\ $\|\varphi_x\|\le C_B$ for all $x\in X$ and some $C_B>0$.
This assumption can be weakened.
\begin{lemma}\label{lem:Bochner}
Let $\nu\ge 1$ be a weight such that the analyzing frame $\mathcal{F}$ satisfies
\vspace*{-2.5ex}
\begin{flalign}\label{eq:framecond}
&\parbox{13cm}{
\begin{enumerate}
\item[(i)] $\| \varphi_x |\mathcal{H} \|\le C_B\nu(x) $ for some constant $C_B>0$ and all $x\in X$,
\item[(ii)] $R_\mathcal{F}\in \mathcal{A}_{m_\nu}$.
\end{enumerate}
}&&
\end{flalign}
\vspace*{-4ex}
\noindent
Then
$\mathcal{H}_1^\nu$
is a Banach space and the canonical embedding $\mathcal{H}_1^\nu \hookrightarrow \mathcal{H}$
is continuous and dense.
Moreover, there is a subset $X^\ast\subset X$
such that $\varphi_x\in \mathcal{H}_1^\nu$ for every $x\in X^\ast$ and $\mu(X\backslash X^\ast)=0$.
The corresponding map
$
\Psi: X^\ast\to\mathcal{H}_1^\nu,\, x\mapsto \varphi_x
$
is Bochner-measurable in $\mathcal{H}_1^\nu$.
\end{lemma}
\noindent {\bf Proof.}\,
A Cauchy sequence $(f_n)_{n\in\N}\subset \mathcal{H}_1^\nu$ determines a Cauchy sequence $(F_n:=Vf_n)_{n\in\N}$
in $L_1^\nu$, which converges to some $F\in L_1^\nu$. Since
the kernel $R\in\mathcal{A}_{m_\nu}$ operates continuously on $L_1^\nu$, the equality $F_n=R(F_n)$ for $n\in\N$ implies $F=R(F)$. Furthermore, because of
$\| \varphi_x |\mathcal{H} \|\le C_B\nu(x) $ it holds
$
|R(x,y)|\le C_B^2\nu(x)\nu(y)
$
for all $x,y\in X$ and we can deduce
\begin{align*}
|F(x)|=\Big|\int_X R(x,y)F(y) \,d\mu(y) \Big| \le C_B^2\nu(x) \int_X |F(y)|\nu(y) \,d\mu(y)
= C_B^2\nu(x) \| F | L_1^\nu \|.
\end{align*}
This shows $F\in L_\infty^{1/\nu}$, and as $L_\infty^{1/\nu}\cap L_1^\nu \subset L_2$
even $F\in L_2$.
The reproducing formula on $\mathcal{H}$ yields $f\in\mathcal{H}$ with $Vf=F\in L_1^\nu$, which implies $f\in \mathcal{H}_1^\nu$.
Since $\| f_n - f | \mathcal{H}_1^\nu \| = \| F_n - F | L_1^\nu \|$ we obtain $f_n \rightarrow f$ in $\mathcal{H}_1^\nu$. This proves the
completeness.
To prove the continuity of the embedding we observe
$
\| h | \mathcal{H} \|^2 = \| Vh | L_2 \|^2
\le \| Vh | L_\infty^{1/\nu} \| \| h | \mathcal{H}_1^\nu \|
$
for $h\in\mathcal{H}_1^\nu$. Together with
$
\| Vh | L_\infty^{1/\nu} \| \le \sup_{x\in X} \left\{ \frac{ \| \varphi_x|\mathcal{H}\|}{\nu(x)} \| h|\mathcal{H}\| \right\} \le C_B \| h|\mathcal{H}\|
$,
where $\|\varphi_x | \mathcal{H} \| \le C_B\nu(x)$ was used, the continuity follows.
Due to Lemma~\ref{auxlem:crossG}, applied with $\mathcal{G}=\mathcal{F}$, there is a null-set $N\subset X$ such that $\varphi_x\in \mathcal{H}_1^\nu$ for every $x\in X^\ast:=X\backslash N$.
The density of $\mathcal{H}_1^\nu\hookrightarrow\mathcal{H}$ is thus a consequence of the totality of $\{\varphi_x\}_{x\in X^\ast}$ in $\mathcal{H}$, as stated by Lemma~\ref{lem:total}.
It remains to prove the Bochner-measurability of $\Psi$.
Since $V_\mathcal{F}:\mathcal{H}_1^\nu \to V_\mathcal{F}(\mathcal{H}_1^\nu)$ is an isometric isomorphism,
it suffices to confirm that
\[
\widetilde{\Psi}:=V_\mathcal{F}\circ \Psi: X^\ast\to L_1^\nu(X), x\mapsto V_\mathcal{F}\varphi_{x}
\]
is Bochner-measurable in $L_1^\nu(X)$. The proof of this is divided into three steps.
\noindent
Step 1: Let us first construct an adequate partition of $X$. Since $\mu$ is a Radon measure and hence in particular locally finite, all compact subsets of $X$ have finite measure.
As $X$ is assumed to be $\sigma$-compact, the measure $\mu$ is thus $\sigma$-finite.
Hence $X=\bigcup_{n\in\N} L_n$ for certain subsets $L_n\subset X$ of finite measure. By subdividing each of these sets further into $L_{n,m}:=\{x\in L_n : \nu(x)\le m \}$, $m\in\N$,
making the resulting sets pairwise disjoint, and finally renumbering the resulting countable family of sets, we obtain a sequence $(K_n)_{n\in\N}$ of pairwise disjoint sets of finite
measure with $X=\bigcup_{n\in\N} K_n$ and such that $\nu(x)\le C_n$ holds for all $x\in K_n$ and suitable constants $C_n>0$.
\noindent
Step 2: We now show that for every $n\in\N$ the function
\[
\widetilde{\Psi}_n: X^\ast\to L_1^\nu(X), x\mapsto V_\mathcal{F}\varphi_{x}\cdot\chi_{K_n}
\]
is Bochner-measurable in $L_1^\nu(X)$. To this end, let $(f_\ell)_{\ell\in\N}$ be an orthonormal basis of $\mathcal{H}$
with $f_\ell\in \mathcal{H}_1^\nu$ for all $\ell\in\N$. Such a basis exists since $\mathcal{H}$ is separable and $\mathcal{H}_1^\nu$ is a dense subspace of $\mathcal{H}$.
Then we define the functions
\[
\Phi_{\ell}:=\overline{V_\mathcal{F}f_{\ell}}\in L_1^\nu(X) \quad\text{and}\quad G_{n,\ell}:= V_\mathcal{F}f_{\ell} \cdot \chi_{K_n} \in L_1^\nu(X).
\]
Note that $\Phi_{\ell}(x)=\langle \varphi_x,f_\ell \rangle$ is the $\ell$-th expansion coefficient of $\varphi_x$ with respect to $(f_\ell)_{\ell\in\N}$.
Due to the measurability of $\Phi_{\ell}\in L_1^\nu(X)$ the function $x\mapsto \Phi_{\ell}(x) G_{n,\ell}$ is clearly Bochner-measurable.
Since the pointwise limit of Bochner-measurable functions is again Bochner-measurable, Step~2 is finished if we can show that for every fixed $x\in X^\ast$
\begin{align*}
\widetilde{\Psi}_n(x)=\lim_{N\to\infty} \sum_{\ell=1}^N \Phi_{\ell}(x) G_{n,\ell} \quad\text{in } L_1^\nu(X).
\end{align*}
This follows with Lebesgue's dominated convergence theorem: For every $y\in X$ we have
\begin{align*}
\lim_{N\to\infty} \sum_{\ell=1}^N \Phi_{\ell}(x) G_{n,\ell}(y) = \lim_{N\to\infty} V_\mathcal{F} \Big( \sum_{\ell=1}^N \Phi_{\ell}(x) f_{\ell} \Big)(y) \cdot \chi_{K_n} (y)
= V_\mathcal{F}\varphi_x(y) \cdot \chi_{K_n}(y) = [\widetilde{\Psi}_n(x)](y) .
\end{align*}
Note here that $\varphi_x=\sum_{\ell=1}^\infty \Phi_{\ell}(x)f_\ell$ with convergence in $\mathcal{H}$, and in general $V_\mathcal{F}g_N(x)\to V_\mathcal{F}g(x)$ for fixed $x\in X$ if
$g_N\to g$ in $\mathcal{H}$.
Finally, we estimate using $\|\varphi_x | \mathcal{H} \| \le C_B\nu(x)$
\begin{align*}
\Big|\sum_{\ell=1}^N \Phi_{\ell}(x) G_{n,\ell}(y)\Big| &\le \Big(\sum_{\ell=1}^N |\Phi_{\ell}(x)|^2\Big)^{\frac{1}{2}} \Big( \sum_{\ell=1}^N |G_{n,\ell}(y)|^2 \Big)^{\frac{1}{2}} \\
&\le \|\varphi_x |\mathcal{H}\| \| \varphi_y |\mathcal{H}\| \chi_{K_n}(y) \le C_B \nu(y) \|\varphi_x|\mathcal{H}\| \chi_{K_n}(y) \le C_B C_n \|\varphi_x|\mathcal{H}\| \chi_{K_n}(y).
\end{align*}
Since $K_n$ has finite measure and $\nu\le C_n$ on $K_n$, this provides a majorant which, as a function of $y$, belongs to $L_1^\nu(X)$.
\noindent
Step 3: Similar to Step~2 the Bochner-measurability of $\widetilde{\Psi}$ is proved by showing for $x\in X^\ast$
\[
\widetilde{\Psi}(x) = \lim_{N\to\infty} \sum_{n=1}^N \widetilde{\Psi}_{n}(x) \quad\text{in } L_1^\nu(X).
\]
The pointwise limit is obvious: For every $y\in X$ we clearly have
\begin{align*}
[\widetilde{\Psi}(x)](y)= V_\mathcal{F}\varphi_{x}(y) = \lim_{N\to\infty} \sum_{n=1}^N \chi_{K_n}(y) \cdot V_\mathcal{F}\varphi_{x}(y) = \lim_{N\to\infty} \sum_{n=1}^N [\widetilde{\Psi}_n(x)](y).
\end{align*}
Using Lebesgue's dominated convergence theorem with majorant $|\widetilde{\Psi}(x)|$ proves the claim.
\hspace*{\fill} \rule{3mm}{3mm}
\noindent
Under the assumptions~\eqref{eq:framecond} we therefore have the chain of continuous embeddings
\[
\mathcal{H}_1^\nu \overset{i}{\hookrightarrow} \mathcal{H} \overset{\hspace*{+0.5em}i^*}{\hookrightarrow} (\mathcal{H}_1^\nu)^\urcorner,
\]
where $(\mathcal{H}_1^\nu)^\urcorner$ denotes the normed anti-dual of $\mathcal{H}_1^\nu$, which
plays the role of the tempered distributions in this abstract context.
Moreover, there is a subset $X^\ast\subset X$ with $\mu(X\backslash X^\ast)=0$
such that $\varphi_x\in \mathcal{H}_1^\nu$ for $x\in X^\ast$.
Hence we may extend the transform $V_{\mathcal{F}}:\mathcal{H}\rightarrow L_2(X)$ to $(\mathcal{H}_1^\nu)^\urcorner$ by
\begin{equation}\label{eqdef:Vext}
V_\mathcal{F}f(x) = \langle f,\varphi_x \rangle\,,\quad x\in X^\ast, f\in (\mathcal{H}_1^\nu)^\urcorner,
\end{equation}
where $\langle \cdot,\cdot \rangle$ denotes the duality product on $(\mathcal{H}_1^\nu)^\urcorner \times\mathcal{H}_1^\nu$. The anti-dual is used so that this product extends
the scalar product of $\mathcal{H}$.
\begin{lemma}\label{lem:Vext}
Under the assumptions \eqref{eq:framecond} the extension \eqref{eqdef:Vext} is a well-defined continuous mapping $V_{\mathcal{F}}:(\mathcal{H}_1^\nu)^\urcorner \rightarrow L_\infty^{1/\nu}(X)$.
\end{lemma}
\noindent {\bf Proof.}\,
Let $f\in (\mathcal{H}_1^\nu)^\urcorner$. The function $V_\mathcal{F}f(x)=\langle f,\varphi_x\rangle$ is well-defined for every $x\in X^\ast$.
It determines a measurable function on $X$, in the sense of equivalence classes, due to the Bochner measurability of $x\mapsto\varphi_x$ in $\mathcal{H}_1^\nu$ proved in Lemma~\ref{lem:Bochner}.
Using Lemma~\ref{auxlem:crossG} we can estimate
\begin{align*}
| V_{\mathcal{F}}f(x) | = |\langle f,\varphi_x \rangle| \le \|f|(\mathcal{H}_1^\nu)^\urcorner\| \|\varphi_x|\mathcal{H}_1^\nu \|
\le \|f|(\mathcal{H}_1^\nu)^\urcorner\| \|R_\mathcal{F}|\mathcal{A}_{m_\nu}\| \nu(x).
\end{align*}
This shows $V_{\mathcal{F}}f\in L_\infty^{1/\nu}(X)$ with $\|V_{\mathcal{F}}f|L_\infty^{1/\nu}\|\le \|f|(\mathcal{H}_1^\nu)^\urcorner\| \|R_\mathcal{F}|\mathcal{A}_{m_\nu}\|$.
\hspace*{\fill} \rule{3mm}{3mm}
\begin{remark}\label{rem:frame}
The membership $R_\mathcal{F}\in\mathcal{A}_{m_\nu}$ does not ensure
${\mathcal F}\subset\mathcal{H}_1^\nu$, wherefore the extended voice transform~\eqref{eqdef:Vext} might not be defined at every point $x\in X$.
This detail has not been accounted for in preceding papers, and fortunately it is negligible
since functions on $X$ are only determined up to $\mu$-equivalence classes. Therefore
we -- as in \cite{fora05,RaUl10} -- will henceforth assume $\mathcal{F}\subset\mathcal{H}_1^\nu$ to simplify the exposition.
\end{remark}
We proceed to establish the injectivity of the extended voice transform.
To this end, the following characterization of the duality bracket $\langle \cdot,\cdot \rangle_{(\mathcal{H}_1^\nu)^\urcorner \times \mathcal{H}_1^\nu }$ will be useful.
\begin{lemma}\label{lem:Visom}
If $\mathcal{F}$ has properties \eqref{eq:framecond}, then
for all $f\in(\mathcal{H}_1^\nu)^\urcorner$ and $g\in \mathcal{H}_1^\nu$ it holds
\[
\langle f,g \rangle_{(\mathcal{H}_1^\nu)^\urcorner \times \mathcal{H}_1^\nu } = \int_X V_{\mathcal{F}}f(y)\overline{V_{\mathcal{F}}g(y)} \,d\mu(y)
=: \langle V_{\mathcal{F}}f,V_{\mathcal{F}}g \rangle_{L_\infty^{1/\nu} \times L_1^\nu}.
\]
\end{lemma}
\noindent {\bf Proof.}\,
Let $f\in(\mathcal{H}_1^\nu)^\urcorner$ and $g\in \mathcal{H}_1^\nu$. Then $V_{\mathcal{F}}g\in L_2\cap L_1^\nu$ and we get
\begin{align*}
\langle f,g \rangle&=
\langle f, V_{\mathcal{F}}^*V_{\mathcal{F}}g \rangle
= \left\langle f, \int_X V_{\mathcal{F}}g(y)\varphi_y \,d\mu(y) \right\rangle\\
&=\int_X \overline{V_{\mathcal{F}}g(y)} \langle f,\varphi_y \rangle \,d\mu(y)
=\langle V_{\mathcal{F}}f,V_{\mathcal{F}}g \rangle_{L_\infty^{1/\nu} \times L_1^\nu}.
\end{align*}
For this equality, it is important that the duality product
commutes with the integral. To verify this, note that since $G:=V_{\mathcal{F}}g \in L_1^\nu$ the
integral $\int_X G(y)\varphi_y \,d\mu(y)$ also exists in the Bochner sense in $\mathcal{H}_1^\nu$.
Indeed, in view of Lemma~\ref{lem:Bochner} the integrand is Bochner-measurable in $\mathcal{H}_1^\nu$. Bochner-integrability follows then from the estimate
\begin{align*}
\int_X |G(y)| \cdot \| \varphi_y | \mathcal{H}_1^\nu \| \,d\mu(y)
\le \|R_\mathcal{F}|\mathcal{A}_{m_\nu}\| \int_X |G(y)| \nu(y) \,d\mu(y)
= \|R_\mathcal{F}|\mathcal{A}_{m_\nu}\| \| G | L_1^\nu \|,
\end{align*}
where Lemma~\ref{auxlem:crossG} was used.
Moreover, the value of the Bochner integral $h:=\int_X G(y)\varphi_y \,d\mu(y)$ equals $g$ since for every $\zeta\in\mathcal{H}$
\begin{align*}
\langle g,\zeta \rangle = \int_X V_{\mathcal{F}}g(y) \cdot \overline{V_{\mathcal{F}}\zeta(y) } \,d\mu(y) = \langle h,\zeta \rangle.
\end{align*}
\hspace*{\fill} \rule{3mm}{3mm}
Using Lemma~\ref{lem:Visom} we can simplify the proof of \cite[Lem.~3.2]{fora05}.
\begin{lemma}\label{lem:Veqivnorm}
Assume that the analyzing frame $\mathcal{F}$ has properties \eqref{eq:framecond}.
Then the expression $\|V_{\mathcal{F}}f | L_\infty^{1/\nu}\|$ is an equivalent norm on $(\mathcal{H}_1^\nu)^\urcorner$.
\end{lemma}
\noindent {\bf Proof.}\,
We already know from Lemma~\ref{lem:Vext} that
$
\| V_{\mathcal{F}}f | L_\infty^{1/\nu} \| \lesssim \| f | (\mathcal{H}_1^\nu)^\urcorner \|.
$
For the estimate from below we argue with the help of Lemma~\ref{lem:Visom}
\begin{align*}
\| f | (\mathcal{H}_1^\nu)^\urcorner \|
&= \sup_{\| h | \mathcal{H}_1^\nu \|=1} |\langle f,h \rangle_{(\mathcal{H}_1^\nu)^\urcorner \times \mathcal{H}_1^\nu }|
= \sup_{\| h | \mathcal{H}_1^\nu \|=1} |\langle V_{\mathcal{F}}f,V_{\mathcal{F}}h \rangle_{L_\infty^{1/\nu} \times L_1^\nu}| \\
&\le \sup_{H\in L_1^\nu, \|H |L_1^\nu\|\le1} |\langle V_{\mathcal{F}}f,H \rangle_{L_\infty^{1/\nu} \times L_1^\nu}|
= \| V_{\mathcal{F}}f | L_\infty^{1/\nu} \|.
\end{align*}
\hspace*{\fill} \rule{3mm}{3mm}
A direct consequence of this lemma is the injectivity of $V_{\mathcal{F}}$.
\begin{corollary}
The voice transform $V_{\mathcal{F}}:\left(\mathcal{H}_1^\nu\right)^\urcorner\rightarrow L_\infty^{1/\nu}(X)$ is continuous and injective.
\end{corollary}
The injectivity of $V_{\mathcal{F}}$ on $(\mathcal{H}_1^\nu)^\urcorner$ implies that $\mathcal{F}$ is total in $\mathcal{H}_1^\nu$.
\begin{corollary}\label{cor:frametotal}
Let $N\subset X$ be a set of measure zero. Then $\{\varphi_x \}_{x\in X\backslash N}$ is total in $\mathcal{H}_1^\nu$.
\end{corollary}
\noindent {\bf Proof.}\,
If this is not the case, the closure $\mathcal{C}$ of $\spn \{\varphi_x : x\in X\backslash N \}$ in $\mathcal{H}_1^\nu$ is a
proper closed subspace, and the Hahn-Banach extension theorem yields $f\in(\mathcal{H}_1^\nu)^\urcorner$, $f\neq 0$, with
$\langle f,\zeta \rangle=0$ for all $\zeta\in\mathcal{C}$. Hence, $V_{\mathcal{F}}f(x)=0$ for a.e.\ $x\in X$
and therefore $f=0$ by injectivity of $V_{\mathcal{F}}$, which is true even with respect to
$\mu$-equivalence classes in the image space. This is a contradiction.
\hspace*{\fill} \rule{3mm}{3mm}
The adjoint $V_{\mathcal{F}}^{\ast}: L_\infty^{1/\nu}(X)\rightarrow (\mathcal{H}_1^\nu)^\urcorner$
of the restriction $V_{\mathcal{F}}:\mathcal{H}_1^\nu \rightarrow L_1^\nu(X)$
naturally extends the adjoint
of $V_{\mathcal{F}}:\mathcal{H}\rightarrow L_2(X)$ due to the equality $\langle F,V_{\mathcal{F}}\zeta \rangle_{L_\infty^{1/\nu}\times L_1^\nu}
= \langle F, V_{\mathcal{F}}\zeta \rangle_{L_2\times L_2}$ in case $\zeta\in \mathcal{H}_1^\nu$ and $F\in L_\infty^{1/\nu}\cap L_2$, and
it can also be represented by a weak integral of the form \eqref{eq:adjoint}.
The relations
\begin{align}\label{eq:reladjoint}
V_{\mathcal{F}}^{\ast}V_{\mathcal{F}}f=f \quad\text{ and }\quad V_{\mathcal{F}}V_{\mathcal{F}}^{\ast}(F)=R(F)
\end{align}
remain valid for the extension, i.e., they hold for $f\in (\mathcal{H}_1^\nu)^\urcorner$ and $F\in L_\infty^{1/\nu}$.
Indeed, Lemma~\ref{lem:Visom} yields
$\langle V_{\mathcal{F}}^{\ast}V_{\mathcal{F}}f ,\zeta \rangle
= \langle V_{\mathcal{F}}f , V_{\mathcal{F}}\zeta \rangle_{L_\infty^{1/\nu}\times L_1^\nu}
= \langle f,\zeta \rangle$ for all $\zeta\in\mathcal{H}_1^\nu$.
Further, we have
$
V_{\mathcal{F}}V_{\mathcal{F}}^{\ast} F(x) = \langle V_{\mathcal{F}}^{\ast} F,\varphi_x \rangle
= \int_X F(y) \langle\varphi_y,\varphi_x\rangle \,d\mu(y) = R(F)(x)
$
for all $x\in X$.
An easy consequence of the relations \eqref{eq:reladjoint} is the important fact that the reproducing formula extends to $\left(\mathcal{H}_1^\nu\right)^\urcorner$,
a result obtained differently in \cite[Lemma~3.6]{fora05}.
\begin{lemma}\label{lem:extreproduce}
Let $\nu\ge1$ be a weight on $X$ and assume that the analyzing frame $\mathcal{F}$ satisfies \eqref{eq:framecond}. Then $V_{\mathcal{F}}f(x)=R(V_{\mathcal{F}}f)(x)$
for every $f\in\left(\mathcal{H}_1^\nu\right)^\urcorner$ and $x\in X$.
Conversely, if $F\in L_\infty^{1/\nu}(X)$ satisfies $F=R(F)$ then there is
a unique $f\in \left(\mathcal{H}_1^\nu\right)^\urcorner$ such that $F=V_{\mathcal{F}}f$.
\end{lemma}
\noindent {\bf Proof.}\,
According to \eqref{eq:reladjoint} we have $R(Vf)=VV^\ast V f =V f$
for $f\in (\mathcal{H}_1^\nu)^\urcorner$. For the opposite direction assume that
$F\in L_\infty^{1/\nu}$ satisfies $F=R(F)$. Then by \eqref{eq:reladjoint} the element $V^*F\in(\mathcal{H}_1^\nu)^\urcorner$
has the property $VV^*F=R(F)=F$. It is unique since $V$ is injective on $(\mathcal{H}_1^\nu)^\urcorner$.
\hspace*{\fill} \rule{3mm}{3mm}
Finally we state the correspondence
between the weak*-convergence of a net $(f_i)_{i\in I}$ in $(\mathcal{H}_1^\nu)^\urcorner$
and the pointwise convergence of $(V_{\mathcal{F}}f_i)_{i\in I}$ (compare \cite[Lem.~3.6]{fora05}).
\begin{lemma}
Let $(f_i)_{i\in I}$ be a net in $(\mathcal{H}_1^\nu)^\urcorner$.
\begin{enumerate}
\item[(i)]
If $(f_i)_{i\in I}$ converges
to some $f\in (\mathcal{H}_1^\nu)^\urcorner$ in the weak*-topology of $(\mathcal{H}_1^\nu)^\urcorner$,
then $(V_{\mathcal{F}}f_i)_{i\in I}$ converges pointwise to $V_{\mathcal{F}}f$ everywhere.
\item[(ii)]
If $(V_{\mathcal{F}}f_i)_{i\in I}$ converges pointwise a.e.\
to a function $F:X\rightarrow \C$ and if
$(f_i)_{i\in I}$ is uniformly bounded in $(\mathcal{H}_1^\nu)^\urcorner$,
then $(f_i)_{i\in I}$ converges to some $f\in(\mathcal{H}_1^\nu)^\urcorner$ in the weak*-topology
with $V_{\mathcal{F}}f=F$ a.e.
\end{enumerate}
\end{lemma}
\noindent {\bf Proof.}\,
We give a proof for sequences $(f_n)_{n\in \N}$ which extends straightforwardly to nets.
\noindent
Part~(i):\,
The weak*-convergence implies $\langle f_n,\varphi_x \rangle\rightarrow \langle f,\varphi_x \rangle$ for $n\rightarrow\infty$ and all $x\in X$.
\noindent
Part~(ii):\,
Let $X^*\subset X$ denote the subset where the sequence $(V_{{\mathcal F}}f_n)_{n\in\N}$ converges pointwise.
The space $M=\spn\{ \varphi_x : x\in X^* \}$ lies dense in $\mathcal{H}_1^\nu$ by Corollary~\ref{cor:frametotal}.
We define a conjugate-linear functional $\tilde{f}$ on $M$ by
$\tilde{f}(h):= \lim_{n\rightarrow\infty} \langle f_n,h \rangle$ for $h\in M$.
By assumption, there is $C>0$ so that $\| f_n | (\mathcal{H}_1^\nu)^\urcorner \|\le C$, which leads to
$
| \langle f_n,h \rangle | \le \|f_n | (\mathcal{H}_1^\nu)^\urcorner \| \|h | \mathcal{H}_1^\nu \| \le
C \|h | \mathcal{H}_1^\nu \|
$ for all $n\in \N$ and shows that $\tilde{f}$ is bounded on $M$ with respect to $\|\cdot| \mathcal{H}_1^\nu\|$. Hence
it can be uniquely extended to some $f\in(\mathcal{H}_1^\nu)^\urcorner$.
For $\varepsilon>0$ and $\zeta \in \mathcal{H}_1^\nu$ we choose $h\in M$ such that
$\| h-\zeta | \mathcal{H}_1^\nu \|<\varepsilon$. We get
\[
|\langle f_n - f,\zeta \rangle|
\le \| \zeta-h | \mathcal{H}_1^\nu \| \cdot \| f_n - f | (\mathcal{H}_1^\nu)^\urcorner \| + |\langle f_n - f,h \rangle|
\le \varepsilon ( C+ \| f | (\mathcal{H}_1^\nu)^\urcorner \| ) + |\langle f_n - f,h \rangle|.
\]
Letting $n\rightarrow\infty$ it follows
$\limsup_{n\rightarrow\infty} |\langle f_n - f,\zeta \rangle| \le \varepsilon ( C+ \| f | (\mathcal{H}_1^\nu)^\urcorner \| ) $.
This holds for all $\varepsilon>0$, hence, $\lim_{n\rightarrow\infty} |\langle f_n - f ,\zeta \rangle| =0 $.
This shows that $f_n \rightarrow f$
in the weak*-topology
of $(\mathcal{H}_1^\nu)^\urcorner$. As a consequence
$V_{\mathcal{F}}f(x)= \langle f,\varphi_x \rangle =\lim_{n\rightarrow\infty} \langle f_n,\varphi_x \rangle
= \lim_{n\rightarrow\infty} V_{\mathcal{F}}f_n(x) = F(x)$ for all $x\in X^*$.
\hspace*{\fill} \rule{3mm}{3mm}
A direct implication is the correspondence principle with respect to sums formulated below.
\begin{corollary}\label{cor:corrpri}
If $\sum_{i\in I} f_i$ converges
unconditionally in the weak*-topology of $(\mathcal{H}_1^\nu)^\urcorner$ then the series $\sum_{i\in I} V_{\mathcal{F}}f_i(x)$ converges absolutely
for all $x\in X$.
Conversely, if $\sum_{i\in I} V_{\mathcal{F}}f_i(x)$ converges absolutely for a.e.\ $x\in X$ and if
the finite partial sums of $\sum_{i\in I} f_i$ are uniformly bounded in $(\mathcal{H}_1^\nu)^\urcorner$ then
$\sum_{i\in I} f_i$ converges unconditionally in the weak*-topology.
\end{corollary}
\subsection{Coorbit spaces}
\label{ssec:coorbit}
In this central part we introduce the notion of coorbit spaces, building upon the correspondence between elements of $(\mathcal{H}_1^\nu)^\urcorner$ and
functions on $X$ as established by the transform $V_\mathcal{F}$. The idea is to characterize $f\in(\mathcal{H}_1^\nu)^\urcorner$
by properties of the corresponding function $V_\mathcal{F}f$.
For a viable theory
the analyzing frame $\mathcal{F}$ must fulfill certain suitability conditions with respect to $Y$.
\begin{definition}\label{def:F(v,Y)}
Let $\nu\ge1$ be a weight on $X$. We say that $\mathcal{F}$ has \emph{property $F(\nu,Y)$}
if it satisfies condition \eqref{eq:framecond} and if the following holds true,
\begin{enumerate}
\item[(i)] $R_{{\mathcal F}}:Y\rightarrow Y$ acts continuously on $Y$,
\item[(ii)] $R_{{\mathcal F}}(Y)\hookrightarrow L_\infty^{1/\nu}(X)$.
\end{enumerate}
\end{definition}
\noindent
Condition~\eqref{eq:framecond} ensures that the voice transform extends to $(\mathcal{H}_1^\nu)^{\urcorner}$.
Further, conditions (i) and (ii) imply that $R_{{\mathcal F}}F(x)=\int_X R_{{\mathcal F}}(x,y)F(y) \,d\mu(y)$ is well-defined for a.e.\ $x\in X$ if $F\in Y$.
In addition, also due to~(i) and~(ii),
the operator $R_{{\mathcal F}}:Y\rightarrow L_\infty^{1/\nu}(X)$ is continuous:
For $F\in Y$ we have $R(F)\in L_\infty^{1/\nu}(X)$ and
\[
\| R(F) | L_\infty^{1/\nu}\| \lesssim \| R(F) | Y \| \le \|R|Y\to Y\|\cdot \| F|Y\| .
\]
In view of Definition~\ref{def:F(v,Y)} it makes sense to introduce the following subalgebra of $\mathcal{A}_{m_\nu}$ from \eqref{eqdef:Am}
\begin{align*}
\mathcal{B}_{Y,{m_\nu}} = \{K:X\times X \to \C~:~K\in \mathcal{A}_{m_\nu}\mbox{ and }K\mbox{ is bounded from }Y \to Y\}\,,
\end{align*}
equipped with the quasi-norm $\|K|\mathcal{B}_{Y,{m_\nu}}\| := \max\{\|K|\mathcal{A}_{m_\nu}\|, \|K|Y\to Y\|\}$.
Now we are able to give the definition of the coorbit of a rich solid QBF-space $Y$.
\begin{definition}
Let $Y$ be a rich solid QBF-space on $X$
and assume that the analyzing frame ${\mathcal F} = \{\varphi_x\}_{x\in X}$ has property $F(\nu,Y)$ for some weight $\nu:X\rightarrow[1,\infty)$.
The coorbit of $Y$ with respect to $\mathcal{F}$ is defined by
$$
\mathsf{Co}(\nu, {\mathcal F}, Y):= \{f\in (\mathcal{H}_1^\nu)^{\urcorner}~:~V_{\mathcal{F}} f \in Y\}\quad\mbox{with quasi-norm}\quad
\|f|\mathsf{Co}(\nu, {\mathcal F}, Y)\| := \|V_{{\mathcal F}} f|Y\|\,.
$$
\end{definition}
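Let us note that $\|\cdot|\mathsf{Co}(\nu, {\mathcal F}, Y)\|$ is indeed a quasi-norm: homogeneity and the quasi-triangle inequality are inherited from $Y$ via the linearity of $V_{\mathcal{F}}$, and
\begin{align*}
\|f|\mathsf{Co}(\nu, {\mathcal F}, Y)\|=0 \;\Longleftrightarrow\; V_{\mathcal{F}}f=0 \text{ a.e.} \;\Longleftrightarrow\; f=0,
\end{align*}
where the last equivalence uses the injectivity of $V_{\mathcal{F}}$ on $(\mathcal{H}_1^\nu)^\urcorner$ with respect to $\mu$-equivalence classes, as observed in the proof of Corollary~\ref{cor:frametotal}.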
Since the coorbit is independent of the weight $\nu$ in the definition, as proved by the lemma below, it is omitted in the notation and we simply write
$
\mathsf{Co}(\mathcal{F},Y):=\mathsf{Co}(\nu,\mathcal{F},Y).
$
Moreover, if the analyzing frame $\mathcal{F}$ is fixed we may just write $\mathsf{Co}(Y)$.
\begin{lemma}\label{lem:co_indiw}
The coorbit $\mathsf{Co}(\nu,\mathcal{F},Y)$ does not depend on the particular weight $\nu$
chosen in the definition in the following sense.
If $\tilde{\nu}\ge1$ is another weight such that $\mathcal{F}$ has property $F(\tilde{\nu},Y)$
then we have
$
\mathsf{Co}(\tilde{\nu},\mathcal{F},Y)= \mathsf{Co}(\nu,\mathcal{F},Y).
$
\end{lemma}
\noindent {\bf Proof.}\,
If $\mathcal{F}$ has properties $F(\nu,Y)$ and $F(\tilde{\nu},Y)$,
it also has property $F(\omega,Y)$ for $\omega=\nu+\tilde{\nu}$ (note that $m_\omega\le m_{\nu}+m_{\tilde{\nu}}$ pointwise and that $L_\infty^{1/\nu}\hookrightarrow L_\infty^{1/\omega}$).
We show $\mathsf{Co}(\omega,Y) = \mathsf{Co}(\nu,Y)$.
Since $\omega\ge \nu$ we have the continuous dense embedding $\mathcal{H}_1^{\omega}\hookrightarrow \mathcal{H}_1^\nu$
which implies $(\mathcal{H}_1^\nu)^\urcorner \hookrightarrow (\mathcal{H}_1^{\omega})^\urcorner$
and hence $\mathsf{Co}(\nu,Y)\subset \mathsf{Co}(\omega,Y)$. For the opposite inclusion
let $f\in \mathsf{Co}(\omega,Y)$. Then $f\in (\mathcal{H}_1^{\omega})^\urcorner$ and $F:=Vf\in Y$ with
$R(F) \in L_\infty^{1/\nu}$ by property $F(\nu,Y)$. Since $F=R(F)$
according to the reproducing formula on $(\mathcal{H}_1^{\omega})^\urcorner$ we thus have $F\in L_\infty^{1/\nu}$.
The converse part of the reproducing formula (Lemma~\ref{lem:extreproduce}) on $(\mathcal{H}_1^\nu)^\urcorner$ then yields
$\tilde{f}\in (\mathcal{H}_1^\nu)^\urcorner \subset (\mathcal{H}_1^{\omega})^\urcorner$
with $V\tilde{f}=F$, which due to the injectivity
of $V$ is equal to $f$. This shows $f\in (\mathcal{H}_1^\nu)^\urcorner$,
and as $Vf\in Y$ even
$f\in \mathsf{Co}(\nu,Y)$. Finally note that
the quasi-norms on $\mathsf{Co}(\omega,Y)$ and $\mathsf{Co}(\nu,Y)$ are equal.
Analogously it follows $\mathsf{Co}(\omega,Y)=\mathsf{Co}(\tilde{\nu},Y)$.
\hspace*{\fill} \rule{3mm}{3mm}
\begin{remark}
The claim of Lemma \ref{lem:co_indiw} has to be understood in the sense
\begin{align*}
\left\{f|_{\langle\varphi_x :x\in X\rangle}:f\in \mathsf{Co}(\tilde{\nu},\mathcal{F},Y)\right\}=\left\{f|_{\langle\varphi_x :x\in X\rangle}:f\in \mathsf{Co}({\nu},\mathcal{F},Y)\right\}
\end{align*}
since the two spaces are not strictly speaking equal. Further, the span $\langle\varphi_x :x\in X\rangle$ is dense in $\mathcal{H}_1^\nu$ and $\mathcal{H}_1^{\tilde{\nu}}$, thus the notation $\mathsf{Co}(\tilde{\nu},\mathcal{F},Y)=\mathsf{Co}(\nu,\mathcal{F},Y)$ is justified.
\end{remark}
Regarding the applicability of the theory, it is important to decide whether a given analyzing frame ${\mathcal F}=\{ \varphi_x \}_{x\in X}$ has property $F(\nu,Y)$.
In the classical theory, where $X$ is a group, the frame is of the special form $\varphi_x=\pi(x)g$, where $\pi$ is a group representation
and $g\in\mathcal{H}$ a suitable vector. In this case properties of ${\mathcal F}$ break down to properties of the analyzing vector $g$,
and it suffices to check admissibility of $g$, see \cite{FeGr86,feGr89a,Gr91}.
For the continuous wavelet transform concrete conditions can be formulated in terms of smoothness, decay and vanishing moments,
generalized in \cite{furaitou15} to wavelets over general dilation groups.
In our general setup the algebras $\mathcal{A}_{m_\nu}$ and $\mathcal{B}_{Y,{m_\nu}}$ embody the concept of admissibility, and
for the (inhomogeneous) wavelet transform utilized in Section~\ref{sec:appcoorbit} concrete conditions can also be deduced, see e.g.\ \cite{RaUl10}.
Concerning the independence of $\mathsf{Co} ({\mathcal F},Y)$
from the reservoir $(\mathcal{H}_1^\nu)^{\urcorner}$ we state \cite[Lem.~3.7]{RaUl10}, whose proof carries over directly.
\begin{lemma}\label{lem:co_indires} Assume that the analyzing frame ${\mathcal F}$ satisfies $F(\nu,Y)$ and let $S$ be a topological vector
space such that ${\mathcal F}\subset S \hookrightarrow \mathcal{H}_1^\nu$.
In case ${\mathcal F}$ is total in $S$ and the reproducing formula $V_{{\mathcal F}}f = R_{{\mathcal F}}(V_{{\mathcal F}}f)$ extends to all $f\in S^{\urcorner}$ (the topological anti-dual
of $S$) then
$$
\mathsf{Co}(\mathcal{F},Y) = \{f\in S^{\urcorner}~:~V_{{\mathcal F}}f \in Y\}\,.
$$
\end{lemma}
We have the following result concerning the coincidence of the two
spaces $\mathsf{Co}({\mathcal F},Y)$ and $\mathsf{Co}({\mathcal G},Y)$, where ${\mathcal F}$ and ${\mathcal G}$ are two different
continuous frames.
\begin{lemma}
Assume that the frames ${\mathcal G} = \{g_x\}_{x\in X}$ and ${\mathcal F} = \{f_x\}_{x\in X}$
satisfy $F(\nu,Y)$. If the Gramian kernels $G[{\mathcal F}, {\mathcal G}]$ and $G[{\mathcal G}, {\mathcal F}]$ defined in \eqref{eq:crossGram}
are both contained in $\mathcal{B}_{Y,m_\nu}$, we have
$
\mathsf{Co}({\mathcal F},Y) = \mathsf{Co}({\mathcal G}, Y)\,
$
in the sense of equivalent quasi-norms.
\end{lemma}
\noindent {\bf Proof.}\,
This is a consequence of the relations $V_{\mathcal{F}}=G[{\mathcal F},{\mathcal G}] V_{\mathcal{G}}$ and $V_{\mathcal{G}}=G[{\mathcal G},{\mathcal F}] V_{\mathcal{F}}$.
In view of Lemma~\ref{lem:Bochner} and Lemma~\ref{lem:co_examples} we have $V_{\mathcal{G}}f_x\in L_1^\nu(X)$ for a.e.\ $x\in X$.
Further, $V_{\mathcal{G}}f\in L_\infty^{1/\nu}(X)$ for $f\in (\mathcal{H}_1^\nu)^\urcorner$ and hence with Lemma~\ref{lem:Visom}
\[
V_{\mathcal{F}}f(x)=\langle f,f_x \rangle_{(\mathcal{H}_1^\nu)^\urcorner \times \mathcal{H}_1^\nu } = \langle V_{\mathcal{G}}f,V_{\mathcal{G}}f_x \rangle_{L_\infty^{1/\nu} \times L_1^\nu}
=\int_X \langle f,g_y \rangle \overline{\langle g_y,f_x \rangle}\,d\mu(y) = G[{\mathcal F},{\mathcal G}] V_{\mathcal{G}}f(x).
\]
This proves $V_{\mathcal{F}}=G[{\mathcal F},{\mathcal G}] V_{\mathcal{G}}$, and by symmetry also $V_{\mathcal{G}}=G[{\mathcal G},{\mathcal F}] V_{\mathcal{F}}$.
\hspace*{\fill} \rule{3mm}{3mm}
It is essential for the theory that the reproducing formula carries over to $\mathsf{Co}(Y)$, which
is an immediate consequence of Lemma~\ref{lem:extreproduce}.
\begin{lemma}\label{lem:reproform3}
A function $F\in Y$ is of the form $Vf$ for some $f\in \mathsf{Co}(Y)$
if and only if $F=R(F)$.
\end{lemma}
The reproducing formula is the key to prove the main theorem of this section, which corresponds to
\cite[Prop.~3.7]{fora05}. We explicitly state the continuity of the embedding $\mathsf{Co}(Y)\hookrightarrow (\mathcal{H}_1^\nu)^\urcorner$.
\begin{Theorem}\label{thm:co_main}
\begin{enumerate}
\item[(i)] The space $(\mathsf{Co}(Y),\|\cdot|\mathsf{Co}(Y)\|)$ is a quasi-Banach space with quasi-norm constant $C_Y$, which is continuously embedded
into $(\mathcal{H}_1^\nu)^\urcorner$.
\item[(ii)] The map $V:\mathsf{Co}(Y)\rightarrow Y$ establishes
an isometric isomorphism between $\mathsf{Co}(Y)$ and the closed subspace
$R(Y)$ of $Y$.
\item[(iii)] The map $R:Y\rightarrow Y$
is a projection of $Y$ onto $R(Y)=V(\mathsf{Co}(Y))$.
\end{enumerate}
\end{Theorem}
\noindent {\bf Proof.}\,
For the most part, we refer to the proof of \cite[Prop.~3.7]{fora05}.
However, the continuity of the embedding $\mathsf{Co}(Y)\hookrightarrow (\mathcal{H}_1^\nu)^\urcorner$ is not proved there.
It is a consequence of the following estimate for $f\in \mathsf{Co}(Y)$, where Lemma~\ref{lem:Veqivnorm} is used,
\begin{align*}
\| f | (\mathcal{H}_1^\nu)^\urcorner \| \asymp \| Vf | L_\infty^{1/\nu} \|
\le \| R | Y\rightarrow L_\infty^{1/\nu} \| \| Vf | Y \|
= \| R | Y\rightarrow L_\infty^{1/\nu} \| \| f | \mathsf{Co}(Y) \|.
\end{align*}
Further, the proof of \cite[Prop.~3.7]{fora05} implicitly relies on the validity of $R\circ R=R$ on $Y$, which a priori is only clear for $L_2(X)$.
Therefore, we include a proof of this relation here. Let $F\in Y$ and choose compact subsets $(K_n)_{n\in\N}$ with $X=\bigcup_{n\in\N} K_n$ and $K_n\subset K_m$ for $n\le m$, which is possible
since $X$ is $\sigma$-compact. Then we define the sets $U_n:=\{ x\in K_n : |F(x)|\le n \}$, which are
relatively compact and thus of finite measure. As a consequence,
$F_n:= \chi_{U_n} F \in L_2(X)$. Moreover, $F_n\in Y$ since $|F_n(x)|\le |F(x)|$ for every $x\in X$.
Since by assumption $R:Y\rightarrow Y$ is well-defined, the function $y\mapsto |R(x,y) F(y)|$ is integrable for a.e.\ $x\in X$. As $F_n(y)\rightarrow F(y)$ pointwise, Lebesgue's dominated convergence theorem thus
yields for these $x\in X$
\begin{align*}
RF_n(x) = \int_X R(x,y) F_n(y) \,d\mu(y) \rightarrow \int_X R(x,y) F(y) \,d\mu(y) = RF(x).
\end{align*}
Next, observe that the function $|R(x,\cdot)|m_{\nu}(x,\cdot)$ is integrable for a.e.\ $x\in X$ since $R\in\mathcal{A}_{m_\nu}$.
Further, due to $R(Y)\hookrightarrow L_\infty^{1/\nu}(X)$ the following estimate holds true for a.e.\ $x,y\in X$
\[
|R(x,y) RF_n(y)| \le C |R(x,y)| \nu(y) \|F_n|Y\| \le C |R(x,y)| m_{\nu}(x,y) \nu(x) \|F|Y\|.
\]
Another application of Lebesgue's dominated convergence therefore yields for a.e.\ $x\in X$
\begin{align*}
R(RF_n)(x) = \int_X R(x,y) RF_n(y) \,d\mu(y) \rightarrow \int_X R(x,y) RF(y) \,d\mu(y) = R(RF)(x).
\end{align*}
Since $F_n\in L_2(X)$ we have $RF_n=R(RF_n)$ for every $n\in\N$. Altogether, we obtain
\begin{align*}
R(RF)(x) \leftarrow R(RF_n)(x) = RF_n(x) \rightarrow RF(x).
\end{align*}
\hspace*{\fill} \rule{3mm}{3mm}
Let us finally provide some trivial examples, also given in \cite[Cor.~3.8]{fora05}.
\begin{lemma}\label{lem:co_examples}
If the analyzing frame $\mathcal{F}$ satisfies condition~\eqref{eq:framecond} for a weight $\nu\ge 1$,
it has properties $F(\nu,L_2)$, $F(\nu,L_\infty^{1/\nu})$, $F(\nu,L_1^\nu)$, and
it holds
\begin{align*}
(\mathcal{H}_1^\nu)^\urcorner \asymp \mathsf{Co}(\mathcal{F},L_\infty^{1/\nu}),
&& \mathcal{H}_1^\nu = \mathsf{Co}(\mathcal{F},L_1^\nu), && \mathcal{H} = \mathsf{Co}(\mathcal{F},L_2).
\end{align*}
\end{lemma}
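A minimal sketch for the middle identity, relying only on results established above: if $f\in(\mathcal{H}_1^\nu)^\urcorner$ satisfies $F:=V_{\mathcal{F}}f\in L_1^\nu$, then also $F\in L_\infty^{1/\nu}$ by Lemma~\ref{lem:Vext}, hence $F\in L_2$ as observed in the proof of Lemma~\ref{lem:Bochner}. Since $F=R(F)$ by Lemma~\ref{lem:extreproduce}, the reproducing formula on $\mathcal{H}$ yields $g\in\mathcal{H}$ with $V_{\mathcal{F}}g=F\in L_1^\nu$, i.e.\ $g\in\mathcal{H}_1^\nu$, and the injectivity of $V_{\mathcal{F}}$ on $(\mathcal{H}_1^\nu)^\urcorner$ gives $f=g\in\mathcal{H}_1^\nu$. The converse inclusion and the equality of the norms are immediate from the definition of $\mathcal{H}_1^\nu$.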
Typically, the theory cannot be applied if the QBF-space $Y$ is not embedded in $L_1^{\rm loc}(X)$, since then the kernel conditions concerning operations on $Y$
can usually not be fulfilled. Let us close this paragraph with a short discussion of how to proceed in case $Y\not\hookrightarrow L_1^{\rm loc}(X)$.
\subsubsection*{The case $Y\not\hookrightarrow L_1^{\rm loc}(X)$}
The main idea is to replace $Y$ with a suitable subspace $Z$, which is embedded into $L_1^{\rm loc}(X)$ and fits into the existing theory.
The basic observation behind this is that not all the information of $Y$ is used in the definition of the coorbit. In fact, the information about $\mathsf{Co}(Y)$ is fully contained
in the subspace $R(Y)$, i.e., we have $\mathsf{Co}(\mathcal{F},Y)=\mathsf{Co}(\mathcal{F},R(Y))$.
Thus, we can painlessly pass over to a solid subspace $Z$ of $Y$ and regain the same coorbit if
\begin{align*}
R(Y)\hookrightarrow Z \hookrightarrow Y.
\end{align*}
This observation motivates the idea to substitute $Y$ -- in case $Y$ is not embedded into $L_1^{\rm loc}(X)$ itself -- by a
suitable subspace $Z$ of $Y$ consisting of locally integrable functions, and then to consider the coorbit of $Z$ instead.
In the classical group setting~\cite{ra05-3} Wiener amalgams~\cite{Fe83,ra05-4} were used as suitable substitutes.
Since Wiener amalgams rely on the underlying group structure, however, they cannot be used in our general setup.
Instead, it is possible to resort to the closely related
decomposition spaces due to Feichtinger and Gröbner~\cite{fegr85}, which can be viewed as discrete analogues of Wiener amalgams.
This approach has been worked out in \cite{Sch12}, where the decomposition space $\mathcal{D}(Y,\mathcal{U})$ with local component $L_\infty$ and global component $Y$
is used. It is defined as follows.
\begin{definition}[\cite{Sch12}]
The decomposition space $\mathcal{D}(Y,\mathcal{U})$ associated to a rich solid QBF-space $Y$ on $X$ and an admissible
covering $\mathcal{U}=\{U_i\}_{i\in I}$ of $X$ is defined by
\begin{equation}\nonumber
\begin{split}
\mathcal{D}(Y,\mathcal{U}) &:= \Big\{ f\in L_\infty^{\rm loc}(X) ~:~
\| f |\mathcal{D}(Y,\mathcal{U})\| := \Big\|\sum\limits_{i\in I}
\|f|L_\infty(U_i)\| \chi_{U_i} |Y \Big\|<\infty
\Big\}.
\end{split}
\end{equation}
Note that the sum $\sum_{i\in I} \|f|L_\infty(U_i)\|\chi_{U_i}$
is locally finite and defines pointwise a function on $X$.
\end{definition}
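A typical instance, stated only as an illustration and under the assumption that the half-open unit cubes $Q_k=k+[0,1)^d$, $k\in\Z^d$, constitute an admissible covering of $X=\R^d$ equipped with Lebesgue measure: for $Y=L_p(\R^d)$ with $0<p<\infty$ one obtains
\begin{align*}
\| f |\mathcal{D}(L_p,\{Q_k\}_{k\in\Z^d})\| = \Big( \sum_{k\in\Z^d} \| f | L_\infty(Q_k)\|^p \Big)^{1/p},
\end{align*}
an amalgam-type quasi-norm with local component $L_\infty$ and global component $\ell_p$. In particular, this space is contained in $L_1^{\rm loc}(\R^d)$ also for $p<1$, although $L_p(\R^d)$ itself is then not contained in $L_1^{\rm loc}(\R^d)$.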
The space $\mathcal{D}(Y,\mathcal{U})$ is a subspace of $Y$, continuously embedded, and a rich solid QBF-space with the same quasi-norm constant $C_Y$ as $Y$.
Moreover, it is contained in $L_1^{\rm loc}(X)$ even if $Y$ itself is not.
In fact, we have the embedding $\mathcal{D}(Y,\mathcal{U}) \hookrightarrow L_\infty^{1/\omega}(X)$,
where $\omega:X\rightarrow(0,\infty)$, defined by $\omega(x):= \max_{i:x\in U_i} \{\|\chi_{U_i} | Y \|^{-1} \}$, is a locally bounded weight.
For a short proof of the local boundedness, let $K\subset X$ be compact and $\{U_i\}_{i\in J}$ the finite subfamily of sets in $\mathcal{U}$ intersecting $K$. Then $\omega(x)\le \max_{i\in J} \{\|\chi_{U_i} | Y \|^{-1} \}$
for all $x\in K$.
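The embedding itself follows from a one-line estimate: for every $i\in I$ and a.e.\ $x\in U_i$,
\begin{align*}
|f(x)| \le \| f | L_\infty(U_i) \| \le \| \chi_{U_i} | Y \|^{-1} \, \| f | \mathcal{D}(Y,\mathcal{U}) \| \le \omega(x) \, \| f | \mathcal{D}(Y,\mathcal{U}) \|,
\end{align*}
where the second inequality is a consequence of the solidity of $Y$ applied to the pointwise estimate $\| f | L_\infty(U_i)\| \chi_{U_i} \le \sum_{j\in I} \| f | L_\infty(U_j)\| \chi_{U_j}$.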
In the spirit of \cite[Def.~4.1]{ra05-3}, we may therefore pass over to $\mathsf{Co}(\mathcal{F},\mathcal{D}(Y,\mathcal{U}))$, the coorbit of $\mathcal{D}(Y,\mathcal{U})$.
In general, one can only expect $\mathsf{Co}(\mathcal{F},\mathcal{D}(Y,\mathcal{U}))\subset \{ f\in (\mathcal{H}_1^\nu)^\urcorner : V_\mathcal{F}f\in Y \}$ and not equality.
In many applications, however, the equality can be proved by methods not available in the abstract setting.
Moreover, the choice $\mathcal{D}(Y,\mathcal{U})$ is consistent with the theory due to the result below, which is
analogous to a result obtained for Wiener amalgams \cite[Thm.~6.1]{ra05-3}.
\begin{Theorem}[{\cite[Thm.~8.1]{Sch12}}]
Assume that $Y$ is a rich solid QBF-space
and that the analyzing frame $\mathcal{F}$ has property $F(\nu,Y)$.
If $\mathcal{U}$ is an admissible covering of $X$ such that the kernel $M^*_{\mathcal{U}}=K^*_\mathcal{U}[\mathcal{F},\mathcal{F}]$ (defined in \eqref{eqdef:kerK} below) operates continuously
on $Y$, then the frame $\mathcal{F}$ has property $F(\nu,\mathcal{D}(Y,\mathcal{U}))$
and it holds
\[
\mathsf{Co}(\mathcal{F},\mathcal{D}(Y,\mathcal{U})) \asymp \mathsf{Co}(\mathcal{F},Y)
\]
in the sense of equivalent quasi-norms.
\end{Theorem}
\begin{remark}
The condition that the kernel $M^*_{\mathcal{U}}$ operates continuously on $Y$
is fulfilled for instance in the important case when $\mathcal{F}$ has property $D(\delta,\nu,Y)$ (see Definition~\ref{def:D(d,v,Y)} below).
\end{remark}
In \cite[Thm.~8.1]{Sch12} this theorem was formulated under the additional assumption that $Y$ is continuously embedded into $L_1^{\rm loc}(X)$.
However, only the property $F(\nu,Y)$ of the frame $\mathcal{F}$ is essential for the proof, wherefore we chose to omit this assumption here.
\subsection{Discretizations}
A main feature of coorbit space theory is its general abstract discretization
machinery. With a coorbit characterization of a given function space at
hand, the abstract framework (Theorems~\ref{thm:atomicdec}
and~\ref{thm:frameexp} below) provides atomic decompositions of this space,
i.e.,\ a representation of functions using ``only'' a countable number of atoms
as building blocks.
Moreover, the function space can be characterized via an equivalent
quasi-norm on an associated sequence space.
The transition to sequence spaces bears many advantages, since those usually
have a simpler, more accessible structure than the original
spaces. For example, the investigation of embedding relations becomes much simpler
when it is carried out on the associated sequence spaces. In addition, atomic decompositions naturally
lend themselves to real-world representations of the considered functions:
By truncation one obtains approximate expansions consisting only of a finite
number of atoms.
Our discretization results, Theorem~\ref{thm:atomicdec} and
Theorem~\ref{thm:frameexp}, transfer the results from \cite{RaUl10},
namely Theorem~3.11 and Theorem~3.14, to the general quasi-Banach setting.
Applying a different strategy for their proofs, however, we are able to
strengthen these results significantly even in the Banach space setting.
\subsubsection*{Preliminaries}
Let us introduce the kernel functions $K_\mathcal{U}[\mathcal{G},\mathcal{F}]$ and $K^*_\mathcal{U}[\mathcal{G},\mathcal{F}]$, which are related by involution
and play a prominent role in the discretization theory. For a family $\mathcal{G}=\{\psi_x\}_{x\in X}$
and an admissible covering $\mathcal{U}=\{U_i\}_{i\in I}$ they are defined by
\begin{align}\label{eqdef:kerK}
K_\mathcal{U}[\mathcal{G},\mathcal{F}](x,y):=\sup_{z\in Q_y} |\langle \varphi_x,\psi_z \rangle| \quad\text{and}\quad
K^*_\mathcal{U}[\mathcal{G},\mathcal{F}](x,y):= K_\mathcal{U}[\mathcal{G},\mathcal{F}](y,x)
\end{align}
where $x,y\in X$ and $Q_y:=\bigcup\limits_{i~:~y\in U_i} U_i$ for $y\in X$.
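Note that, since $x\in Q_x$ for every $x\in X$, these kernels dominate the Gramian kernel from \eqref{eq:crossGram} pointwise,
\begin{align*}
|G[\mathcal{G},\mathcal{F}](x,y)| = |\langle \varphi_y,\psi_x\rangle| \le \sup_{z\in Q_x} |\langle \varphi_y,\psi_z\rangle| = K^*_\mathcal{U}[\mathcal{G},\mathcal{F}](x,y).
\end{align*}
By the solidity of $Y$, boundedness of $K^*_\mathcal{U}[\mathcal{G},\mathcal{F}]$ on $Y$ therefore already implies boundedness of the operator induced by $G[\mathcal{G},\mathcal{F}]$, so the conditions imposed below are in general stronger than mere boundedness of the Gramian.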
Their mapping properties are essential for two central results, namely Lemmas~\ref{auxlem:mainanalysis} and \ref{auxlem:mainsynthesis2},
which together with Lemma~\ref{auxlem:Uinvert} provide the technical foundation for the proofs of Theorem~\ref{thm:atomicdec} and Theorem~\ref{thm:frameexp}.
We will subsequently use the symbol $\interleave\cdot\interleave$ for the operator quasi-norm $\|\cdot |Y\rightarrow Y \|$ on $Y$.
\begin{lemma}\label{auxlem:mainanalysis}
Let $Y$ be a rich solid QBF-space on $X$
and let the analyzing frame $\mathcal{F}=\{\varphi_x\}_{x\in X}$ possess property $F(\nu,Y)$.
Further, let $\mathcal{G}=\{\psi_x \}_{x\in X}\subset \mathcal{H}_1^\nu$ be a
family and $\mathcal{U}=\{U_i\}_{i\in I}$ an admissible covering such that $K^*_\mathcal{U}:=K^*_\mathcal{U}[\mathcal{G},\mathcal{F}]$ defines a bounded
operator on $Y$.
Then for $f\in \mathsf{Co}(\mathcal{F},Y)$ the function $\sum_{i\in I} \sup_{z\in U_i} |V_\mathcal{G}f(z)|\chi_{U_i}$
belongs to $Y$ with the estimate
\begin{gather*}
\Big\|\sum_{i\in I} \sup_{z\in U_i} |V_\mathcal{G}f(z)|\chi_{U_i} \Big| Y \Big\| \le
\sigma(\mathcal{U})\interleave K^*_\mathcal{U} \interleave \| f | \mathsf{Co}(\mathcal{F},Y)\| \,.
\end{gather*}
\end{lemma}
\noindent
Note that the sum
$\sum_{i\in I} \sup_{z\in U_i}|V_\mathcal{G}f(z)|\chi_{U_i}$ is locally finite and defined pointwise.
\noindent {\bf Proof.}\,
Using $V_{\mathcal{G}}f= G[\mathcal{G},\mathcal{F}]V_\mathcal{F}f$
we can estimate for $f\in \mathsf{Co}(\mathcal{F},Y)$ and all $x\in X$
\begin{align*}
\sup_{z\in Q_x} |V_\mathcal{G}f(z)| &= \sup_{z\in Q_x} |G[\mathcal{G},\mathcal{F}]V_\mathcal{F}f(z)|
\le \sup_{z\in Q_x} \int |G[\mathcal{G},\mathcal{F}](z,y)||V_\mathcal{F}f(y)|\,d\mu(y) \\
&\le \int \sup_{z\in Q_x}|G[\mathcal{G},\mathcal{F}](z,y)||V_\mathcal{F}f(y)|\,d\mu(y)\\
&= \int K^*_\mathcal{U}[\mathcal{G},\mathcal{F}](x,y)|V_\mathcal{F}f(y)|\,d\mu(y) = K^*_\mathcal{U}[\mathcal{G},\mathcal{F}](|V_\mathcal{F}f|)(x).
\end{align*}
For functions $F:X\rightarrow\C$ we further have the estimate
\begin{align}\label{ineq:aux}
\sup_{z\in Q_x} |F(z)| \le \sum_{i\in I} \sup_{z\in U_i} |F(z)| \chi_{U_i}(x) \le \sigma(\mathcal{U}) \sup_{z\in Q_x} |F(z)|,
\end{align}
where $\sigma(\mathcal{U})$ is the intersection number of $\mathcal{U}$.
Choosing $F=V_\mathcal{G}f$ in \eqref{ineq:aux}, we can conclude
\begin{align*}
\Big\| \sum_{i\in I} \sup_{z\in U_i}|V_\mathcal{G}f(z)|\chi_{U_i} \Big| Y \Big\|
\le \sigma(\mathcal{U}) \interleave K^*_\mathcal{U} \interleave \| V_\mathcal{F}f| Y \|
= \sigma(\mathcal{U}) \interleave K^*_\mathcal{U} \interleave \| f |
\mathsf{Co}(\mathcal{F},Y) \|. ~\hfill\hfill
\end{align*}
\hspace*{\fill} \rule{3mm}{3mm}
We can immediately deduce an important result, which corresponds to \cite[Lemma~3.12]{RaUl10}, concerning the sampling of $V_\mathcal{G}f$.
\begin{corollary}\label{auxcor:trafosampling}
With the same assumptions as in the previous lemma let $\{x_i\}_{i\in I}$ be a family of points
such that $x_i\in U_i$.
Then $\{V_\mathcal{G}f(x_i)\}_{i\in I}\in Y^\flat(\mathcal{U})$
and it holds
\begin{align*}
\| \{V_\mathcal{G}f(x_i)\}_{i\in I} | Y^\flat \| = \Big\| \sum_{i\in I} |V_\mathcal{G}f(x_i)|\chi_{U_i} \Big| Y \Big\| \le \sigma(\mathcal{U})\interleave K^*_\mathcal{U} \interleave \| f | \mathsf{Co}(\mathcal{F},Y) \|.
\end{align*}
\end{corollary}
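Indeed, the corollary is a direct consequence of the solidity of $Y$: since $x_i\in U_i$, we have the pointwise estimate
\begin{align*}
\sum_{i\in I} |V_\mathcal{G}f(x_i)|\chi_{U_i} \le \sum_{i\in I} \sup_{z\in U_i} |V_\mathcal{G}f(z)|\chi_{U_i},
\end{align*}
and the right-hand side belongs to $Y$ with the quasi-norm bound provided by Lemma~\ref{auxlem:mainanalysis}.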
Let us turn to the synthesis side. Here the following lemma is a key result, which generalizes \cite[Lem.~5.10]{fora05} and whose short direct proof
is new and avoids technical difficulties. In particular, it does not rely on \cite[Lem.~5.4]{fora05}.
\begin{lemma}\label{auxlem:mainsynthesis1}
Let $Y$ be a rich solid QBF-space on $X$
and let the analyzing frame $\mathcal{F}=\{\varphi_x\}_{x\in X}$ possess property $F(\nu,Y)$.
Further, let $\mathcal{G}=\{\psi_x \}_{x\in X}$ be a
family in $\mathcal{H}$ and $\mathcal{U}=\{U_i\}_{i\in I}$ an admissible covering such that $K_\mathcal{U}:=K_\mathcal{U}[\mathcal{G},\mathcal{F}]$ defines a bounded
operator on $Y$.
Then for $\{\lambda_i\}_{i\in I}\in Y^\natural(\mathcal{U})$ and for points $x_i\in U_i$
the series $\sum_{i\in I} \lambda_i V_\mathcal{F}\psi_{x_i}(x)$
converges absolutely for a.e.\ $x\in X$ and defines a function in $Y$ with
\begin{align*}
\Big\| \sum_{i\in I} \lambda_i V_\mathcal{F}\psi_{x_i} \Big| Y \Big\| \le \interleave K_\mathcal{U} \interleave \| \{\lambda_i\}_{i\in I} | Y^\natural \|.
\end{align*}
If the finite sequences are dense in $Y^\natural(\mathcal{U})$ the series
also converges unconditionally in the quasi-norm of $Y$.
\end{lemma}
\noindent {\bf Proof.}\,
We have for every $x\in X$ the estimate
\begin{gather*}
\sum_{i\in I} |\lambda_i||V_\mathcal{F}\psi_{x_i}(x)|
\le
\sum_{i\in I} \mu(U_i)^{-1}|\lambda_i| \int_X \chi_{U_i}(y) K_\mathcal{U}(x,y) \,d\mu(y) \\
= \int_X \sum_{i\in I} \mu(U_i)^{-1}|\lambda_i| \chi_{U_i}(y) K_\mathcal{U}(x,y) \,d\mu(y)
= K_\mathcal{U} \left( \sum_{i\in I} \mu(U_i)^{-1}|\lambda_i| \chi_{U_i} \right) (x),
\end{gather*}
where summation and integration can be interchanged due to monotone convergence.
Since $\{\lambda_i\}_{i\in I}\in Y^\natural$ the sum $\sum_{i\in I}\mu(U_i)^{-1}|\lambda_i|\chi_{U_i}$ defines pointwise a function in $Y$. By assumption $K_\mathcal{U}$ operates continuously on $Y$ and
hence also $K_\mathcal{U} \left( \sum_{i\in I}\mu(U_i)^{-1}|\lambda_i|\chi_{U_i} \right)\in Y$,
which implies $\left| K_\mathcal{U} \left( \sum_{i\in I} \mu(U_i)^{-1}|\lambda_i| \chi_{U_i} \right)(x) \right|<\infty$ for a.e.\ $x\in X$.
It follows that $\sum_{i\in I} \lambda_iV_\mathcal{F}\psi_{x_i}(x)$
converges absolutely at these points.
As a consequence of the solidity of $Y$ and the pointwise estimate
\begin{align}\label{auxeq:est1}
\Big| \sum_{i\in I}\lambda_iV_\mathcal{F}\psi_{x_i} \Big|
\le \sum_{i\in I} |\lambda_i||V_\mathcal{F}\psi_{x_i}|
\le K_\mathcal{U} \left( \sum_{i\in I} \mu(U_i)^{-1}|\lambda_i| \chi_{U_i} \right) \in Y,
\end{align}
the measurable functions $\sum_{i\in I} \lambda_iV_\mathcal{F}\psi_{x_i}$ and
$\sum_{i\in I} |\lambda_i||V_\mathcal{F}\psi_{x_i}|$ belong to $Y$ with
\begin{gather}\label{auxeq:est2}
\Big\| \sum_{i\in I} \lambda_iV_\mathcal{F}\psi_{x_i} \Big| Y \Big\|
\le \Big\| \sum_{i\in I} |\lambda_i| |V_\mathcal{F}\psi_{x_i}| \Big| Y \Big\|
\le \interleave K_\mathcal{U}\interleave \| \{\lambda_i \}_{i\in I} |Y^\natural \|.
\end{gather}
It remains to show that $\sum_{i\in I} \lambda_i V_\mathcal{F}\psi_{x_i}$ converges
unconditionally in $Y$ to its pointwise limit, if the finite sequences are dense in $Y^\natural(\mathcal{U})$.
For this we fix an arbitrary bijection $\sigma:\N\rightarrow I$ and obtain as in \eqref{auxeq:est2}
\begin{align}\label{auxeq:tendto0}
\Big\| \sum_{m=n+1}^\infty \lambda_{\sigma(m)} V_\mathcal{F}\psi_{x_{\sigma(m)}} \Big| Y \Big\|
\le \interleave K_\mathcal{U} \interleave \| \Lambda - \Lambda^\sigma_n | Y^\natural \|,
\end{align}
where the sequence $\Lambda^\sigma_n$ is given as in Lemma~\ref{lem:ss_findens}. According to this lemma the right-hand side of \eqref{auxeq:tendto0}
tends to zero for $n\rightarrow\infty$, which finishes the proof.
\hspace*{\fill} \rule{3mm}{3mm}
\begin{corollary}\label{cor:familyG}
With the assumptions of the previous lemma $\mathcal{G}=\{ \psi_x\}_{x\in X}\subset \mathsf{Co}(\mathcal{F},Y)$.
\end{corollary}
\noindent {\bf Proof.}\,
For every $x\in X$ there is an index $i_0\in I$ such that $x\in U_{i_0}$. Set $x_{i_0}:=x$
and choose arbitrary points $x_i\in U_i$ for $i\in I\backslash\{i_0\}$.
Let $\delta^{i_0}$ denote the sequence, which has entry 1 at position $i_0$ and is zero
elsewhere. Since $Y$ is assumed to be rich, $\delta^{i_0}\in Y^\natural$ and
by the previous lemma
$V_\mathcal{F}\psi_x=\sum_{i\in I} \delta^{i_0}_i V_\mathcal{F}\psi_{x_i}\in Y$, whence
$\psi_x\in \mathsf{Co}(\mathcal{F},Y)$.
\hspace*{\fill} \rule{3mm}{3mm}
The correspondence principle allows us to cast Lemma~\ref{auxlem:mainsynthesis1}
in a different form, which corresponds to \cite[Lem.~3.11]{RaUl10}.
However, due to the different deduction
the technical assumption $Y^\natural\hookrightarrow (L_\infty^{1/\nu})^\natural$ is not required any more.
\begin{lemma}\label{auxlem:mainsynthesis2}
With the same assumptions as in Lemma~\ref{auxlem:mainsynthesis1} the
series $\sum_{i\in I} \lambda_i \psi_{x_i}$ converges unconditionally in the weak*-topology of
$(\mathcal{H}_1^\nu)^\urcorner$ to an element $f\in \mathsf{Co}(\mathcal{F},Y)$ with
\begin{align*}
V_\mathcal{F}f=V_\mathcal{F}\Big(\sum_{i\in I} \lambda_i \psi_{x_i}\Big)=\sum_{i\in I} \lambda_i V_\mathcal{F}\psi_{x_i}
\end{align*}
and the estimate \hfill
$\displaystyle{
\| f | \mathsf{Co}(\mathcal{F},Y) \|= \Big\|\sum_{i\in I} \lambda_i \psi_{x_i} \Big| \mathsf{Co}(\mathcal{F},Y) \Big\|
\le \interleave K_\mathcal{U} \interleave \| \{\lambda_i\}_{i\in I} | Y^\natural \|.
}$ \hfill\hfill~\\
Moreover, if the finite sequences are dense in $Y^\natural(\mathcal{U})$ the series
also converges unconditionally in the quasi-norm of $\mathsf{Co}(\mathcal{F},Y)$.
\end{lemma}
\noindent {\bf Proof.}\,
If the subset $J\subset I$ is finite we have
$
V_\mathcal{F}\Big( \sum_{i\in J} \lambda_i\psi_{x_i} \Big)(x)= \sum_{i\in J} \lambda_iV_\mathcal{F}\psi_{x_i}(x)
$
for all $x\in X$.
Moreover, we have proved in Lemma~\ref{auxlem:mainsynthesis1} that $\sum_{i\in I} \lambda_iV_\mathcal{F}\psi_{x_i}$ converges pointwise
absolutely a.e.\ to a function in $Y$.
In order to apply the correspondence principle, Corollary~\ref{cor:corrpri}, it remains to verify that
the sums $\sum_{i\in J} \lambda_i\psi_{x_i}$ for finite subsets $J\subset I$
are uniformly bounded in $(\mathcal{H}_1^\nu)^\urcorner$.
With the continuous embedding $\mathsf{Co}(Y) \hookrightarrow (\mathcal{H}_1^\nu)^\urcorner$ from Theorem~\ref{thm:co_main}
we can conclude
\begin{gather*}
\Big\| \sum_{i\in J} \lambda_i\psi_{x_i} \Big| (\mathcal{H}_1^\nu)^\urcorner\Big\|\lesssim\Big\|\sum_{i\in J}\lambda_i\psi_{x_i}\Big|\mathsf{Co}(Y)\Big\|
= \Big\| \sum_{i\in J} \lambda_iV_\mathcal{F}\psi_{x_i} \Big| Y \Big\| \le \Big\| \sum_{i\in I} |\lambda_i||V_\mathcal{F}\psi_{x_i}| \Big| Y \Big\|
\end{gather*}
for every finite subset $J\subset I$, where we used that $\psi_{x_i}\in \mathsf{Co}(Y)$ for all $i\in I$ by Corollary~\ref{cor:familyG}.
We have shown in the proof of Lemma~\ref{auxlem:mainsynthesis1} that $\sum_{i\in I} |\lambda_i||V_\mathcal{F}\psi_{x_i}|$ is a function in $Y$.
Hence the sums are uniformly bounded in $(\mathcal{H}_1^\nu)^\urcorner$ and Corollary~\ref{cor:corrpri}
implies the unconditional weak*-convergence
of $\sum_{i\in I} \lambda_i\psi_{x_i}$ to an element
$f\in (\mathcal{H}_1^\nu)^\urcorner$. Moreover, $f\in \mathsf{Co}(Y)$ because Corollary~\ref{cor:corrpri} together with the previous lemma asserts that $V_\mathcal{F}f=\sum_{i\in I} \lambda_i V_\mathcal{F}\psi_{x_i}\in Y$.
It remains to show that $\sum_{i\in I} \lambda_i \psi_{x_i}$ converges
unconditionally in $\mathsf{Co}(\mathcal{F},Y)$, if the finite sequences are dense in $Y^\natural$.
For a subset $\tilde{I}\subset I$ let $\tilde{\Lambda}$ denote the sequence which coincides with $\Lambda$ on $\tilde{I}$ and
is trivial elsewhere.
By solidity $\tilde{\Lambda}\in Y^\natural$ and -- applying what we have proved so far --
the sum $\sum_{i\in \tilde{I}} \lambda_i \psi_{x_i} $ converges in the weak*-topology to an element
of $\mathsf{Co}(Y)$ and
$
V_\mathcal{F}\Big(\sum_{i\in \tilde{I}} \lambda_i \psi_{x_i} \Big) = \sum_{i\in \tilde{I}} \lambda_i V_\mathcal{F}\psi_{x_i}.
$
In view of \eqref{auxeq:tendto0} we conclude
\begin{align*}
\Big\| \sum_{m=n+1}^\infty \lambda_{\sigma(m)} \psi_{x_{\sigma(m)}} \Big| \mathsf{Co}(Y) \Big\|
= \Big\| \sum_{m=n+1}^\infty \lambda_{\sigma(m)} V_\mathcal{F}\psi_{x_{\sigma(m)}} \Big| Y \Big\|
\rightarrow 0 \quad (n\rightarrow\infty),
\end{align*}
for an arbitrary bijection $\sigma:\N\rightarrow I$, which finishes the proof.
\hspace*{\fill} \rule{3mm}{3mm}
\subsubsection*{Atomic decompositions}
Our first goal is to obtain atomic decompositions of the coorbit $\mathsf{Co}(Y)$.
Since $\mathsf{Co}(Y)$ is isometrically isomorphic to the function space $R(Y)$
we initially focus on this space and recall from Theorem~\ref{thm:co_main} that for functions $F\in R(Y)$ the reproducing formula holds, i.e.\
\[
F= R(F)= \int_X F(y)R(\cdot,y) \,d\mu(y) ,\qquad F\in R(Y).
\]
This identity can be interpreted as a ``continuous atomic decomposition'' of $F$ with atoms $R(\cdot,y)$
indexed by $y\in X$.
The strategy is to discretize the integral, an approach which originates
from Feichtinger and Gröchenig~\cite{feGr89a} and was also used in subsequent
papers, e.g.\ in \cite{fora05,RaUl10}.
To this end let $\mathcal{U}=\{U_i\}_{i\in I}$ be an admissible
covering of $X$ and let $\Phi=\{\Phi_i\}_{i\in I}$ be a $\mathcal{U}$-PU, i.e.\
a partition of unity subordinate to the covering $\mathcal{U}$ consisting of
measurable functions $\Phi_i$ which satisfy
\begin{enumerate}
\item[(i)] $0\le\Phi_i(x)\le 1$ for all $x\in X$ and all $i\in I$,
\item[(ii)] ${\rm supp \, } \Phi_i\subset U_i$ for all $i\in I$,
\item[(iii)] $\sum_{i\in I} \Phi_i(x) = 1$ for all $x\in X$.
\end{enumerate}
We note that the construction of such a family $\Phi$ with respect to a locally finite covering is standard,
see e.g.\ \cite[p.~127]{Fol84}.
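Let us also record one concrete choice which is always at hand in the purely measurable setting considered here: since an admissible covering has finite intersection number, the sum $\sum_{j\in I}\chi_{U_j}$ is finite and at least $1$ at every point of $X$, and the normalized characteristic functions
\[
\Phi_i:=\frac{\chi_{U_i}}{\sum_{j\in I}\chi_{U_j}}\,,\qquad i\in I,
\]
satisfy (i) and (iii) and vanish outside $U_i$, i.e.\ they form a $\mathcal{U}$-PU if the support condition (ii) is understood in this measure-theoretic sense.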
Using $\Phi$ the integral operator $R$ can be written in the form
\[
R(F)(x)=\sum_{i\in I} \int_X \Phi_i(y)F(y)R(x,y) \,d\mu(y).
\]
A formal discretization yields a discrete integral operator $U_\Phi$, called the \emph{discretization operator},
\begin{align}\label{def:DiscrOp}
U_\Phi F(x) :=\sum_{i\in I} c_i F(x_i)R(x,x_i),
\end{align}
where $c_i:=\int_X \Phi_i(y)\,d\mu(y)$ and the points $\{x_i\}_{i\in I}$ are chosen such that $x_i\in U_i$.
Here we must give meaning to the point evaluations $F(x_i)$, since in general $F\in Y$ only determines an equivalence class of functions, for which point evaluations are not well-defined.
However, the operator $U_\Phi$ is only applied to elements $F\in R(Y)$ and pointwise evaluation can be understood in the sense
\[
F(x_i)=(RF)(x_i)=\int_X R(x_i,y)F(y) \,d\mu(y)\,.
\]
Intuitively, $U_\Phi F$ approximates $R(F)$ because the discretization resembles a Riemann sum of the integral.
Hence we can hope to obtain an atomic decomposition from the relation
\[
F= R(F) \approx U_\Phi F = \sum_{i\in I} c_i F(x_i)R(\cdot,x_i).
\]
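Before making this precise, let us illustrate the idea in an informal model situation. Take $X=\mathbb{R}$ with Lebesgue measure, the covering $U_i=[i,i+1)$, $i\in\mathbb{Z}$, and the $\mathcal{U}$-PU $\Phi_i=\chi_{U_i}$. Then $c_i=\mu(U_i)=1$ and
\[
U_\Phi F(x)=\sum_{i\in\mathbb{Z}} F(x_i)R(x,x_i)\,,\qquad x_i\in[i,i+1),
\]
which is exactly a Riemann-type sum with nodes $x_i$ and unit step size for the integral $\int_{\mathbb{R}} F(y)R(x,y)\,dy$; refining the covering corresponds to refining the quadrature.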
So far our considerations were just formal. To make the argument precise
we have to impose conditions on $\mathcal{F}$ so that $U_\Phi$ is a well-defined
operator.
It turns out that here mapping properties of the kernels
$M_\mathcal{U}:=K_\mathcal{U}[\mathcal{F},\mathcal{F}]$ and $M^*_\mathcal{U}:=K^*_\mathcal{U}[\mathcal{F},\mathcal{F}]$
come into play. Recalling the definition~\eqref{eqdef:kerK} of $K_\mathcal{U},\,K^*_\mathcal{U}$ we have for $x,y\in X$
\begin{align}\label{eqdef:kerM}
M_\mathcal{U}(x,y)=\sup_{z\in Q_y} |\langle \varphi_x,\varphi_z \rangle| \quad\text{and}\quad
M^*_\mathcal{U}(x,y)= M_\mathcal{U}(y,x)
\end{align}
with $Q_y=\bigcup\limits_{i~:~y\in U_i} U_i$ for the covering $\mathcal{U}=\{U_i\}_{i\in I}$.
The lemma below provides definition \eqref{def:DiscrOp} with a solid foundation.
\begin{lemma}\label{lem:DiscrOp}
If $M_{\mathcal{U}}$ and
$M^*_{\mathcal{U}}$ given in \eqref{eqdef:kerM} are bounded operators on $Y$
the discretization operator defined in \eqref{def:DiscrOp}
is a well-defined continuous operator $U_\Phi: R(Y)\rightarrow R(Y)$ with operator quasi-norm
$\| U_\Phi | R(Y)\rightarrow R(Y) \| \le \sigma(\mathcal{U}) \interleave M_\mathcal{U}\interleave
\interleave M^*_\mathcal{U} \interleave$.
In general, the sum in \eqref{def:DiscrOp} converges pointwise absolutely a.e.
If the finite sequences are dense in $Y^\natural$, the convergence also holds in the quasi-norm of $Y$.
\end{lemma}
\noindent {\bf Proof.}\,
For $F\in R(Y)$ Lemma~\ref{lem:reproform3} gives an element $f\in \mathsf{Co}(Y)$ such that $F(x)=Vf(x)$ for all $x\in X$.
Thus, using Corollary~\ref{auxcor:trafosampling} with $\mathcal{G}=\mathcal{F}$, we can conclude $\{F(x_i)\}_{i\in I}\in Y^\flat$ with
$\| \{F(x_i)\}_{i\in I} | Y^\flat \| \le \sigma(\mathcal{U})\interleave M^*_\mathcal{U} \interleave \| F | Y\|$.
Since $\lambda_i\mapsto \mu(U_i)\lambda_i$ is an isometry from $Y^\flat$ to $Y^\natural$ and since $0\le c_i\le\mu(U_i)$ for all $i\in I$
it follows that $\{c_iF(x_i)\}_{i\in I}\in Y^\natural(\mathcal{U})$ and $\| \{c_iF(x_i)\}_{i\in I} | Y^\natural \| \le \| \{F(x_i)\}_{i\in I} | Y^\flat \| $.
Therefore by Lemma~\ref{auxlem:mainsynthesis2} the sum $\sum_{i\in I} c_i F(x_i) \varphi_{x_i}$ converges in the weak*-topology
to an element in $\mathsf{Co}(Y)$ and $U_\Phi F=V\big(\sum_{i\in I} c_i F(x_i)\varphi_{x_i} \big)$. As a consequence $U_\Phi F\in R(Y)$ and again with Lemma~\ref{auxlem:mainsynthesis2}
\begin{align*}
\|U_\Phi F | Y \|&=\| \sum_{i\in I} c_i F(x_i) \varphi_{x_i} | \mathsf{Co}(Y) \|\\
&\le \interleave M_\mathcal{U} \interleave \| \{F(x_i)\}_{i\in I} | Y^\flat \|
\le \sigma(\mathcal{U}) \interleave M_\mathcal{U} \interleave \interleave M^*_\mathcal{U} \interleave \| F | Y\|.
\end{align*}
\hspace*{\fill} \rule{3mm}{3mm}
The operator $U_\Phi$ is self-adjoint in a certain sense.
\begin{lemma}\label{lem:Uselfadj}
Let $\mathcal{U}=\{U_i\}_{i\in I}$ be an admissible covering and assume
that the associated maximal kernels $M_{\mathcal{U}}$ and $M^*_{\mathcal{U}}$ of the analyzing frame
$\mathcal{F}$ belong to $\mathcal{A}_{m_\nu}$. Then $U_\Phi$ is a well-defined operator on
$R(L_\infty^{1/\nu})$ and $R(L_1^\nu)$ and
for every $F\in R(L_\infty^{1/\nu})$ and $G\in R(L_1^\nu)$ it holds
\begin{align}\label{eq:Uselfadj}
\langle U_\Phi F , G \rangle_{L_\infty^{1/\nu}\times L_1^\nu } = \langle F , U_\Phi G \rangle_{L_\infty^{1/\nu}\times L_1^\nu }.
\end{align}
\end{lemma}
\noindent {\bf Proof.}\,
For $F\in R(L_\infty^{1/\nu})$ we have $F(x)= \langle F, R(\cdot,x) \rangle_{L_\infty^{1/\nu}\times L_1^\nu}$ and -- by arguments in the proof of Lemma~\ref{lem:DiscrOp} for $Y=L_\infty^{1/\nu}$ --
$\{c_iF(x_i)\}_{i\in I}\in (L_\infty^{1/\nu})^\natural$.
Therefore, $\sum_{i\in I} c_i|F(x_i)||R(\cdot,x_i)|\in L_\infty^{1/\nu}$ by Lemma~\ref{auxlem:mainsynthesis1} and \eqref{auxeq:est1}.
Analogous statements hold for $G\in R(L_1^\nu)$. We conclude
\begin{align*}
\begin{aligned}
\langle U_\Phi F , G \rangle_{L_\infty^{1/\nu}\times L_1^\nu }
&= \sum_{i\in I} c_iF(x_i) \langle R(\cdot,x_i), G \rangle_{L_\infty^{1/\nu}\times L_1^\nu }
= \sum_{i\in I} c_i F(x_i) \overline{G(x_i)} \\
&= \sum_{i\in I} c_i \overline{G(x_i)} \langle F,R(\cdot,x_i) \rangle_{L_\infty^{1/\nu}\times L_1^\nu }
= \langle F , U_\Phi G \rangle_{L_\infty^{1/\nu}\times L_1^\nu },
\end{aligned}
\end{align*}
where Lebesgue's dominated convergence theorem was used.
\hspace*{\fill} \rule{3mm}{3mm}
Our next aim is to find suitable conditions on $\Phi$ and $\mathcal{U}$ such that the discretization operator $U_\Phi$ is invertible.
The possible expansion
\[
F= U_\Phi U_\Phi^{-1} F = \sum_{i\in I} c_i (U_\Phi^{-1}F)(x_i)R(\cdot,x_i)
\]
then yields an atomic decomposition for $F\in R(Y)$.
Intuitively, for the invertibility of $U_\Phi$ the functions $F\in R(Y)$ must be sufficiently ``smooth'', so that a
discrete sampling is possible without loss of information.
Since $R(Y)$ is the isomorphic image of $\mathsf{Co}(Y)$ under the voice transform, we have to ensure that the transforms
$V_\mathcal{F}f$ of elements $f\in \mathsf{Co}(Y)$ are smooth enough.
An appropriate tool for controlling this smoothness is provided by the oscillation kernels, a concept originally due to Feichtinger and Gröchenig.
We use the extended definition from \cite{balhol10}, utilizing a \emph{phase function} $\Gamma:X\times X\rightarrow\mathbb{S}^1$ where $\mathbb{S}^1=\{z\in\C : |z|=1\}$, namely
\begin{align*}
{\operatorname{osc}}_{\mathcal{U},\Gamma}(x,y):= \sup_{z\in Q_y} \left| R_{{\mathcal F}}(x,y)-\Gamma(y,z)R_{{\mathcal F}}(x,z) \right| && \text{and}
&& {\operatorname{osc}}^*_{\mathcal{U},\Gamma}(x,y):= {\operatorname{osc}}_{\mathcal{U},\Gamma}(y,x)
\end{align*}
with $x,y\in X$ and $Q_y$ as in \eqref{eqdef:kerK}. The choice $\Gamma\equiv1$ yields the kernels used in \cite{fora05,RaUl10}.
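For orientation we note that for the trivial phase function $\Gamma\equiv1$ the definition simply reads
\[
{\operatorname{osc}}_{\mathcal{U},1}(x,y)= \sup_{z\in Q_y} \left| R_{{\mathcal F}}(x,y)-R_{{\mathcal F}}(x,z) \right|,
\]
i.e.\ it measures a local modulus of continuity of $R_{{\mathcal F}}(x,\cdot)$ over the neighborhood $Q_y$. Intuitively, a nontrivial phase function can absorb oscillations of the kernel that would otherwise prevent the oscillation kernels from becoming small, which is the reason for allowing this additional freedom.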
We can now formulate a condition on $\mathcal{F}$ which ensures invertibility of $U_\Phi$, but which is weaker than
the assumptions made in \cite{fora05,RaUl10} since we allow a larger class of coverings and weights.
\begin{definition}\label{def:D(d,v,Y)}
We say a tight continuous frame $\mathcal{F}=\{\varphi_x\}_{x\in X}\subset\mathcal{H}$ possesses \emph{property
$D(\delta,\nu,Y)$} for a weight $\nu\ge 1$ and some $\delta>0$ if it has property $F(\nu,Y)$ and if
there exists an admissible covering $\mathcal{U}$ and a phase function $\Gamma:X\times X\rightarrow\mathbb{S}^1$ so that
\begin{enumerate}
\item[(i)] $|R_\mathcal{F}|,\, {\operatorname{osc}}_{\mathcal{U},\Gamma},\, {\operatorname{osc}}^*_{\mathcal{U},\Gamma}\,\in \mathcal{B}_{Y,m_\nu}$.
\item[(ii)] $\displaystyle{
\|{\operatorname{osc}}_{\mathcal{U},\Gamma}|\mathcal{B}_{Y,m_\nu} \| < \delta }$ and $\displaystyle{ \| {\operatorname{osc}}^*_{\mathcal{U},\Gamma} | \mathcal{B}_{Y,m_\nu} \| < \delta }$.
\end{enumerate}
\end{definition}
\begin{remark}\label{rem:D(d,v,Y)}
A frame $\mathcal{F}$ with property $D(\delta,\nu,Y)$
for a covering $\mathcal{U}$ and a phase function $\Gamma$ automatically possesses properties
$D(\delta,\nu,L_\infty^{1/\nu})$ and $D(\delta,\nu,L_1^\nu)$ for the same covering $\mathcal{U}$ and the same phase function $\Gamma$.
\end{remark}
\noindent {\bf Proof.}\,
Every $K\in\mathcal{A}_{m_\nu}$ operates continuously on $L_\infty^{1/\nu}$ and $L_1^\nu $
with $\| K |L_\infty^{1/\nu}\rightarrow L_\infty^{1/\nu} \| \le \| K | \mathcal{A}_{m_\nu} \| $ and
$\| K |L_1^\nu\rightarrow L_1^\nu \| \le \| K | \mathcal{A}_{m_\nu} \| $.
Moreover, for
$Y=L_\infty^{1/\nu}$ or $Y=L_1^\nu$ it holds $R(Y)\hookrightarrow L_\infty^{1/\nu}$ and the algebras
$\mathcal{B}_{Y,m_\nu}$ and $\mathcal{A}_{m_\nu}$ coincide with equal norms.
\hspace*{\fill} \rule{3mm}{3mm}
Note that for a measurable kernel function $K:X\times X\rightarrow\C$ the equality $\interleave K \interleave = {\interleave}\,|K|\,{\interleave}$ does not hold in general.
However, we have the following result.
\begin{lemma}\label{lem:keraction}
Let $K,L:X\times X\rightarrow\C$ be two measurable kernels and assume that $|K|$ acts continuously on $Y$.
Then, if $|L(x,y)|\le|K(x,y)|$ for almost all $x,y\in X$, also $L$ acts continuously on $Y$ with the estimate
$\interleave L \interleave \le {\interleave}\,|K|\,{\interleave}$.
In particular, $K$ acts continuously on $Y$ with $\interleave K \interleave \le {\interleave}\,|K|\,{\interleave}$.
\end{lemma}
Let us record an important consequence of the previous lemma.
\begin{corollary}\label{cor:D(d,v,Y)}
If the frame $\mathcal{F}$ has property $D(\delta,\nu,Y)$
the kernels $R_{{\mathcal F}}$, $|R_{{\mathcal F}}|$, ${\operatorname{osc}}_{\mathcal{U},\Gamma}$,
${\operatorname{osc}}^*_{\mathcal{U},\Gamma}$, $M_\mathcal{U}$, and $M^*_\mathcal{U}$ are continuous operators on $Y$.
\end{corollary}
\noindent {\bf Proof.}\,
For all $x,y\in X$ we have $|R_{{\mathcal F}}(x,y)|\le M_\mathcal{U}(x,y)$ as well as the estimates
\begin{align*}
M_\mathcal{U}(x,y) \le {\operatorname{osc}}_{\mathcal{U},\Gamma}(x,y) + |R_{{\mathcal F}}(x,y)| && \text{and} &&
{\operatorname{osc}}_{\mathcal{U},\Gamma}(x,y)\le M_\mathcal{U}(x,y) + |R_{{\mathcal F}}(x,y)|.
\end{align*}
The corresponding estimates for the involuted kernels also hold true. Hence Lemma~\ref{lem:keraction} yields the result.
\hspace*{\fill} \rule{3mm}{3mm}
The following lemma shows that $U_\Phi F$ approximates $F\in R(Y)$ if the analyzing frame possesses property
$D(\delta,\nu,Y)$ for a suitably small $\delta>0$.
It corresponds to \cite[Thm.~5.13]{fora05}
and the proof is still valid in our setting -- with the triangle inequality replaced by
the corresponding quasi-triangle inequality.
\begin{lemma}\label{auxlem:Uinvert}
Suppose that the analyzing frame $\mathcal{F}$ possesses property $D(\delta,\nu,Y)$ for some $\delta>0$ with
associated covering $\mathcal{U}=\{U_i\}_{i\in I}$ and phase function $\Gamma$.
Then the discretization operator $U_\Phi$ for some $\mathcal{U}$-PU $\Phi$
is a well-defined bounded operator $U_\Phi:R(Y)\rightarrow R(Y)$ and it holds
\begin{align}\label{eq:Uinvert}
\| Id-U_{\Phi} ~|~ R(Y) \rightarrow R(Y)\| \le \delta (\interleave R \interleave + \interleave M^*_\mathcal{U} \interleave )C_Y .
\end{align}
\end{lemma}
\noindent {\bf Proof.}\,
For $F\in R(Y)$ there is $f\in\mathsf{Co}(Y)$ with $F=Vf$. By adapting the proof of Lemma~\ref{auxlem:mainanalysis},
it can be shown that $\widetilde{H}:=\sum_{i\in I} \sup_{z\in U_i} |Vf(z)| \Phi_i \in Y$ with $\|\widetilde{H}|Y\|\le \interleave M^*_\mathcal{U} \interleave \| f | \mathsf{Co}(\mathcal{F},Y)\|$.
The intersection number $\sigma(\mathcal{U})$ does not come into play here, since the inequality~\eqref{ineq:aux} can be improved when using $\Phi_i$ instead of $\chi_{U_i}$.
A solidity argument yields
$H:=\sum_{i\in I} |F(x_i)|\Phi_i \in Y$ and also $\sum_{i\in I} F(x_i)\overline{\Gamma(\cdot,x_i)}\Phi_i \in Y$ with respective quasi-norms dominated by $\|\widetilde{H}|Y\|$.
Let us introduce the auxiliary operator $S_\Phi:R(Y)\rightarrow R(Y)$, given pointwise for $x\in X$ by
\[
S_\Phi F(x):=R\bigg( \sum_{i\in I} F(x_i)\overline{\Gamma(\cdot,x_i)}\Phi_i \bigg)(x).
\]
Since $F=R(F)$ we can estimate
\[
\|F- S_\Phi F | Y \| = \Big\| R \Big( F - \sum_{i\in I} F(x_i)\overline{\Gamma(\cdot,x_i)}\Phi_i \Big) \Big| Y \Big\|
\le \interleave R \interleave \Big\| F - \sum_{i\in I}F(x_i)\overline{\Gamma(\cdot,x_i)}\Phi_i \Big| Y \Big\|.
\]
Since $F(x)=R(F)(x)$ holds even pointwise, we further obtain for every $x\in X$
\begin{gather*}
\Big| F(x)- \sum_{i\in I} F(x_i)\overline{\Gamma(x,x_i)}\Phi_i(x) \Big| = \Big| \sum_{i\in I} \big(R(F)(x)-\overline{\Gamma(x,x_i)}R(F)(x_i)\big)\Phi_i(x) \Big| \\
\le \sum_{i\in I} \Phi_i(x)\int_X |R(y,x)-\Gamma(x,x_i)R(y,x_i)| |F(y)|\,d\mu(y)
\le {\operatorname{osc}}^*_{\mathcal{U},\Gamma}(|F|)(x).
\end{gather*}
We arrive at \hfill
$
\| F - S_\Phi F | Y \| \le \interleave R \interleave \| {\operatorname{osc}}^*_{\mathcal{U},\Gamma}(|F|) | Y \|
\le \interleave R \interleave \interleave {\operatorname{osc}}^*_{\mathcal{U},\Gamma} \interleave \| F|Y \|
\le \delta \interleave R \interleave \| F|Y \|.
$
\hfill
\vspace*{1.5ex}
Let us now estimate the difference of $U_\Phi$ and $S_\Phi$. First we see that for $x\in X$
\begin{align*}
S_\Phi F(x)
=\int_X R(x,y) \sum_{i\in I} F(x_i)\overline{\Gamma(y,x_i)}\Phi_i(y) \,d\mu(y)
=\sum_{i\in I} \int_X R(x,y)F(x_i)\overline{\Gamma(y,x_i)}\Phi_i(y) \,d\mu(y).
\end{align*}
Here we used Lebesgue's dominated convergence theorem, which we use again to obtain
\begin{gather*}
\left| U_\Phi F(x)- S_\Phi F(x) \right| = \Big| \sum_{i\in I} \int_X \Phi_i(y) F(x_i) (R(x,x_i)-\overline{\Gamma(y,x_i)}R(x,y)) \,d\mu(y) \Big| \\
\le \sum_{i\in I} \int_X |F(x_i)|\Phi_i(y){\operatorname{osc}}_{\mathcal{U},\Gamma}(x,y) \,d\mu(y)
= \int_X \sum_{i\in I} |F(x_i)|\Phi_i(y){\operatorname{osc}}_{\mathcal{U},\Gamma}(x,y) \,d\mu(y) = {\operatorname{osc}}_{\mathcal{U},\Gamma}(H)(x),
\end{gather*}
where $H=\sum_{i\in I} |F(x_i)|\Phi_i$ as above. We conclude
\begin{align*}
\left\| U_\Phi F- S_\Phi F | Y \right\| \le \left\| {\operatorname{osc}}_{\mathcal{U},\Gamma}(H) | Y \right\|
\le \interleave {\operatorname{osc}}_{\mathcal{U},\Gamma} \interleave \|H |Y\| \le \delta \interleave M^*_\mathcal{U} \interleave \| F | Y \|.
\end{align*}
Hence, altogether we have proved
\[
\| F -U_\Phi F | Y \|\le
C_Y ( \| F - S_\Phi F|Y\| + \|S_\Phi F-U_\Phi F | Y \|) \le \delta C_Y \| F | Y \| ( \interleave M^*_\mathcal{U} \interleave + \interleave R \interleave ) .
\]
\hspace*{\fill} \rule{3mm}{3mm}
If the right-hand side of \eqref{eq:Uinvert} is less than one, $U_\Phi:R(Y)\rightarrow R(Y)$ is boundedly invertible with
the Neumann expansion $U_\Phi^{-1}=\sum_{n=0}^\infty (Id-U_\Phi)^n$, which is still valid in the quasi-Banach setting.
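Let us briefly sketch why the Neumann expansion is still available in the quasi-Banach setting. Write $A:=Id-U_\Phi$ and $\theta:=\| A ~|~ R(Y)\rightarrow R(Y)\|<1$. By the Aoki-Rolewicz theorem there exist an exponent $0<\rho\le1$ and an equivalent $\rho$-norm $\|\cdot\|_\rho$ on $R(Y)$ satisfying $\|F+G\|_\rho^\rho\le\|F\|_\rho^\rho+\|G\|_\rho^\rho$, whence
\[
\Big\|\sum_{n=N}^{M} A^nF\Big\|_\rho^\rho \le \sum_{n=N}^{M}\|A^nF\|_\rho^\rho \lesssim \sum_{n=N}^{M}\theta^{n\rho}\,\|F|Y\|^\rho \longrightarrow 0 \qquad (M\ge N\rightarrow\infty).
\]
Hence the partial sums of $\sum_{n=0}^\infty A^nF$ form a Cauchy sequence in $R(Y)$, and the limit provides the inverse of $U_\Phi$.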
Finally, we are able to prove a cornerstone of the discretization theory, which generalizes \cite[Thm.~5.7]{fora05} and \cite[Thm.~3.11]{RaUl10}.
Note that the characterization via the sequence
spaces is a new result even in the Banach case and that we can drop many technical restrictions.
\begin{Theorem}\label{thm:atomicdec}
Let $Y$ be a rich solid QBF-space with quasi-norm constant $C_Y$
and suppose that the analyzing frame $\mathcal{F}=\{\varphi_x\}_{x\in X}$
possesses property $D(\delta,\nu,Y)$ for the covering $\mathcal{U}=\{U_i\}_{i\in I}$
and a small enough $\delta>0$ such that
\begin{align}\label{eq:dcond}
\delta\big( (1+C_Y)\big\| |R_{\mathcal{F}}| \big| \mathcal{B}_{Y,m_\nu} \big\| + \delta C_Y \big)C_Y \le 1.
\end{align}
Choosing arbitrary points $x_i\in U_i$, the sampled frame
$\mathcal{F}_d:=\{\varphi_i\}_{i\in I}:=\{\varphi_{x_i}\}_{i\in I}$
then possesses a ``dual family''
$\widehat{\mathcal{F}_d}=\{\psi_i\}_{i\in I}\subset \mathcal{H}_1^\nu\cap \mathsf{Co}(Y)$
such that the following holds true:
\begin{enumerate}
\item[(i)] (Analysis) An element $f\in(\mathcal{H}_1^\nu)^\urcorner$ belongs to $\mathsf{Co}(Y)$ if and only if
$\{ \langle f,\varphi_i \rangle \}_{i\in I}\in Y^\flat(\mathcal{U})$
\textup{(}or $\{ \langle f,\psi_i \rangle \}_{i\in I}\in Y^\natural(\mathcal{U})$\textup{)}
and we have the quasi-norm equivalences
\begin{align*}
\| f | \mathsf{Co}(Y) \| \asymp \| \{ \langle f,\varphi_{i} \rangle \}_{i\in I} | Y^\flat(\mathcal{U}) \|
\quad\text{ and }\quad
\| f | \mathsf{Co}(Y) \| \asymp \| \{ \langle f,\psi_i \rangle \}_{i\in I} | Y^\natural(\mathcal{U}) \|.
\end{align*}
\item[(ii)] (Synthesis) For every sequence $\{\lambda_i\}_{i\in I}\in Y^\natural(\mathcal{U})$
it holds $f=\sum_{i\in I} \lambda_i\varphi_i \in \mathsf{Co}(Y)$ with $\| f | \mathsf{Co}(Y) \| \lesssim \| \{\lambda_i\}_{i\in I} |Y^\natural(\mathcal{U}) \|$.
In general, the convergence of the sum is in the weak*-topology induced by $(\mathcal{H}_1^\nu)^\urcorner$. It is unconditional in the \mbox{quasi-}norm of $\mathsf{Co}(Y)$,
if the finite sequences are dense in $Y^\natural$.
Similarly, $f=\sum_{i\in I} \lambda_i\psi_i \in \mathsf{Co}(Y)$ with $\| f | \mathsf{Co}(Y) \| \lesssim \| \{\lambda_i\}_{i\in I} |Y^\flat(\mathcal{U}) \|$ in case $\{\lambda_i\}_{i\in I}\in Y^\flat(\mathcal{U})$.
\item[(iii)] (Reconstruction) For all $f\in \mathsf{Co}(Y)$ we have
$f=\sum_{i\in I} \langle f,\psi_i \rangle\varphi_i$
and $f=\sum_{i\in I} \langle f,\varphi_i \rangle \psi_i$.
\end{enumerate}
\end{Theorem}
\noindent {\bf Proof.}\,
According to Remark~\ref{rem:D(d,v,Y)} the frame $\mathcal{F}$ has properties
$D(\delta,\nu,L_1^\nu)$ and $D(\delta,\nu,L_\infty^{1/\nu})$ with respect to the covering $\mathcal{U}$,
and by Lemma~\ref{lem:co_examples} it holds
$(\mathcal{H}_1^\nu)^\urcorner \asymp \mathsf{Co}(L_\infty^{1/\nu})$ and $\mathcal{H}_1^\nu= \mathsf{Co}(L_1^\nu)$.
In view of Theorem~\ref{thm:co_main} the voice transform
$V:(\mathcal{H}_1^\nu)^\urcorner\rightarrow R(L_\infty^{1/\nu})$ is thus
a boundedly invertible operator with isometric restrictions $V:\mathsf{Co}(Y)\rightarrow R(Y)$ and $V:\mathcal{H}_1^\nu\rightarrow R(L_1^\nu)$.
Let us fix a $\mathcal{U}$-PU $\Phi=\{\Phi_i\}_{i\in I}$ and put $c_i:=\int_X \Phi_i(y)\,d\mu(y)$.
According to Lemma~\ref{lem:Uselfadj} the corresponding discretization operator $U_\Phi$ is well-defined and bounded on $R(L_\infty^{1/\nu})$.
Condition~\eqref{eq:dcond} on $\delta$ further implies that $U_\Phi:R(L_\infty^{1/\nu})\rightarrow R(L_\infty^{1/\nu})$ is boundedly invertible
as a consequence of Lemma~\ref{auxlem:Uinvert}.
Indeed, using the estimates $\interleave M^*_{\mathcal{U}} \interleave \le C_Y (\interleave |R_{{\mathcal F}}| \interleave + \interleave {\operatorname{osc}}^*_{\mathcal{U},\Gamma} \interleave)$ and $\interleave R_{{\mathcal F}} \interleave \le \interleave |R_{{\mathcal F}}| \interleave$ together with the assumption $\interleave {\operatorname{osc}}^*_{\mathcal{U},\Gamma} \interleave<\delta$ we can deduce
\begin{align*}
\delta(\interleave R_{{\mathcal F}} \interleave + \interleave M^*_{\mathcal{U}} \interleave ) C_Y
&\le \delta( (1+C_Y) \interleave |R_{{\mathcal F}}| \interleave + C_Y\interleave {\operatorname{osc}}^*_{\mathcal{U},\Gamma} \interleave) C_Y\\
&< \delta( (1+C_Y) \| |R_{{\mathcal F}}| | \mathcal{B}_{Y,m_\nu} \| + C_Y \delta )C_Y \le 1.
\end{align*}
Analogously, it follows that
$U_\Phi:R(L_1^\nu)\rightarrow R(L_1^\nu)$ and $U_\Phi:R(Y)\rightarrow R(Y)$ are
boundedly invertible.
For the proof it is useful to note that the operator
$
T:=V^{-1}U_\Phi^{-1}V ~:~ (\mathcal{H}_1^\nu)^\urcorner \rightarrow (\mathcal{H}_1^\nu)^\urcorner
$
is a boundedly invertible isomorphism, whose restrictions $T:\mathcal{H}_1^\nu\rightarrow \mathcal{H}_1^\nu$ and $T:\mathsf{Co}(Y)\rightarrow \mathsf{Co}(Y)$ are also boundedly invertible.
Moreover, $T$ is ``self-adjoint''. For this observe that relation \eqref{eq:Uselfadj} also
holds for the inverse $U_\Phi^{-1}=\sum_{n=0}^\infty (Id-U_\Phi)^n$.
Consequently, for $f\in(\mathcal{H}_1^\nu)^\urcorner$ and $\zeta\in\mathcal{H}_1^\nu$
\begin{align*}
\langle f,T\zeta \rangle = \langle f,V^{-1}U_\Phi^{-1}V\zeta \rangle
=\langle Vf , U_\Phi^{-1}V\zeta \rangle_{L_\infty^{1/\nu}\times L_1^\nu}
=\langle U_\Phi^{-1}Vf , V\zeta \rangle_{L_\infty^{1/\nu}\times L_1^\nu}
=\langle Tf , \zeta \rangle.
\end{align*}
It follows further that $T$ is sequentially continuous with respect to the weak*-topology of $(\mathcal{H}_1^\nu)^\urcorner$.
To see this let $f_n \rightarrow f$ in the weak*-topology. Then
$
\langle Tf_n , \zeta \rangle = \langle f_n , T\zeta \rangle \rightarrow
\langle f , T\zeta \rangle = \langle Tf , \zeta \rangle
$
for every $\zeta\in\mathcal{H}_1^\nu$.
By Corollary~\ref{cor:familyG}, Corollary~\ref{cor:D(d,v,Y)} and Lemma~\ref{lem:co_examples} the atoms $\varphi_{x_i}$ lie in $\mathcal{H}_1^\nu\cap \mathsf{Co}(Y)$.
Since $T$ respects these subspaces we can define
\[
\psi_i:= c_iT\varphi_i \:\in \mathcal{H}_1^\nu\cap \mathsf{Co}(Y)
\]
and claim that $\widehat{\mathcal{F}_d}=\{\psi_i\}_{i\in I}$ is the desired ``dual'' of $\mathcal{F}_d=\{\varphi_i\}_{i\in I}$.
After these preliminary considerations we now turn to the proof of the assertions.
\noindent
\textit{Step 1.}\,
If $f\in \mathsf{Co}(Y)$ then $\{ \langle f,\varphi_i \rangle \}_{i\in I}= \{ Vf(x_i) \}_{i\in I}\in Y^\flat$
and $\| \{ \langle f,\varphi_i \rangle \}_{i\in I} | Y^\flat \| \lesssim \| f | \mathsf{Co}(Y)\|$ by Corollary~\ref{auxcor:trafosampling}.
Furthermore, it holds $Tf\in \mathsf{Co}(Y)$ and Corollary~\ref{auxcor:trafosampling} yields
$
\{ \langle f,\psi_i \rangle \}_{i\in I}= \{ c_i \langle Tf, \varphi_i \rangle \}_{i\in I}\in Y^\natural
$
with the estimate
$\|\{ \langle f,\psi_i \rangle \}_{i\in I} | Y^\natural \| \le
\| \{\langle Tf, \varphi_i \rangle \}_{i\in I} | Y^\flat \| \lesssim \| Tf | \mathsf{Co}(Y)\|
\lesssim \|f | \mathsf{Co}(Y)\|$.
\noindent
\textit{Step 2.}\,
If $\{ \lambda_i \}_{i\in I}\in Y^\natural$ then by Lemma~\ref{auxlem:mainsynthesis2} the sum $\sum_{i\in I} \lambda_i\varphi_i$ converges
in the weak*-topology to an element in $\mathsf{Co}(Y)$ with estimate
$\| \sum_{i\in I} \lambda_i\varphi_i | \mathsf{Co}(Y) \|\lesssim \| \{\lambda_i\}_{i\in I} | Y^\natural \|$.
If the finite sequences are dense in $Y^\flat$ (or equivalently $Y^\natural$)
the convergence is even in the quasi-norm of $\mathsf{Co}(Y)$.
A similar statement holds for the dual family $\{\psi_i\}_{i\in I}$.
Indeed, for $\{ \lambda_i \}_{i\in I}\in Y^\flat$ we have $\{ c_i\lambda_i \}_{i\in I}\in Y^\natural$
and hence $\sum_{i\in I} c_i\lambda_i\varphi_i$ converges
in the weak*-topology to an element in $\mathsf{Co}(Y)$. Since $T$ is sequentially continuous it follows that
\begin{align*}
\sum_{i\in I} \lambda_i\psi_i = \sum_{i\in I} c_i\lambda_iT\varphi_i = T\left( \sum_{i\in I} c_i\lambda_i\varphi_i \right)
\in \mathsf{Co}(Y)
\end{align*}
with weak*-convergence in the sums. The operator $T$ is also continuous on $\mathsf{Co}(Y)$, proving the quasi-norm convergence if the finite sequences are dense. Moreover, we have the estimate
\begin{align*}
\Big\| \sum_{i\in I} \lambda_i\psi_i \Big| \mathsf{Co}(Y) \Big\| \lesssim \Big\| \sum_{i\in I} c_i\lambda_i\varphi_i \Big| \mathsf{Co}(Y) \Big\|
\lesssim \| \{c_i\lambda_i\}_{i\in I} | Y^\natural \| \le \| \{\lambda_i\}_{i\in I} | Y^\flat \|.
\end{align*}
\noindent
\textit{Step 3.}\,
In this step we prove the expansions in (iii).
For $f\in(\mathcal{H}_1^\nu)^\urcorner$ we have the identity
\begin{align*}
Vf
=U_\Phi \left( U_\Phi^{-1}Vf \right)
= \sum_{i\in I} c_i \left(U_\Phi^{-1} Vf\right)(x_i) R(\cdot,x_i)
=\sum_{i\in I} \langle f,\psi_i \rangle V\varphi_i
\end{align*}
with pointwise absolute convergence a.e.\ in the sums. Since
$(\mathcal{H}_1^\nu)^\urcorner\asymp \mathsf{Co}(L_\infty^{1/\nu})$ the coefficients
$\{ \langle f,\psi_i \rangle \}_{i\in I}$ belong to $(L_\infty^{1/\nu})^\natural$ according to Step 1.
Hence, by Lemma~\ref{auxlem:mainsynthesis2} it holds $Vf=V(\sum_{i\in I} \langle f,\psi_i \rangle \varphi_i)$ with weak*-convergence of the sum.
The injectivity of $V$ finally yields
\begin{align} \label{proofeq:Atomic3}
f=\sum_{i\in I} \langle f,\psi_i \rangle \varphi_i.
\end{align}
Using the sequential continuity of $T$ with respect to the weak*-topology we can further deduce
\begin{align}\label{proofeq:Atomic4}
f=TT^{-1}f=\sum_{i\in I} \langle T^{-1}f,\psi_i \rangle T\varphi_i = \sum_{i\in I} \langle T^{-1}f,c_iT\varphi_i \rangle T\varphi_i
= \sum_{i\in I} \langle f, \varphi_i \rangle \psi_i.
\end{align}
In particular, these expansions are valid for $f\in \mathsf{Co}(Y)$ with coefficients $\{ \langle f,\psi_i \rangle \}_{i\in I}\in Y^\natural$
and $\{ \langle f,\varphi_i \rangle \}_{i\in I}\in Y^\flat$ by Step 1.
\noindent
\textit{Step 4.}\,
If $f\in(\mathcal{H}_1^\nu)^\urcorner$ and either $\{ \langle f,\varphi_i \rangle \}_{i\in I}\in Y^\flat$ or
$\{ \langle f,\psi_i \rangle \}_{i\in I} \in Y^\natural $ we can conclude from the expansions \eqref{proofeq:Atomic3}
and \eqref{proofeq:Atomic4}
together with Step 2 that $f\in \mathsf{Co}(Y)$. Moreover,
$ \| \{\langle f,\psi_i \rangle \}_{i\in I} | Y^\natural \|$ and
$ \| \{\langle f,\varphi_i \rangle \}_{i\in I} | Y^\flat \|$ are equivalent quasi-norms on $\mathsf{Co}(Y)$ because using Steps 1 and 2
\begin{flalign*}
&&\| f | \mathsf{Co}(Y) \| = \Big\| \sum_{i\in I} \langle f,\psi_i \rangle \varphi_i \Big| \mathsf{Co}(Y) \Big\|
\lesssim \| \{\langle f,\psi_i \rangle \}_{i\in I} | Y^\natural \| \lesssim \| f | \mathsf{Co}(Y) \| &&\\
\text{and} && \| f | \mathsf{Co}(Y) \| = \Big\| \sum_{i\in I} \langle f,\varphi_i \rangle \psi_i \Big| \mathsf{Co}(Y) \Big\|
\lesssim \| \{\langle f,\varphi_i \rangle \}_{i\in I} | Y^\flat \| \lesssim \|
f | \mathsf{Co}(Y) \|. &&
\end{flalign*}
\hspace*{\fill} \rule{3mm}{3mm}
\begin{remark}
Properties (i)-(iii) in particular show that the discrete families $\mathcal{F}_d$ and $\widehat{\mathcal{F}_d}$ both constitute atomic decompositions for $\mathsf{Co}(Y)$, as well as quasi-Banach frames,
compare e.g.\ \cite{RaUl10,ra05-3}.
\end{remark}
\subsubsection*{Frame expansion}
Now we come to another main discretization result, which allows us
to discretize the coorbit space $\mathsf{Co}(Y)=\mathsf{Co}({\mathcal F},Y)$ by samples of a frame ${\mathcal G} = \{\psi_x\}_{x\in X}$
different from the analyzing frame ${\mathcal F}$.
It is a generalization of \cite[Thm.~3.14]{RaUl10},
whose original proof carries over to
the quasi-Banach setting based on
Corollary~\ref{auxcor:trafosampling} and Lemma~\ref{auxlem:mainsynthesis2}. In contrast to Theorem~\ref{thm:atomicdec}, here we require the
additional property of the covering $\mathcal{U}=\{U_i\}_{i\in I}$ that for some constant $D>0$
\begin{align}\label{eq:covbounbel}
\mu(U_i)\ge D \quad\text{for all }i\in I.
\end{align}
\begin{Theorem}\label{thm:frameexp}
Let $Y$ be a rich solid QBF-space on $X$
and assume that the analyzing frame
$\mathcal{F}=\{\varphi_x\}_{x\in X}$ has property $F(\nu,Y)$.
For $r\in\{1,\ldots,n\}$ let $\mathcal{G}_r=\{\psi_x^r\}_{x\in X}$ and $\tilde{\mathcal{G}}_r=\{\tilde{\psi}_x^r\}_{x\in X}$ be families in $\mathcal{H}$, and suppose
that for some admissible covering $\mathcal{U}=\{U_i\}_{i\in I}$ with the additional property \eqref{eq:covbounbel} the kernels
$K_r:=K_\mathcal{U}[\mathcal{G}_r,\mathcal{F}]$ and
$\tilde{K}^*_r:=K_\mathcal{U}^*[\tilde{\mathcal{G}}_r,\mathcal{F}]$ belong to $\mathcal{B}_{Y,{m_\nu}}$.
Then, if every $f\in\mathcal{H}$ has an expansion
\begin{align}\label{eq:frexpansion}
f=\sum_{r=1}^n \sum_{i\in I} \langle f , \tilde{\psi}^r_{x_i} \rangle \psi^r_{x_i}
\end{align}
with fixed points $x_i\in U_i$,
this expansion extends to all $f\in \mathsf{Co}(Y)=\mathsf{Co}(\mathcal{F},Y)$.
Furthermore, $f\in (\mathcal{H}_1^\nu)^\urcorner$ belongs to $\mathsf{Co}(Y)$ if and only if $\{\langle f,\tilde{\psi}^r_{x_i} \rangle\}_{i\in I}\in Y^\natural(\mathcal{U})$ for each $r\in\{1,\ldots,n\}$, and in this case we have
$
\| f | \mathsf{Co}(Y) \| \asymp \sum_{r=1}^n \left\| \{ \langle f,\tilde{\psi}_{x_i}^r \rangle \}_{i\in I} | Y^\natural(\mathcal{U}) \right\| .
$
The convergence in \eqref{eq:frexpansion} is in the quasi-norm of $\mathsf{Co}(Y)$ if the finite sequences are dense in $Y^\natural(\mathcal{U})$.
In general, we have weak*-convergence induced by $(\mathcal{H}_1^\nu)^\urcorner$.
\end{Theorem}
Observe that the technical assumption $Y^\natural\hookrightarrow(L_\infty^{1/\nu})^\natural$ made in \cite[Thm.~3.14]{RaUl10} is not necessary.
In view of Lemma~\ref{auxlem:crossG} it is further not necessary to require $\mathcal{G}_r, \tilde{\mathcal{G}}_r\subset \mathcal{H}_1^\nu$. In fact,
$K_r,\tilde{K}^*_r \in\mathcal{A}_{m_\nu}$ is a stronger condition than $G[\mathcal{G}_r,\mathcal{F}], G^*[\tilde{\mathcal{G}}_r,\mathcal{F}]\in\mathcal{A}_{m_\nu}$
and implies $\mathcal{G}_r, \tilde{\mathcal{G}}_r\subset \mathcal{H}_1^\nu$.
\section{Variable exponent spaces}
\label{sec:varint}
In the remainder we give a demonstration of the theory.
As an example we show that variable exponent spaces, which have attracted considerable attention in recent years, fit into
the framework of coorbit theory and can be handled conveniently within it.
\subsection{Spaces of variable integrability}
The spaces of variable integrability $L_{\p}(\R)$ were first introduced by Orlicz~\cite{Orlicz31} in 1931 as a generalization of the Lebesgue spaces $L_p({{\re}^d})$.
Before defining them let us introduce some standard notation from \cite{KovacikRakosnik91}. For a measurable function $p:{{\re}^d}\to(0,\infty]$ and a set $\Omega\subset{{\re}^d}$
we define the quantities $p^-_\Omega=\essinf{x\in\Omega}p(x)$ and $p^+_\Omega=\esssup{x\in\Omega}\ p(x)$. Furthermore, we abbreviate $p^-=p^-_{{\re}^d}$ and $p^+=p^+_{{\re}^d}$ and
say that ${p(\cdot)}$ belongs to the class of admissible exponents $\P$ if $p^->0$.
Having an admissible exponent $p\in\P$ we define the set ${{\re}^d}_{\!\!\!\infty}=\{x\in{{\re}^d}:p(x)=\infty\}$ and for every measurable function $f:{{\re}^d}\to\C$
the modular
\begin{align*}
\varrho_{p(\cdot)}(f)=\int_{{{\re}^d}\setminus{{\re}^d}_{\!\!\!\infty}}|f(x)|^{p(x)}dx+\esssup{x\in{{\re}^d}_{\!\!\!\infty}}|f(x)|\text{ .}
\end{align*}
\begin{definition}\label{Lppunkt} The space $L_{\p}(\R)$ is the collection of all
functions $f$ such that there exists a $\lambda>0$ with $\varrho_{p(\cdot)}(\lambda
f)<\infty$. It
is equipped with the Luxemburg quasi-norm
\begin{align*}
\norm{f}{L_{\p}(\R)}=\inf\left\{\lambda>0:\varrho_{p(\cdot)}\left(\frac{f}{\lambda}\right)<1\right\}\text{ .}
\end{align*}
\end{definition}
The spaces $L_{\p}(\R)$ share many properties with the constant exponent spaces $L_p({{\re}^d})$. Let us mention a few; the proofs can be found in \cite{KovacikRakosnik91} and in \cite{DieningHastoBuch2011}:
\begin{itemize}
\item If $p(x)=p$ then $L_{\p}(\R)=L_p({{\re}^d})$,
\item if $|f(x)|\geq|g(x)|$ for a.e.\ $x\in{{\re}^d}$ then $\varrho_{p(\cdot)}(f)\geq\varrho_{p(\cdot)}(g)$ and $\norm{f}{L_{\p}(\R)}\geq\norm{g}{L_{\p}(\R)}$,
\item $\varrho_{p(\cdot)}(f)=0$ if and only if $f=0$,
\item for $p(\cdot)\geq1$ H\"older's inequality holds \cite[Theorem 2.1]{KovacikRakosnik91}
\begin{align*}
\int_{{\re}^d}|f(x)g(x)|dx\leq 4\norm{f}{L_{\p}(\R)}\norm{g}{L_{p'(\cdot)}({{\re}^d})}\text{ , }
\end{align*}
where $1/{p(\cdot)}+1/p'(\cdot)=1$ pointwise.
\end{itemize}
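As a quick consistency check of the first item above, assume that $p(x)=p$ is constant and finite. Then $\varrho_{p(\cdot)}(f/\lambda)=\lambda^{-p}\int_{{{\re}^d}}|f(x)|^p\,dx$, and therefore
\[
\norm{f}{L_{\p}(\R)}=\inf\Big\{\lambda>0~:~\lambda^{-p}\norm{f}{L_p({{\re}^d})}^p<1\Big\}=\norm{f}{L_p({{\re}^d})},
\]
so the Luxemburg quasi-norm indeed reduces to the classical $L_p$ quasi-norm.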
There are also some properties of the usual constant exponent spaces which the $L_{\p}(\R)$ spaces do not share. For example, in general the $L_{\p}(\R)$ spaces are not translation invariant, i.e.\
$f\in L_{\p}(\R)$ does not automatically imply that $f(\cdot+h)$ belongs to $L_{\p}(\R)$ for $h\in{{\re}^d}$. As a consequence, Young's convolution inequality also fails in general (see again \cite{KovacikRakosnik91} for details).
The breakthrough for $L_{\p}(\R)$ spaces was made by Diening in \cite{Diening2004}, who showed that the Hardy-Littlewood maximal operator $\mathcal{M}$ is bounded on $L_{\p}(\R)$ under certain regularity conditions on ${p(\cdot)}$.
His result has since been generalized in many directions (see \cite{DieningHarjulehto2009,Nekvinda2004,CruzUribe2003}) and it turned out that logarithmic H\"older continuity classes are well adapted to the boundedness of the maximal operator.
\begin{definition}
Let $g\in C({{\re}^d})$. We say that $g$ is \emph{locally $\log$-H\"older continuous}, abbreviated $g\in C^{\log}_{\rm loc}({{\re}^d})$, if there exists $c_{\log}>0$ such that
\begin{align*}
|g(x)-g(y)|\leq\frac{c_{\log}}{\log(\textrm{e}+{1}/{|x-y|})} \qquad\text{for all }x,y\in{{\re}^d}.
\end{align*}
We say that $g$ is \emph{globally $\log$-H\"older continuous}, abbreviated $g\in C^{\log}({{\re}^d})$, if $g$ is locally $\log$-H\"older continuous and there exists $g_\infty\in\mathbb{R}$ such that
\begin{align*}
|g(x)-g_\infty|\leq\frac{c_{\log}}{\log(\textrm{e}+|x|)} \qquad\text{for all }x\in{{\re}^d}.
\end{align*}
\end{definition}
With the help of the above logarithmic H\"older continuity the following result holds.
\begin{lemma}[{\cite[Thm.~3.6]{DieningHarjulehto2009}}]\label{lem:HLM}
Let $p\in\P$ with $1<p^-\leq p^+\leq\infty$. If $\frac{1}{p}\in C^{\log}({{\re}^d})$, then $\mathcal{M}$ is bounded on $L_{\p}(\R)$, i.e., there exists $c>0$ such that for all $f\in L_{\p}(\R)$
\begin{align*}
\norm{\mathcal{M} f}{L_{\p}(\R)}\leq c\norm{f}{L_{\p}(\R)}\text{ .}
\end{align*}
\end{lemma}
Since logarithmic H\"older continuous exponents play an essential role we introduce the class $\mathcal{P}^{\log}(\R)$ of admissible exponents ${p(\cdot)}$ with $1/p\in C^{\log}({{\re}^d})$ and $0<p^-\leq p^+\leq\infty$.
As a consequence of Lemma~\ref{lem:HLM}, for exponents $p\in\mathcal{P}^{\log}(\R)$ the maximal operator $\mathcal{M}$ is bounded on $L_{\frac{p(\cdot)} t}({{\re}^d})$ for every $0<t<p^-$.
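A simple example, recorded here only for illustration, is the exponent
\[
p(x)=2+\frac{1}{\log(\textrm{e}+|x|)}\,,\qquad x\in{{\re}^d},
\]
for which $p^-=2$ and $p^+=3$. The function $1/{p(\cdot)}$ is bounded and Lipschitz continuous, hence locally $\log$-H\"older continuous, and $|1/p(x)-1/2|\le \frac{1}{4\log(\textrm{e}+|x|)}$, so that indeed $p\in\mathcal{P}^{\log}(\R)$.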
\subsection{2-microlocal function spaces with variable integrability}
We proceed with spaces of Besov-Triebel-Lizorkin type featuring variable integrability and smoothness.
Spaces of the form $F^{s(\cdot)}_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$ and $B^{s(\cdot)}_{{p(\cdot)},\qconst}({{\re}^d})$ have been studied in \cite{DieningHastoRoudenko2009,AlmeidaHasto2010},
where $s:{{\re}^d}\to\mathbb{R}$ with $s\in L_\infty({{\re}^d})\cap C^{\log}_{\rm loc}({{\re}^d})$.
A further generalization was pursued in \cite{Ke09,Ke11} replacing the smoothness parameter $s(\cdot)$ by a more general weight function $w$.
We make some reasonable restrictions on $w$ and use the class $\mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ of admissible weights introduced in \cite{Ke09}.
\begin{definition}
For real numbers $\alpha_3\geq0$ and $\alpha_1\leq\alpha_2$ a weight function $w:X \to (0,\infty)$ on the index set $X={{\re}^d}\times[(0,1)\cup\{\infty\}]$ belongs
to the class $\mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$
if and only if for $\mathbf{x}=(x,t) \in X$,
\begin{description}
\item(W1)
$
\left\{\begin{array}{lcl}
\Big(\frac{s}{t}\Big)^{\alpha_1}w(x,s) \leq w(x,t) \leq \Big(\frac{s}{t}\Big)^{\alpha_2}w(x,s)&,& s \geq t\\\\
t^{-\alpha_1}w(x,\infty) \leq w(x,t) \leq t^{-\alpha_2}w(x,\infty)&,& s = \infty\,,
\end{array}\right.
$
\item(W2)
$
w(x,t) \leq w(y,t)\left\{\begin{array}{rcl}
(1 + |x-y|/t)^{\alpha_3}&,& t\in (0,1)\\
(1+|x-y|)^{\alpha_3}&,& t=\infty
\end{array}\right.\, \quad\mbox{ for all }y \in {{\re}^d}.
$
\end{description}
\end{definition}
\begin{example}\label{Exampel2ml} The main examples are weights of the form
$$
w_{s,s'}(x,t) = \left\{\begin{array}{rcl}
t^{-s}\Big(1+\frac{|x-x_0|}{t}\Big)^{s'} &,& t\in (0,1)\\
(1+|x-x_0|)^{s'}&,& t=\infty
\end{array}\right.\,.
$$
where $s,s' \in \mathbb{R}$. These weights are continuous versions of 2-microlocal weights, used to define 2-microlocal function spaces of Besov-Lizorkin-Triebel type, see \cite{Ke09,Ke10,Ke11}.\\
By choosing $s'=0$ we recover the usual Besov-Lizorkin-Triebel spaces with smoothness $s\in\mathbb{R}$.
\end{example}
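Let us briefly sketch why these weights are admissible: condition (W2) follows with $\alpha_3=|s'|$ from the elementary inequality $1+|x-x_0|/t\le(1+|x-y|/t)(1+|y-x_0|/t)$, and evaluating the quotients appearing in (W1) directly one obtains (W1) with $\alpha_1=\min\{s,s+s'\}$ and $\alpha_2=\max\{s,s+s'\}$. Hence $w_{s,s'}\in\mathcal{W}^{|s'|}_{\min\{s,s+s'\},\max\{s,s+s'\}}$.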
The special weights from this example are usually called 2-microlocal weights. Furthermore, function spaces which are defined with admissible weights $w\in \mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ are usually called 2-microlocal spaces. This term was coined by Bony \cite{Bony} and Jaffard \cite{Jaffard}, who also introduced the concept of 2-microlocal analysis to study local regularity of functions.
\begin{remark}
By the conditions on admissible weights $w\in \mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ we obtain the following estimates which will be useful later on:
\begin{enumerate}
\item For $s\leq t$ we get from (W1)
\begin{align}\label{eq_W1tilde}
\left(\frac{s}{t}\right)^{\alpha_2}w(x,s)\leq w(x,t)\leq\left(\frac{s}{t}\right)^{\alpha_1}w(x,s).
\end{align}
\item For $0<c<s/t$ we have from (W1) and \eqref{eq_W1tilde}
\begin{align}\label{eq_st1}
\frac{w(x,t)}{w(x,s)}\leq\max\{1,c^{\alpha_1-\alpha_2}\}\left(\frac{s}{t}\right)^{\alpha_2}.
\end{align}
\item For $0<c<t/s$ we obtain similarly from (W1) and \eqref{eq_W1tilde}
\begin{align}\label{eq_st2}
\frac{w(x,t)}{w(x,s)}\leq\max\{1,c^{\alpha_1-\alpha_2}\}\left(\frac{s}{t}\right)^{\alpha_1}.
\end{align}
\item Consequently, we have for $0<c_1<s/t<c_2$ from \eqref{eq_st1} and \eqref{eq_st2}
\begin{align*}
w(x,t)\asymp w(x,s)\quad\text{for all $x\in{{\re}^d}$.}
\end{align*}
\item Using (W2) and the inequalities \eqref{eq_st1} and \eqref{eq_st2} we can relate $w(x,t)$ to $w(0,1/2)$ by
\begin{align*}
w(0,1/2)t^{-\alpha_1}(1+|x|)^{-\alpha_3}
\lesssim w(x,t)
\lesssim w(0,1/2)t^{-\alpha_2}(1+|x|)^{\alpha_3}.
\end{align*}
\end{enumerate}
\end{remark}
A weight $w \in \mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ gives rise to a
semi-discrete counterpart $(w_j)_{j \in \N_0}$, corresponding to
an admissible weight sequence in the sense of \cite{Ke09,Ke10,Ke11},
given by
\begin{equation}\label{eqdef:wj}
\begin{split}
w_j(x) = \left\{\begin{array}{rcl}
w(x,2^{-j})&,&j\in \N\,,\\
w(x,\infty)&,&j=0\,.
\end{array}\right.
\end{split}
\end{equation}
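For instance, for the 2-microlocal weights $w_{s,s'}$ from Example~\ref{Exampel2ml} this definition yields
\[
w_j(x)=2^{js}\big(1+2^j|x-x_0|\big)^{s'}\,,\quad j\in\N\,,\qquad\text{and}\qquad w_0(x)=(1+|x-x_0|)^{s'},
\]
which is the usual discrete 2-microlocal weight sequence.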
In \cite[Lemma~2.6]{Ke11} it was shown that it is equivalent to consider a smoothness function $s\in L_\infty({{\re}^d})\cap C^{\log}_{\rm loc}({{\re}^d})$ or an admissible weight sequence stemming from
$w\in \mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ if they are connected by $w_j(x)=2^{js(x)}$, see \eqref{eqdef:wj}. But there exist weight sequences (Example \ref{Exampel2ml} with $s'\neq0$) where it is not possible to find a smoothness function $s:{{\re}^d}\to\mathbb{R}$ such that the above relation holds.\\
Recently in \cite{Tu14} the concept of admissible weight sequences was extended to include more general weights. We will not follow this generalization of admissible weights, but we remark that by this definition we can have local Muckenhoupt weights as components in the sequence.\\
The spaces $B^w_{{p(\cdot)},\qconst}({{\re}^d})$ and $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$ are defined Fourier analytically as subspaces of the tempered distributions $\mathcal{S}'({{\re}^d})$. As usual, the Schwartz space $\mathcal{S}({{\re}^d})$ denotes
the locally convex space of rapidly decreasing infinitely differentiable functions on ${{\re}^d}$. Its topology is generated by the seminorms
\begin{align*}
\|\varphi\|_{k,l}=\sup_{x\in{{\re}^d}}(1+|x|)^k\sum_{|\beta|\leq l}|D^\beta\varphi(x)|
\end{align*}
for every $k,l\in\N_0$. Its topological dual, the space of tempered distributions on ${{\re}^d}$, is denoted by $\mathcal{S}'({{\re}^d})$. The Fourier transform and its inverse are defined on both $\mathcal{S}({{\re}^d})$ and $\mathcal{S}'({{\re}^d})$ (see Appendix A.1) and we denote them by $\hat{f}$ and $f^\vee$.
Finally, we introduce the subspace $\mathcal{S}_0({{\re}^d})$ of $\mathcal{S}({{\re}^d})$ by
\[
\mathcal{S}_0({{\re}^d}):=\left\{ f\in\mathcal{S}({{\re}^d}) ~:~ D^{\bar{\alpha}}\widehat{f}(0)=0 \text{ for every multi-index } \bar{\alpha}\in\N_0^d \right\}.
\]
The definition of $B^w_{{p(\cdot)},\qconst}({{\re}^d})$ and $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$ relies on a dyadic decomposition of unity, see also \cite[2.3.1]{Tr83}.
\begin{definition}
Let $\Pi({{\re}^d})$ be the collection of all systems $\{\varphi_j\}_{j\in
{\N}_0} \subset \mathcal{S}({{\re}^d})$ such that
\begin{description}
\item(i) there is a function $\varphi\in \mathcal{S}({{\re}^d})$ with $\varphi_j(\xi) = \varphi(2^{-j}\xi)\,,\, j\in
\N$\,,
\item(ii) ${\rm supp \, } \varphi_0 \subset \{\xi\in {{\re}^d}~:~|\xi|\leq
2\}\,,\quad
{\rm supp \, } \varphi \subset \{\xi\in {{\re}^d}~:~1/2 \leq |\xi| \leq 2\}$\,,
\item(iii) $\sum\limits_{j=0}^{\infty} \varphi_j(\xi) = 1$ for every $\xi\in {{\re}^d}$\,.
\end{description}
\end{definition}
\noindent
\begin{definition}\label{inhom} Let
$\{\varphi_j\}_{j= 0}^{\infty} \in \Pi({{\re}^d})$ and put
$\widehat{\Phi}_j = \varphi_j$ for $j\in\N_0$.
Let further $w \in \mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ with associated weight sequence
$\{w_j\}_{j\in {\N}_0}$ defined as in \eqref{eqdef:wj}.
\begin{description}
\item(i) For $p\in\P$, $\qconst\in(0,\infty]$, we define
$B^w_{{p(\cdot)},\qconst}({{\re}^d}) = \Big\{f\in \mathcal{S}'({{\re}^d}):
\|f|B^w_{{p(\cdot)},\qconst}({{\re}^d})\| <\infty\Big\} $ with
\begin{equation}\nonumber
\begin{split}
\|f|B^w_{{p(\cdot)},\qconst}({{\re}^d})\| = \Big(\sum\limits_{j=0}^{\infty}
\|w_j(\cdot)(\Phi_j \ast f)(\cdot)|L_{p(\cdot)}({{\re}^d})\|^{\qconst}\Big)^{1/\qconst}.
\end{split}
\end{equation}
\item(ii) For $p,q\in\P$ we define
$F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d}) = \Big\{f\in \mathcal{S}'({{\re}^d}):
\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\| <\infty\Big\}$ with
\begin{equation}\nonumber
\begin{split}
\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\| = \Big\|\Big(\sum\limits_{j=0}^{\infty}
|w_j(\cdot)(\Phi_j \ast f)(\cdot)|^{q(\cdot)}\Big)^{1/q(\cdot)}|L_{p(\cdot)}({{\re}^d})\Big\|.
\end{split}
\end{equation}
\end{description}
\end{definition}
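As a sanity check of the definition, note that for constant exponents $p(x)=p$, $q(x)=q$ and the weight $w=w_{s,0}$ from Example~\ref{Exampel2ml}, i.e.\ $w_j(x)=2^{js}$ for $j\in\N$ and $w_0\equiv1$, these quasi-norms reduce to the classical Fourier-analytic quasi-norms of the Besov and Triebel-Lizorkin spaces $B^s_{p,q}({{\re}^d})$ and $F^s_{p,q}({{\re}^d})$, cf.\ \cite[2.3.1]{Tr83}.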
\begin{remark}
It is also possible to consider Besov spaces $B^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$ with variable
index ${q(\cdot)}$, which were introduced and studied in \cite{AlmeidaHasto2010}. The
definition of these spaces is very technical since
they require a new modular. Surprisingly it is much harder to work with Besov
spaces with variable indices ${p(\cdot)}$ and ${q(\cdot)}$ than to work with variable
Triebel-Lizorkin spaces, in sharp contrast to the constant exponent case. For
example, Besov spaces with variable ${q(\cdot)}$ are not always normed spaces for
$\min\{{p(\cdot)},{q(\cdot)}\}\geq1$, even if ${p(\cdot)}$ is a constant (see \cite{KeVybNorm} for
details). So we restrict our study of Besov spaces to the case where the index
${q(\cdot)}$ remains a constant $\qconst$ and leave the fully variable case for further
research.
\end{remark}
Formally, the definition of $F^{w}_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$ and $B^{w}_{{p(\cdot)},\qconst}({{\re}^d})$ depends on the chosen decomposition of unity $\{\varphi_j\}_{j= 0}^{\infty} \in \Pi({{\re}^d})$.
The following characterization by local means shows that under certain regularity conditions on the indices ${p(\cdot)},{q(\cdot)}$ it is in fact independent, in the sense of equivalent quasi-norms.
To get useful further characterizations of the spaces defined above we need a
replacement for the classical Fefferman-Stein maximal inequality since it does
not hold in our case if $q(\cdot)$ is non-constant. We will use the following
convolution inequality.
\begin{lemma}[Theorem 3.2 in \cite{DieningHastoRoudenko2009}]\label{lem:FaltungsUnglg}
Let $p,q\in\mathcal{P}^{\log}(\R)$ with $1<p^-\leq p^+<\infty$ and $1<q^-\leq q^+<\infty$, then for $m>d$ there exists a constant $c>0$ such that
\begin{align*}
\norm{\norm{\left(\eta_{\nu,m}\ast f_\nu\right)_{\nu\in\N_0}}{\ell_{q(\cdot)}}}{L_{\p}(\R)}\leq c\norm{\norm{\left(f_\nu\right)_{\nu\in\N_0}}{\ell_{q(\cdot)}}}{L_{\p}(\R)},
\end{align*}
where $\eta_{\nu,m}(x)=2^{\nu d}(1+2^\nu|x|)^{-m}$.
\end{lemma}
\subsection{Continuous local means characterization}
\label{clm}
For our purpose, it is more convenient to reformulate Definition~\ref{inhom} in terms of a continuous characterization, where the discrete dilation parameter $j\in {\N}_0$ is replaced by $t>0$ and
the sums become integrals over $t$. Characterizations of this type have some history and are usually referred to as characterizations via (continuous) local means.
For further references and some historical facts we mainly refer to \cite{Tr92, BuPaTa96, Ry99a} and in particular to the recent contribution \cite{T10}, which provides a complete and self-contained reference.
The system $\{\varphi_j\}_{j\in{\N}_0} \in\Pi({{\re}^d})$ may be replaced by a more general one. The essential requirements are that the functions $\Phi_0, \Phi \in \mathcal{S}({{\re}^d})$ satisfy
the so-called Tauberian conditions
\begin{equation}\label{condphi1}
\begin{split}
|\widehat{\Phi}_0(\xi)| > 0 \quad &\mbox{ on }\quad \{|\xi| < 2\varepsilon\}\,,\\
|\widehat{\Phi}(\xi)| > 0 \quad&\mbox{ on }\quad \{\varepsilon/2<|\xi|< 2\varepsilon\}\,,
\end{split}
\end{equation}
for some $\varepsilon>0$, and -- for some $R+1 \in {\N}_0 $ -- the moment conditions
\begin{equation}\label{condphi2}
D^{{\beta}} \widehat{\Phi}(0) = 0\quad\mbox{for all}\quad
|\beta|_1 \leq R\,.
\end{equation}
If $R+1 = 0$ the condition \eqref{condphi2} is void. We will call the functions
$\Phi_0$ and $\Phi$ \emph{kernels for local
means} and use the notations $\Phi_k = 2^{kd}\Phi(2^{k}\cdot)$, $k\in \N$, as
well as
$\Phi_t = \mathcal{D}_t \Phi = t^{-d}\Phi(\cdot/t)$ for $t>0$. The associated \emph{Peetre maximal function}
\begin{equation}\label{Peemax}
(\Phi^{\ast}_t f)_{a}(x) = \sup\limits_{y \in {{\re}^d}}\frac{|(\Phi_t \ast f)(x+y)|}
{(1+|y|/t)^{a}}\quad,\quad x\in {{\re}^d}\,,t>0\,,
\end{equation}
was introduced in \cite{Pe75} for $f\in \mathcal{S}^\prime({{\re}^d})$ and $a>0$. We also need the stronger version
$$
\langle\Phi^{\ast}_t f\rangle_{a}(x) =
\sup\limits_{\substack{\frac t2\leq\tau\leq 2t\\ \tau<1}} (\Phi^{\ast}_\tau
f)_{a}(x) \quad,\quad x\in {{\re}^d}\,,t>0\,, \quad\text{(Convention:
$\sup\emptyset=0$)}
$$
which we will refer to as the
\emph{Peetre-Wiener maximal function} and which was utilized for the coorbit characterization of the classical Besov-Lizorkin-Triebel spaces in~\cite{Sch12}.
To adapt to the inhomogeneous setting we further put
$
\langle\Phi_0^*f\rangle_a=(\Phi_0^*f)_a= ((\Phi_0)_1^*f)_a
$.
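Let us note the elementary pointwise relations between these quantities: for $t\in(0,1)$ and $x\in{{\re}^d}$,
\[
|(\Phi_t \ast f)(x)| \le (\Phi^{\ast}_t f)_{a}(x) \le \langle\Phi^{\ast}_t f\rangle_{a}(x),
\]
where the first inequality follows by choosing $y=0$ in \eqref{Peemax} and the second one since $\tau=t$ is admissible in the supremum; the analogous chain holds for $\Phi_0$. In particular, quasi-norms built from these three quantities are automatically ordered, which will make one direction of the equivalences below immediate.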
Using these maximal functions we now state several different characterizations.
\begin{Theorem}\label{thm:contchar} Let $w \in \mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ and choose functions $\Phi_0,\Phi \in \mathcal{S}({{\re}^d})$
satisfying \eqref{condphi1} and \eqref{condphi2} with $R+1>\alpha_2$. For $x\in{{\re}^d}$ and $t\in(0,1)$ define
$A_1f(x,t):=(\Phi_t \ast f)(x)$, $A_2f(x,t):=(\Phi_t^\ast f)_a(x)$, and $A_3f(x,t):=\langle \Phi_t^\ast f\rangle_a(x)$, $a>0$.
Further, put $A_1f(x,\infty):=(\Phi_0 \ast f)(x)$, $A_2f(x,\infty):=(\Phi_0^\ast f)_a(x)$, and $A_3f(x,\infty):=\langle \Phi_0^\ast f\rangle_a(x)$.
\begin{description}
\item(i) If $p\in\mathcal{P}^{\log}(\R)$, $0<\qconst\leq\infty$, and $a>\frac{d}{p^-}+\alpha_3$ then
$$
B^w_{{p(\cdot)},\qconst}({{\re}^d}) = \{f\in \mathcal{S}'({{\re}^d})~:~\|f|B^w_{{p(\cdot)},\qconst}({{\re}^d})\|_i < \infty\}\quad,\quad i=1,2,3,4,
$$
where for $i=1,2,3$
\begin{flalign*}
&& \|f|B^w_{{p(\cdot)},\qconst}({{\re}^d})\|_i &= \|w(\cdot,\infty)A_if(\cdot,\infty)|L_{p(\cdot)}({{\re}^d})\| &\\
&&& + \Big(\int_{0}^1
\|w(\cdot,t)A_if(\cdot,t)|L_{p(\cdot)}({{\re}^d})\|^\qconst\frac{dt}{t}\Big)^{1/\qconst}\,, & \\
\text{and} &&
\|f|B^w_{{p(\cdot)},\qconst}({{\re}^d})\|_4 &= \|w(\cdot,\infty)(\Phi_0^{\ast}f)_a(\cdot)|L_{p(\cdot)}({{\re}^d})\|&\\
&&&+ \Big(\sum\limits_{j=1}^{\infty}
\Big\|w_j(\cdot)(\Phi_{2^{-j}}^{\ast} f)_a(\cdot)|L_{p(\cdot)}({{\re}^d})\Big\|^{\qconst}\Big)^{1/\qconst}\,. &
\end{flalign*}
Moreover, $\|\cdot|B^w_{{p(\cdot)},\qconst}({{\re}^d})\|_i$, $i=1,2,3,4$, are equivalent quasi-norms in $B^w_{{p(\cdot)},\qconst}({{\re}^d})$\,.
\item(ii) If $p,q\in\mathcal{P}^{\log}(\R)$ with $0<q^-\leq q^+<\infty$, $0<p^-\leq p^+ < \infty$, and $a>\max\{\frac{d}{p^-},\frac{d}{q^-}\}+\alpha_3$ then
$$
F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d}) = \{f\in \mathcal{S}'({{\re}^d})~:~\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_i < \infty\}\quad,\quad i=1,2,3,4,
$$
where for $i=1,2,3$
\begin{flalign*}
&&\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_i &= \|w(\cdot,\infty)A_if(\cdot,\infty)|L_{p(\cdot)}({{\re}^d})\| &\\
&&&+ \Big\|\Big(\int_{0}^1
|w(\cdot,t)A_if(\cdot,t)|^{q(\cdot)}\frac{dt}{t}\Big)^{1/q(\cdot)}
|L_{p(\cdot)}({{\re}^d})\Big\|\,, &\\
\text{and} &&\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_4 &= \|w(\cdot,\infty)(\Phi_0^{\ast}f)_a(\cdot)|L_{p(\cdot)}({{\re}^d})\|&\\
&&&+ \Big\|\Big(\sum\limits_{j=1}^{\infty}
|w_j(\cdot)(\Phi_{2^{-j}}^{\ast} f)_a(\cdot)|^{q(\cdot)}\Big)^{1/q(\cdot)}|L_{p(\cdot)}({{\re}^d})\Big\|.
\nonumber
\end{flalign*}
Moreover, $\|\cdot|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_i$, $i=1,2,3,4$, are equivalent quasi-norms in $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$\,.
\end{description}
\end{Theorem}
Before we present a sketch of the proof, let us recall an important convolution inequality from \cite{Ke09}.
\begin{lemma}
\label{lem:ConvIneq}
Let $0<\qconst\leq\infty$, $\delta>0$ and $p,q\in\P$. Let $(g_k)_{k\in\N_0}$
be a sequence of non-negative measurable functions on ${{\re}^d}$
and denote $G_\ell=\sum_{k=0}^\infty2^{-|\ell-k|\delta}g_k$ for $\ell\in\N_0$.
Then there exist constants $C_1,C_2\geq0$ such that
\begin{align*}
\norm{\{G_\ell\}_{\ell}}{\ell_{\qconst}(L_{p(\cdot)})}\leq C_1\norm{\{g_k\}_{k}}{\ell_{\qconst}(L_{p(\cdot)})} \quad\text{and}\quad
\norm{\{G_\ell\}_{\ell}}{L_{p(\cdot)}(\ell_{q(\cdot)})}\leq C_2\norm{\{g_k\}_{k}}{L_{p(\cdot)}(\ell_{q(\cdot)})}\text{ .}
\end{align*}
\end{lemma}
\noindent{\bf Proof of Theorem \ref{thm:contchar}.} We only prove (ii) and
comment afterwards briefly on the necessary modifications
for (i). The arguments are more or less the same as in the proofs of
\cite[Thm.\ 2.6]{T10} and \cite[Thm. 9.6]{Sch12}. We remark that the
equivalences $\|\cdot|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|\asymp\|\cdot|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_4$ and
$\|\cdot|B^w_{{p(\cdot)},\qconst}({{\re}^d})\|\asymp\|\cdot|B^w_{{p(\cdot)},\qconst}({{\re}^d})\|_4$ are
already known, see \cite{Ke09}.
\noindent
{\em Step 1.} First, we prove a central estimate \eqref{eq:essestimate} between different start functions $\Phi$ and $\Psi$ incorporating the different types of Peetre maximal operators. The needed norm inequalities in the theorem are consequences of this central estimate \eqref{eq:essestimate}, and are subsequently deduced in the following steps.
Let us put $\varphi_0:=\widehat{\Phi}_0$ and $\varphi_k:=\widehat{\Phi}_k$ for $k\in\N$.
We can find a pair of functions $\lambda_0,\lambda\in\mathcal{S}({{\re}^d}) $ with
${\rm supp \, } \lambda_0\subset \{ \zeta\in{{\re}^d} : |\zeta|\le 2\varepsilon \}$ and
${\rm supp \, } \lambda\subset \{ \zeta\in{{\re}^d} : \varepsilon/2 \le |\zeta| \le 2\varepsilon\}$ such that
$\sum_{k\in\N_0} \lambda_k \varphi_k \equiv 1$, where $\lambda_k=\lambda(2^{-k}\cdot)$ for $k\in\N$.
Let us briefly
demonstrate how to do this. We use a special dyadic
decomposition of unity given by a smooth function $\eta_0$ with $\eta_0(t) = 1$ if $|t|\leq 4/3$ and
$\eta_0(t) = 0$ if $|t|>3/2$. We
put $\eta_k:=\eta_0(\cdot/2^k)-\eta_0(\cdot/2^{k-1})$ for $k\in\N$. Then
clearly $\eta_0 + \sum_{k=1}^\infty \eta_k \equiv 1$ and we obtain $\sum_{k\in\N_0} \lambda_k \varphi_k \equiv 1$ by
defining $\lambda_k := \eta_k(\cdot/\varepsilon)/\varphi_k$ for $k\in{\N}_0$ and $\lambda:=\lambda_1(2\cdot)$.
The support of the function $\theta:=1-\sum_{k\in\N} \lambda_k\varphi_k\in C^\infty_0({{\re}^d})$ is
fully contained in $M:=\{ |x| \le 3\varepsilon/2 \}$. Due to
the Tauberian conditions, $\varphi_0$ does not vanish on $M$. Inverting $\varphi_0$ on $M$ and extending appropriately outside, we
can construct a function $\gamma\in C^\infty_0({{\re}^d})$, which coincides with $1/\varphi_0$ on $M$.
Since $\lambda_0\varphi_0=\theta$ we thus have the factorization $\lambda_0=\gamma \theta$.
We now put $\lambda_{0,u}(\cdot):=\gamma(\cdot) \theta(u\cdot)$ for $u\in[1,2]$, which gives
\begin{align*}
\lambda_{0,u} \varphi_0 + \sum_{k\in\N} \lambda_k(u\cdot) \varphi_k(u\cdot) =1.
\end{align*}
We then define $\Xi$, $\Theta$, $\Lambda$, $\Lambda_{0,u}$, and $\Lambda_k$ for $k\in{\N}_0$, all elements of $\mathcal{S}({{\re}^d})$, via inverse Fourier transform of the
functions $\gamma$, $\theta$, $\lambda$, $\lambda_{0,u}$, and $\lambda_k$, respectively.
We get $\Lambda_{0,u}=\Xi \ast \Theta_u$ and it holds
$
g= \Lambda_{0,u} \ast \Phi_0 \ast g + \sum_{k\in\N} \Lambda_{2^{-k}u} \ast \Phi_{2^{-k}u} \ast g
$
for every $g\in\mathcal{S}^\prime({{\re}^d})$.
Let $\Psi_0,\Psi\in \mathcal{S}({{\re}^d})$ be another system
which satisfies the Tauberian conditions \eqref{condphi1} and \eqref{condphi2}.
Choosing $g=\Psi_{2^{-\ell}v}\ast f$, where $f\in\mathcal{S}^\prime({{\re}^d})$, $\ell\in\N$, and $v\in[1/2,4]$,
we get
\begin{align}\label{eq:convident}
\Psi_{2^{-\ell}v}\ast f=\sum_{k\in\N} \Psi_{2^{-\ell}v} \ast \Lambda_{2^{-k}u} \ast \Phi_{2^{-k}u} \ast f + \Psi_{2^{-\ell}v} \ast \Lambda_{0,u}\ast \Phi_0 \ast f.
\end{align}
Defining $J_{\ell,k}= \int_{{{\re}^d}} |\Psi_{2^{-\ell}v} \ast \Lambda_{2^{-k}u}(z)| (1+2^k|z|/u)^a \,dz$ for $k\in\N$ we have
for $y\in{{\re}^d}$
\begin{gather*}
\begin{aligned}
|(\Psi_{2^{-\ell}v} \ast \Lambda_{2^{-k}u} \ast \Phi_{2^{-k}u} \ast f)(y)|
&\le \int_{{{\re}^d}} |\Psi_{2^{-\ell}v} \ast \Lambda_{2^{-k}u}(z)| | \Phi_{2^{-k}u} \ast f(y-z) | \,dz\\
&\le (\Phi_{2^{-k}u}^*f)_a(y) J_{\ell,k}.
\end{aligned}
\end{gather*}
For $k=0$ we get with $J_{\ell,0}=\int_{{{\re}^d}} |\Psi_{2^{-\ell}v} \ast \Lambda_{0,u}(z)| (1+|z|)^a \,dz$
\begin{gather*}
\begin{aligned}
|(\Psi_{2^{-\ell}v} \ast \Lambda_{0,u} \ast \Phi_0 \ast f)(y)|
\le \int_{{{\re}^d}} |\Psi_{2^{-\ell}v} \ast \Lambda_{0,u}(z)| | \Phi_0 \ast f(y-z) | \,dz
\le (\Phi_0^*f)_a(y) J_{\ell,0}.
\end{aligned}
\end{gather*}
To estimate $J_{\ell,k}$ the following identity for functions $\mu,\nu\in\mathcal{S}({{\re}^d})$ is used,
\begin{equation*}
(\mu_u\ast\nu_v)(x)= \frac{1}{u^d} [\mu\ast\nu_{v/u}](x/u) = \frac{1}{v^d} [\mu_{u/v}\ast\nu](x/v),
\end{equation*}
valid for $u,v>0$ and $x\in{{\re}^d}$. In case $\ell\ge k>0$ we obtain
\begin{align*}
J_{\ell,k}= \int_{{{\re}^d}} |(\Psi_{2^{k-\ell}\frac{v}{u}} \ast \Lambda)(z)|(1+|z|)^a \,dz
\lesssim \sup_{z\in{{\re}^d}} \big| (\Psi_{2^{k-\ell}\frac{v}{u}}\ast \Lambda)(z)
(1+|z|)^{a+d+1} \big| \lesssim 2^{(k-\ell)(R+1)}\,,
\end{align*}
where we used \cite[Lemma~1]{Ry99a} in the last step.
In case $0<\ell< k$ we estimate similarly to obtain
\begin{align*}
J_{\ell,k}= \int_{{{\re}^d}} |(\Psi \ast \Lambda_{2^{-(k-\ell)}{u/v}})(z)|(1+2^{k-\ell}u|z|/v)^a \,dz
\lesssim 2^{(\ell-k)(L+1-a)},
\end{align*}
where $L$ can be chosen arbitrarily large since $\Lambda\in \mathcal{S}_0({{\re}^d})$ fulfills moment conditions for all $L\in\N_0$.
For $\ell>k=0$ we estimate as follows, taking advantage of $\Xi\in\mathcal{S}({{\re}^d})$,
\begin{align*}
J_{\ell,0}&= \int_{{{\re}^d}} |(\Psi_{2^{-\ell}v} \ast
\Theta_u) \ast \Xi (z)|(1+|z|)^a \,dz \\
&\lesssim \sup_{y\in{{\re}^d}} \big| (\Psi_{2^{-\ell}v}\ast
\Theta_u)(y) (1+|y|)^{a+d+1} \big| \int_{{{\re}^d}} \int_{{{\re}^d}} |\Xi(z-y)| (1+|z-y|)^a (1+|y|)^{-d-1} \,dz dy \\
&\lesssim \sup_{y\in{{\re}^d}} \big| (\Psi_{2^{-\ell}v/u}\ast
\Theta)(y) (1+|y|)^{a+d+1} \big| \int_{{{\re}^d}} \int_{{{\re}^d}} (1+|z|)^{-d-1} (1+|y|)^{-d-1} \,dz dy \lesssim 2^{-\ell(R+1)}.
\end{align*}
Using $1+t|x|\le \max\{1,t\} (1+|x|)$ and $1+|x+y|/t\le (1+|y|/t) (1+|x|/t)$
for $t>0$ and $x,y\in{{\re}^d}$ we further deduce for $k\in\N$
\begin{align*}
(\Phi^*_{2^{-k}u}f)_a(y) &\le (\Phi^*_{2^{-k}u}f)_a(x)(1+2^k|x-y|/u)^a \\
&\lesssim (\Phi^*_{2^{-k}u}f)_a(x)(1+2^\ell|x-y|/v)^a \max\{1,2^{(k-\ell)}\}^a,
\end{align*}
and $(\Phi_0^*f)_a(y) \lesssim (\Phi_0^*f)_a(x) (1+2^\ell|x-y|/v)^a$.
Altogether, we arrive -- for $k\ge 1$ -- at
\begin{align*}
\sup_{y\in{{\re}^d}} \frac{|(\Psi_{2^{-\ell}v} \ast \Lambda_{2^{-k}u} \ast (\Phi_{2^{-k}u} \ast f))(y)|}{(1+2^\ell|x-y|/v)^a}
\lesssim (\Phi^*_{2^{-k}u}f)_a(x)
\begin{cases} 2^{(k-\ell)(R+1)} ~&:~ \ell\ge k, \\
2^{(\ell-k)(L+1-2a)} ~&:~ \ell< k,
\end{cases}
\end{align*}
with an implicit constant independent of $u\in[1,2]$ and $v\in [1/2,4]$.
For $k=0$ we obtain
\begin{align*}
\sup_{y\in{{\re}^d}} \frac{|(\Psi_{2^{-\ell}v} \ast \Lambda_{0,u} \ast \Phi_0 \ast f)(y)|}{(1+2^\ell|x-y|/v)^a}
\lesssim (\Phi^*_{0}f)_a(x) 2^{-\ell(R+1)}.
\end{align*}
We thus conclude from \eqref{eq:convident} that uniformly in $t,u\in[1,2]$
\begin{align*}
\langle \Psi^*_{2^{-\ell}t}f \rangle_a(x) &= \sup_{t/2\le v \le 2t, v<1} (\Psi^*_{2^{-\ell}v}f)_a(x) \\ \notag
&\lesssim (\Phi_0^\ast f)_a(x) 2^{-\ell(R+1)}
+ \sum_{k\in\N} (\Phi^*_{2^{-k}u}f)_a(x) \begin{cases} 2^{(k-\ell)(R+1)} \,&: \ell\ge k, \\
2^{(\ell-k)(L+1-2a)} &: \ell< k.
\end{cases}
\end{align*}
Writing $\tilde{w}_{\ell,t}(x)=w(x,2^{-\ell}t)$ for $\ell\in\N$ and $\tilde{w}_{0,t}(x)=w(x,\infty)$ we have
\[
\tilde{w}_{\ell,t}(x) \tilde{w}_{k,u}(x)^{-1} \lesssim \begin{cases} 2^{(\ell-k)\alpha_2} \quad &\ell\ge k, \\
2^{(\ell-k)\alpha_1} &\ell< k,
\end{cases}
\]
as a consequence of $(W1)$, \eqref{eq_st1}, and \eqref{eq_st2}.
Multiplying both sides with $w(x,2^{-\ell}t)$
we finally derive with an implicit constant independent of $t,u\in[1,2]$
\begin{align*}
w(x,2^{-\ell}t) \langle \Psi_{2^{-\ell}t}^{\ast} f \rangle_a (x)
&\lesssim w(x,\infty) (\Phi_0^\ast f)_a(x) 2^{-\ell(R+1-\alpha_2)} \\
&+\sum_{k\in\N} w(x,2^{-k}u) (\Phi_{2^{-k}u}^{\ast}f)_a(x)
\begin{cases} 2^{(k-\ell)(R+1-\alpha_2)} \,&:~ \ell\ge k, \\
2^{(\ell-k)(L+1-2a+\alpha_1)} &:~ \ell< k.
\end{cases}
\end{align*}
Choosing $L\ge 2a - \alpha_1$, so that $L+1-2a+\alpha_1\ge 1$,
we have with $0<\delta = \min\{1,R+1-\alpha_2\}$ the central estimate
\begin{align}\label{eq:essestimate}
\langle \Psi_{2^{-\ell}t}^{\ast} f \rangle_a (x)
\lesssim 2^{-\ell\delta} \frac{w(x,\infty)}{ w(x,2^{-\ell}t)} (\Phi_0^\ast f)_a(x) + \sum_{k\in\N} 2^{-|k-\ell|\delta} \frac{w(x,2^{-k}u)}{ w(x,2^{-\ell}t)} (\Phi_{2^{-k}u}^{\ast}f)_a(x) .
\end{align}
{\em Step 2.} We show $\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_1 \asymp \|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_{2,3}$. The direction $\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_1 \lesssim \|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_{2,3}$ is obvious
and it remains to verify $\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_{3} \lesssim \|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_1$.\\
We use \eqref{eq:essestimate} with $\Psi=\Phi$. Choosing $0<\tilde{\delta} \le \delta$ we obtain for any $r>0$, using an embedding argument if $0<r\le1$ and Hölder's inequality otherwise,
\begin{align*}
\langle \Phi_{2^{-\ell}t}^{\ast} f \rangle_a^r (x)w^r(x,2^{-\ell}t)
\lesssim 2^{-\ell\tilde{\delta} r} w^r(x,\infty) (\Phi_0^\ast f)_a^r(x) + \sum_{k\in\N} 2^{-|k-\ell|\tilde{\delta} r} w^r(x,2^{-k}u) (\Phi_{2^{-k}u}^{\ast}f)^r_a(x) .
\end{align*}
To estimate the sum on the right hand side we use (2.66) proved in Substep~1.3 of the proof of \cite[Thm.\ 2.6]{T10}. It states that for
$x\in{{\re}^d}$, $f\in\mathcal{S}^\prime({{\re}^d})$, $k\in\N$, $u\in[1,2)$, $r>0$, and
$0<a\le N$ for some arbitrary but fixed $N\in{\N}_0$
\begin{equation}\label{2.30}
(\Phi^{\ast}_{2^{-k}u}f)_{a}(x)^r\le C_N
\sum\limits_{j\in {\N}_0} 2^{-jNr}2^{(k+j)d}
\int_{{{\re}^d}}\frac{|(\Phi_{2^{-(k+j)}u}\ast
f)(y)|^r}{(1+2^{k}|x-y|)^{a
r}}\,dy\,,
\end{equation}
where the constant $C_N$ is independent of $x,f,k$, and $u\in[1,2)$,
but may depend on $r$, $a$ and $N$. Taking into account $(W2)$ and \eqref{eq_W1tilde}, which give the relation
$w(x,2^{-k}u) \lesssim 2^{-j\alpha_1}(1+2^{k}|x-y|)^{\alpha_3}w(y,2^{-(j+k)}u)$ and $(1+2^k|z|)^{-M}\leq2^{jM}(1+2^{k+j}|z|)^{-M}$, this leads to
\begin{align}\notag
&\langle \Phi_{2^{-\ell}t}^{\ast} f \rangle_a^r (x)w^r(x,2^{-\ell}t)
\lesssim 2^{-\ell\tilde{\delta} r} w^r(x,\infty) (\Phi_0^\ast f)_a^r(x)\\
&\hspace{5em}+ \sum_{k\in\N} 2^{-|k-\ell|\tilde{\delta} r}\sum_{j\in{\N}_0}2^{-jr\tilde{N}}2^{(k+j)d}\int_{{{\re}^d}}\frac{|(\Phi_{2^{-(k+j)}u}\ast f)(y)w(y,2^{-(j+k)}u)|^r}{(1+2^{k+j}|x-y|)^{(a-\alpha_3)r}}\,dy\,\label{eq:step2_eq1}
\end{align}
with $\tilde{N}=N-a+\alpha_1+\alpha_3>0$. Since $x\in{{\re}^d}$ is fixed we can apply in $t$ the $L_{q(x)/r}([1,2);\frac{dt}{t})$ norm with $r<\min\{p^-,q^-\}$. This changes only the constant and the
left-hand side of \eqref{eq:step2_eq1}.
The $L_{q^-/r}([1,2);\frac{du}{u})$ (quasi-)norm in the variable $u$ only affects the right-hand side of \eqref{eq:step2_eq1}.
With Minkowski's integral inequality we obtain
\begin{align}
&\left(\int_1^2|\langle\Phi_{2^{-\ell}t}^{\ast} f \rangle_a(x)w(x,2^{-\ell}t)|^{q(x)}\frac{dt}{t}\right)^{r/q(x)}-2^{-\ell\tilde{\delta} r} w^r(x,\infty) (\Phi_0^\ast f)_a^r(x)\notag\\
&\hspace{2em}\lesssim\sum_{k\in\N} 2^{-|k-\ell|\tilde{\delta} r}\sum_{j\in{\N}_0}2^{-|j-k|\tilde{N}r} 2^{jd} \int_{{\re}^d}\frac{\left(\int_1^2|(\Phi_{2^{-j}u}\ast f)(y)w(y,2^{-j}u)|^{q^-}\frac{du}{u}\right)^{r/q^-}}{(1+2^{j}|x-y|)^{(a-\alpha_3)r}}dy\notag\\
&\hspace{2em}\lesssim \sum_{k\in\N}2^{-|k-\ell|\tilde{\delta} r}\sum_{j\in{\N}_0}2^{-|j-k|\tilde{N}r}\left[\eta_{j,(a-\alpha_3)r}\ast\left(\int_1^2|(\Phi_{2^{-j}u}\ast f)(\cdot)w(\cdot,2^{-j}u)|^{{q}^-}\frac{du}{u}\right)^{r/{q}^-}\right](x)\ \notag
\end{align}
with functions $\eta_{\nu,m}(x)=2^{\nu d}(1+2^\nu|x|)^{-m}$.\\
Now we choose $r>0$ such that $\frac{d}{a-\alpha_3} < r < \min\{p^-,q^-\}$,
which is possible since $a>\alpha_3 + \frac{d}{\min\{p^-,q^-\}}$, and $N$ such that $\tilde{N}>0$.
Applying the $L_{{p(\cdot)}/r}(\ell_{{q(\cdot)}/r})$ norm with respect to $x\in{{\re}^d}$ and $\ell\in\N$ and using Lemma~\ref{lem:ConvIneq} twice together with Lemma~\ref{lem:FaltungsUnglg} (note $(a-\alpha_3)r>d$) then yields
\begin{align}
&\norm{\left(\int_1^2|\langle\Phi_{2^{-\ell}t}^*f\rangle_a(\cdot)w(\cdot,2^{-\ell}t)|^{q(\cdot)}\frac{dt}{t}\right)^{r/q(\cdot)}}{L_{{p(\cdot)}/r}(\ell_{{q(\cdot)}/r})}-c\norm{w(\cdot,\infty) (\Phi_0^\ast f)_a(\cdot)}{L_{p(\cdot)}({{\re}^d})}^r\notag\\
&\hspace{2em}\lesssim\norm{\left[\eta_{\ell,(a-\alpha_3)r}\ast\left(\int_1^2|(\Phi_{2^{-\ell}u}\ast f)(\cdot)w(\cdot,2^{-\ell}u)|^{{q}^-}\frac{du}{u}\right)^{r/{q}^-}\right](x)}{L_{{p(\cdot)}/r}(\ell_{{q(\cdot)}/r})}\notag\\
&\hspace{2em}\lesssim\norm{\left(\int_1^2|(\Phi_{2^{-\ell}u}\ast f)(\cdot)w(\cdot,2^{-\ell}u)|^{{q}^-}\frac{du}{u}\right)^{r/{q}^-}}{L_{{p(\cdot)}/r}(\ell_{{q(\cdot)}/r})}\ .\label{eq:step2_eq2}
\end{align}
Finally, we use H\"older's inequality to estimate the integral in the last norm. We use $0< q^-\leq q(x)$ and get
\begin{align*}
&\left(\int_1^2|(\Phi_{2^{-\ell}u}\ast f)(x)w(x,2^{-\ell}u)|^{{q}^-}\frac{du}{u}\right)^{r/q^-}\\
&\hspace{5em}\leq\left(\int_1^2|(\Phi_{2^{-\ell}u}\ast f)(x)w(x,2^{-\ell}u)|^{q(x)}\frac{du}{u}\right)^{r/q(x)}\left(\int_1^2\frac{du}{u}\right)^{r/{q}^-\cdot\frac{1}{\left(\frac{{q(x)}}{{q}^-}\right)'}}\\
&\hspace{5em}\le
\left(\int_1^2|(\Phi_{2^{-\ell}u}\ast f)(x)w(x,2^{-\ell}u)|^{q(x)}\frac{du}{u}\right)^{r/q(x)}.
\end{align*}
Using this estimate we can reformulate \eqref{eq:step2_eq2} into
\begin{align*}
&\norm{\left(\int_0^1|\langle\Phi_\lambda^*f\rangle_a(\cdot)w(\cdot,\lambda)|^{q(\cdot)}\frac{d\lambda}{\lambda}\right)^{1/q(\cdot)}}{L_{p(\cdot)}}\\
&\hspace{5em}\lesssim\norm{w(\cdot,\infty) (\Phi_0^\ast f)_a(\cdot)}{L_{p(\cdot)}({{\re}^d})}+\norm{\left(\int_0^1|(\Phi_\lambda\ast f)(\cdot)w(\cdot,\lambda)|^{q(\cdot)}\frac{d\lambda}{\lambda}\right)^{1/q(\cdot)}}{L_{p(\cdot)}}\ .
\end{align*}
The inhomogeneous term $(\Phi_0^\ast f)_a(x)$ needs to be treated separately. The argument, however, is analogous to the one given before, with \eqref{2.30} replaced by the inequality
\begin{equation*}
(\Phi^{\ast}_0f)_{a}(x)^r\lesssim
\sum\limits_{k\in \N} 2^{-kNr}2^{kd}
\int_{{{\re}^d}}\frac{
|(\Phi_{2^{-k}u}\ast f)(y)|^r}{(1+|x-y|)^{ar}}\,dy + \int_{{{\re}^d}}\frac{
|(\Phi_{0}\ast f)(y)|^r}{(1+|x-y|)^{ar}}\,dy.
\end{equation*}
In the Besov space case we do not need the functions $\eta_{\nu,m}$ and one can
work with the usual maximal operator $\mathcal{M}$ together with Lemma~\ref{lem:HLM},
see \cite{Ke09} for details.
\noindent
\indent {\em Step 3.} In the third step we show $\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_2 \asymp \|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_3 \asymp \|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_4 $. We immediately observe
$\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_2 \lesssim \|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_3$. \\
\noindent
\indent {\em Substep 3.1.} To prove $\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_3 \lesssim \|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_4$ we apply \eqref{eq:essestimate} with $u=1$ and $\Psi=\Phi$.
Since the inhomogeneous terms are identical, it suffices to estimate the
homogeneous part. Integration with respect to $dt/t$ yields for $\ell\in\N$
\begin{align*}
\Big( \int_1^2 |w(x,2^{-\ell}t) \langle \Phi_{2^{-\ell}t}^{\ast} f\rangle_a (x) |^{q(x)} \frac{dt}{t} \Big)^{\frac{1}{q(x)}}
\lesssim 2^{-\ell\delta}w_0(x) (\Phi_0^{\ast}f)_a(x) + \sum_{k\in\N} 2^{-|k-\ell|\delta} w_k(x) (\Phi_{2^{-k}}^{\ast}f)_a(x).
\end{align*}
Let us denote the function on the right-hand side of the previous estimate by $G_\ell$.
Applying the vector-valued convolution inequality of Lemma~\ref{lem:ConvIneq} then proves
\begin{align*}
&\Big\| \Big( \sum_{\ell=1}^\infty \int_1^2 | w(\cdot,2^{-\ell}t) \langle\Phi^*_{2^{-\ell}t}f\rangle_a(\cdot)|^{q(\cdot)}
\,\frac{dt}{t} \Big)^{1/q(\cdot)} \big| L_{{p(\cdot)}} \Big\| \lesssim \| \{G_\ell \}_{\ell \in \N} | L_{{p(\cdot)}}(\ell_{{q(\cdot)}}) \| \\
& \qquad \lesssim \| w_0 (\Phi_0^{\ast}f)_a | L_{p(\cdot)} \| + \| \{ w_k (\Phi_{2^{-k}}^{\ast}f)_a \}_{k\in\N} | L_{p(\cdot)}(\ell_{q(\cdot)}) \|
=\| f | F^w_{{p(\cdot)},{q(\cdot)}} \|_4.
\end{align*}
{\em Substep 3.2:} Let us finish by proving $\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_4 \lesssim \|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\|_2$.
Again it suffices to estimate the homogeneous part.
For this we let $t=1$ and $\Psi=\Phi$ in \eqref{eq:essestimate}. If $q(x)\ge1$ we can use Minkowski's inequality to deduce
\begin{align*}
w_\ell(x) (\Phi_{2^{-\ell}}^{\ast}f)_a(x) &\lesssim 2^{-\ell\delta} w(x,\infty)
(\Phi_0^\ast f)_a(x)\\
&+ \sum_{k\in\N} 2^{-|k-\ell|\delta} \Big( \int_1^2
|\tilde{w}_{k,u}(x) (\Phi_{2^{-k}u}^{\ast}f)_a(x)|^{q(x)} \, \frac{du}{u}
\Big)^{1/q(x)}.
\end{align*}
Applying the $\ell_{q(x)}$-norm on both sides, Young's convolution inequality then yields
\begin{align}\label{eq:centest}
\sum_{\ell\in\N} w_\ell(x)^{q(x)} (\Phi_{2^{-\ell}}^{\ast}f)_a(x)^{q(x)}
&\lesssim (w(x,\infty) (\Phi_0^\ast f)_a(x))^{q(x)} \\
&+ \sum_{k\in\N} \int_1^2 |\tilde{w}_{k,u}(x)
(\Phi_{2^{-k}u}^{\ast}f)_a(x)|^{q(x)} \, \frac{du}{u}. \notag
\end{align}
If $q(x)<1$ we use the $q(x)$-triangle inequality
\begin{align*}
\Big(w_\ell(x) (\Phi_{2^{-\ell}}^{\ast}f)_a(x) \Big)^{q(x)} &\lesssim
2^{-\ell\delta q(x)} (w(x,\infty) (\Phi_0^\ast f)_a(x))^{q(x)}\\
&+ \sum_{k\in\N} 2^{-|k-\ell|q(x)\delta} \int_1^2 |\tilde{w}_{k,u}(x)
(\Phi_{2^{-k}u}^{\ast}f)_a(x)|^{q(x)} \, \frac{du}{u}.
\end{align*}
Now we take on both sides the $\ell_1$-norm with respect to the index $\ell\in\N$ and take into account $\sum_{k\in{\N}_0} 2^{-|k|q(x)\delta} \le C$.
We thus arrive at the same estimate \eqref{eq:centest}. Taking the $L_{{p(\cdot)}}$-quasi-norm of \eqref{eq:centest} finishes the proof of Substep 3.2 and hence Step 3.
{\em Step 4:} Relation \eqref{eq:essestimate} also immediately allows one to change to a different system $\Psi_0,\Psi$; however,
in the discrete setting the change of systems has already been shown in \cite{Ke09}.
\hspace*{\fill} \rule{3mm}{3mm}
\begin{remark} The previous theorem ensures in particular the
independence of Besov-Lizorkin-Triebel type spaces with variable exponents from
the chosen resolution of unity if
$p,q\in\mathcal{P}^{\log}(\R)$ with $p^+<\infty$, $q^+<\infty$ in the $F$-case and
$p\in\mathcal{P}^{\log}(\R)$, $\qconst\in(0,\infty]$ in the $B$-case.
\end{remark}
\section{Variable exponent spaces as coorbits}
\label{sec:appcoorbit}
In order to treat the spaces $B^w_{{p(\cdot)},\qconst}({{\re}^d})$ and $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$ as coorbits
we utilize an inhomogeneous version of the continuous wavelet transform, which
uses high scale wavelets together with a base scale for the analysis.
The corresponding index set is $X = {{\re}^d} \times [(0,1) \cup \{\infty\}]$, where
$\infty$ denotes an isolated point, equipped with the Radon measure
$\mu$ defined by
$$
\int_{X} F(\mathbf{x}) d\mu(\mathbf{x}) = \int_{{{\re}^d}}\int_{0}^1 F(x,s) \frac{ds}{s^{d+1}}dx +
\int_{{{\re}^d}} F(x,\infty) dx\,.
$$
The wavelet transform is then given by $V_{{\mathcal F}}f(\mathbf{x}) = \langle f,
\varphi_\mathbf{x}\rangle$, $\mathbf{x}\in X$, for a continuous frame $\mathcal{F}=\{ \varphi_\mathbf{x}
\}_{\mathbf{x}\in X}$ on
$\mathcal{H}= L_2({{\re}^d})$ of the form
\begin{equation}\label{eqdef:wavefr}
\varphi_{(x,\infty)} = T_x \Phi_0 = \Phi_0(\cdot-x) \quad \mbox{and} \quad \varphi_{(x,t)} =
T_x\mathcal{D}^{L_2}_t \Phi = t^{-d/2} \Phi((\cdot-x)/t)\,,
\end{equation}
with suitable functions $\Phi_0,\Phi\in L_2({{\re}^d})$. Such a frame
${\mathcal F}={\mathcal F}(\Phi_0,\Phi)$ will in our context be referred to as a \emph{continuous
wavelet frame} in $L_2({{\re}^d})$.
\begin{definition}\label{def:admfr} A continuous wavelet frame ${\mathcal F} = {\mathcal F}(\Phi_0,\Phi)$ is \emph{admissible} if
$\Phi_0\in\mathcal{S}({{\re}^d})$ and $\Phi\in \mathcal{S}_0({{\re}^d})$ are chosen such that they
satisfy the Tauberian conditions \eqref{condphi1}, \eqref{condphi2} and the condition
$$
|\widehat{\Phi}_0(\xi)|^2 + \int_{0}^1 |\widehat{\Phi}(t\xi)|^2\frac{dt}{t} = C \quad\text{ for a.e. }\xi\in{{\re}^d}.
$$
\end{definition}
An admissible wavelet frame ${\mathcal F}(\Phi_0,\Phi)$ represents a tight continuous frame in the sense of \eqref{eq:stab}. To see this, apply Fubini's and Plancherel's theorem to get
\begin{equation}
C \|f|L_2({{\re}^d})\|^2
= \int_{{{\re}^d}} |\widehat{f}(\xi)|^2 \Big(|\widehat{\Phi}_0(\xi)|^2 + \int_{0}^1 |\widehat{\Phi}(t\xi)|^2 \frac{dt}{t}\Big)\,d\xi
= (2\pi)^{-d} \int_{X} |\langle f,\varphi_{\mathbf{x}}\rangle|^2 d \mu(\mathbf{x})\,.\notag
\end{equation}
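In more detail, this is a routine verification with the Fourier transform normalized as in the appendix: Plancherel's theorem applied in the $x$-variable gives
\begin{align*}
\int_{{{\re}^d}} |\langle f,\varphi_{(x,t)}\rangle|^2\,dx &= (2\pi)^{d}\, t^{d}\int_{{{\re}^d}} |\widehat{f}(\xi)|^2\,|\widehat{\Phi}(t\xi)|^2\,d\xi, \qquad 0<t<1,\\
\int_{{{\re}^d}} |\langle f,\varphi_{(x,\infty)}\rangle|^2\,dx &= (2\pi)^{d}\int_{{{\re}^d}} |\widehat{f}(\xi)|^2\,|\widehat{\Phi}_0(\xi)|^2\,d\xi\,,
\end{align*}
and integrating the first identity against $\frac{dt}{t^{d+1}}$ and applying Fubini's theorem yields the equality above.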
\subsection{Peetre-Wiener type spaces on $X$}
\label{ssec:PeetSp}
We intend to define two general scales of spaces on $X$, for which we need
a Peetre type maximal function, given for a measurable function $F: X \to \C$ by
\begin{align*}
\mathcal{P}^\ast_a F(x,t) &:= \esssup{\substack{z\in {{\re}^d}, \tau<1\\
\frac{t}{2}\le \tau \le 2t}}\frac{|F(x+z,\tau)|}{(1+|z|/\tau)^a}\quad,\quad x\in{{\re}^d},\, 0<t<1,\\
\mathcal{P}^\ast_a F(x, \infty) &:= \esssup{z\in {{\re}^d}} \frac{|F(x+z,\infty)|}{(1+|z|)^a} \quad , \quad x \in {{\re}^d}.\notag
\end{align*}
The operator $\mathcal{P}^\ast_a$ is a stronger version of the usual Peetre maximal operator $\mathcal{P}_a$, which does not take the supremum over $t$
and was used e.g.\ in \cite{RaUl10}.
\begin{definition}\label{PeetreWiener}
Let $p,q\in\mathcal{P}^{\log}(\R)$ with $0< p^-\leq p^+<\infty$ and $0< q^-\leq q^+<\infty$ and let $0< \qconst\leq \infty$.
Further, let $a>0$ and $w \in \mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$.
Then we define by
\begin{equation*}
\begin{split}
P^w_{p(\cdot),q(\cdot),a}(X) &= \{F:X \to \C~:~\|F|P^w_{p(\cdot),q(\cdot),a}\| < \infty\}\,,\\
L^w_{p(\cdot),\tilde{q},a}(X) &= \{F:X \to \C~:~\|F|L^w_{p(\cdot),\tilde{q},a}\| < \infty\}\\
\end{split}
\end{equation*}
two scales of function spaces on $X$ with respective quasi-norms
\begin{align*}
\|F|P^w_{p(\cdot),q(\cdot),a}\| & := \Big\|w(\cdot,\infty){\mathcal P}^\ast_a F(\cdot, \infty) |L_{p(\cdot)}({{\re}^d})\Big\|\\
&~~~~+ \Big\|\Big(\int_{0}^1 \Big[w(\cdot,t){\mathcal P}^\ast_a F(\cdot, t) \Big]^{q(\cdot)}\frac{dt}{t}\Big)^{1/q(\cdot)}|L_{p(\cdot)}({{\re}^d})\Big\|,\\
\|F|L^w_{p(\cdot),\tilde{q},a}\| & := \Big\|w(\cdot,\infty){\mathcal P}^\ast_a F(\cdot, \infty) |L_{p(\cdot)}({{\re}^d})\Big\|\nonumber \\
&~~~~+ \Big(\int_{0}^1 \Big\|w(\cdot,t) {\mathcal P}^\ast_a F(\cdot,t)|L_{p(\cdot)}({{\re}^d})\Big\|^{\qconst}\frac{dt}{t}\Big)^{1/\qconst}\,.
\end{align*}
\end{definition}
It is not hard to verify that in case $a>d/p^-+\alpha_3$ these spaces are rich solid QBF-spaces as defined and studied in Subsection~\ref{ssec:QBFspaces}. Moreover,
the utilization of the Peetre-Wiener operator ${\mathcal P}^\ast_a$ ensures that they are locally integrable, even in the quasi-Banach case in contrast to the
ordinary Peetre spaces where ${\mathcal P}_a$ is used instead of ${\mathcal P}^\ast_a$. In fact, there is an associated locally bounded weight function given by
\begin{equation}\label{eqdef:assweight}
\nu_{w,p(\cdot),q(\cdot)}(x,t) = \left\{\begin{array}{rcl}
t^{\alpha_1-d/p^-}(1+|x|)^{\alpha_3}&,&x\in{{\re}^d},\,0<t<1,\\
(1+|x|)^{\alpha_3}&,&x\in{{\re}^d},\,t=\infty,
\end{array}\right.
\end{equation}
such that the following lemma holds true.
\begin{lemma}
We have the continuous embeddings
\[
P^w_{p(\cdot),q(\cdot),a}(X) \hookrightarrow L_\infty^{1/\nu_{w,p(\cdot),q(\cdot)}}(X) \quad\text{and}\quad L^w_{p(\cdot),\tilde{q},a}(X) \hookrightarrow L_\infty^{1/{\nu_{w,{p(\cdot)},\qconst}}}(X).
\]
\end{lemma}
\noindent {\bf Proof.}\,
It is useful to interpret the component ${{\re}^d}\times(0,1)$ of the index $X$ as a subset of the $ax+b$ group $\mathcal{G}={{\re}^d}\times(0,\infty)$
with multiplication $(x,t)(y,s)=(x+ty,ts)$ and $(x,t)^{-1}=(-x/t,1/t)$.
Let $U^{-1}$ be the inversion of $U:=[-2,2]^d\times[\frac{1}{2},2]$ and define $U_{(x,t)}:=(x,t)U^{-1}$ and $\widetilde{U}_{(x,t)}:=(x,t)U$.
Further put $Q_{(x,t)}:=x + t[-1,1]^d$ and $U_{(x,\infty)}:=\widetilde{U}_{(x,\infty)}:=Q_{(x,1)}\times\{\infty\}$. Then
we can estimate for $F:X\to \C$ and almost all $(x,t)\in X$ at every fixed $(y,s)\in X$
\begin{align*}
|F(x,t)| \chi_{U_{(x,t)}}(y,s) \le \esssup{(x,t)\in X\cap\widetilde{U}_{(y,s)}}|F(x,t)| \lesssim \mathcal{P}^\ast_a F(y,s).
\end{align*}
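The first inequality above rests on an elementary duality between the two families of neighborhoods, which we record for convenience: for $0<t,s<1$ the group structure gives
\[
(y,s)\in U_{(x,t)}=(x,t)U^{-1} \;\Longleftrightarrow\; (x,t)^{-1}(y,s)\in U^{-1} \;\Longleftrightarrow\; (y,s)^{-1}(x,t)\in U \;\Longleftrightarrow\; (x,t)\in (y,s)U=\widetilde{U}_{(y,s)},
\]
and for the isolated component $t=s=\infty$ the same equivalence holds by the symmetry of the cube $[-1,1]^d$.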
For convenience, let us introduce
\[
\| F | M^w_{{p(\cdot)},{q(\cdot)}} \| :=\Big\|w(\cdot,\infty) F(\cdot, \infty) |L_{p(\cdot)}\Big\|
+ \Big\|\Big(\int_{0}^1 \Big| w(\cdot,t) F(\cdot, t) \Big|^{q(\cdot)}\frac{dt}{t}\Big)^{1/q(\cdot)}|L_{p(\cdot)}\Big\|.
\]
We obtain for almost all $(x,t)\in X$
\[
|F(x,t)|\cdot \| \chi_{U_{(x,t)}} |M^w_{{p(\cdot)},{q(\cdot)}} \| \lesssim \| \mathcal{P}^\ast_a F |M^w_{{p(\cdot)},{q(\cdot)}} \| = \| F | P^w_{p(\cdot),q(\cdot),a} \|.
\]
It remains to prove $\nu_{w,p(\cdot),q(\cdot)}(x,t)\gtrsim \| \chi_{U_{(x,t)}} | M^w_{{p(\cdot)},{q(\cdot)}} \|^{-1}$.
Since $U^{-1}\supset[-1,1]^d\times[\frac{1}{2},2]$ we have $U_{(x,t)}\supset Q_{(x,t)} \times [\frac{t}{2},2t]$. If $0<t<1$ it follows for $x,y\in{{\re}^d}$
\begin{align*}
\Big(\int_0^1 \big[ w(y,s) \chi_{U_{(x,t)}}(y,s) \big]^{q(y)} \frac{ds}{s} \Big)^{1/q(y)}
&\gtrsim \ln(4)^{1/q(y)} w(y,t) \chi_{Q_{(x,t)}}(y) \gtrsim w(y,t) \chi_{Q_{(x,t)}}(y)
\end{align*}
and $\chi_{U_{(x,t)}}(y, \infty)=0$. The properties (W1) and (W2) of $w\in \mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ further imply
\[
w(y,t)\gtrsim w(x,t)(1+|x-y|/t)^{-\alpha_3} \gtrsim t^{-\alpha_1} (1+|x|)^{-\alpha_3} (1+|x-y|/t)^{-\alpha_3}.
\]
This leads to
\[
\| \chi_{U_{(x,t)}} | M^w_{{p(\cdot)},{q(\cdot)}} \| \gtrsim
t^{-\alpha_1} (1+|x|)^{-\alpha_3}
\| \chi_{Q_{(x,t)}}(\cdot) (1+|x-\cdot|/t)^{-\alpha_3} |L_{p(\cdot)} \|.
\]
\noindent
Since $\|\chi_{Q_{(x,t)}} |L_{p(\cdot)} \| \ge \frac{1}{2} \min\{ |Q_{(x,t)}|^{1/p^+}, |Q_{(x,t)}|^{1/p^-} \}$ by \cite[Lemma 3.2.12]{DieningHastoBuch2011} and $|Q_{(x,t)}|=(2t)^d$ we obtain
$\|\chi_{Q_{(x,t)}}|L_{p(\cdot)} \| \gtrsim t^{d/ p^-}$
and finally arrive at
\begin{align*}
\| \chi_{U_{(x,t)}} | M^w_{{p(\cdot)},{q(\cdot)}} \| \gtrsim t^{-\alpha_1}(1+|x|)^{-\alpha_3}\|\chi_{Q_{(x,t)}}|L_{p(\cdot)}({{\re}^d})\| \gtrsim (\nu_{w,p(\cdot),q(\cdot)}(x,t))^{-1}\,,
\end{align*}
where $\chi_{Q_{(x,t)}}(y) (1+|x-y|/t)^{-\alpha_3}\asymp \chi_{Q_{(x,t)}}(y)$ was used. If $t=\infty$ we can argue analogously.
\hspace*{\fill} \rule{3mm}{3mm}
\subsection{Coorbit identification}
As the following lemma shows,
every admissible wavelet frame ${\mathcal F}={\mathcal F}(\Phi_0,\Phi)$ in the sense of
Definition~\ref{def:admfr} is suitable for the definition of coorbits of
Peetre-Wiener spaces.
\vspace*{-4ex}
\begin{flalign}\label{standassump}
\begin{minipage}[t]{0.9\textwidth}
\textbf{Standing assumptions:} For the rest of the
paper the indices fulfill $p,q\in\mathcal{P}^{\log}(\R)$ with $0< p^-\leq p^+<\infty$, $0<
q^-\leq q^+<\infty$. Further $\qconst\in(0,\infty]$ and
$w\in\mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ for arbitrary but fixed
$\alpha_2\ge \alpha_1$ and $\alpha_3\geq0$.
\end{minipage} &&
\end{flalign}
\begin{lemma}\label{lem:R(Y)}
An admissible continuous wavelet frame ${\mathcal F}$ in the sense of \eqref{eqdef:wavefr} with generators $\Phi_0\in\mathcal{S}({{\re}^d})$ and $\Phi\in\mathcal{S}_0({{\re}^d})$
has property $F(\nu,Y)$ for $Y=P^w_{p(\cdot),q(\cdot),a}(X)$ and $Y=L^w_{p(\cdot),\tilde{q},a}(X)$, where $\nu=\nu_{w,p(\cdot),q(\cdot)}$ is the corresponding weight from \eqref{eqdef:assweight}.
\end{lemma}
\noindent {\bf Proof.}\, The proof goes along the lines of \cite[Lem.\ 4.18]{RaUl10}.
The kernel estimates in \cite[Lem.\ 4.8, 4.24]{RaUl10} have to be adapted to the
Peetre-Wiener space. This is a straightforward procedure and also allows for
treating the quasi-Banach situation.
\hspace*{\fill} \rule{3mm}{3mm}
Now we are ready for the coorbit characterization of $B^{w}_{{p(\cdot)},\qconst}({{\re}^d})$ and $F^{w}_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$.
Note that the weight $\tilde{w}$ defined in \eqref{eqdef:wtilde} is an element of the class $\mathcal{W}^{\alpha_3}_{\alpha_1+d/2,\alpha_2+d/2}$.
\begin{Theorem}\label{thm:coident} Let ${p(\cdot)},\ {q(\cdot)},\ \qconst,\ w$ fulfill the
standing assumptions \eqref{standassump}. We choose an admissible continuous wavelet frame
${\mathcal F}={\mathcal F}(\Phi_0,\Phi)$ according to Definition \ref{def:admfr}. Putting
\begin{equation}\label{eqdef:wtilde}
\tilde{w}(x,t) := \left\{\begin{array}{rcl}
t^{-d/2}w(x,t) &,& 0<t< 1\,,\\
w(x,\infty)&,& t=\infty\,,
\end{array}\right.
\end{equation}
we have $B^{w}_{{p(\cdot)},\qconst}({{\re}^d})= \mathsf{Co} ({\mathcal F},L^{\tilde{w}}_{{p(\cdot)},\qconst,a})$ if $a>\frac{d}{p^-}+\alpha_3$ and
$F^{w}_{{p(\cdot)},{q(\cdot)}}({{\re}^d}) = \mathsf{Co} ({\mathcal F},P^{\tilde{w}}_{{p(\cdot)},{q(\cdot)},a})$ if $a>\max\{\frac{d}{p^-},\frac{d}{q^-}\}+\alpha_3$
in the sense of equivalent quasi-norms.
\end{Theorem}
\noindent {\bf Proof.}\,
By Lemma~\ref{lem:R(Y)} the coorbits exist in accordance with the theory.
Now, let $f\in\mathcal{S}({{\re}^d})$ and $F(x,t):=V_{{\mathcal F}}f(x,t) = \langle f,
\varphi_{(x,t)}\rangle$ with $\varphi_{(x,t)}$ as in \eqref{eqdef:wavefr}. According to \cite[Lem.\ A.3]{T10}
\begin{align*}
|V_{{\mathcal F}}f(x,t)|\le C_N(f) G_N(x,t) \quad\text{with}\quad G_N(x,t)=\begin{cases} t^N(1+|x|)^{-N} \quad&,\, 0<t<1, \\
(1+|x|)^{-N} &,\, t=\infty, \end{cases}
\end{align*}
where $N\in\N$ is arbitrary but fixed and $C_N(f)>0$ is a constant depending on $N$ and $f$.
Choosing $N$ large, we have $G_N\in L_1^\nu(X)$ and thus $F\in L_1^\nu(X)$ with $\| F | L_1^\nu \|\le C_N(f) \| G_N | L_1^\nu \|$. This proves $f\in \mathcal{H}_1^\nu$.
Even more, given a sequence $(f_n)_{n\in\N} \subset \mathcal{S}({{\re}^d})$ we have $C_N(f_n)\rightarrow 0$ if $f_n\rightarrow 0$ in $\mathcal{S}({{\re}^d})$.
This is due to the fact that the constants $C_N(f_n)$ can be estimated by the Schwartz semi-norms of $f_n$ up to order $N$ (see proof of \cite[Lem.\ A.3]{T10}).
Hence, $\mathcal{F}\subset\mathcal{S}({{\re}^d})\hookrightarrow \mathcal{H}_1^\nu$ and
the voice transform $V_{\mathcal F}$ extends to $\mathcal{S}^\prime({{\re}^d})$. Moreover, by a straight-forward modification of the argument in \cite[Cor.\ 20.0.2]{Hol95},
the reproducing formula is still valid on $\mathcal{S}^\prime({{\re}^d})$. Therefore we may apply Lemma~\ref{lem:co_indires} and use the larger reservoir $\mathcal{S}^\prime({{\re}^d})$.
To see that the coorbits coincide with $B^{w}_{{p(\cdot)},\qconst}({{\re}^d})$ and $F^{w}_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$, note that
the functions $\tilde{\Phi}=\overline{\Phi(-\cdot)}$ and $\tilde{\Phi}_0=\overline{\Phi_0(-\cdot)}$
satisfy the Tauberian conditions \eqref{condphi1}, \eqref{condphi2} and can thus be used in the continuous characterization of Theorem~\ref{thm:contchar}.
Recall the notation $\tilde{\Phi}_t= t^{-d} \tilde{\Phi}(\cdot/t)$.
The assertion is now a direct consequence of the possible reformulation $(V_{{\mathcal F}} f)(\cdot,\infty) = \tilde{\Phi}_0\ast f$ and
$$
(V_{{\mathcal F}} f)(x,t) = \left(\mathcal{D}^{L_2}_t \overline{\Phi(-\cdot)} \ast f
\right)(x) = t^{d/2} \left( \tilde{\Phi}_t \ast f \right)(x) \qquad, 0<t<1,
x\in{{\re}^d}.
$$
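For completeness, the second identity follows directly from the definitions of $\mathcal{D}^{L_2}_t$ and $\tilde{\Phi}_t$; for $f\in L_2({{\re}^d})$, say,
\[
(V_{{\mathcal F}} f)(x,t) = \int_{{{\re}^d}} f(y)\, t^{-d/2}\,\overline{\Phi\big((y-x)/t\big)}\,dy
= t^{d/2}\int_{{{\re}^d}} t^{-d}\,\tilde{\Phi}\big((x-y)/t\big)\, f(y)\,dy
= t^{d/2}\, \big(\tilde{\Phi}_t \ast f\big)(x)\,,
\]
and for tempered distributions the same computation is understood in the sense of the dual pairing.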
\hspace*{\fill} \rule{3mm}{3mm}
\subsection{Atomic decompositions and quasi-Banach frames}
Based on the coorbit characterizations
of Theorem~\ref{thm:coident} we can now apply the abstract theory from Section \ref{sec:abstrth} in our concrete setup, in particular the
discretization machinery.
We will subsequently use the following covering of the space $X$.
For $\alpha>0$ and $\beta>1$ we consider
the family $\mathcal{U}^{\alpha,\beta} = \{U_{j,k}\}_{j\in {\N}_0, k\in {\zz}^d}$ of subsets
\begin{align}\nonumber
U_{0,k} &= Q_{0,k}\times\{\infty\}
\quad,\quad k\in {\zz}^d\,,\notag\\
U_{j,k} &= Q_{j,k} \times [\beta^{-j},\beta^{-j+1})\quad , \quad j \in \N, k \in {\zz}^d\,,\nonumber
\end{align}
where $Q_{j,k} = \alpha \beta^{-j} k + \alpha \beta^{-j}[0,1]^d$.
Clearly, we have $X \subset \bigcup_{j\in {\N}_0, k\in {\zz}^d} U_{j,k}$ and $\mathcal{U}=\mathcal{U}^{\alpha,\beta}$ is an admissible covering of $X$.
The abstract Theorem~\ref{thm:atomicdec} provides atomic decompositions for
$B^w_{{p(\cdot)},\qconst}({{\re}^d})$ and $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$.
To apply it we need to analyze the oscillation kernels
${\operatorname{osc}}_{\alpha,\beta}:={\operatorname{osc}}_{\mathcal{U},\Gamma}$ and
${\operatorname{osc}}^{\ast}_{\alpha,\beta}:={\operatorname{osc}}^{\ast}_{\mathcal{U},\Gamma}$, where we choose the trivial phase function $\Gamma\equiv1$.
This goes along the lines of \cite[Sect. 4.4]{RaUl10}.
\begin{proposition}\label{bddkernels} Let ${\mathcal F} ={\mathcal F}(\Phi_0,\Phi)$ be an admissible wavelet frame, $Y=L^w_{p(\cdot),\tilde{q},a}(X)$ or $Y=P^w_{p(\cdot),q(\cdot),a}(X)$,
and $\nu=\nu_{w,p(\cdot),q(\cdot)}$ the associated weight~\eqref{eqdef:assweight}.
\begin{description}
\item(i) The kernels ${\operatorname{osc}}_{\alpha,\beta}$ and ${\operatorname{osc}}^{\ast}_{\alpha, \beta}$ are bounded operators on
$Y$ and belong to $\mathcal{A}_{m_\nu}$.
\item(ii) If $\alpha\downarrow 0$ and $\beta\downarrow 1$ then $\|{\operatorname{osc}}_{\alpha,\beta} | \mathcal{B}_{Y,{m_\nu}} \| \to 0$ and $\|{\operatorname{osc}}^{\ast}_{\alpha,\beta} | \mathcal{B}_{Y,{m_\nu}} \| \to 0$.
\end{description}
\end{proposition}
\noindent {\bf Proof.}\, The proof is a straight-forward modification of \cite[Lem.\
4.22]{RaUl10}. Similarly to Lemma \ref{lem:R(Y)} above, we have to adapt the
kernel estimates to the Peetre-Wiener spaces.
\hspace*{\fill} \rule{3mm}{3mm}
Finally, Theorem~\ref{thm:atomicdec}
yields the following discretization result in our concrete setting, which we
only state
for $F^w_{p(\cdot),q(\cdot)}(\R)$ since for $B^w_{p(\cdot),\qconst}(\R)$ it is essentially the same.
\begin{Theorem}
Let ${p(\cdot)},\ {q(\cdot)},\ w$ fulfill the standing assumptions \eqref{standassump}, assume further $a>\max\{d/p^-, d/q^-\}+\alpha_3$ and let $\tilde{w}$ be given as in \eqref{eqdef:wtilde}.
For an admissible continuous wavelet frame ${\mathcal F} = \{\varphi_\mathbf{x}\}_{\mathbf{x}\in X}$ there exist $\alpha_0>0$ and $\beta_0>1$,
such that for all $0<\alpha\leq \alpha_0$ and $1< \beta\leq \beta_0$ the family
${\mathcal F}_d = \{\varphi_{\mathbf{x}_{j,k}}\}_{j\in \N_0, k \in {\zz}^d}$ with $\mathbf{x}_{j,k} = (\alpha k \beta^{-j}, \beta^{-j})$ for $j\in\N$ and
$\mathbf{x}_{0,k} = (\alpha k , \infty) $ is a discrete wavelet frame with a corresponding dual frame $\mathcal{E}_d = \{e_{j,k}\}_{j\in \N_0, k\in {\zz}^d}$
such that
\begin{description}
\item(a) If $f\in F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$ we have the quasi-norm equivalence
\begin{align*}
\|f|F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})\| &\asymp \|\{\langle f,\varphi_{x_{j,k}}\rangle\}_{j\in \N_0, k\in {\zz}^d}|(P^{\tilde{w}}_{{p(\cdot)},{q(\cdot)},a})^{\flat}\|\\
&\asymp \|\{\langle f,e_{j,k} \rangle\}_{j\in \N_0, k\in {\zz}^d}|(P^{\tilde{w}}_{{p(\cdot)},{q(\cdot)},a})^{\natural}\|\,.
\end{align*}
\item(b) For every $f\in F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$ the series
\hfill$\displaystyle{ f = \sum\limits_{j\in \N_0}\sum\limits_{k\in {\zz}^d}\langle f,e_{j,k}\rangle \varphi_{x_{j,k}}
=\sum\limits_{j\in \N_0}\sum\limits_{k\in {\zz}^d}\langle f,\varphi_{x_{j,k}}\rangle e_{j,k} } $\hfill~\\
converge unconditionally in the quasi-norm of $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$.
\end{description}
\end{Theorem}
\noindent {\bf Proof.}\,
The assertion is a consequence of the representation $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})=\mathsf{Co} ({\mathcal F},P^{\tilde{w}}_{{p(\cdot)},{q(\cdot)},a})$ and Theorem~\ref{thm:atomicdec}.
In fact, Proposition~\ref{bddkernels} proves that $\mathcal{F}$ has property $D(\delta,\nu,Y)$ and $D(\delta,\nu,L_2)$ for every $\delta>0$.
Also note that $(P^w_{p(\cdot),q(\cdot),a})^{\flat}=(P^w_{p(\cdot),q(\cdot),a})^{\natural}$ with equivalent quasi-norms.
\hspace*{\fill} \rule{3mm}{3mm}
\subsection{Wavelet bases}
According to Appendix \ref{sect:OWT} we obtain a family of
systems $\mathcal{G}_c,\,c\in E:=\{0,1\}^d$, whose union constitutes a tensor
wavelet system in $L_2({{\re}^d})$. Our aim is now to apply the abstract result in
Theorem~\ref{thm:frameexp} to achieve wavelet basis characterizations of
$B^w_{{p(\cdot)},\qconst}({{\re}^d})$ and $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$.
We have to consider the Gramian cross kernels
$K_c=K_\mathcal{U}[\mathcal{G}_c,\mathcal{F}]$ and
$K^{\ast}_c=K^\ast_\mathcal{U}[\mathcal{G}_c,\mathcal{F}]$ from
\eqref{eqdef:kerK} in our
concrete setup.
\begin{lemma}\label{lem:wavecond}
Let $Y=L^w_{p(\cdot),\tilde{q},a}(X)$ or $Y=P^w_{p(\cdot),q(\cdot),a}(X)$ with associated weight $\nu=\nu_{w,p(\cdot),q(\cdot)}$ given in
\eqref{eqdef:assweight}. Assume that $a>0$ and ${p(\cdot)},\ {q(\cdot)},\ \qconst,\ w$ fulfill
the standing assumptions \eqref{standassump}. Let further ${\mathcal F}={\mathcal F}(\Phi_0,\Phi)$ be an admissible
wavelet frame, ${\mathcal G}_c$ be the systems from above, and
$K_c=K_\mathcal{U}[\mathcal{G}_c,\mathcal{F}]$,
$K^{\ast}_c=K^\ast_\mathcal{U}[\mathcal{G}_c,\mathcal{F}]$, $c \in E$, the
corresponding Gramian cross kernels. Then the kernels $K_c$ and $K^{\ast}_c$
define bounded operators from $Y$ to $Y$.
\end{lemma}
\noindent {\bf Proof.}\, The proof is analogous to the treatment of the kernels ${\operatorname{osc}}$ in
Proposition \ref{bddkernels}, see also \cite[Lem.\ 4.24]{RaUl10}.\hspace*{\fill} \rule{3mm}{3mm}
Now we are ready for the discretization of $B^w_{p(\cdot),\qconst}(\R)$ and $F^w_{p(\cdot),q(\cdot)}(\R)$ in terms of orthonormal wavelet bases.
We again only state the result for $F^w_{p(\cdot),q(\cdot)}(\R)$ for the sake of brevity.
\begin{Theorem}
Let ${p(\cdot)},\ {q(\cdot)},\ w\in\mathcal{W}^{\alpha_3}_{\alpha_1,\alpha_2}$ fulfill the standing assumptions \eqref{standassump}, assume further
$a>\max\{d/p^-, d/q^-\} +\alpha_3$ and let $\tilde{w}$ be given as in
\eqref{eqdef:wtilde}. Let $\psi^0, \psi^1 \in L_2(\mathbb{R})$ be the Meyer
scaling function and associated wavelet.
Then every $f\in F^w_{p(\cdot),q(\cdot)}(\R)$ has the decomposition
\begin{equation*}
\begin{split}
f =& \sum\limits_{c\in E}\sum\limits_{k\in {\zz}^d}\lambda^c_{0,k}
\psi^c(\cdot-k)+\sum\limits_{c\in E\setminus \{0\}}\sum\limits_{j\in
\N}\sum\limits_{k\in {\zz}^d} \lambda^c_{j,k}2^{\frac{jd}{2}}\psi^c(2^j\cdot-k)
\end{split}
\end{equation*}
with quasi-norm convergence in $F^w_{{p(\cdot)},{q(\cdot)}}({{\re}^d})$ and
sequences $\lambda^c = \{\lambda^c_{j,k}\}_{j\in \N_0,k\in{\zz}^d}$ defined by
$$
\lambda^c_{j,k} = \langle
f,2^{\frac{jd}{2}}\psi^c(2^j\cdot-k)\rangle_{\mathcal{S}^\prime\times\mathcal{S}
}\quad, \quad j\in \N_0,k\in {\zz}^d\,,
$$
which belong to the sequence space $(P^{\tilde{w}}_{{p(\cdot)},{q(\cdot)},a})^\natural$
for every $c\in E$.
Conversely, an element $f\in (\mathcal{H}_{\nu_{\tilde{w},{p(\cdot)},{q(\cdot)}}}^1)^{\urcorner}$
belongs to $F^w_{p(\cdot),q(\cdot)}(\R)$ if all sequences
$\lambda^c(f)$ belong to $(P^{\tilde{w}}_{{p(\cdot)},{q(\cdot)},a})^\natural$.
\end{Theorem}
\noindent {\bf Proof.}\,
The statement is a direct consequence of Theorem~\ref{thm:coident} and Theorem~\ref{thm:frameexp}.
The required conditions of the kernels $K_c,K^{\ast}_c$, $c\in E$, have been proved in Lemma~\ref{lem:wavecond}.
\hspace*{\fill} \rule{3mm}{3mm}
\setcounter{section}{0}
\renewcommand{\thesection}{\Alph{section}}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\renewcommand{\theTheorem}{\Alph{section}.\arabic{Theorem}}
\renewcommand{\thesubsection}{\Alph{section}.\arabic{subsection}}
\section{Appendix: Wavelet transforms}
\subsection{The continuous wavelet transform}
\label{Appendix:WaveletTransfoms}
As usual $\mathcal{S}(\mathbb{R}^d)$ denotes
the locally convex space of rapidly decreasing infinitely
differentiable functions on $\mathbb{R}^d$ and its topological dual is denoted by $\mathcal{S}'(\mathbb{R}^d)$.
The Fourier transform defined on both $\mathcal{S}({{\re}^d})$ and $\mathcal{S}'({{\re}^d})$
is given by
$\widehat{f}(\varphi) := f (\widehat{\varphi})$,
where
$\,f\in \mathcal{S}'({{\re}^d}), \varphi \in \mathcal{S}({{\re}^d})$, and
$$
\widehat{\varphi}(\xi):= (2\pi)^{-d/2}\int_{{{\re}^d}} e^{-ix\cdot
\xi}\varphi(x)\,dx.
$$
The Fourier transform is a bijection (in both cases) and its inverse is
given by $\varphi^{\vee} = \widehat{\varphi}(-\cdot)$.
Let us introduce the continuous wavelet transform. A general reference is provided by the monograph \cite[2.4]{Dau92}.
For $x\in {{\re}^d}$ and $t>0$ we define the unitary dilation and translation operators $\mathcal{D}^{L_2}_{t}$
and $T_x$ by
$$
\mathcal{D}^{L_2}_{t}g = t^{-d/2}g\Big(\frac{\cdot}{t}\Big)\quad\mbox{and}\quad T_xg = g(\cdot-x)\quad,\quad g\in L_2({{\re}^d})\,.
$$
The vector $g$ is said to be the analyzing vector for a function $f\in L_2({{\re}^d})$. The
continuous wavelet transform $W_gf$ is then defined by
$$
W_g f(x,t) = \langle T_x\mathcal{D}^{L_2}_t g, f\rangle\quad,\quad x\in {{\re}^d}, t>0\,,
$$
where the bracket $\langle \cdot, \cdot \rangle$ denotes the
inner product in $L_2({{\re}^d})$.
We call $g$ an admissible wavelet if
$$
c_g:= \int_{{{\re}^d}} \frac{|\widehat{g}(\xi)|^2}{|\xi|^d}\,d\xi < \infty\,.
$$
If this is the case, then the family $\{T_x\mathcal{D}^{L_2}_t g\}_{t>0, x\in {{\re}^d}}$ represents a tight
continuous frame in $L_2({{\re}^d})$ where $C_1 = C_2 = c_g$.
Many considerations in this paper are based on decay results of the continuous wavelet transform $W_g f(x,t)$. This decay
mainly depends on moment conditions of the analyzing vector $g$ as well as on the smoothness of $g$
and the function $f$ to be analyzed, see \cite[Lem.\ A.3]{T10} which is based
on \cite[Lem.\ 1]{Ry99a}.
\subsection{Orthonormal wavelets}
\label{sect:OWT}
\subsubsection*{The Meyer wavelets}
Meyer wavelets were introduced in \cite{Meyer86} and are an important example of wavelets which belong to the Schwartz class $\mathcal{S}(\mathbb{R})$. The scaling function $\psi^0\in\mathcal{S}(\mathbb{R})$ and the wavelet $\psi^1\in\mathcal{S}(\mathbb{R})$ are real, their Fourier transforms are compactly supported, and they fulfill
\begin{align*}
\hat{\psi}^0(0)=(2\pi)^{-1/2}\quad\text{and}\quad {\rm supp \, }\hat{\psi}^1\subset\left[-\frac83\pi,-\frac23\pi\right]\cup\left[\frac23\pi,\frac83\pi\right].
\end{align*}
Due to the support condition we have infinitely many moment conditions \eqref{condphi2} on $\psi^1$, and both functions are rapidly decaying and infinitely often differentiable; see \cite[Section 3.2]{Wo97} for more properties.
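The moment conditions can be read off from the support condition: since $\widehat{\psi}^1$ vanishes identically in a neighborhood of the origin, all derivatives of $\widehat{\psi}^1$ vanish at $\xi=0$, and hence
\[
\int_{\mathbb{R}} x^{\beta}\,\psi^1(x)\,dx = (2\pi)^{1/2}\, i^{\beta}\, \big(D^{\beta}\widehat{\psi}^1\big)(0) = 0 \qquad\text{for all }\beta\in{\N}_0.
\]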
\subsubsection*{Wavelets on ${{\re}^d}$}
In order to treat function spaces on ${{\re}^d}$ let us recall the construction of
a $d$-variate wavelet basis out of a resolution of unity in ${{\re}^d}$, see for
instance Wojtaszczyk \cite{Wo97}. It starts with a scaling function $\psi^0$ and
a wavelet $\psi^1$ belonging to $L_2(\mathbb{R})$.
For $c\in E= \{0,1\}^d$ the function $\psi^c:{{\re}^d} \to \mathbb{R}$ is then defined by the
tensor product
$\psi^c = \bigotimes_{i=1}^d \psi^{c_i}$, i.e., $\psi^c(x) = \prod_{i=1}^d
\psi^{c_i}(x_i)$, and we let $\mathcal{G}_c = \{\psi^c_{(x,t)}\}_{(x,t)\in X}$
be the system
with
\begin{align*}
\begin{aligned}
\psi^c_{(x,t)} = \left\{\begin{array}{rcl}
T_x\mathcal{D}^{L_2}_t \psi^c&,&0<t<1\,,\\
T_x \psi^c&,& t=\infty\,,
\end{array}\right.
\end{aligned}
\quad \text{if $c\neq 0$} \quad\text{and}\quad
\begin{aligned}
\psi^0_{(x,t)} = \left\{\begin{array}{rcl}
0&,&0<t<1\,,\\
T_x \psi^0&,& t=\infty\,.
\end{array}\right.
\end{aligned}
\end{align*}
\section*{Acknowledgement}
The authors would like to thank Felix Voigtlaender for a careful reading of the manuscript and many valuable comments and corrections.
They would further like to thank the anonymous referees for their careful proofreading and many valuable remarks. In particular, we point out Lemma~\ref{lem:Bochner}, where now, based on their input, questions of Bochner measurability are also discussed. Furthermore,
a serious technical issue in the proof of Theorem~\ref{thm:contchar} has been fixed.
M.S.\ would like to thank Holger Rauhut for support during his diploma studies where some ideas of this paper were developed.
\def\ocirc#1{\ifmmode\setbox0=\hbox{$#1$}\dimen0=\ht0 \advance\dimen0
by1pt\rlap{\hbox to\wd0{\hss\raise\dimen0
\hbox{\hskip.2em$\scriptscriptstyle\circ$}\hss}}#1\else {\accent"17 #1}\fi}
\section{Introduction}
It is well known that Dirichlet forms provide an elegant way to characterize Markov processes. Any regular symmetric Dirichlet form admits a symmetric Hunt process (see, for instance, \cite[Theorem 1.5.1, Theorem 3.1.12]{CF}) associated with it. Furthermore, there is a one-to-one correspondence between the family of strongly local regular symmetric Dirichlet forms and the family of diffusion processes with no killing inside. In recent years, Dirichlet form theory has served as a powerful tool to construct processes on irregular spaces. For instance, varieties of strong Markov processes with darning have been constructed by Chen and Fukushima in \cite{Chen, CF}, including one-dimensional absorbing Brownian motion, circular Brownian motion, knotted Brownian motion, multidimensional Brownian motion with darning, diffusions on half-lines merging at one point, etc. In my recent joint work \cite{CL1} with Chen, Brownian motion on spaces with varying dimension is characterized in terms of a Dirichlet form with darning and is studied with an emphasis on its two-sided heat kernel behavior. One of the major difficulties in studying processes constructed on irregular spaces is to describe their behaviors near the singularities.
It is therefore natural to ask whether there is any {\it general} method or criterion for heat kernel bounds that can be applied to
Dirichlet forms constructed on state spaces which possibly contain singularities and therefore do not allow any of the typical tools to work, such as the parabolic Harnack inequality, the Poincar\'{e} inequality, or the volume-doubling property. For example, in \cite{CL1}, none of these properties holds for Brownian motion on spaces with varying dimension due to the inhomogeneity at the darning point(s).
The amount of existing literature answering the question above is very limited.
The established results on heat kernel estimates are mostly under the frameworks of either Laplace-Beltrami operators on Riemannian manifolds (for example, \cite{LSC1, LSC2}), or Dirichlet forms on metric measure spaces
(for example, \cite{GH, GHL, GT}). The majority of these existing results require the underlying spaces to satisfy volume-doubling or other regularity conditions. In this paper, the underlying space is not necessarily equipped with an original metric. Instead, we equip it with the intrinsic metric induced by the Dirichlet form. Without assuming the volume-doubling property of the underlying measure with respect to the intrinsic metric, we give a sharp on-diagonal heat kernel lower bound for general strongly local regular symmetric Dirichlet forms.
A similar problem has been answered in \cite{CG} by Coulhon and Grigor'yan, which gives criteria for pointwise on-diagonal two-sided heat kernel bounds associated with Laplace-Beltrami operators on weighted Riemannian manifolds without assuming the volume-doubling property. The two-sided bound only depends on the local volume form of the space near the particular point. The key ingredients in their paper are the integral estimates of the heat kernels and their time derivatives established in \cite[Theorem 1.1]{gowri} and \cite[Theorem 1]{G1}. Some analogous properties were proved earlier for the fundamental solutions to parabolic equations by Aronson in \cite{Aronson}. In this paper, we also extend these integral estimates further to strongly local Dirichlet spaces.
Throughout this paper, $({\mathcal{E}}, {\mathcal{F}})$ is a Dirichlet form on a real Hilbert space $L^2(E,\mu)$. The underlying space $E$ is a locally compact separable Hausdorff space equipped with a reference measure $\mu$ which is a positive Radon measure with full support on $E$. With the norm $\|u\|_{\mathcal{F}}:=({\mathcal{E}}(u, u)+\|u\|_{L^2}^2)^{1/2}$, ${\mathcal{F}}$ is also a real Hilbert space. The Dirichlet form ${\mathcal{E}}$ is assumed to be regular, symmetric, and strongly local. A Dirichlet form $({\mathcal{E}}, {\mathcal{F}})$ is said to be regular if $C_c(E)\cap {\mathcal{F}}$ is dense both in $({\mathcal{F}}, \|\cdot\|_{\mathcal{F}})$ and in $(C_c(E),\|\cdot \|_\infty)$. It is symmetric if ${\mathcal{E}}(u, v)={\mathcal{E}}(v, u)$ for any $u, v\in {\mathcal{F}}$. It is strongly local if ${\mathcal{E}}(u,v)=0$ whenever $u$ is equal to a constant on a neighborhood of the support of $v$. In other words, ${\mathcal{E}}$ has no killing or jumping part.
As usual we denote the infinitesimal operator associated with ${\mathcal{E}}$ by $\mathcal{L}$. It follows that the family of $\{P_t=e^{\mathcal{L}t}, \, t\ge 0\}$ is a strongly continuous semigroup on $L^2(E, \mu)$, and that there is a unique symmetric diffusion process $X$ associated with $({\mathcal{E}}, {\mathcal{F}})$ whose transition semigroup is $\{P_t\}_{t\ge 0}$. Furthermore, $X$ can start from every point of $E$ outside a properly exceptional set \footnote{A set $\mathcal{N}\subset E$ is called properly exceptional if it is Borel, $\mu(\mathcal{N})=0$ and ${\mathbb{P}}_x(X_t\in \mathcal{N} \text{ for some }t\ge0)=0$ for all $x\in E\setminus \mathcal{N}$ (see \cite[p.134 and Theorem 4.1.1 on p.137]{FOT}).} denoted by $\mathcal{N}$. A family of functions $\{p(t,x,y)\}_{t\ge 0, x,y\in E\setminus \mathcal{N}}$ is called the heat kernel of $({\mathcal{E}}, {\mathcal{F}})$ if for all $t>0$ and $\mu-$a.e. $y\in E$,
\begin{equation}\label{heat-kernel-definition}
P_tf(y)=\int_E p(t,x, y)f(x)\mu(dx), \quad \text{for every }f\in L^2(E).
\end{equation}
The main result of this paper is the following theorem.
\begin{thm}\label{main-result}
Let $({\mathcal{E}}, {\mathcal{F}})$ be a strongly local regular symmetric Dirichlet form satisfying Assumption \ref{strong-regularity} and Nash-type inequality \eqref{Nash-inequality-I}. Fix $z\in E\setminus \mathcal{N}$ where $\mathcal{N}$ is a properly exceptional set. With respect to the intrinsic metric induced by $({\mathcal{E}}, {\mathcal{F}})$, assume that for all $r>0$, $\mu(B(z,r))\le v(r)$, where $v(r)$ is a continuous monotonically increasing function satisfying doubling property in the following sense: There exists some $A>0$ such that
\begin{equation*}\label{doubling}
v(2r)\le Av(r), \quad \text{for all }r>0.
\end{equation*}
Suppose also that for some $C_1>0$, $T\in (0, \infty]$,
\begin{equation*}\label{UPE}
p(t,z,z)\le \frac{C_1}{v(\sqrt{t})}, \quad t\in (0, T).
\end{equation*}
Then there exists $C_2>0$ such that
\begin{equation*}\label{LE}
p(t,z,z)\ge \frac{C_2}{v(\sqrt{t})}, \quad t\in (0, T).
\end{equation*}
\end{thm}
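For orientation, two simple profiles satisfying the assumptions on $v$ (listed here only as illustrations, with $c,\alpha,\alpha_1,\alpha_2$ arbitrary positive constants) are
\[
v(r)=c\,r^{\alpha}
\qquad\text{and}\qquad
v(r)=c\,\big(r^{\alpha_1}\mathbf{1}_{\{r\le 1\}}+r^{\alpha_2}\mathbf{1}_{\{r>1\}}\big),
\]
both continuous and monotonically increasing, with doubling constants $A=2^{\alpha}$ and $A=\max\{2^{\alpha_1},2^{\alpha_2}\}$, respectively. Profiles of the second type arise naturally when different exponents govern the small-scale and the large-scale volume growth near $z$.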
Note that the definition of intrinsic metric is given in \eqref{def-instrinsic-metric}, and Assumption \ref{strong-regularity} is made based on that. The intrinsic metric is the metric under which two-sided Gaussian-type heat kernel bounds can be characterized by parabolic Harnack inequality or the conjunction of volume-doubling property and Poincar\'{e} inequality. See \cite{St3}. Finally we briefly explain the necessity of imposing Assumption \ref{strong-regularity} and Nash-type inequality. Indeed, Nash-type inequality is a natural assumption to
ensure that the heat kernel associated with the Dirichlet form exists. Assumption \ref{strong-regularity}, on the other hand, guarantees that intrinsic distance functions are non-degenerate and in the local Dirichlet form domain, and that cutoff distance functions (with respect to the intrinsic metric) are in ${\mathcal{F}}$. More details will be given in Section \ref{S:2}. For more delicate discussion on Assumption \ref{strong-regularity} and its variations, one may refer to \cite{Sturm1} and \cite{St3}.
The rest of the paper is organized as follows. In Section \ref{S:2}, we briefly introduce the definitions and some basic properties of the energy measures associated with strongly local regular symmetric Dirichlet forms. Then we give the definition of the intrinsic metric induced by Dirichlet forms. Using the Nash-type inequality, we obtain the existence of the heat kernel $\{p(t,x,y)\}_{t\ge 0, x,y\in E\setminus \mathcal{N}}$ and establish a rough off-diagonal heat kernel upper bound which follows from Davies' method. This (rough) upper bound need not be of the volume form $v(\sqrt{t})^{-1}$ along the diagonal $\{x,y\in E\setminus \mathcal{N}:\, x=y\}$. Theorem \ref{main-result} will be proved in Section \ref{S:3}.
\section{Preliminary}\label{S:2}
It is known that any strongly local symmetric Dirichlet form $({\mathcal{E}}, {\mathcal{F}})$ can be written in terms of the energy measure $\Gamma$ as follows:
$$
{\mathcal{E}}(u, v)=\int_E d\Gamma(u, v), \quad u, v\in {\mathcal{F}},
$$
where $\Gamma$ is a positive semidefinite symmetric bilinear form on ${\mathcal{F}}$ with values being signed Radon measures on $E$, which is also called the energy measure. To be more precise, we first define for every $u\in {\mathcal{F}}\cap L^\infty(E)$ and every $\phi\in {\mathcal{F}}\cap \mathcal{C}_0(E)$
$$
\int_E \phi \,d\Gamma(u, u)={\mathcal{E}}(u, \phi u)-\frac{1}{2}{\mathcal{E}}(u^2, \phi).
$$
The quadratic form $u\mapsto \Gamma(u, u)$ can be extended to ${\mathcal{F}}$ using the approximating sequence of truncations $u_n:=(-n)\vee (u\wedge n)$. Recall that the local Dirichlet space is defined as
\begin{align}
{\mathcal{F}}_{\text{loc}}:=\{&u: \text{ for every relatively compact open set $D$, there exists some $v\in {\mathcal{F}}$ such that } \nonumber
\\
&v|_{D}=u,\, \mu-a.e. \}.\label{def-local-Dirichlet-space}
\end{align}
It follows that every $u\in {\mathcal{F}}_{\text{loc}}$ admits a $\mu$-version which is quasi-continuous\footnote{A function $f$ is called``${\mathcal{E}}$-quasi-continuous" if for any $\epsilon>0$, there is an open set $O$ with capacity less than $\epsilon$ such that $f|_{E\setminus O}$ is continuous (See \cite[\S 2.3 on p.77, Theorem 3.17 on p.96, and Theorem 3.3.3 on p.107]{CF}).}. Furthermore, the domain of the map $u\mapsto \Gamma(u, u)$ can be extended to ${\mathcal{F}}_{\text{loc}}$ (see \cite[Theorem 4.3.10(ii), p.248-249]{CF}). By polarization, for $u, v\in {\mathcal{F}}_{\text{loc}}$,
\begin{equation*}
\Gamma(u, v):=\frac{1}{4}\left(\Gamma(u+v, u+v)-\Gamma(u-v, u-v)\right)
\end{equation*}
is defined as a signed Radon measure.
The following Cauchy-Schwarz inequality is satisfied by energy measures, which can be found in \cite[Appendix]{Sturm1}.
\begin{prop}\label{Cauchy-Schwarz}
Let \emph{$u, v\in {\mathcal{F}}_{\text{loc}}$}. For $f, g\in L^\infty(E)$ that are quasi-continuous, it holds
\begin{align}
\int_E fgd\Gamma(u, v)&\le \left(\int_E f^2 d\Gamma (u, u)\right)^{\frac{1}{2}}\left(\int_E g^2 d\Gamma (v, v)\right)^{\frac{1}{2}} \le \frac{1}{2}\left(\int_E f^2 d\Gamma(u, u)+\int_E g^2d\Gamma(v, v)\right).\label{Cauchy-Schwarz-energy-measure}
\end{align}
\end{prop}
Energy measures satisfy the following properties called Leibniz rule and chain rule for strongly local Dirichlet spaces. See \cite[Appendix]{Sturm1} or \cite[Chapter 4]{CF}.
\begin{thm}\label{chain-product-rule} Let $({\mathcal{E}}, {\mathcal{F}})$ be a strongly local regular Dirichlet space. The following properties hold:
\begin{description}
\item{\emph{(i)}} For any \emph{$u,v\in {\mathcal{F}}_{\text{loc}}\cap L^\infty(E)$} and \emph{$w\in {\mathcal{F}}_{\text{loc}}$},
$$
d\Gamma(uv, w)=\tilde{u}d\Gamma(v, w)+\tilde{v}d\Gamma(u, w),
$$ where $\tilde{u}$, $\tilde{v}$ are the quasi-continuous versions of $u$ and $v$.
\item{\emph{(ii)}} Let $\Phi\in C_b^1({\mathbb{R}})$ with bounded derivative $\Phi '$. Then \emph{$u\in {\mathcal{F}}_{\text{loc}}$} implies \emph{$\Phi(u)\in {\mathcal{F}}_{\text{loc}}$} and
$$
d\Gamma( \Phi(u), v)=\Phi '(u)\,d\Gamma( u,v),
$$
for all \emph{$v\in {\mathcal{F}}_{\text{loc}}\cap L^\infty_{\text{loc}}(E)$}.
\end{description}
\end{thm}
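A particular consequence of the chain rule, which will be used in the proof of Proposition \ref{P:upper-bound-off-diag-Nash} below, is the following: if $u\in {\mathcal{F}}_{\text{loc}}$ is bounded, then applying (ii) with a function $\Phi\in C_b^1({\mathbb{R}})$ that coincides with $e^s$ on the (bounded) range of $u$, first with $v=e^{u}$ and then, after using the symmetry of $\Gamma$, with $v=u$, gives
\[
d\Gamma(e^{u}, e^{u}) = e^{u}\, d\Gamma(u, e^{u}) = e^{2u}\, d\Gamma(u, u).
\]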
To introduce heat kernels associated with Dirichlet spaces, we first introduce the intrinsic metric $d$ on $E$ induced by the energy measure $\Gamma$:
\begin{equation}\label{def-instrinsic-metric}
d(x, y):=\sup\{u(x)-u(y):\, u\in {\mathcal{F}}_{\text{loc}}\cap \mathcal{C}(E), \, \Gamma(u, u)\le \mu \text{ on }E\},
\end{equation}
where $\Gamma(u, u)\le \mu$ should be interpreted as follows: the energy measure $\Gamma(u, u)$ is absolutely continuous with respect to the underlying measure $\mu$ with Radon-Nikodym derivative $d\Gamma(u, u)/d\mu \le 1$ $\mu$-a.e. In fact, the Radon-Nikodym derivative $d\Gamma(u, u)/d\mu$ plays the role of the square of the gradient of $u\in {\mathcal{F}}_{\text{loc}}$.
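For orientation only, consider the classical Dirichlet form ${\mathcal{E}}(u,v)=\int_{{\mathbb{R}}^d}\nabla u\cdot\nabla v\,dx$ on $L^2({\mathbb{R}}^d,dx)$, for which $d\Gamma(u,u)=|\nabla u|^2\,dx$. In this case the definition \eqref{def-instrinsic-metric} recovers the Euclidean metric:
\[
d(x,y)=\sup\big\{u(x)-u(y):\ u \text{ continuous},\ |\nabla u|\le 1 \text{ a.e.}\big\}=|x-y|.
\]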
Generally speaking, $d$ is a pseudo metric instead of a metric, which means it may be degenerate ($d(x,y)=0$ or $\infty$ for $x\neq y$). To ensure $d$ is a metric and all cutoff distance functions are in ${\mathcal{F}}$, we make the following fundamental assumption throughout this paper:
\begin{assumption}\label{strong-regularity}
The topology induced by $d(\cdot, \cdot)$ in \eqref{def-instrinsic-metric} coincides with the original one, and all balls of the form $B_r(z):=\{x\in E: d(x,z)<r\}$ are relatively compact.
\end{assumption}
This assumption in particular implies that $d$ is non-degenerate. The following fundamental lemma has been proved in \cite[Lemma 1]{Sturm1} for strongly local regular Dirichlet spaces satisfying Assumption \ref{strong-regularity}.
\begin{lem}\label{property-energy-meas}
Let $({\mathcal{E}}, {\mathcal{F}})$ be a strongly local regular symmetric Dirichlet form satisfying Assumption \ref{strong-regularity}. For every $z\in E$, the distance function $d_z: \, x\mapsto d(x,z)$ is in \emph{${\mathcal{F}}_{\text{loc}}\cap \mathcal{C}(E)$} and satisfies
\begin{equation}\label{L:2.4}
\Gamma (d_z, d_z)\le \mu.
\end{equation}
\end{lem}
\begin{rmk}\label{R:2.5}
Indeed, Lemma \ref{property-energy-meas} holds without the assumption that all balls are relatively compact. However, given that all open balls are relatively compact, Lemma \ref{property-energy-meas} implies that all cutoff functions $x\mapsto (r-d(x,z))_+$ are in ${\mathcal{F}}\cap L^\infty$ and satisfy \eqref{L:2.4}.
\end{rmk}
We also assume throughout this paper that the Dirichlet form $({\mathcal{E}}, {\mathcal{F}})$ satisfies the following Nash-type inequality: There exist some $\gamma>0$, $\delta\ge 0$ and some $A>0$,
\begin{equation}\label{Nash-inequality-I}
\|f\|_2^{2+4/\gamma}\le A\left({\mathcal{E}}(f, f)+\delta \|f\|_2^2\right)\|f\|_1^{4/\gamma}, \quad \text{for all }f\in {\mathcal{F}}.
\end{equation}
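For orientation, the prototype of \eqref{Nash-inequality-I} is the classical Nash inequality for the Dirichlet integral on ${\mathbb{R}}^d$, which corresponds to $\gamma=d$ and $\delta=0$ (here $C_d$ denotes a dimensional constant):
\begin{equation*}
\|f\|_2^{2+4/d}\le C_d\,\Big(\int_{{\mathbb{R}}^d}|\nabla f|^2\,dx\Big)\,\|f\|_1^{4/d},\qquad f\in W^{1,2}({\mathbb{R}}^d)\cap L^1({\mathbb{R}}^d).
\end{equation*}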
The existence of the heat kernel along with its short time off-diagonal estimate follows immediately from Nash inequality, as the next proposition claims.
\begin{prop}\label{P:upper-bound-off-diag-Nash}
Let $({\mathcal{E}}, {\mathcal{F}})$ be a strongly local regular symmetric Dirichlet form satisfying Nash-type inequality \eqref{Nash-inequality-I} with some $\gamma>0, \delta\ge 0$ and $A>0$.
\begin{description}
\item{\emph{(i)}} There is a properly exceptional set $\mathcal{N}\subset E$ of $X$ such that there is a positive symmetric kernel $p(t,x,y)$ defined on $(0, \infty)\times (E\setminus \mathcal{N})\times (E\setminus \mathcal{N})$ satisfying \eqref{heat-kernel-definition} and
\begin{equation*}
p(t+s, x, y)=\int_{E} p(t,x,z)p(s, z, y)\mu(dz), \quad t, s>0, \, x, y\in E\setminus \mathcal{N}.
\end{equation*}
Additionally, for every $t>0$, $y\in E\setminus \mathcal{N}$, the map $x\mapsto p(t,x,y)$ is quasi-continuous on $E$.
\item{\emph{(ii)}} There exist $C_1, C_2>0$ such that for every $x\in E\setminus \mathcal{N}$,
\begin{equation}\label{e:2.6}
p(t,x,y)\le \frac{C_1}{t^{\gamma/2}}e^{-C_2d(x,y)^2/t}, \quad t\in (0, 1], \, \mu-a.e. \, y,
\end{equation}
for the same $\gamma$ as in Nash inequality \eqref{Nash-inequality-I}.
\end{description}
\end{prop}
\begin{proof}
It follows from \cite[Theorem 2.1]{CKS} that for some $c_1>0$,
\begin{equation*}
\|P_t f\|_\infty \le \frac{c_1 e^{\delta t}}{t^{\gamma /2}} \|f\|_1, \quad f\in L^1(E), \, t>0.
\end{equation*}
Therefore (i) follows immediately from \cite[Theorem 3.1]{BBCK}. The proof to (ii) follows from a standard argument using Davies's method:
Fix $x_0, y_0\in E\setminus \mathcal{N}$, $0<t_0\le 1$. Set a constant $\alpha:=d(y_0,x_0)/4t_0$ and
$\displaystyle{\psi (x):=\alpha \cdot d(x, x_0)}$.
Then we define $\psi_n(x)=\psi(x)\wedge n$. Note that for $\mu$-a.e. $x\in E\setminus \mathcal{N}$, by Theorem \ref{chain-product-rule} and Lemma \ref{property-energy-meas},
\begin{displaymath}
e^{-2\psi_n (x)} \frac{d}{d\mu}\Gamma\left( e^{\psi_n (x)}, e^{\psi_n(x)} \right)=\frac{d}{d\mu}\Gamma \left(\psi_n (x), \psi_n(x)\right) \leq \alpha^2.
\end{displaymath}
Similarly,
$$
e^{2\psi_n (x)} \frac{d}{d\mu}\Gamma\left( e^{-\psi_n (x)}, e^{-\psi_n(x)} \right) \leq \alpha^2.
$$
By \cite[Theorem 3.25]{CKS}, there exists some $c_2>0$ such that for every $x\in E\setminus \mathcal{N}$,
\begin{equation*}
p(t,x,y)\le \frac{c_2e^{2\delta t}}{t^{\gamma/2}}\exp\left(-|\psi(y)-\psi(x)|+2t|\alpha|^2\right), \quad t>0, \, \mu-a.e.\, y.
\end{equation*}
i.e.,
\begin{equation}\label{proveoffdiagUB}
p(t,x,y)\le \frac{c_2e^{2\delta }}{t^{\gamma/2}}\exp\left(-|\psi(y)-\psi(x)|+2t|\alpha|^2\right), \quad 0<t\le 1, \, \mu-a.e.\, y.
\end{equation}
Taking $t=t_0, x=x_0$
and $y=y_0$ in \eqref{proveoffdiagUB} completes the proof.
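Indeed, spelling out the arithmetic behind this last step: with $\alpha=d(y_0,x_0)/4t_0$ we have $|\psi(y_0)-\psi(x_0)|=\alpha\,d(y_0,x_0)$, so the exponent in \eqref{proveoffdiagUB} becomes
\[
-\alpha\,d(y_0,x_0)+2t_0\alpha^2=-\frac{d(y_0,x_0)^2}{4t_0}+\frac{d(y_0,x_0)^2}{8t_0}=-\frac{d(y_0,x_0)^2}{8t_0},
\]
which is \eqref{e:2.6} with $C_2=1/8$ and $C_1=c_2e^{2\delta}$.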
\end{proof}
With the above heat kernel upper bound, the following well-known result can be justified using spectral theory. See \cite[Example 4.10]{GH}.
\begin{lem}\label{L:2.7}
Fix $y\in E\setminus \mathcal{N}$. For every $t>0$, the map $x\mapsto p(t,x,y)$ is in the domain of the infinitesimal generator $\mathcal{L}$ and satisfies the heat equation
\begin{equation*}
\partial_t p(t,x,y)=\mathcal{L}p(t,x,y),
\end{equation*}
where $\partial_t p(t,x,y)$ is the strong derivative of the map $t\mapsto p(t,x,y)$ in $L^2$.
\end{lem}
This immediately yields that the map $x\mapsto p(t,x,y)$ is in ${\mathcal{F}}$ because $\mathcal{D}(\mathcal{L})$ is a (dense) subset of ${\mathcal{F}}=\mathcal{D}(\sqrt{-\mathcal{L}})$.
\section{Proof of the On-diagonal Heat Kernel Lower Bounds}\label{S:3}
For the rest of the paper, a properly exceptional set $\mathcal{N}$ is always fixed to be the same as in Proposition \ref{P:upper-bound-off-diag-Nash}. To prove Theorem \ref{main-result}, we introduce the following quantity $E_D(z, t)$ for notation convenience. Let $z\in E\setminus \mathcal{N}$ be fixed and $d$ be the intrinsic distance on $E$. Set
\begin{equation*}\label{def-E(z,t)}
E_D(z,t):=\int_E p^2(t,z,x)\exp\left(\frac{d(z,x)^2}{Dt}\right)\mu(dx).
\end{equation*}
For fixed $z\in E$, $R>0$, we let $f_R(x):=(R-d(z,x))_+$.
By Lemma \ref{property-energy-meas} and Remark \ref{R:2.5}, $f_R$ is in ${\mathcal{F}}\cap L^\infty$. To establish an upper bound for $E_D(z,t)$, we first claim the next three propositions.
\begin{prop}\label{g-decreasing}
Fix $z\in E\setminus \mathcal{N}$ and $0<T<\infty$. For any $R>0$, $D\ge 2$, the map
\begin{equation*}\label{def-f(t)}
t\mapsto \int_E p^2(t,z,x)e^{f_R(x)^2/D(t-T)}\mu(dx)
\end{equation*}
is non-increasing on $t\in (0, T)$.
\end{prop}
\begin{proof}
In this proof, when there is no confusion, we suppress the subscript $R$ from $f_R$ for notational simplicity. Indeed, we show that the derivative of the map exists and is always non-positive. For this purpose, for every $t\in (0, T)$, we write
\begin{align}
&\quad\, \frac{1}{s-t}\left(\int_E p^2(s,z,x)e^{f(x)^2/D(s-T)}\mu(dx)-\int_E p^2(t,z,x)e^{f(x)^2/D(t-T)}\mu(dx) \right)\nonumber
\\
&=\int_E \frac{1}{s-t}\left(p^2(s,z,x)e^{f(x)^2/D(s-T)}-p^2(t,z,x)e^{f(x)^2/D(s-T)}\right)\mu(dx) \nonumber
\\
&+\int_E \frac{1}{s-t}\left(p^2(t,z,x)e^{f(x)^2/D(s-T)}-p^2(t,z,x)e^{f(x)^2/D(t-T)}\right)\mu(dx) . \label{212}
\end{align}
For the first term on the right hand side of \eqref{212}, as $s\rightarrow t$, one has
\begin{align}
&\quad \lim_{s\rightarrow t}\int_E \frac{1}{s-t}\left(p(s,z,x)-p(t,z,x)\right)\left(p(s,z,x)+p(t,z,x)\right)e^{f(x)^2/D(s-T)}\mu(dx) \nonumber
\\
&=\int_E \mathcal{L} p(t,z,x)\cdot 2 p(t,z,x)e^{f(x)^2/D(t-T)}\mu(dx),\label{309}
\end{align}
because $\frac{1}{s-t}\left(p(s,x,z)-p(t,x,z)\right)\rightarrow \mathcal{L}p(t,x,z)$ strongly in $L^2$ in view of Lemma \ref{L:2.7}, $p(s,x,z)+p(t,x,z)\rightarrow 2p(t,x,z)$ also strongly in $L^2$, and $e^{f(x)^2/D(s-T)}\rightarrow e^{f(x)^2/D(t-T)}$ in $L^\infty$. To take the limit as $s\rightarrow t$ for the second term on the right hand side of \eqref{212}, with the heat kernel upper bound shown in Proposition \ref{P:upper-bound-off-diag-Nash}, it follows immediately from the dominated convergence theorem that
\begin{align}
&\quad \lim_{s\rightarrow t}\int_E \frac{1}{s-t}\left(p^2(t,z,x)e^{f(x)^2/D(s-T)}-p^2(t,z,x)e^{f(x)^2/D(t-T)}\right)\mu(dx) \nonumber
\\
&=\int_E p^2(t,z,x)\frac{\partial}{\partial t}e^{f(x)^2/D(t-T)}\mu(dx).\label{310}
\end{align}
Now letting $s\rightarrow t$ in \eqref{212} by replacing the two terms in the last display of \eqref{212} with \eqref{309} and \eqref{310} respectively yields
\begin{align}
&\quad \frac{d}{dt}\int_E p^2(t,z,x)e^{f(x)^2/D(t-T)}\mu(dx) \nonumber
\\
&=\int_E 2p(t,z,x) \left(\mathcal{L}p(t,z,x)\right)e^{f(x)^2/D(t-T)}\mu(dx) +\int_E p^2(t,z,x)\frac{d}{dt}e^{f(x)^2/D(t-T)}\mu(dx)\nonumber
\\
&=-2{\mathcal{E}}\left(p(t,z,x),\,p(t,z,x)e^{f(x)^2/D(t-T)} \right)+\int_E p^2(t,z,x)\frac{d}{dt}e^{f(x)^2/D(t-T)}\mu(dx).\label{315}
\end{align}
Note in the last ``$=$" above, $p(t,z,x)e^{f(x)^2/D(t-T)}$ is in ${\mathcal{F}}$ because both $p(t,z,x)$ and $e^{f(x)^2/D(t-T)}$ are in ${\mathcal{F}}\cap L^\infty$ (see, for example, \cite[Theorem 1.4.2]{FOT}). To proceed with the computation, we first rewrite the first term in the last display of \eqref{315} in terms of energy measure as follows:
\begin{align}
&\quad -2 {\mathcal{E}}\left(p(t,z,x), p(t,z,x)e^{f(x)^2/D(t-T)}\right) =-2 \int_E d\Gamma\left(p(t,z,x), \,p(t,z,x)e^{f(x)^2/D(t-T)}\right) \nonumber
\\
&=-2 \int_E p(t,z,x) d\Gamma \left(p(t,z,x), e^{f(x)^2/D(t-T)}\right)-2 \int_E e^{f(x)^2/D(t-T)}d\Gamma(p(t,z,x), p(t,z,x)) \nonumber
\\
&=-2 \int_E p(t,z,x)e^{f(x)^2/D(t-T)}d\Gamma \left(p(t,z,x),\, \frac{f(x)^2}{D(t-T)}\right) \nonumber
\\
&\quad -2 \int_E e^{f(x)^2/D(t-T)}d\Gamma \left(p(t,z,x), p(t,z,x)\right) \nonumber
\\
&\le 2\left(\int_E e^{f(x)^2/D(t-T)}d\Gamma \left(p(t,z,x),\,p(t,z,x) \right)\right)^{1/2} \nonumber
\\
&\quad \times \left(\int_E p^2(t,z,x)e^{f(x)^2/D(t-T)}d\Gamma \left(\frac{f(x)^2}{D(t-T)}, \frac{f(x)^2}{D(t-T)} \right)\right)^{1/2} \nonumber
\\
&\quad -2 \int_E e^{f(x)^2/D(t-T)}d\Gamma \left(p(t,z,x), p(t,z,x)\right),\label{eq:3.4}
\end{align}
where in the third ``$=$" above we use Theorem \ref{chain-product-rule} and the last ``$\le$" is justified by \eqref{Cauchy-Schwarz-energy-measure}. For the second term in the last display of \eqref{315}, it can first be observed that by Theorem \ref{chain-product-rule} and Lemma \ref{property-energy-meas},
\begin{align}
\frac{d}{d\mu}\Gamma \left(\frac{f(x)^2}{D(t-T)}, \frac{f(x)^2}{D(t-T)}\right)&=\frac{4f(x)^2}{D^2(t-T)^2}\frac{d}{d\mu}\Gamma (f(x), f(x)) \nonumber
\\
&\le \frac{4f(x)^2}{D^2(t-T)^2}=-\frac{4}{D}\frac{d}{dt}\left(\frac{f(x)^2}{D(t-T)}\right). \label{243}
\end{align}
Consequently,
\begin{align}
&\quad \int_E p^2(t,z,x)\frac{d}{dt}e^{f(x)^2/D(t-T)}\mu(dx) =\int_E p^2(t,z,x)e^{f(x)^2/D(t-T)}\frac{d}{dt}\left(\frac{f(x)^2}{D(t-T)}\right)\mu(dx)\nonumber
\\
&\stackrel{\eqref{243}}{\le}-\frac{D}{4 }\int_E p^2(t,z,x)e^{f(x)^2/D(t-T)}\, d\Gamma\left(\frac{f(x)^2}{D(t-T)},\, \frac{f(x)^2}{D(t-T)}\right). \label{eq:3.6}
\end{align}
Now replacing the two terms in the last display of \eqref{315} with \eqref{eq:3.4} and \eqref{eq:3.6} gives
\begin{align*}
&\quad \,\frac{d}{dt}\int_E p^2(t,z,x)e^{f(x)^2/D(t-T)}\mu(dx)
\\
&\le 2\left(\int_E e^{f(x)^2/D(t-T)}d\Gamma \left(p(t,z,x),\,p(t,z,x) \right)\right)^{1/2}
\\
&\quad \times \left(\int_E p^2(t,z,x)e^{f(x)^2/D(t-T)}d\Gamma \left(\frac{f(x)^2}{D(t-T)}, \frac{f(x)^2}{D(t-T)} \right)\right)^{1/2}
\\
&\quad -2\int_E e^{f(x)^2/D(t-T)}d\Gamma \left(p(t,z,x),\,p(t,z,x)\right)
\\
&\quad -\frac{D}{4 }\int_E p^2(t,z,x)e^{f(x)^2/D(t-T)}\, d\Gamma\left(\frac{f(x)^2}{D(t-T)},\, \frac{f(x)^2}{D(t-T)}\right)\le 0,
\end{align*}
which is justified by the second inequality of \eqref{Cauchy-Schwarz-energy-measure} because $D\ge 2$: indeed, writing $X:=\int_E e^{f(x)^2/D(t-T)}d\Gamma \left(p(t,z,x),p(t,z,x)\right)$ and $Y:=\int_E p^2(t,z,x)e^{f(x)^2/D(t-T)}d\Gamma \left(\frac{f(x)^2}{D(t-T)},\frac{f(x)^2}{D(t-T)}\right)$, the right-hand side above equals $2\sqrt{XY}-2X-\frac{D}{4}Y$, which is non-positive since $2\sqrt{XY}\le 2X+\frac{D}{4}Y$ whenever $D\ge 2$.
\end{proof}
The next proposition says that the on-diagonal heat kernel is non-increasing in $t$.
\begin{prop}\label{on-diag-HK-mono-dec}
$p(t,z, z)$ is non-increasing in $t\in (0, \infty)$, for all $z\in E\setminus \mathcal{N}$.
\end{prop}
\begin{proof} This follows from
\begin{align*}
\frac{d}{dt}p(t,z, z)&=\frac{d}{dt}\int_E p(t/2, z,x)^2 \mu(dx)
\\
&=\int_E \mathcal{L} p(t/2,z,x)p(t/2,z,x)\mu (dx)
\\
&=-\,{\mathcal{E}}\left(p(t/2, z,x), p(t/2,z,x)\right) \le 0,
\end{align*}
where the second ``$=$" can be justified in an analogous manner to the proof of Proposition \ref{g-decreasing}, in view of the fact that the strong $L^2$-derivative $\frac{\partial}{\partial t}p(t, z, x)$ exists on $(0, \infty)$ and equals $\mathcal{L} p(t,z,x)$, so that by the chain rule $\frac{d}{dt}p(t/2,z,x)=\frac{1}{2}\mathcal{L} p(t/2,z,x)$.
\end{proof}
The following proposition is comparable to the integral estimate in \cite[Lemma 3.1]{gowri}.
\begin{prop}\label{upb-integral-psquare}
Fix $z\in E\setminus \mathcal{N}$. Assume that for all $r>0$, $\mu(B(z,r))\le v(r)$, where $v(r)$ is a continuous monotonically increasing function satisfying doubling property.
Suppose for some $C_1>0$, $T\in (0, \infty]$,
\begin{equation*}
p(t,z,z)\le \frac{C_1}{v(\sqrt{t})}, \quad t\in (0, T).
\end{equation*}
There exist constants $C_2, C_3>0$ such that
\begin{equation}\label{ineq-upb-integral-psquare}
\int_{E\setminus B(z, R)} p^2(t, z, x)\mu(dx) \le \frac{C_2}{v(\sqrt{t})} e^{-C_3R^2/t}, \quad \text{for all } t\in (0, T), R>0.
\end{equation}
\end{prop}
\begin{proof}
In Proposition \ref{g-decreasing}, taking $D=2$ yields that for any $0<\tau<t<\tau'<T$ and any $R>0$,
\begin{equation*}
\int_E p^2(t,z,x)e^{f_R(x)^2/2(t-\tau')}\mu(dx) \le \int_E p^2(\tau, z,x)e^{f_R(x)^2/2(\tau-\tau')}\mu(dx).
\end{equation*}
Rewriting each side above as a sum of two integrals over $B(z, R)$ and $E\setminus B(z, R)$ respectively yields that for $\rho<R$,
\begin{align*}
&\quad\, \int_{E\setminus B(z,R)} p^2(t,z,x)\mu(dx)
\\
&\le \int_{B(z,R)} p^2(\tau, z,x)e^{(R-d(z,x))^2/2(\tau-\tau')}\mu(dx)+\int_{E\setminus B(z, R)} p^2(\tau, z, x)\mu(dx)
\\
&\le \int_{B(z,\rho)} p^2(\tau, z,x)e^{(R-d(z,x))^2/2(\tau-\tau')}\mu(dx)+\int_{E\setminus B(z, \rho)} p^2(\tau, z, x)\mu(dx).
\end{align*}
We observe that since $\tau<\tau'$ and $d(z,x)\le \rho<R$ on $B(z,\rho)$, the first integral in the last display is bounded by
$$
\exp \left(-\frac{(R-\rho)^2}{2(\tau'-\tau)}\right) \int_{B(z, \rho)} p^2(\tau, z,x)\mu(dx).
$$
Therefore, by letting $\tau'\rightarrow t+$ and using semigroup property, we get
\begin{align*}
&\quad \,\int_{E\setminus B(z,R)} p^2(t,z,x)\mu(dx)
\\
& \le \exp \left(-\frac{(R-\rho)^2}{2(t-\tau)}\right) \int_{B(z, \rho)} p^2(\tau, z,x)\mu(dx) +\int_{E\setminus B(z, \rho)} p^2(\tau, z,x)\mu(dx)
\\
&\le \exp \left(-\frac{(R-\rho)^2}{2(t-\tau)}\right) \int_E p^2(\tau, z,x)\mu(dx) +\int_{E\setminus B(z, \rho)} p^2(\tau, z,x)\mu(dx)
\\
&\le \exp\left(-\frac{(R-\rho)^2}{2(t-\tau)}\right)p(2\tau, z, z)+\int_{E\setminus B(z, \rho)}p^2(\tau, z, x)\mu(dx)
\\
&\le \exp\left(-\frac{(R-\rho)^2}{2(t-\tau)}\right)p(\tau, z, z)+\int_{E\setminus B(z, \rho)}p^2(\tau, z, x)\mu(dx)
\\
&\le \frac{1}{v(\sqrt{\tau})} \exp\left(-\frac{(R-\rho)^2}{2(t-\tau)}\right)+\int_{E\setminus B(z, \rho)}p^2(\tau, z, x)\mu(dx),
\end{align*}
where the second last ``$\le $" is justified by Proposition \ref{on-diag-HK-mono-dec}. Now we consider two decreasing sequences $t_k=t\cdot 2^{-k}$ and $R_k=\left(\frac{1}{2}+\frac{1}{k+2}\right)R$ for $k=0, 1, \cdots$. Replacing $t, \tau, R, \rho$ with $t_{k-1}, t_k, R_{k-1}, R_k$ gives that for $k\ge 1$,
\begin{align*}
&\quad \int_{E\setminus B(z,R_{k-1})} p^2(t_{k-1},z,x)\mu(dx)
\\
& \le \frac{1}{v(\sqrt{t_k})} \exp\left(-\frac{(R_{k-1}-R_k)^2}{2(t_{k-1}-t_k)}\right)+\int_{E\setminus B(z, R_k)}p^2(t_k, z, x)\mu(dx)
\end{align*}
Summing the above inequality in $k$ from $1$ to $n$ and canceling the common terms from both sides gives
\begin{equation}\label{1223}
\int_{E\setminus B(z,R)} p^2(t,z,x)\mu(dx) \le \sum_{k=1}^n \frac{1}{v(\sqrt{t_k})} \exp\left(-\frac{(R_{k-1}-R_k)^2}{2(t_{k-1}-t_k)}\right)+\int_{E\setminus B(z, R_n)}p^2(t_n, z, x)\mu(dx).
\end{equation}
Observing that $t_n\downarrow 0$ and $R_n\downarrow R/2$, in view of Proposition \ref{P:upper-bound-off-diag-Nash}, we get
\begin{align}
\lim_{n\rightarrow \infty}\int_{E\setminus B(z, R_n)}p^2(t_n, z, x)\mu(dx) &\le \lim_{n\rightarrow \infty}\int_{E\setminus B(z, R/2)}p^2(t_n, z, x)\mu(dx) \nonumber
\\
& =\int_{E\setminus B(z, R/2)}\lim_{n\rightarrow \infty}p^2(t_n, z, x)\mu(dx)=0. \label{75424}
\end{align}
Hence, letting $n\rightarrow \infty$ in \eqref{1223} shows
\begin{equation*}
\int_{E\setminus B(z,R)} p^2(t,z,x)\mu(dx) \le \sum_{k=1}^\infty \frac{1}{v(\sqrt{t_k})} \exp\left(-\frac{(R_{k-1}-R_k)^2}{2(t_{k-1}-t_k)}\right).
\end{equation*}
By the doubling property of the function $v$, it holds for some $c_1>0$ that
\begin{equation*}
v(\sqrt{t})\le A^{k/2+1} v(\sqrt{t_k}) \le e^{c_1 k} v(\sqrt{t_k}),
\end{equation*}
where $A$ is the same as in Theorem \ref{main-result}. It follows that
\begin{align}
\int_{E\setminus B(z, R)}p^2(t, z, x)\mu(dx) &\le \sum_{k=1}^\infty \frac{1}{v(\sqrt{t_k})} \exp\left(-\frac{(R_{k-1}-R_k)^2}{2(t_{k-1}-t_k)}\right) \nonumber
\\
&\le \sum_{k=1}^\infty \frac{e^{c_1k}}{v(\sqrt{t})}\exp \left(-\frac{\left(\frac{1}{k+1}-\frac{1}{k+2}\right)^2R^2}{2t\cdot 2^{-k}}\right)\nonumber
\\
&\le \sum_{k=1}^\infty \frac{1}{v(\sqrt{t})}\exp \left(c_1k-\frac{2^{k-1}R^2}{(k+2)^4t}\right). \label{119}
\end{align}
We select constants $c_2, c_3>0$ such that
\begin{equation*}
\frac{2^{k-1}}{(k+2)^4} >c_2k+c_3, \quad \text{for all } k\ge 1.
\end{equation*}
Such constants exist because $2^{k-1}/\big((k+1)(k+2)^4\big)$ is positive for every $k\ge 1$ and tends to infinity as $k\rightarrow\infty$, so one may take, for instance, $c_2=c_3$ equal to half of its infimum over $k\ge 1$. Therefore, when $R^2/t>2c_1/c_2$, the exponent in the last display of \eqref{119} can be bounded by
\begin{align*}
c_1k-\frac{2^{k-1}R^2}{(k+2)^4t} &<c_1k-(c_2k+c_3) \frac{R^2}{t }
\\
&< c_1k-c_2k\cdot\frac{2c_1}{c_2}-c_3\frac{R^2}{t}=-c_1k-c_3\frac{R^2}{t}.
\end{align*}
i.e., when $R^2/t>2c_1/c_2$, there exists some $c_4>0$ such that
\begin{equation*}
\int_{E\setminus B(z, R)}p^2(t, z, x)\mu(dx) \le\sum_{k=1}^\infty \frac{1}{v(\sqrt{t})}\exp\left(-c_1k-c_3\frac{R^2}{t}\right)\le \frac{c_4}{v(\sqrt{t})}e^{-c_3R^2/t}.
\end{equation*}
On the other hand, when $R^2/t\le 2c_1/c_2$, the quantity $R^2/t$ is bounded, and we immediately conclude that there exist $c_5, c_6>0$ such that
\begin{equation*}
\int_{E\setminus B(z, R)}p^2(t, z, x)\mu(dx) \le \int_E p^2(t, z, x)\mu(dx) \le p(2t, z, z)\le p(t,z,z) \le \frac{c_5}{v(\sqrt{t})}\le \frac{c_6}{v(\sqrt{t})}e^{-c_3R^2/t},
\end{equation*}
where the third ``$\le$" from the left is due to the monotonicity of $p(t,z,z)$ stated in Proposition \ref{on-diag-HK-mono-dec}, and the last ``$\le$" uses the boundedness of $R^2/t$. The proof is thus complete by combining both cases above.
\end{proof}
We finally establish the following upper bound for $E_D(z, t)$ before proving the main theorem.
\begin{lem}\label{upb-E_D}
Fix $z\in E\setminus \mathcal{N}$. Assume that for all $r>0$, $\mu(B(z,r))\le v(r)$, where $v(r)$ is a continuous monotonically increasing function satisfying doubling property.
Suppose for some $C_4>0$, $T\in (0, \infty]$,
\begin{equation}\label{UPE-L:2.4}
p(t,z,z)\le \frac{C_4}{v(\sqrt{t})}, \quad t\in (0, T).
\end{equation}
Then there exist some $C_5>0$ and $D_0>0$ such that for any $D>D_0$,
\begin{equation*}
E_D(z, t)\le \frac{C_5}{v(\sqrt{t})}, \quad t\in (0, T).
\end{equation*}
\end{lem}
\begin{proof}
Note that $E_D(z,t)$ is non-increasing in $D$; therefore it suffices to show the conclusion for some particular $D>0$. Fix any $D>5/C_3$, where $C_3$ is the same as in Proposition \ref{upb-integral-psquare}. Choosing $R=\sqrt{Dt}$, we decompose $E_D(z, t)$ as follows:
\begin{align}
\int_E p^2(t, z, x)\exp\left(\frac{d(z, x)^2}{Dt}\right)\mu(dx)
&=\int_{B(z, R)}p^2(t, z, x)\exp\left(\frac{d(z, x)^2}{Dt}\right) \mu(dx) \nonumber
\\
&+\sum_{k=0}^\infty \int_{2^kR\le d(z,x)\le 2^{k+1}R}p^2(t, z, x)\exp\left(\frac{d(z, x)^2}{Dt}\right)\mu(dx). \label{552}
\end{align}
For the first term on the right hand side of \eqref{552}, since $R=\sqrt{Dt}$, it follows from the semigroup property and \eqref{UPE-L:2.4} that for some $c_1>0$,
\begin{align*}
\int_{B(z, R)}p^2(t, z, x)\exp\left(\frac{d(z, x)^2}{Dt}\right)\mu(dx) &\le e^{R^2/Dt}\int_E p^2(t, z, x)\mu(dx)
\\
&\le e\cdot p(2t, z, z )
\le e\cdot p(t,z,z)\le
\frac{c_1}{v(\sqrt{t})},
\end{align*}
where the second last inequality is again justified by the monotonicity of $p(t,z,z)$ on $t\in (0, \infty)$.
For the summation term in \eqref{552}, observing that $D>5/C_3$ for $C_3$ the same as in Proposition \ref{upb-integral-psquare}, we get that there exists some $c_2>0$ such that
\begin{align*}
&\quad \sum_{k=0}^\infty \int_{2^kR\le d(z,x)\le 2^{k+1}R}p^2(t, z, x)\exp\left(\frac{d(z, x)^2}{Dt}\right) \mu(dx)
\\
&\le \sum_{k=0}^\infty \exp \left(\frac{4^{k+1}R^2}{Dt}\right)\int_{d(z,x)\ge 2^{k}R} p^2(t,z,x)\mu(dx)
\\
&\le \sum_{k=0}^\infty \exp \left(\frac{4^{k+1}R^2}{Dt}\right) \frac{c_2}{v(\sqrt{t})}\exp \left(-\frac{5\cdot 4^kR^2}{Dt}\right)
\\
&\le \frac{c_2}{v(\sqrt{t})}\sum_{k=0}^\infty \exp \left(-\frac{4^{k}R^2}{Dt}\right)
\\
&= \frac{c_2}{v(\sqrt{t})}\sum_{k=0}^\infty e^{-4^k}=\frac{c_3}{v(\sqrt{t})}, \quad t\in (0, T).
\end{align*}
Combining the computation for both terms on the right hand side of \eqref{552} yields that there exists some $c_4>0$ such that
\begin{equation*}
E_{D}(z, t)\le \frac{c_4}{v(\sqrt{t})}, \quad \text{for all }t\in (0, T).
\end{equation*}
Therefore the proof is complete by choosing $D_0>5/C_3$, where $C_3$ is the same as in Proposition \ref{upb-integral-psquare}.
\end{proof}
To proceed, we introduce another quantity for notational simplicity. For any $D>0$, $R>0$, set
\begin{equation}\label{def-I(z,R)}
I_D(z, t, R):=\int_{E\setminus B(z,R)}e^{-d(x,z)^2/Dt}\mu(dx).
\end{equation}
It follows from H\"{o}lder's inequality that for any $z\in E\setminus \mathcal{N}$ and any $R>0$,
\begin{align}
&\quad \left(\int_{E\setminus B(z, R)} p(t/2, z, x)\mu(dx)\right)^{2} \nonumber
\\
&\le \int_{ E\setminus B(z, R)} p^2(t/2, z, x)e^{d(x,z)^2/Dt}\mu(dx)\int_{E\setminus B(z, R)} e^{-d(x,z)^2/Dt}\mu(dx) \nonumber
\\
&\le E_D(z, t/2)\int_{E\setminus B(z, R)} e^{-d(x,z)^2/Dt}\mu(dx)=E_D(z, t/2)I_D(z,t, R). \label{213}
\end{align}
Note that in the last step of \eqref{213} we used $e^{d(x,z)^2/(Dt)}\le e^{d(x,z)^2/(D(t/2))}$, so that the first factor is indeed bounded by $E_D(z, t/2)$. Now we are in a position to prove the following main theorem:
\begin{thm}\label{HKLE}
Let $({\mathcal{E}}, {\mathcal{F}})$ be a strongly local regular symmetric Dirichlet form satisfying Assumption \ref{strong-regularity} and Nash-type inequality \eqref{Nash-inequality-I}. Fix $z\in E\setminus \mathcal{N}$ where $\mathcal{N}$ is a properly exceptional set. Assume that for all $r>0$, $\mu(B(z,r))\le v(r)$, where $v(r)$ is a continuous monotonically increasing function satisfying doubling property in the following sense: There exists some $A>0$ such that
\begin{equation*}\label{doubling-P:2.3}
v(2r)\le Av(r), \quad \text{for all }r>0.
\end{equation*}
Suppose for some $C_6>0$, $T\in (0, \infty]$,
\begin{equation*}
p(t,z,z)\le \frac{C_6}{v(\sqrt{t})}, \quad t\in (0, T).
\end{equation*}
Then there exists $C_7>0$ such that for all $t\in (0,T)$,
\begin{equation*}
p(t, z,z)\ge \frac{C_7}{v(\sqrt{t})}.
\end{equation*}
\end{thm}
\begin{proof}
Let $\Omega:= B(z, R)$, where $R>0$ will be determined later. By the assumption, $\mu(\Omega)\le v(R)$. In view of the symmetry and the semigroup property of $p(t,x,y)$,
\begin{align*}
p(t, z,z)&=\int_E p^2(t/2, z,x)\mu(dx)\ge \int_\Omega p^2(t/2, z, x) \mu(dx) \ge \frac{1}{\mu(\Omega )}\left( \int_\Omega p(t/2, z, x)\mu(dx)\right)^2
\\
&=\frac{1}{\mu(\Omega )}\left(1- \int_{E\setminus \Omega} p(t/2, z, x)\mu(dx)\right)^2 \ge \frac{1}{v(R)}\left(1- \int_{E\setminus \Omega} p(t/2, z, x)\mu(dx)\right)^2.
\end{align*}
To further estimate the right-hand side of \eqref{213}, that is, to bound
\begin{equation}\label{Holder}
\left(\int_{E\setminus B(z, R)} p(t/2, z, x)\mu(dx)\right)^2\le E_D(z,t/2)I_D(z, t,R), \quad \text{for all }t\in (0, T),
\end{equation}
we first select and fix a constant $D>\max\{D_0, 2\}$ where $D_0$ is the same as in Lemma \ref{upb-E_D}. By the doubling property of $v(\cdot)$, there exists some constant $B>1$ such that $v(Dr)\le Bv(r)$, for all $r>0$. We thus let $R=a\sqrt{t}$ and $R_k=D^kR$, $k=0, 1, 2, \cdots$, where the constant $a>0$ is chosen to satisfy
\begin{equation}\label{condition-constant-a}
\frac{a^2}{D}\ge 2 \ln B.
\end{equation}
It follows that $v(R_{k+1})\le B^{k+1} v(R)$. Observing that $D^{2k}\ge k+1$ for all $k\ge 0$, we have
\begin{align*}
I_D(z, t, R)&=\sum_{k=0}^\infty \int_{B(z, R_{k+1})\setminus B(z, R_k)}e^{-d(x, z)^2/Dt}\mu(dx)
\\
&\le \sum_{k=0}^\infty \exp \left(-\frac{R_k^2}{Dt}\right)\mu(B(z, R_{k+1}))
\\
&\le \sum_{k=0}^\infty \exp \left(-
\frac{R^2_{k}}{Dt}\right)B^{k+1}v(R)
\\
&=v(R)\sum_{k=0}^\infty \exp\left(-D^{2k}\;\frac{R^2}{Dt}+(k+1)\ln B\right)
\\
&\stackrel{\eqref{condition-constant-a}}{\le} v(R)\sum_{k=0}^\infty \exp\left(-D^{2k}\;\frac{a^2}{D}+(k+1)\frac{a^2}{2D}\right)
\\
&\le v(R)\sum_{k=0}^\infty\exp \left(-\frac{(k+1)}{2D}a^2\right)\le \frac{v(R)}{e^{a^2/2D}-1}=\frac{v(a\sqrt{t})}{e^{a^2/2D}-1}.
\end{align*}
Combining this with \eqref{Holder}, we conclude from Lemma \ref{upb-E_D} that, for the constant $D$ fixed above, there exists some $C=C(D)>0$ such that for any $a$ satisfying \eqref{condition-constant-a},
\begin{equation}
\left(\int_{E\setminus B(z, R)} p(t/2, z, x)\mu(dx)\right)^2 \le \frac{C}{v(\sqrt{t})}\cdot \frac{v(a\sqrt{t})}{e^{a^2/2D}-1} \le \frac{C\cdot A^{[\log_2 a]+1}}{e^{a^2/2D}-1}, \quad \text{for all }t\in (0, T).
\end{equation}
The rightmost term above can be made less than $1/4$ (indeed, arbitrarily small) by selecting $a$ sufficiently large in \eqref{condition-constant-a}. Therefore, for such a constant $a$ and $R=a\sqrt{t}$, it holds for some $c_1>0$ that
\begin{align*}
p(t, z,z)&\ge \frac{1}{v(R)}\left(1- \int_{E\setminus \Omega} p(t/2, z, x)\mu(dx)\right)^2
\\
&\ge \frac{1}{v(R)}\left(1-\sqrt{1/4}\right)^2
\\
&= \frac{1}{4v(a\sqrt{t})} \ge \frac{c_1}{v(\sqrt{t})} \qquad \text{for }t\in (0, T),
\end{align*}
where the last ``$\ge$" is again due to the doubling property of $v(\cdot)$, since $a$ has been fixed. This completes the proof.
\end{proof}
It was about 8:00 pm on Saturday, May $26^{\textrm{\tiny th}}$, 2018, when firemen were alerted by pedestrians that a child was hanging on the railing of
the $4^{\textrm{\tiny th}}$ floor of a building in Paris (France)\cite{LCI,Parisien,FranceSoir1,CNews,China}. When they arrived on the scene, the child had been rescued by Mamoudou Gassama, a Malian immigrant.
The scene was recorded\cite{LCI}, showing Mamoudou climbing the four stories in about 30 seconds, grabbing the kid's arm, lifting him over the railing, and
bringing him to safety. Subsequently, Mamoudou was congratulated by President Emmanuel Macron, who proposed to initiate a naturalization procedure right away, which Mamoudou accepted\cite{FranceSoir2}.
Following the online publication of the present study, the author was interviewed by famous French journalist Andr\'e Bercoff, and the interview
was broadcast on \textit{Sud Radio} on June $4^{\textrm{\tiny th}}$, 2019\cite{Bercoff}.
One question that arises is how the four-year-old child ended up hanging on the $4^{\textrm{\tiny th}}$ balcony's railing, where supposedly nobody else was
present, and where the windows were locked from the inside. It was reported that the child didn't talk but indicated with his finger that
he fell from above\cite{LCI,Parisien,FranceSoir1,CNews,China}, presumably the $5^{\textrm{\tiny th}}$ floor, although there is no testimony of anybody having seen him falling. However, the
concierge of the building later declared that the $5^{\textrm{\tiny th}}$ floor is uninhabited, which suggests that the child fell from the $6^{\textrm{\tiny th}}$ floor, where he lives\cite{China}.
In this paper, it is shown by using kinematic equations and Newton's laws that the above scenario is impossible. It is important here
to point out that we don't claim that the rescue of the child by Mamoudou is staged; we only consider it as a possibility. The only claim that we make is that, as opposed to what was reported in the news, the
child didn't fall from one or more stories. We don't make any hypotheses on whether the child was put on the $4^{\textrm{\tiny th}}$ balcony's railing by irresponsible parents in order to provide
Mamoudou with an opportunity to accomplish his exploit, or if Mamoudou was totally unaware of the reasons why the child was hanging up there. We also don't
even comment on the fact that the child didn't lose his flip-flops during the reported fall, and leave it to the reader's consideration. Indeed, Mamoudou
declared that once he put the child in safety, he noticed that he was wearing Spiderman flip-flops,\cite{20minutes} a funny coincidence for the so-called ``French Spiderman".\\
\section*{Model}
In the following, a calculation that largely underestimates the force that the child would have had to produce in order to stop his fall is presented, so that it
clearly shows that such an exploit is impossible. Although the child supposedly fell from two stories, we assume that he fell from only one, Fig.~\ref{Mamoudou}, and that the distance between two consecutive balcony railings is
$h=3.00\:\rm m$ (standard distance). In order to make sure that the calculated average force is underestimated, the ideal case where the child manages to slow
down and come to a stop over the largest possible distance is considered. Typically, the braking distance is equal to about the child's arm length. Considering that, on
average, the arm length of a four-year-old child is $11.0\:\rm in$, this corresponds to a distance of $27.9\:\rm cm$. However, in the following we purposely
overestimate this distance and take it to be $d=50.0\:\rm cm$, in order to make sure that the calculated average force is underestimated.
\begin{figure}[h]
\centerline{\includegraphics[width=0.5\textwidth]{Mamoudou1.png}}
\caption{(Color online) Ideal situation where the child falls from the height of a single story, $h=3.00\:\rm m$, and slows down and comes to a complete stop over a distance of $d=50\:\rm cm$.}
\label{Mamoudou}
\end{figure}
As is shown below, the velocity that the child reaches before catching the railing is much smaller than the terminal velocity of a skydiver. This justifies
that air resistance can be neglected. Also, during the fall, since there is no horizontal force pushing the child against the railing and the balcony's concrete, the friction force due to his nails,
clothes, and flip-flops rubbing against the railing and the balcony's concrete vanishes quickly as soon as he starts to fall. Indeed, this kinetic
friction force is proportional to the normal (horizontal) force that the railing and balcony's concrete exert on the child. If this normal force is initially present, it is
the only horizontal force acting on the child, who is therefore accelerated away from the building. Thus the force that he is able to apply on it (equal in magnitude to the normal reaction force)
quickly vanishes. This friction force can therefore
be neglected as well. As a result, during the fall, only gravity is at play so that the child is in a free fall.
\section*{Kinematic equations}
The child is in a free fall over a distance $D$ equal to the distance between the balconies, minus the braking distance, $D=h-d=2.50\:\rm m$. During this
free fall, the child's acceleration is constant, equal to $g=9.80\:\rm m/s^2$. Using the well known kinematic equations for constant acceleration,
\begin{equation}
y=y_o+v_ot+\frac12 at^2\quad\quad\quad v=v_o+at,
\end{equation}
where $y,y_o,a,t,v,v_o$ are respectively the final height, initial height, acceleration, time, final velocity, and initial velocity,
and substituting $v_o=0$ (child initially at rest), we can eliminate the time and express the velocity
as a function of the free-fall distance $D=|y-y_o|$:
\begin{equation}
\label {Math1} v=\sqrt{2a|y-y_o|}
\end{equation}
Substituting $|y-y_o|=2.50\:\rm m$, and $a=g=9.80\:\rm m/s^2$, we get:
\begin{equation}
v=7.00\:\rm m/s=25.2\:\rm km/h
\end{equation}
This is the child's velocity right before he catches the railing. Note that, as already announced, this velocity is much smaller than the terminal velocity
of a skydiver (about 200 km/h), which justifies neglecting air resistance. At this point, one could think that the challenge for the child is to catch the railing. But
even more challenging is for him to slow down and come to a complete stop over the distance $d$. Indeed, let us calculate the average acceleration (deceleration)
needed by solving Eq.~\ref{Math1} for $a$:
\begin{equation}
a=\frac{v^2}{2|y-y_o|}
\end{equation}
Substituting $v=7.00\:\rm m/s$ and the braking distance $|y-y_o|=0.500\:\rm m$, the average acceleration needed to come to a stop is:
\begin{equation}
a=49.0\:\rm m/s^2
\end{equation}
This corresponds exactly to an average acceleration of $5g$.\\
\section*{Newton's $\bf 2^{\textrm{\tiny nd}}$ and $\bf 3^{\textrm{\tiny rd}}$ laws}
Let us denote by $m$ the mass of the child.
During the braking, two forces are acting on him: His weight $\vec w$ with magnitude $w=mg$, and the railing's reaction force $\vec R$, as shown on the free-body diagram,
Fig.~\ref{Mamoudou2}.
\begin{figure}[h]
\centerline{\includegraphics[width=0.35\textwidth]{Mamoudou2.png}}
\caption{(Color online) Free-body diagram of the child, showing his weight $\vec w$ and the railing's reaction force $\vec R$. The acceleration $\vec a$ is also shown (the
child is slowing down with a downward velocity, thus the net acceleration is upward).}
\label{Mamoudou2}
\end{figure}
According to Newton's $2^{\textrm{\tiny nd}}$ law, the net force $\vec F_\textrm{\tiny net}$ acting on the child is:
\begin{equation}
\vec F_\textrm{\tiny net}=\vec R+\vec w=m\vec a
\end{equation}
Solving for $\vec R$, we get:
\begin{equation}
\vec R=m\vec a-\vec w
\end{equation}
In terms of magnitudes, this becomes:
\begin{equation}
R=m(a+g)
\end{equation}
Substituting $a=49.0\:\rm m/s^2$, $g=9.80\:\rm m/s^2$, and assuming an average mass of $20.0\:\rm kg$ for a four-year-old child, the average force exerted by the
railing onto the child has a magnitude of:
\begin{equation}
R=1176\:\rm N
\end{equation}
According to Newton's $3^{\textrm{\tiny rd}}$ law, this force is equal in magnitude and opposite in direction to the force exerted by the child against the railing.
This force is precisely equal to {\bf six times his own weight} (this holds true for any mass $m$). In other words, this is the force necessary to lift a mass of 120 kg. It is hard to believe
that a four-year-old child could be able to accomplish such an exploit.
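For the skeptical reader, the full chain of estimates above can be reproduced with a few lines of code. The following Python sketch is purely illustrative and simply re-evaluates the formulas with the values used in the text:
\begin{verbatim}
# Numerical check of the estimates above (values as used in the text).
import math

g = 9.80            # gravitational acceleration, m/s^2
h, d = 3.00, 0.50   # story height and braking distance, m
m = 20.0            # assumed mass of the child, kg

D = h - d                    # free-fall distance, m
v = math.sqrt(2.0 * g * D)   # speed when reaching the railing: 7.00 m/s
a = v**2 / (2.0 * d)         # average deceleration over d: 49.0 m/s^2 = 5 g
R = m * (a + g)              # railing reaction force: 1176 N = 6 m g

print(v, a / g, R, R / (m * g))
\end{verbatim}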
Also, note that this is just the average force during the braking. The instantaneous force could easily be an order of magnitude greater. In addition,
this average force is greatly underestimated, since we have purposely overestimated the braking distance $d$, and considered a fall from the height of a single story,
whereas the child supposedly fell from two. As a result, it is clear that the reported scenario is impossible.
Another way to realize that the scenario is impossible is to imagine that we ask a four-year-old child to catch a mass of $20.0\:\rm kg$ that is dropped
$3.00\:\rm m$ from above...
\section*{Criticisms of the model}
A common criticism of the model is that it does not take into account frictional forces due to the child trying to grab anything on his way during the fall. As
explained in the section ``Model'', the kinetic friction force quickly vanishes as soon as the child starts to fall. Also, it was reported by the neighbor on the
$4^{\textrm{\tiny th}}$ floor (who didn't attempt anything to rescue the child) that he noticed that the child had a torn toenail\cite{LCI}. Assuming that this information is correct, the force necessary to tear the nail must be
compared to the force of $1176\:\rm N$ necessary to stop the fall. Common sense clearly allows us to conclude that these two forces cannot compete.
Another common criticism is that the child could have slowed down his fall with his feet or legs hitting the railing first. This doesn't make any sense either,
since conservation of horizontal momentum guarantees that the child's center of mass cannot move towards the railing. If during the fall the feet or legs of the child move
toward the building, then his upper body must necessarily move away from it, as illustrated in Fig.~\ref{Mamoudou3}. He would therefore fall backward,
without any chance of catching the railing with his hands. In addition, such a collision with the railing would clearly have led to injuries, which
have not been reported.
\begin{figure}[h]
\centerline{\includegraphics[width=0.5\textwidth]{Mamoudou3.png}}
\caption{(Color online) Because horizontal momentum is conserved, the feet or legs hitting the railing first would make the child fall backward, eliminating
any possibility of catching the railing with his hands.}
\label{Mamoudou3}
\end{figure}
\section*{Conclusion}
It was reported in worldwide news that the child rescued by Mamoudou Gassama fell from one or more stories\cite{LCI,Parisien,FranceSoir1,CNews,China}.
However, in this paper, it is shown that this scenario is impossible.
This raises the question of how the child actually ended up hanging
on the railing of the $4^{\textrm{\tiny th}}$ balcony, where supposedly nobody else was present, while he lives on the $6^{\textrm{\tiny th}}$ floor, where it was reported by his father
that he was left alone. Given these facts, it
is hard to avoid the idea that the rescue of the child could have been staged. It should also be noted that one could have arrived at the same conclusion (namely, that the scenario is impossible)
without any calculations, just by using common sense. More than one month after the events, it is very surprising that nobody has made a public claim that
the reported scenario is impossible. It is even more surprising that, presumably, President Emmanuel Macron wasn't advised about the
inconsistencies of the case, and decided to proceed right away with the naturalization of Mamoudou\cite{FranceSoir2}.
Thermal effects cause many challenges in a broad variety of semiconductor
devices. Thermal instabilities limit the safe-operating area of high
power devices and modules in electrical energy technology \citep{Lutz2011,Schulze2012},
electro-thermal feedback loops lead to catastrophic snapback phenomena
in organic light-emitting diodes \citep{Fischer2014,Fischer2018}
and self-heating effects decisively limit the achievable output power
of semiconductor lasers \citep{Osinski1994,Piprek2002,Streiff2005,Wenzel2010}.
The numerical simulation of semiconductor devices
showing strong self-heating and thermoelectric effects requires a
thermodynamically consistent modeling approach, that describes the
coupled charge carrier and heat transport processes. In the context
of semiconductor device simulation, the non-isothermal drift-diffusion
system\emph{ }\citep{Wachutka1990,Lindefelt1994,Brand1995,Parrott1996,Albinus2002,Bandelow2005}
has become the standard model for the self-consistent description
of electro-thermal transport phenomena. This is a system of four partial
differential equations, which couples the semiconductor device equations
\citep{Selberherr1984,Markowich1986} to a (lattice) heat flow equation
for the temperature distribution in the device. On the step from the
\emph{isothermal} to the \emph{non-isothermal} drift-diffusion system,
additional thermoelectric transport coefficients must be included
in the theory. The magnitude of the thermoelectric cross-effects is
governed by the Seebeck coefficient (also\emph{ thermopower}), which
quantifies the thermoelectric voltage induced by a temperature gradient
(\emph{Seebeck effect}) \citep{Goldsmid2010,Goupil2011}. The reciprocal
phenomenon of the Seebeck effect is the \emph{Peltier effect}, which
describes the current-induced heating or cooling at material junctions.
As a consequence of Onsager's reciprocal relations, the Seebeck and
Peltier coefficients are not independent such that only the Seebeck
coefficient must be specified \citep{Onsager1931}. Over the decades,
several definitions have been proposed for the Seebeck coefficient
\citep{Kubo1957b,Cutler1969,Fritzsche1971,Chaikin1976}; recent publications
list at least five coexisting different (approximate) formulas \citep{Shastry2013,Freeman2014}.
In the context of semiconductor device simulation, the Seebeck coefficients
are typically derived from the Boltzmann transport equation in relaxation
time approximation \citep{VanVliet1976,Marshak1984,Lundstrom2000}
or defined according to the adage of the Seebeck coefficient being
the ``(specific) entropy per carrier'' \citep{Albinus2002,Bandelow2005,Goupil2011,Wenzel2017}.
These approaches are often focused on non-degenerate semiconductors,
where the carriers follow the classical Maxwell--Boltzmann statistics.
This approximation breaks down in heavily doped semiconductors, where
the electron-hole plasma becomes degenerate and Fermi--Dirac statistics
must be considered to properly take into account the Pauli exclusion
principle. Degeneration effects are important in many semiconductor
devices such as semiconductor lasers, light emitting diodes or transistors.
Moreover, heavily doped semiconductors are considered as ``good''
thermoelectric materials, i.\,e., materials with high thermoelectric
figure of merit \citep{Goldsmid2010,Goupil2011}, for thermoelectric
generators, which can generate electricity from waste heat \citep{Snyder2008,Bennett2017}.
In this paper, we will consider an alternative model for the Seebeck
coefficient, which is the so-called \emph{Kelvin formula for the thermopower}
\citep{Peterson2010}. The Kelvin formula recently gained interest
in theoretical condensed matter physics and has been shown to yield
a good approximation of the Seebeck coefficient for many materials
(including semiconductors, metals and high temperature superconductors)
at reasonably high temperatures \citep{Shastry2008,Silk2009,Peterson2010,Garg2011,Arsenault2013,Deng2013,Zlatic2014,Kokalj2015,Hejtmanek2015,Terasaki2016,Mravlje2016}.
The Kelvin formula relates the Seebeck coefficient to the derivative
of the entropy density with respect to the carrier density and therefore
involves only equilibrium properties of the electron-hole plasma,
where degeneration effects are easily included. To our knowledge,
the Kelvin formula has not been considered in the context of semiconductor
device simulation so far. In Sec.~\ref{sec: The energy drift diffusion model and the Kelvin formula for the Seebeck coefficent},
we show that the Kelvin formula yields a remarkably simple form of
the non-isothermal drift-diffusion system, which shows two exceptional
features:
\begin{enumerate}
\item The heat generation rate involves exactly the three classically known
self-heating effects (Joule, Thomson--Peltier and recombination heating)
without any further (transient) contributions.
\item The thermal driving force in the current density expressions can
be entirely absorbed in a (nonlinear) diffusion coefficient via a
generalized Einstein relation. Hence, the $\nabla T$ term is eliminated
in the drift-diffusion form.
\end{enumerate}
The second part of this paper (Sec.~\ref{sec: Non-isothermal generalization of the Scharfetter=002013Gummel scheme for degenerate semiconductors})
deals with the discretization of the electrical current density expressions,
which are required in (non-isothermal) semiconductor device simulation
tools. The robust and accurate discretization of the drift-diffusion
fluxes in semiconductors with exponentially varying carrier densities
is a non-trivial problem, that requires a special purpose discretization
technique. The problem has been solved by Scharfetter and Gummel
for the case of non-degenerate semiconductors under isothermal conditions
\citep{Scharfetter1969}. Since then, several adaptations of the method
have been developed to account for more general situations (non-isothermal
conditions \citep{Tang1984,McAndrew1985,Rudan1986,Forghieri1988,Chen1991,Souissi1991,TenThijeBoonkkamp1993,Smith1993},
degeneration effects \citep{Yu1985,Gajewski1993,Bessemoulin-Chatard2012,Koprucki2013,Koprucki2015,Fuhrmann2015}).
The Kelvin formula for the Seebeck coefficients allows for a straightforward
generalization of the Scharfetter--Gummel approach to the non-isothermal
case. We take up two different approaches to incorporate degeneration
effects into the non-isothermal Scharfetter--Gummel formula and give
an extensive numerical and analytical comparison of both methods.
This includes an investigation of limiting cases and structure preserving
properties of the discrete formulas (Sec.~\ref{Sec: Limiting cases and structure preserving properties}),
a comparison with the numerically exact solution of the underlying
two-point boundary value problem (Sec.~\ref{sec: Comparison with numerically exact solution})
and a comparison of analytical error bounds (Sec.~\ref{sec: Analytical error estimate}).
Finally, in Sec.~\ref{sec: benchmark simulation}, we present a numerical
convergence analysis of both schemes based on numerical simulations
of a one-dimensional p-n-diode.
\section{The non-isothermal drift-diffusion system using the Kelvin formula
for the Seebeck coefficient \label{sec: The energy drift diffusion model and the Kelvin formula for the Seebeck coefficent}}
\begin{figure}
\includegraphics[width=1\textwidth]{fig1-Fermi-Dirac}
\caption{(a)~Fermi--Dirac integrals (\ref{eq: Fermi-Dirac integral}) of
order $\nu=1/2$, $0$ and $-1/2$ as functions of the reduced Fermi
energy $\eta$. For $\eta\ll-1$ the Fermi--Dirac integrals approach
the Maxwell--Boltzmann distribution $\mathscr{F}\left(\eta\right)=\exp{\left(\eta\right)}$ (non-degenerate limit).
(b)~Plot of the degeneracy factor (\ref{eq: degeneracy factor - eta})
(or diffusion enhancement factor) for the Fermi--Dirac integrals
in (a). For $\eta\ll-1$ the degeneracy factor approaches $1$ (linear
diffusion). (c)~Correction factor (\ref{eq: correction factor-1-1})
that quantifies the deviation of the Fermi--Dirac integrals from
the exponential function. The non-degenerate limit corresponds to
$\gamma\left(\eta\right)\equiv1$.}
\label{fig: distribution function and degeneracy factor}
\end{figure}
In this section we briefly review the non-isothermal drift-diffusion
system, which provides a self-consistent description of the coupled
electro-thermal transport processes in semiconductor devices. The
model has been extensively studied by several authors from the perspective
of physical kinetics or phenomenological non-equilibrium thermodynamics
\citep{Wachutka1990,Lindefelt1994,Brand1995,Parrott1996,Albinus2002,Bandelow2005}.
The model equations read:
\begin{align}
-\nabla\cdot\varepsilon\nabla\phi & =q\left(C+p-n\right),\label{eq: Poisson equation}\\
\partial_{t}n-\frac{1}{q}\nabla\cdot\mathbf{j}_{n} & =-R,\label{eq: electron transport equation}\\
\partial_{t}p+\frac{1}{q}\nabla\cdot\mathbf{j}_{p} & =-R,\label{eq: hole transport equation}\\
c_{V}\partial_{t}T-\nabla\cdot\kappa\nabla T & =H.\label{eq: heat equation}
\end{align}
Poisson's equation~(\ref{eq: Poisson equation}) describes the electrostatic
field generated by the electrical charge density $\rho=q\left(C+p-n\right)$.
Here, $\phi$ is the electrostatic potential, $n$ and $p$ are the
densities of electrons and holes, respectively, $C$ is the built-in
doping profile, $q$ is the elementary charge and $\varepsilon$ is
the (absolute) dielectric constant of the material. The transport
and recombination dynamics of the electron-hole plasma are modeled
by the continuity equations (\ref{eq: electron transport equation})--(\ref{eq: hole transport equation}),
where $\mathbf{j}_{n/p}$ are the electrical current densities and
$R$ is the (net-)recombination rate. The latter includes several
radiative and non-radiative recombination processes (Shockley--Read--Hall
recombination, Auger recombination, spontaneous emission etc.) \citep{Selberherr1984}.
The carrier densities $n$, $p$ are connected with the electrostatic
potential $\phi$ via the state equations
\begin{align}
n & =N_{c}\left(T\right)\mathscr{F}\left(\frac{\mu_{c}+q\phi-E_{c}\left(T\right)}{k_{B}T}\right), & p & =N_{v}\left(T\right)\mathscr{F}\left(\frac{E_{v}\left(T\right)-q\phi-\mu_{v}}{k_{B}T}\right),\label{eq: carrier density state equations}
\end{align}
where $k_{B}$ is Boltzmann's constant, $T$ is the absolute temperature,
$N_{c/v}$ is the effective
density of states and $E_{c/v}$ is the reference energy level (typically
the band edge energy) of the conduction band or valence band, respectively.
The function $\mathscr{F}$ describes the occupation of the electronic
states under \emph{quasi-equilibrium} conditions, which is controlled
by the quasi-Fermi energies $\mu_{c/v}$ of the respective bands.
The quasi-Fermi energies are connected with the quasi-Fermi potentials
$\varphi_{n/p}$ via
\begin{align}
\mu_{c} & =-q\varphi_{n}, & \mu_{v} & =-q\varphi_{p}.\label{eq: quasi-Fermi potentials-1}
\end{align}
In non-degenerate semiconductors (Maxwell--Boltzmann statistics),
$\mathscr{F}$ is the exponential function $\mathscr{F}\left(\eta\right)=\exp{\left(\eta\right)}$.
Taking the degeneration of the electron-hole plasma due to Pauli-blocking
into account (Fermi--Dirac statistics), $\mathscr{F}$ is typically
given by the Fermi--Dirac integral
\begin{equation}
\mathscr{F}\left(\eta\right)=F_{\nu}\left(\eta\right)=\frac{1}{\Gamma\left(\nu+1\right)}\int_{0}^{\infty}\mathrm{d}\xi\,\frac{\xi^{\nu}}{\exp{\left(\xi-\eta\right)}+1},\label{eq: Fermi-Dirac integral}
\end{equation}
where the index $\nu$ depends on the dimensionality of the structure.
Isotropic, bulk materials with parabolic energy bands are described
by $\nu=1/2$; for two-dimensional materials (quantum wells) the
index $\nu=0$ applies. See Fig.~\ref{fig: distribution function and degeneracy factor}\,(a)
for a plot of the Fermi--Dirac integrals for different $\nu$ as
a function of the reduced Fermi energy.
The function $\mathscr{F}$ may also include non-parabolicity effects, see \ref{sec: non-parabolic energy dispersion}.
In the case of organic semiconductors,
$\mathscr{F}$ is often taken as the Gauss--Fermi integral \citep{Mensfoort2008,Paasch2010}
or a hypergeometric function \citep{Vissenberg1998,Seki2013}.
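As a side remark, the Fermi--Dirac integral (\ref{eq: Fermi-Dirac integral}) is straightforward to evaluate numerically, e.g., by adaptive quadrature. A minimal Python sketch (function names and defaults are illustrative only and not part of the model) reads:
\begin{verbatim}
# Minimal numerical sketch of the Fermi-Dirac integral F_nu(eta) by
# adaptive quadrature; names and defaults are illustrative only.
import numpy as np
from math import gamma
from scipy.integrate import quad

def fermi_dirac(eta, nu=0.5):
    # F_nu(eta) = 1/Gamma(nu+1) * int_0^inf x^nu / (exp(x - eta) + 1) dx,
    # with the occupation factor written stably via logaddexp
    f = lambda x: x**nu * np.exp(-np.logaddexp(0.0, x - eta))
    return quad(f, 0.0, np.inf)[0] / gamma(nu + 1.0)

# consistency checks: F_0(eta) = log(1 + exp(eta)) holds exactly, and
# F_nu(eta) approaches exp(eta) in the non-degenerate limit eta << -1
print(fermi_dirac(-5.0, nu=0.0), np.log1p(np.exp(-5.0)))
print(fermi_dirac(-5.0, nu=0.5), np.exp(-5.0))
\end{verbatim}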
The heat transport equation~(\ref{eq: heat equation}) describes
the spatio-temporal dynamics of the temperature distribution in the
device. Here, $c_{V}$ is
the (volumetric) heat capacity, $\kappa$ is the thermal conductivity
and $H$ is the heat generation rate. The non-isothermal drift-diffusion
model assumes a local thermal equilibrium between the lattice and
the carriers, i.\,e., $T=T_{L}=T_{n}=T_{p}$. The system (\ref{eq: Poisson equation})--(\ref{eq: heat equation})
must be supplemented with initial conditions and boundary conditions (e.g., for electrical contacts,
semiconductor-insulator interfaces, heat sinks, etc.). We refer to Refs.~\citep{Selberherr1984,Palankovski2004}
for a survey on commonly used boundary condition models.
The electrical current densities are driven by the gradients
of the quasi-Fermi potentials and the temperature
\begin{align}
\mathbf{j}_{n} & =-\sigma_{n}\left(\nabla\varphi_{n}+P_{n}\nabla T\right), & \mathbf{j}_{p} & =-\sigma_{p}\left(\nabla\varphi_{p}+P_{p}\nabla T\right),\label{eq: current densities-2}
\end{align}
where $\sigma_{n}=qM_{n}n$ and $\sigma_{p}=qM_{p}p$ are the electrical
conductivities (with carrier mobilities $M_{n/p}$) and $P_{n/p}$
are the Seebeck coefficients. Finally, we consider a (net-)recombination rate of the form \citep{Kantner2018c}
\begin{equation}
R = R\left(\phi,\varphi_{n},\varphi_{p},T\right) = \left(1-\exp{\left(-\frac{\mu_{c}-\mu_{v}}{k_{B}T}\right)}\right)\sum_{\alpha}r_{\alpha}\left(\phi,\varphi_{n},\varphi_{p},T\right), \label{eq: recombination rate}
\end{equation}
which combines several radiative and non-radiative recombination processes labeled by $\alpha$ (e.g., Shockley--Read--Hall recombination, spontaneous emission, Auger recombination etc.). The functions $r_{\alpha}=r_{\alpha}\left(\phi,\varphi_{n},\varphi_{p},T\right)\geq 0$ are inherently non-negative and specific for the respective processes. We refer to Refs.~\cite{Selberherr1984,Palankovski2004} for commonly considered recombination rate models.
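As an illustration, in the non-degenerate limit, where $np=n_{i}^{2}\exp{\left(\frac{\mu_{c}-\mu_{v}}{k_{B}T}\right)}$ with the intrinsic carrier density $n_{i}$, the familiar Shockley--Read--Hall rate is recovered from Eq.~(\ref{eq: recombination rate}) by choosing (with the usual lifetimes $\tau_{n,p}$ and reference densities $n_{1}$, $p_{1}$, which are introduced here only for this example)
\begin{equation*}
r_{\text{SRH}}=\frac{np}{\tau_{p}\left(n+n_{1}\right)+\tau_{n}\left(p+p_{1}\right)},\qquad\text{so that}\qquad R_{\text{SRH}}=\frac{np-n_{i}^{2}}{\tau_{p}\left(n+n_{1}\right)+\tau_{n}\left(p+p_{1}\right)}.
\end{equation*}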
\subsection{Kelvin formula for the Seebeck coefficient} \label{sec: Kelvin formula for the Seebeck coefficient}
In this paper, we consider the so-called \emph{Kelvin formula}
for the Seebeck coefficient \citep{Peterson2010}
\begin{align}
P_{n} & =-\frac{1}{q}\frac{\partial s\left(n,p,T\right)}{\partial n}, & P_{p} & =+\frac{1}{q}\frac{\partial s\left(n,p,T\right)}{\partial p},\label{eq: Kelvin formula}
\end{align}
which relates the thermoelectric powers to the derivatives of the
entropy density $s=s(n,p,T)$ with respect to the carrier densities.
The expression for the entropy density is easily derived from the
free energy density $f\left(n,p,T\right)$ of the system, which is
a proper thermodynamic potential if the set of unknowns is chosen
as $\left(n,p,T\right)$ (``natural variables''). The expressions
for the quasi-Fermi energies and the entropy density then follow as
\begin{align}
\frac{\partial f\left(n,p,T\right)}{\partial n} & =+\mu_{c}\left(n,p,T\right), & \frac{\partial f\left(n,p,T\right)}{\partial p} & =-\mu_{v}\left(n,p,T\right), & \frac{\partial f\left(n,p,T\right)}{\partial T} & =-s\left(n,p,T\right).\label{eq: conjugate fields-1}
\end{align}
Taking the second derivatives, this yields the Maxwell relations
\begin{align}
\frac{\partial\mu_{c}\left(n,p,T\right)}{\partial T} & =-\frac{\partial s\left(n,p,T\right)}{\partial n}, & \frac{\partial\mu_{v}\left(n,p,T\right)}{\partial T} & =+\frac{\partial s\left(n,p,T\right)}{\partial p},\label{eq: Maxwell relations-1}
\end{align}
which allow for an alternative representation of Eq.~(\ref{eq: Kelvin formula}).
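Explicitly, inserting the Maxwell relations (\ref{eq: Maxwell relations-1}) into Eq.~(\ref{eq: Kelvin formula}) shows that the Seebeck coefficients can equivalently be written as temperature derivatives of the quasi-Fermi energies at fixed carrier densities:
\begin{align*}
P_{n} & =\frac{1}{q}\frac{\partial\mu_{c}\left(n,p,T\right)}{\partial T}, & P_{p} & =\frac{1}{q}\frac{\partial\mu_{v}\left(n,p,T\right)}{\partial T}.
\end{align*}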
The free energy density includes contributions from the quasi-free
electron-hole plasma (ideal Fermi gas), the lattice vibrations (ideal
Bose gas) and the electrostatic (Coulomb) interaction energy. Throughout
this paper, we assume a free energy density of the form \citep{Albinus2002}
\begin{equation}
f\left(n,p,T\right)=f_{\text{e--h}}\left(n,p,T\right)+f_{L}\left(T\right)+f_{\text{Coul}}\left(p-n\right).\label{eq: free energy density-1}
\end{equation}
The free energy density of the (non-interacting) electron-hole plasma
reads \citep{Albinus2002,Kantner2018c}
\begin{align}
\begin{aligned}f_{\text{e--h}}\left(n,p,T\right) & =k_{B}T\mathscr{F}^{-1}\left(\frac{n}{N_{c}\left(T\right)}\right)n-k_{B}TN_{c}\left(T\right)\mathscr{G}\left(\mathscr{F}^{-1}\left(\frac{n}{N_{c}\left(T\right)}\right)\right)+E_{c}\left(T\right)n\\
& \hphantom{=}+k_{B}T\mathscr{F}^{-1}\left(\frac{p}{N_{v}\left(T\right)}\right)p-k_{B}TN_{v}\left(T\right)\mathscr{G}\left(\mathscr{F}^{-1}\left(\frac{p}{N_{v}\left(T\right)}\right)\right)-E_{v}\left(T\right)p,
\end{aligned}
\label{eq: free energy density electron-hole plasma-1-1}
\end{align}
where $\mathscr{F}^{-1}$ is the inverse of the function $\mathscr{F}$
in the state equations (\ref{eq: carrier density state equations})
and $\mathscr{G}$ denotes its antiderivative: $\mathscr{G}^{\prime}\left(\eta\right)=\mathscr{F}\left(\eta\right)$.
Note that Eq.~(\ref{eq: free energy density electron-hole plasma-1-1})
implies\begin{subequations}\label{eq: free energy derivatives}
\begin{align}
\frac{\partial f_{\text{e--h}}}{\partial n} & =k_{B}T\mathscr{F}^{-1}\left(\frac{n}{N_{c}\left(T\right)}\right)+E_{c}\left(T\right), & \frac{\partial f_{\text{e--h}}}{\partial p} & =k_{B}T\mathscr{F}^{-1}\left(\frac{p}{N_{v}\left(T\right)}\right)-E_{v}\left(T\right).\label{eq: non-interacting Fermi gas chemical potentials-1}
\end{align}
The lattice contribution $f_{L}\left(T\right)$ yields the dominant
contribution to the heat capacity $c_{V}$. It can be derived
from, e.\,g., the Debye model for the free phonon gas \citep{Czycholl2008}.
The Coulomb interaction energy $f_{\text{Coul}}$ must be modeled
such that the state equations~(\ref{eq: carrier density state equations})
follow consistently from solving the defining relations for the quasi-Fermi
energies~(\ref{eq: conjugate fields-1}) for the carrier densities.
In order to supplement the ``missing'' electrostatic contributions in
Eq.~(\ref{eq: non-interacting Fermi gas chemical potentials-1}),
we specify the derivatives of $f_{\text{Coul}}$ with respect to the
carrier densities:
\begin{align}
\frac{\partial f_{\text{Coul}}}{\partial n} & =-q\phi, & \frac{\partial f_{\text{Coul}}}{\partial p} & =+q\phi.\label{eq: electrostatic derivatives-1}
\end{align}
\end{subequations}We refer to Albinus et al. \citep{Albinus2002}
for a rigorous mathematical treatment of the Coulomb interaction
energy.
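As a consistency check, combining Eqs.~(\ref{eq: conjugate fields-1}), (\ref{eq: non-interacting Fermi gas chemical potentials-1}) and (\ref{eq: electrostatic derivatives-1}) gives, e.g., for the electrons,
\begin{equation*}
\mu_{c}=\frac{\partial f}{\partial n}=k_{B}T\mathscr{F}^{-1}\left(\frac{n}{N_{c}\left(T\right)}\right)+E_{c}\left(T\right)-q\phi,
\end{equation*}
which, solved for $n$, reproduces the first state equation in Eq.~(\ref{eq: carrier density state equations}); the expression for the hole density follows analogously.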
\begin{figure}
\includegraphics[width=1\textwidth]{fig2-Seebeck_new}
\caption{(a)~Seebeck coefficient according to the Kelvin formula (\ref{eq: explicit Seebeck-1})
as a function of the reduced Fermi energy $\eta$ for power law type
effective density of states $N_{c/v}\propto T^{3/2}$ and $\mathscr{F}\left(\eta\right)=F_{1/2}\left(\eta\right)$
in units of $k_{B}/q$. The formula (\ref{eq: explicit Seebeck-1})
takes degeneration effects (Fermi--Dirac statistics, solid lines)
of the electron-hole plasma into account, which causes a deviation
from the non-degenerate result (Maxwell--Boltzmann statistics, dashed
lines) at $\eta\gtrsim-1$. The temperature dependency of the band
gap energy yields an offset of $E_{c}^{\prime}\left(T\right)=\left(\chi+\frac{1}{2}\right)E_{g}^{\prime}\left(T\right)$
for electrons (red lines) and $E_{v}^{\prime}=\left(\chi-\frac{1}{2}\right)E_{g}^{\prime}\left(T\right)$
for holes (blue lines). The plot is for $\chi=-0.2$ and $k_{B}^{-1}E_{g}^{\prime}\left(T\right)=-5$.
(b)~Seebeck coefficient for n-type GaAs. Solid lines are computed
according to the Kelvin formula (\ref{eq: explicit Seebeck - electrons-1})
using Fermi--Dirac statistics, dashed lines indicate the corresponding
non-degenerate limit. The respective ionized donor densities $C=N_{D}^{+}$
are given in the plot in units of $\text{cm}^{-3}$. The temperature-dependency
of the band gap energy $E_{g}\left(T\right)$ is modeled by the Varshni
model (\ref{eq: Varshni model}) with data from Ref.~\citep{Palankovski2004}
and the effective mass is $m_{c}^{\ast}\left(T\right)=\left(0.067-1.2\times10^{-5}\,\text{K}^{-1}\,T\right)m_{0}$
\citep{Palankovski2004}, where $m_{0}$ is the free electron mass.
The fitting parameter is set to $\chi=-0.2$. Experimental data: $\triangledown$
Carlson et al. \citep{Carlson1962}, $\Circle$ Amith et al. \citep{Amith1965},
$\lozenge$ Edmond et al. \citep{Edmond1956} (data from Ref.~\citep{Sutadhar1979}),
$\square$ Homm et al. \citep{Homm2008} and $\bullet$ Emel\textquoteright yanenko
et al. \citep{Emelyanenko1973} (data from Ref.~\citep{Sutadhar1979}).}
\label{fig: Seebeck coefficient-2}
\end{figure}
The Seebeck coefficients (\ref{eq: Kelvin formula}) are evaluated
using Eqs.~(\ref{eq: conjugate fields-1})--(\ref{eq: free energy derivatives}).
Since $f_{\text{Coul}}$ is independent of the temperature and $f_{L}$
does not depend on the carrier densities, the evaluation of Eq.~(\ref{eq: Kelvin formula})
requires only the Maxwell relations (\ref{eq: Maxwell relations-1})
and the derivatives of Eqs.~(\ref{eq: non-interacting Fermi gas chemical potentials-1})
with respect to the temperature. One obtains\begin{subequations}\label{eq: explicit Seebeck-1}
\begin{align}
P_{n}\left(n,T\right) & =-\frac{k_{B}}{q}\left(\frac{TN_{c}^{\prime}\left(T\right)}{N_{c}\left(T\right)}g\left(\frac{n}{N_{c}\left(T\right)}\right)-\mathscr{F}^{-1}\left(\frac{n}{N_{c}\left(T\right)}\right)-\frac{1}{k_{B}}E_{c}^{\prime}\left(T\right)\right),\label{eq: explicit Seebeck - electrons-1}\\
P_{p}\left(p,T\right) & =+\frac{k_{B}}{q}\left(\frac{TN_{v}^{\prime}\left(T\right)}{N_{v}\left(T\right)}g\left(\frac{p}{N_{v}\left(T\right)}\right)-\mathscr{F}^{-1}\left(\frac{p}{N_{v}\left(T\right)}\right)+\frac{1}{k_{B}}E_{v}^{\prime}\left(T\right)\right),\label{eq: explicit Seebeck - holes-1}
\end{align}
\end{subequations}where the prime denotes the derivatives
$N_{c/v}^{\prime}\left(T\right)=\partial_{T}N_{c/v}\left(T\right)$
and $E_{c/v}^{\prime}\left(T\right)=\partial_{T}E_{c/v}\left(T\right)$.
For power law type temperature dependency $N_{c/v}\left(T\right)\propto T^{\theta}$
(e.\,g., $\theta=3/2$), the factor in the first term reduces
to a constant $TN_{c/v}^{\prime}\left(T\right)/N_{c/v}\left(T\right)=\theta$.
For temperature-dependent effective masses, the term is more complicated.\emph{
}The function\begin{subequations}\label{eq: degeneracy factor - both expressions}
\begin{equation}
g\left(x\right)=x\,\frac{\mathrm{d}\mathscr{F}^{-1}\left(x\right)}{\mathrm{d}x}\label{eq: degeneracy factor-2-1}
\end{equation}
quantifies the degeneration of the Fermi gas. For non-degenerate
carrier statistics (Maxwell--Boltzmann statistics), Eq.~(\ref{eq: degeneracy factor-2-1})
reduces to exactly $g\equiv1$. For degenerate carrier statistics
one obtains $g>1$, which implies a nonlinear enhancement of the diffusion
current (see Sec.~\ref{sec: Drift-diffusion-current-densities}).
For later use, we also introduce the function
\begin{equation}
g_{\eta}\left(\eta\right)\equiv g\left(\mathscr{F}\left(\eta\right)\right)=\frac{\mathscr{F}\left(\eta\right)}{\mathscr{F}^{\prime}\left(\eta\right)},\label{eq: degeneracy factor - eta}
\end{equation}
\end{subequations}which is plotted in Fig.~\ref{fig: distribution function and degeneracy factor}\,(b).
The last terms in Eq.~(\ref{eq: explicit Seebeck-1}) describe the
contributions of the temperature dependency of the band edge energies
to the Seebeck coefficients. The two terms are not independent, as
they are required to satisfy $E_{g}^{\prime}\left(T\right)=E_{c}^{\prime}\left(T\right)-E_{v}^{\prime}\left(T\right)$,
where $E_{g}\left(T\right)$ is the energy band gap. A plot of the
Seebeck coefficients (\ref{eq: explicit Seebeck-1}) as functions
of the reduced Fermi energy $\eta$ is shown in Fig.~\ref{fig: Seebeck coefficient-2}\,(a)
for $\mathscr{F}\left(\eta\right)=F_{1/2}\left(\eta\right)$ and $N_{c,v}\propto T^{3/2}$.
The plot illustrates schematically the impact of the temperature derivatives
of the band edge energies and the role of degeneration effects.
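A simple numerical evaluation of Eq.~(\ref{eq: explicit Seebeck - electrons-1}) requires only the Fermi--Dirac integral, its numerical inverse and the degeneracy factor (\ref{eq: degeneracy factor - eta}). The following self-contained Python sketch illustrates this for electrons, assuming $N_{c}\propto T^{3/2}$ (i.e., $TN_{c}^{\prime}/N_{c}=3/2$) and neglecting the band-edge term $E_{c}^{\prime}(T)$; all function names, brackets and step sizes are ad hoc choices rather than part of the model:
\begin{verbatim}
# Illustrative, self-contained evaluation of the Kelvin formula for electrons,
# assuming parabolic bands with N_c ~ T^(3/2) and E_c'(T) = 0.
import numpy as np
from math import gamma
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.constants import k as kB, e as q

def F12(eta):
    # Fermi-Dirac integral of order 1/2 (normalized with 1/Gamma(3/2))
    f = lambda x: np.sqrt(x) * np.exp(-np.logaddexp(0.0, x - eta))
    return quad(f, 0.0, np.inf)[0] / gamma(1.5)

def seebeck_n(n_over_Nc):
    eta = brentq(lambda x: F12(x) - n_over_Nc, -60.0, 60.0)  # reduced Fermi energy
    h = 1e-5
    g = F12(eta) * 2.0 * h / (F12(eta + h) - F12(eta - h))   # degeneracy factor g = F/F'
    return -(kB / q) * (1.5 * g - eta)                       # Seebeck coefficient in V/K

print(seebeck_n(0.02))  # non-degenerate: several hundred microvolts per kelvin, negative
print(seebeck_n(2.0))   # degenerate: magnitude strongly reduced
\end{verbatim}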
In the following, several consequences of the Kelvin formula for the Seebeck coefficients are described, which are very appealing for numerical semiconductor device simulation as they greatly simplify the model equations.
Before going into details, we emphasize that the Kelvin formula is of course merely a convenient approximation and by no means exact.
More accurate and microscopically better justified approaches to calculate the Seebeck coefficient are based on
advanced kinetic models such as the semi-classical Boltzmann transport equation
beyond the relaxation time approximation (retaining the full form of the collision operator \cite{Ramu2010, Mascali2017})
or fully quantum mechanical methods \cite{Arsenault2013,Deng2013,Kokalj2015, Mravlje2016}.
\subsection{Comparison with experimental data}
Several empirical models for the temperature dependency of the band
gap energy have been proposed in the literature \citep{ODonnell1991},
including the commonly accepted Varshni model
\begin{equation}
E_{g}\left(T\right)=E_{g,0}-\frac{\alpha T^{2}}{\beta+T},\label{eq: Varshni model}
\end{equation}
where $E_{g,0}$, $\alpha$ and $\beta$ are material specific constants
\citep{Vurgaftman2001}. In order to specify $E_{c/v}^{\prime}\left(T\right)$
from Eq.~(\ref{eq: Varshni model}), we introduce a parameter $\chi$
such that $E_{c}^{\prime}\left(T\right)=\left(\chi+\frac{1}{2}\right)E_{g}^{\prime}\left(T\right)$
and $E_{v}^{\prime}\left(T\right)=\left(\chi-\frac{1}{2}\right)E_{g}^{\prime}\left(T\right)$.
In applications, $\chi$ can be used as a fitting parameter. It should
be noted that the terms involving $E_{c/v}^{\prime}\left(T\right)$
in Eq.~(\ref{eq: explicit Seebeck-1}) are non-negligible and yield
a significant contribution to the Seebeck coefficients at elevated
temperatures. Indeed, some room temperature values of $k_{B}^{-1}E_{g}^{\prime}\left(300\,\text{K}\right)$
for important semiconductors are $-2.95$ (Si), $-4.47$ (Ge), $-5.32$
(GaAs) \citep{Palankovski2004}, which are on the same order of magnitude
as the first term $TN_{c/v}^{\prime}\left(T\right)/N_{c/v}\left(T\right)\approx1.5$
in Eq.~(\ref{eq: explicit Seebeck-1}).
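Differentiating Eq.~(\ref{eq: Varshni model}) gives $E_{g}^{\prime}\left(T\right)=-\alpha T\left(T+2\beta\right)/\left(\beta+T\right)^{2}$, so that the quoted room-temperature values are easily reproduced. The following minimal sketch does this with Varshni parameters in the range commonly tabulated for Si; the parameter values and the choice of $\chi$ are illustrative only:
\begin{verbatim}
# Minimal sketch: band-edge temperature derivatives from the Varshni model.
# The Si-like parameters and the value of chi are illustrative, not fitted data.
kB    = 8.617333e-5   # Boltzmann constant in eV/K
alpha = 4.73e-4       # eV/K (Varshni parameter, Si-like)
beta  = 636.0         # K    (Varshni parameter, Si-like)
chi   = 0.5           # distributes dEg/dT onto the band edges

def dEg_dT(T):
    """Derivative of E_g(T) = E_g0 - alpha*T**2/(beta + T)."""
    return -alpha * T * (T + 2.0*beta) / (beta + T)**2

T   = 300.0
dEg = dEg_dT(T)
dEc = (chi + 0.5) * dEg   # E_c'(T)
dEv = (chi - 0.5) * dEg   # E_v'(T)
print(dEg / kB)           # approx. -2.95 for these Si-like parameters
\end{verbatim}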
In Fig.~\ref{fig: Seebeck coefficient-2}\,(b), the Kelvin formula
is plotted along with experimental data for n-GaAs. We observe a good
quantitative agreement of the formula (\ref{eq: explicit Seebeck - electrons-1})
with the experimental data in both the weak and the heavy doping regime
for temperatures above $150\,\text{K}$. At high carrier densities
(${N_{D}^{+}\geq9\times10^{17}\,\text{cm}^{-3}}$) the conduction
band electrons become degenerate (see the deviation of the solid from the
dashed lines); here, the experimental values nicely follow the degenerate
formula (\ref{eq: explicit Seebeck - electrons-1}). See the caption
for details. At low temperatures ($T<150\,\text{K}$,
not shown), the Seebeck coefficient is increasingly dominated by the
phonon drag effect \citep{Homm2008}, which is not considered in
the present model.
\subsection{Heat generation rate \label{sec: Heat generation rate}}
A commonly accepted form of the self-consistent heat generation rate
$H$ was derived by Wachutka \citep{Wachutka1990}:
\begin{align}
\begin{aligned}H & =\frac{1}{\sigma_{n}}\left\Vert \mathbf{j}_{n}\right\Vert ^{2}+\frac{1}{\sigma_{p}}\left\Vert \mathbf{j}_{p}\right\Vert ^{2}-T\,\mathbf{j}_{n}\cdot\nabla P_{n}-T\,\mathbf{j}_{p}\cdot\nabla P_{p}+q\left(T\frac{\partial\varphi_{n}\left(n,p,T\right)}{\partial T}-\varphi_{n}-T\frac{\partial\varphi_{p}\left(n,p,T\right)}{\partial T}+\varphi_{p}\right)R\\
& \phantom{=}-T\left(\frac{\partial\varphi_{p}\left(n,p,T\right)}{\partial T}+P_{p}\right)\nabla\cdot\mathbf{j}_{p}-T\left(\frac{\partial\varphi_{n}\left(n,p,T\right)}{\partial T}+P_{n}\right)\nabla\cdot\mathbf{j}_{n}.
\end{aligned}
\label{eq: Wachutka heat source}
\end{align}
Here we omit the radiation power density contribution from the original
work. The notation $\left\Vert \mathbf{x}\right\Vert =\left(\mathbf{x}\cdot\mathbf{x}\right)^{1/2}$
denotes the standard Euclidean vector norm.
The derivation of Eq.~(\ref{eq: Wachutka heat source}) is based on the conservation of internal energy and does not involve any explicit assumptions on the Seebeck coefficient.
Using the Maxwell relations (\ref{eq: Maxwell relations-1}) and the transport
Eqs.~(\ref{eq: electron transport equation})--(\ref{eq: hole transport equation}),
we rewrite Eq.~(\ref{eq: Wachutka heat source}) as
\begin{align}
\begin{aligned}H & =\frac{1}{\sigma_{n}}\left\Vert \mathbf{j}_{n}\right\Vert ^{2}+\frac{1}{\sigma_{p}}\left\Vert \mathbf{j}_{p}\right\Vert ^{2}-T\,\mathbf{j}_{n}\cdot\nabla P_{n}-T\,\mathbf{j}_{p}\cdot\nabla P_{p}+q\left(\varphi_{p}-\varphi_{n}+\Pi_{p}-\Pi_{n}\right)R\\
& \phantom{=}+qT\left(P_{p}-\frac{1}{q}\frac{\partial s\left(n,p,T\right)}{\partial p}\right)\partial_{t}p-qT\left(P_{n}+\frac{1}{q}\frac{\partial s\left(n,p,T\right)}{\partial n}\right)\partial_{t}n,
\end{aligned}
\label{eq: heat source-1}
\end{align}
where we introduced the Peltier coefficients $\Pi_{n}=TP_{n}$ and
$\Pi_{p}=TP_{p}$ (``second Kelvin relation''). Before we highlight the consequences of the Kelvin
formula for the Seebeck coefficients on $H$, we give a brief interpretation
of the individual terms in Eq.~(\ref{eq: heat source-1}).
The first two terms $H_{J,\lambda}=\sigma_{\lambda}^{-1}\left\Vert \mathbf{j}_{\lambda}\right\Vert ^{2}$
(for $\lambda\in\left\{ n,p\right\} $) describe Joule heating,
which is always non-negative and therefore never leads to cooling
of the device. The next two terms $H_{\text{T--P},\lambda}=-T\,\mathbf{j}_{\lambda}\cdot\nabla P_{\lambda}$
(for $\lambda\in\left\{ n,p\right\} $) describe the Thomson--Peltier
effect, which can either heat or cool the device depending on the
direction of the current flow. At constant temperature, this reduces
to the Peltier effect $H_{\text{T--P},\lambda}\vert_{T=\text{const}.}=-\mathbf{j}_{\lambda}\cdot\nabla\Pi_{\lambda}$,
which is important at heterointerfaces and p-n junctions. At constant
carrier densities, one obtains the Thomson heat term $H_{\text{T--P},\lambda}\vert_{n,p=\text{const}.}=-\mathcal{K}_{\lambda}\,\mathbf{j}_{\lambda}\cdot\nabla T$
with the Thomson coefficient $\mathcal{K}_{\lambda}=T\frac{\partial P_{\lambda}}{\partial T}=\frac{\partial\Pi_{\lambda}}{\partial T}-P_{\lambda}$
(for $\lambda\in\left\{ n,p\right\} $). The Thomson--Peltier
effect combines both contributions. The recombination heat term $H_{R}=q(\varphi_{p}-\varphi_{n}+\Pi_{p}-\Pi_{n})R$
models the self-heating of the device due to recombination of electron-hole
pairs. The difference of the Peltier coefficients describes the
average excess energy of the carriers above the Fermi voltage. The
last line in Eq.~(\ref{eq: heat source-1}) is a purely transient
contribution, which has been discussed by several authors \citep{Wachutka1990,Lindefelt1994,Brand1995,Parrott1996,Freeman2014}.
In simulation practice, this term is often neglected, since estimates
show that it is negligible in comparison with the other self-heating
sources; see Refs.~\citep{Kells1993,Wolbert1994}.
We observe that the transient term vanishes exactly if we choose the
Kelvin formula (\ref{eq: Kelvin formula}) for the Seebeck coefficients.
As a result, solely the classically known self-heating terms are contained
in the model and all additional, transient heating mechanisms are
excluded:
\begin{equation}
H =\frac{1}{\sigma_{n}}\left\Vert \mathbf{j}_{n}\right\Vert ^{2}+\frac{1}{\sigma_{p}}\left\Vert \mathbf{j}_{p}\right\Vert ^{2}-T\,\mathbf{j}_{n}\cdot\nabla P_{n}-T\,\mathbf{j}_{p}\cdot\nabla P_{p}+q\left(\varphi_{p}-\varphi_{n}+\Pi_{p}-\Pi_{n}\right)R.\label{eq: heat source final}
\end{equation}
Finally, we rewrite the recombination heating term using the Seebeck
coefficients (\ref{eq: explicit Seebeck-1}) and Eq.~(\ref{eq: carrier density state equations}).
One obtains
\begin{equation*}
H_{R}=\left(E_{g}\left(T\right)-TE_{g}^{\prime}\left(T\right)+\left[\frac{TN_{v}^{\prime}\left(T\right)}{N_{v}\left(T\right)}g\left(\frac{p}{N_{v}\left(T\right)}\right)+\frac{TN_{c}^{\prime}\left(T\right)}{N_{c}\left(T\right)}g\left(\frac{n}{N_{c}\left(T\right)}\right)\right]k_{B}T\right)R.
\end{equation*}
The last term describes the (differential) average thermal
energy per recombining electron-hole pair. For an effective density
of states function $N_{c/v}\propto T^{3/2}$ and non-degenerate carrier
statistics, we recover the classical result $H_{R}\approx\left(E_{g}\left(T\right)-TE_{g}^{\prime}\left(T\right)+3k_{B}T\right)R.$
This yields a clear interpretation of the degeneracy factor $g$ (see
Eq.~(\ref{eq: degeneracy factor - both expressions})): It describes
the increased average thermal energy of the Fermi gas due to Pauli
blocking in comparison to the non-degenerate case at the same carrier
density. We emphasize that the Kelvin formula immediately yields the
correct average kinetic energy $2\times\frac{3}{2}k_{B}T$ of the
three-dimensional electron-hole plasma just from the temperature dependency
of the effective density of states function $N_{c/v}\propto T^{3/2}$.
This does not hold in general for Seebeck coefficients derived from
the Boltzmann transport equation in relaxation time approximation,
where the average thermal energy of the electron-hole plasma in the
recombination heat term depends on a scattering parameter, see e.\,g.
Ref.~\citep{Wenzel2017}.
The dissipated heat is closely related to the electrical power injected through the contacts.
The global power balance equation for the present model is derived in \ref{sec: power balance}.
\subsection{Electrical current densities in drift-diffusion form\label{sec: Drift-diffusion-current-densities}}
In this section we recast the electrical current density expressions
from the thermodynamic form (\ref{eq: current densities-2}) to the
drift-diffusion form. As we will see below, the Kelvin formula for
the Seebeck coefficient allows the thermally driven part of the electrical
current density to be absorbed entirely into the diffusion coefficient
via a generalized Einstein relation. Thus, the $\nabla T$ term can
be eliminated in the drift-diffusion form, which significantly simplifies
the current density expression. Our derivation is based on rewriting
the gradient of the quasi-Fermi potential using the free energy density
(\ref{eq: free energy density-1}) and further thermodynamic relations
stated above. In the following, we sketch the essential steps for
the electron current density, the corresponding expression for the
holes follows analogously. We obtain
\[
-q\nabla\varphi_{n}\stackrel{\text{Eq.}\,\eqref{eq: conjugate fields-1}}{=}\nabla\frac{\partial f}{\partial n}\stackrel{\text{Eq.}\,\eqref{eq: free energy density-1}}{=}\nabla\left(\frac{\partial f_{\text{Coul}}}{\partial n}+\frac{\partial f_{\text{e--h}}}{\partial n}\right)\stackrel{\text{Eq.}\,\eqref{eq: electrostatic derivatives-1}}{=}-q\nabla\phi+\frac{\partial^{2}f_{\text{e--h}}}{\partial n^{2}}\nabla n+\frac{\partial^{2}f_{\text{e--h}}}{\partial n\,\partial p}\nabla p+\frac{\partial^{2}f_{\text{e--h}}}{\partial n\,\partial T}\nabla T,
\]
where we have separated the contributions from the Coulomb interaction
energy $f_{\text{Coul}}$ (leading to drift in the electric field)
and the quasi-free electron-hole plasma (yielding Hessian matrix elements
of the ideal Fermi gas' free energy density $f_{\text{e--h}}$). The
electrons and holes are decoupled in the non-interacting Fermi-gas
such that
\begin{align*}
\frac{\partial^{2}f_{\text{e--h}}}{\partial n\,\partial p} & =0.
\end{align*}
Moreover, since (i)~the Coulomb interaction energy is independent
of the temperature and therefore does not contribute to the system's
entropy and (ii)~the lattice contribution $f_{L}$ is independent
of the carrier densities, it holds
\[
\frac{\partial^{2}f_{\text{e--h}}}{\partial n\,\partial T}=-\frac{\partial s}{\partial n},
\]
where $s$ is the entropy density of the full system (see the last
formula in Eq.~(\ref{eq: conjugate fields-1})). Thus, we arrive
at
\[
\nabla\varphi_{n}=\nabla\phi-\frac{1}{q}\frac{\partial^{2}f_{\text{e--h}}}{\partial n^{2}}\nabla n+\frac{1}{q}\frac{\partial s}{\partial n}\nabla T,
\]
which must be substituted in Eq.~(\ref{eq: current densities-2})
to obtain
\[
\mathbf{j}_{n}=-\sigma_{n}\left(\nabla\phi-\frac{1}{q}\frac{\partial^{2}f_{\text{e--h}}}{\partial n^{2}}\nabla n+\left[\frac{1}{q}\frac{\partial s}{\partial n}+P_{n}\right]\nabla T\right)=-\sigma_{n}\nabla\phi+\sigma_{n}\frac{1}{q}\frac{\partial^{2}f_{\text{e--h}}}{\partial n^{2}}\nabla n.
\]
In the last step, we have used the Kelvin formula (\ref{eq: Kelvin formula})
for the Seebeck coefficient. The temperature gradient term vanishes
exactly, since reversing the order of the derivatives in the Hessian
of the free energy density immediately yields the definition (\ref{eq: Kelvin formula})
and cancels with the Seebeck term in Eq.~\eqref{eq: current densities-2}.
The same result can be obtained by simply inverting the carrier density
state equation (\ref{eq: carrier density state equations}) and using
the explicit expression (\ref{eq: explicit Seebeck-1}). With the
electrical conductivities $\sigma_{n}=qM_{n}n$, $\sigma_{p}=qM_{p}p$
and
\begin{align*}
\frac{\partial^{2}f_{\text{e--h}}}{\partial n^{2}} & =\frac{k_{B}T}{n}g\left(\frac{n}{N_{c}\left(T\right)}\right), & \frac{\partial^{2}f_{\text{e--h}}}{\partial p^{2}} & =\frac{k_{B}T}{p}g\left(\frac{p}{N_{v}\left(T\right)}\right),
\end{align*}
(from Eq.~(\ref{eq: non-interacting Fermi gas chemical potentials-1})),
we finally arrive at the drift-diffusion form:
\begin{align}
\mathbf{j}_{n} & =-qM_{n}n\nabla\phi+qD_{n}\left(n,T\right)\nabla n, & \mathbf{j}_{p} & =-qM_{p}p\nabla\phi-qD_{p}\left(p,T\right)\nabla p.\label{eq: drift-diffusion currents}
\end{align}
The diffusion coefficients are given by the \emph{generalized} Einstein
relation \citep{Landsberg1952,VanVliet1976}
\begin{align}
D_{n}\left(n,T\right) & =\frac{k_{B}TM_{n}}{q}g\left(\frac{n}{N_{c}\left(T\right)}\right), & D_{p}\left(p,T\right) & =\frac{k_{B}TM_{p}}{q}g\left(\frac{p}{N_{v}\left(T\right)}\right).\label{eq: generalized Einstein relations-1}
\end{align}
Here the degeneracy factor $g$ describes an effective enhancement
of the diffusion current that depends nonlinearly on the carrier densities,
which results from the increased average thermal energy of the carriers
in the case of Fermi--Dirac statistics (see above). The diffusion
enhancement due to carrier degeneracy has been found to be important
in, e.\,g., semiconductor laser diodes \citep{Shore1976}, quantum-photonic devices operated at cryogenic temperatures \cite{Kantner2016a,Kantner2016}
as well as organic field-effect transistors \citep{Roichman2002} and light emitting
diodes \citep{Mensfoort2008}. We emphasize that the drift-diffusion
form (\ref{eq: drift-diffusion currents}) of the current densities
is fully equivalent to the thermodynamic form (\ref{eq: current densities-2}).
Thus, even though the $\nabla T$ term is eliminated, the thermoelectric
cross-coupling via the Seebeck effect is fully taken into account
via the temperature dependency of the diffusion coefficient.
A generalization to the case of hot carrier transport (with multiple temperatures) is described in \ref{sec: Generalization to the case of multiple temperatures}.
For Seebeck coefficients that deviate from the Kelvin formula, additional
thermodiffusion terms emerge.
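As an illustration of Eqs.~(\ref{eq: drift-diffusion currents})--(\ref{eq: generalized Einstein relations-1}), the following minimal sketch evaluates $D_{n}\left(n,T\right)$ for $\mathscr{F}=F_{1/2}$ by numerically inverting the carrier density state equation; the mobility, the effective density of states and all function names are illustrative assumptions:
\begin{verbatim}
# Minimal sketch: generalized Einstein relation D_n = (kB*T*Mn/q)*g(n/Nc(T)),
# assuming F = F_{1/2}; parameter values are illustrative.
import numpy as np
from math import gamma
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import expit

def F(j, eta):
    """Complete Fermi-Dirac integral of order j (substitution e = t**2)."""
    val, _ = quad(lambda t: 2.0*t**(2.0*j + 1.0)*expit(eta - t*t), 0.0, np.inf)
    return val / gamma(j + 1.0)

def degeneracy_factor(x):
    """g(x) = F(eta)/F'(eta) with eta = F^{-1}(x) obtained by root finding."""
    eta = brentq(lambda e: F(0.5, e) - x, -60.0, 60.0)
    return F(0.5, eta) / F(-0.5, eta)

def D_n(n, T, Mn, Nc, kB=1.380649e-23, q=1.602176634e-19):
    """Diffusion coefficient; reduces to the classical Einstein relation for n << Nc."""
    return kB * T * Mn / q * degeneracy_factor(n / Nc)

for n in (1e15, 1e19, 1e20):                       # carrier densities in cm^-3
    print(n, D_n(n, 300.0, Mn=1400.0, Nc=2.5e19))  # Mn in cm^2/(V s) -> D_n in cm^2/s
\end{verbatim}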
\section{Non-isothermal generalization of the Scharfetter--Gummel scheme
for degenerate semiconductors \label{sec: Non-isothermal generalization of the Scharfetter=002013Gummel scheme for degenerate semiconductors}}
The typically exponentially varying carrier densities in semiconductor
devices lead to numerical instabilities when using a standard finite
difference discretization. In particular, the naive discretization
approach results in spurious oscillations and may cause unphysical
results such as negative carrier densities \citep{Brezzi1989,Farrell2017}.
A robust discretization scheme for the drift-diffusion current density
was introduced by Scharfetter and Gummel \citep{Scharfetter1969},
who explicitly solved the current density expressions as a separate
differential equation along the edge between two adjacent nodes of
the mesh. The resulting discretized current density expressions feature
exponential terms that reflect the characteristics of the doping profile
and allow for numerically stable calculations. Over the last decades,
several generalizations of the Scharfetter--Gummel method have been
proposed for either degenerate semiconductors \citep{Yu1985,Gajewski1993,Bessemoulin-Chatard2012,Koprucki2013,Koprucki2015,Fuhrmann2015}
or non-isothermal carrier transport with included thermoelectric cross
effects \citep{Tang1984,McAndrew1985,Rudan1986,Forghieri1988,Chen1991,Souissi1991,TenThijeBoonkkamp1993,Smith1993}.
In this section, we derive two different generalizations of the Scharfetter--Gummel
scheme for degenerate semiconductors obeying the Kelvin formula for
the Seebeck coefficient. Both schemes differ in the treatment of degeneration
effects and are obtained by extending the approaches previously developed
in Refs.~\citep{Bessemoulin-Chatard2012,Koprucki2015} and \citep{Yu1985}.
First, we outline the finite volume method in Sec.~\ref{sec: finite volume discretization}
and then introduce the non-isothermal Scharfetter--Gummel schemes
in Sec.~\ref{sec: Generalized Scharfetter=002013Gummel scheme}.
We study important limiting cases and structure preserving properties
of the discretizations (Sec.~\ref{Sec: Limiting cases and structure preserving properties}),
give a detailed comparison with the numerically exact solution of
the underlying two-point boundary value problem (Sec.~\ref{sec: Comparison with numerically exact solution})
and derive analytical error bounds (Sec.~\ref{sec: Analytical error estimate}).
Finally, we present a numerical convergence analysis by means of numerical
simulations of a one-dimensional p-n-diode in Sec.~\ref{sec: benchmark simulation}.
\subsection{Finite volume discretization \label{sec: finite volume discretization}}
We assume a boundary conforming Delaunay triangulation \citep{Si2010}
of the point set $\mathbf{R}=\left\{ \mathbf{r}_{K}\right\} _{K=1\ldots N_{\text{nodes}}}$,
$\mathbf{r}_{K}\in\Omega$, where $\Omega\subset\mathbb{R}^{d}$ is
the computational domain with space dimension $d\in\{ 1,2,3 \}$. The dual mesh is
given by the Vorono\"i cells
\[
\Omega_{K}=\left\{ \mathbf{r}\in\Omega:\left\Vert \mathbf{r}-\mathbf{r}_{K}\right\Vert \leq\left\Vert \mathbf{r}-\mathbf{r}_{L}\right\Vert \text{ for all }\mathbf{r}_{L}\in\mathbf{R}\text{ with }\mathbf{r}_{L}\neq \mathbf{r}_{K}\right\} ,
\]
which provides a non-overlapping tessellation $\Omega=\bigcup_{K}\Omega_{K}$
of the domain. This represents an admissible mesh in the sense of
Ref.~\citep{Eymard2000}. The finite volume discretization of the
system (\ref{eq: Poisson equation})--(\ref{eq: heat equation})
is obtained by integration over the cell $\Omega_{K}$ and application
of the divergence theorem \citep{Eymard2000,Farrell2017}. The discrete
(stationary) non-isothermal drift-diffusion system reads\begin{subequations}\label{eq: discrete energy-drift-diffusion system}
\begin{align}
-\sum_{L\in\mathcal{N}\left(K\right)}s_{K,L}\varepsilon\left(\phi_{L}-\phi_{K}\right) & =q\vert\Omega_{K}\vert\left(C_{K}+p_{K}-n_{K}\right)\label{eq: discrete Poisson equation}\\
-\sum_{L\in\mathcal{N}\left(K\right)}s_{K,L}J_{n,K,L} & =-q\vert\Omega_{K}\vert R_{K},\label{eq: discrete electron transport equation}\\
+\sum_{L\in\mathcal{N}\left(K\right)}s_{K,L}J_{p,K,L} & =-q\vert\Omega_{K}\vert R_{K},\label{eq: discrete hole transport equation}\\
-\sum_{L\in\mathcal{N}\left(K\right)}s_{K,L}\kappa\left(T_{L}-T_{K}\right) & =\frac{1}{2}\sum_{L\in\mathcal{N}\left(K\right)}s_{K,L}\left(H_{J,K,L}+H_{\text{T--P},K,L}\right)+\vert\Omega_{K}\vert H_{R,K}\label{eq: discrete heat equation}
\end{align}
\end{subequations}
with the flux projections
\begin{align}
J_{n,K,L} & =\left(\mathbf{r}_{L}-\mathbf{r}_{K}\right)\cdot\mathbf{j}_{n}, & J_{p,K,L} & =\left(\mathbf{r}_{L}-\mathbf{r}_{K}\right)\cdot\mathbf{j}_{p} \label{eq: discrete current normal projection}
\end{align}
on the edge $\overline{KL} := \{ x \, \mathbf{r}_L + (1-x)\, \mathbf{r}_K \, \vert\, x\in[0,1] \}$.
The geometric factors in Eq.~(\ref{eq: discrete energy-drift-diffusion system})
are the volume $\vert\Omega_{K}\vert$ of the $K$-th Vorono\"i cell
and the edge factor
\begin{equation}
s_{K,L}=\frac{\vert\partial\Omega_{K}\cap\partial\Omega_{L}\vert}{\left\Vert \mathbf{r}_{L}-\mathbf{r}_{K}\right\Vert }. \label{eq: edge-factor}
\end{equation}
The symbol $\mathcal{N}\left(K\right)$ denotes the set of nodes adjacent
to $K$. For the sake of simplicity we restrict ourselves to the case
of a homogeneous material. This restriction is not essential for
the flux discretization, as the discrete fluxes appear only along
possible heterointerfaces (edges of the primary simplex grid, see
Fig.~\ref{fig: Voronoii}) but never across them. In the case of heterostructures,
the currents along material interfaces are weighted by the respective
edge factors.
Moreover, boundary terms on cells with $\partial\Omega\cap\Omega_{K}\neq\emptyset$
are omitted in Eq.~(\ref{eq: discrete energy-drift-diffusion system}); they are treated in the standard way as described in Ref.~\cite{Eymard2000} and references therein.
The discrete electron density reads $n_{K}=N_{c}\left(T_{K}\right)\mathscr{F}\left(\eta_{n,K}\right)$
with $\eta_{n,K}=-\left(E_{c}\left(T_{K}\right)-q\phi_{K}+q\varphi_{n,K}\right)/\left(k_{B}T_{K}\right)$
(holes analogously).
The discrete recombination rate $R_K$ is obtained by locally evaluating Eq.~\eqref{eq: recombination rate} as
$R_{K}=R(\phi_{K},\varphi_{n,K},\varphi_{p,K},T_{K})$. Similarly, the discrete doping density
on $\Omega_K$ is taken as $C_K = C(\mathbf{r}_K)$. The discrete self-heating terms are
\begin{subequations}\label{eq: discrete heat generation rate}
\begin{align}
H_{J,K,L} & =-J_{n,K,L}\left(\varphi_{n,L}-\varphi_{n,K}+P_{n,K,L}\left(T_{L}-T_{K}\right)\right)-J_{p,K,L}\left(\varphi_{p,L}-\varphi_{p,K}+P_{p,K,L}\left(T_{L}-T_{K}\right)\right),\label{eq: discrete Joule heating}\\
H_{\text{T--P},K,L} & =-T_{K,L}J_{n,K,L}\left(P_{n,L}-P_{n,K}\right)-T_{K,L}J_{p,K,L}\left(P_{p,L}-P_{p,K}\right),\label{eq: discrete Thomson Peltier heating}\\
H_{R,K} & =q\left(\varphi_{p,K}-\varphi_{n,K}+T_{K}\left(P_{p,K}-P_{n,K}\right)\right)R_{K}.\label{eq: discrete recombination heating}
\end{align}
\end{subequations}
The finite volume discretization of the Joule and Thomson--Peltier heating terms is not straightforward \citep{Bradji2008,Chainais-Hillairet2009, Eymard2003}. Details on the derivation of Eqs.~\eqref{eq: discrete Joule heating}--\eqref{eq: discrete Thomson Peltier heating} are provided in \ref{sec: Discretization of the heat source term}.
The discretization of the edge current densities $J_{n/p,K,L}$,
the edge-averaged temperature $T_{K,L}$ and the Seebeck coefficients
$P_{n/p,K,L}$ along the edge $\overline{KL}$ are subject to the
following sections.
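To make the geometric quantities concrete, the following minimal sketch assembles the discrete Poisson block (\ref{eq: discrete Poisson equation}) on a non-uniform one-dimensional mesh, where the Vorono\"i cells are intervals bounded by the edge midpoints and the face measure entering the edge factor (\ref{eq: edge-factor}) equals one; boundary conditions are omitted and all names are illustrative:
\begin{verbatim}
# Minimal sketch: finite volume assembly of the discrete Poisson equation in 1D.
import numpy as np

def assemble_poisson_1d(r, eps, q, C, n, p):
    """Return (A, b) with A*phi = b representing
    -sum_L s_KL*eps*(phi_L - phi_K) = q*|Omega_K|*(C_K + p_K - n_K)."""
    r = np.asarray(r, dtype=float)
    N = len(r)
    A = np.zeros((N, N))
    b = np.zeros(N)
    mid = 0.5 * (r[:-1] + r[1:])                          # Voronoi cell boundaries
    vol = np.diff(np.concatenate(([r[0]], mid, [r[-1]]))) # |Omega_K|, boundary cells truncated
    for K in range(N):
        for L in (K - 1, K + 1):                          # neighbours N(K) in 1D
            if 0 <= L < N:
                s_KL = 1.0 / abs(r[L] - r[K])             # edge factor, face measure = 1 in 1D
                A[K, K] += s_KL * eps
                A[K, L] -= s_KL * eps
        b[K] = q * vol[K] * (C[K] + p[K] - n[K])
    return A, b
# Dirichlet boundary values would be imposed by modifying the first and last rows.
\end{verbatim}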
\begin{figure}
\centering
\includegraphics{fig3-voronoi_new}
\caption{Delaunay triangulation and construction of Vorono\"i cells. The red
arrow indicates the discrete current $J_{n,K,L}$ between two neighboring
control volumes $\Omega_{K}$ and $\Omega_{L}$. The green area is
the bi-hyperpyramid (or ``diamond cell'') $D_{K,L}$ with height $\left\Vert \mathbf{r}_{L}-\mathbf{r}_{K}\right\Vert $
and internal face $\vert\partial\Omega_{L}\cap\partial\Omega_{K}\vert$.
Adapted, with permission, from Ref.~\cite{Kantner2019a}. \copyright~2019 IEEE.
}
\label{fig: Voronoii}
\end{figure}
\subsection{Discretization of the current density expression\label{sec: Generalized Scharfetter=002013Gummel scheme}}
The discretization of $J_{n/p,K,L}$ is obtained
by integrating the current density expressions (\ref{eq: drift-diffusion currents})
along the edge $\overline{KL}$ between two adjacent nodes of the
mesh. Since the Kelvin formula implies a remarkably simple form of
the electrical current densities in drift-diffusion form, where the
thermal driving force is eliminated exactly (see Sec.~\ref{sec: Drift-diffusion-current-densities}),
this allows for a straightforward adaptation of the Scharfetter--Gummel
schemes developed for the isothermal case. We assume the electrostatic
field $\mathbf{E}=-\nabla\phi$ and the temperature gradient $\nabla T$
to be constant along the edge $\overline{KL}$, such that
\begin{align*}
\phi\left(x\right) & =x\,\phi_{L}+\left(1-x\right)\phi_{K}, & T\left(x\right) & =x\,T_{L}+\left(1-x\right)T_{K},
\end{align*}
where $x\in\left[0,1\right]$ parametrizes the coordinate on the edge
$\mathbf{r}\left(x\right)=x\thinspace\mathbf{r}_{L}+\left(1-x\right)\mathbf{r}_{K}$.
Tacitly, these assumptions have already been used above in Eqs.~(\ref{eq: discrete Poisson equation})
and (\ref{eq: discrete heat equation}). Moreover, the mobilities
$M_{n/p}$ and the fluxes $J_{n/p,K,L}$ are also assumed to be constant
on the edge. For the electron current density, this yields the two-point
boundary value problem (BVP)
\begin{align}
k_{B}T\left(x\right)g\left(\frac{n\left(x\right)}{N_{c}\left(T\left(x\right)\right)}\right)\frac{\mathrm{d}n}{\mathrm{d}x} & =q\left(\phi_{L}-\phi_{K}\right)n\left(x\right)+\frac{J_{n,K,L}}{M_{n}}, & n\left(0\right) & =n_{K}, & n\left(1\right) & =n_{L},\label{eq: two-point boundary value problem}
\end{align}
on $x\in\left[0,1\right]$. The problem for the hole current density
is analogous.
In the non-degenerate case (Maxwell--Boltzmann statistics) the degeneracy
factor is exactly $g\equiv1$, such that the problem can be solved
exactly by separation of variables. One obtains
\begin{align*}
\int_{n_{K}}^{n_{L}}\frac{\mathrm{d}n}{\frac{J_{n,K,L}}{qM_{n}\left(\phi_{L}-\phi_{K}\right)}+n} & =\frac{q\left(\phi_{L}-\phi_{K}\right)}{k_{B}}\int_{0}^{1}\frac{\mathrm{d}x}{T\left(x\right)},
\end{align*}
where the integral on the right hand side yields the (inverse) logarithmic
mean temperature
\begin{align}
\int_{0}^{1}\frac{\mathrm{d}x}{T\left(x\right)} & =\int_{0}^{1}\frac{\mathrm{d}x}{x\,T_{L}+\left(1-x\right)T_{K}}=\frac{1}{\Lambda\left(T_{L},T_{K}\right)}\equiv\frac{1}{T_{K,L}}, & \Lambda\left(x,y\right) & =\frac{x-y}{\log{\left(x/y\right)}},\label{eq: logarithmic mean temperature}
\end{align}
where $\Lambda\left(x,y\right)$ is the logarithmic mean. Solving for
the flux yields the non-isothermal Scharfetter--Gummel scheme
\begin{align}
J_{n,K,L}^{\text{ndeg}} & =M_{n}k_{B}T_{K,L}\left(n_{L}B\left(X_{n,K,L}^{\text{ndeg}}\right)-n_{K}B\left(-X_{n,K,L}^{\text{ndeg}}\right)\right),\qquad X_{n,K,L}^{\text{ndeg}}=\frac{q\left(\phi_{L}-\phi_{K}\right)}{k_{B}T_{K,L}},\label{eq: non-degenerate, non-isothermal SG}
\end{align}
where $B\left(x\right)=x/\left(\exp{\left(x\right)}-1\right)$ is
the Bernoulli function. The Bernoulli function is closely related
to the logarithmic mean: $B\left(x\right)=1/\Lambda\left(\exp{(x)},1\right)$.
At isothermal conditions $T_{K} \equiv T_{L}$, Eq.~(\ref{eq: non-degenerate, non-isothermal SG})
reduces to the original Scharfetter--Gummel scheme \citep{Scharfetter1969}.
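In an implementation, the Bernoulli function and the logarithmic mean require some care, since the naive expressions suffer from cancellation for small arguments and nearly equal temperatures. A minimal sketch of robust helper routines (the switching thresholds are illustrative) reads:
\begin{verbatim}
# Minimal sketch: numerically robust Bernoulli function and logarithmic mean.
import numpy as np

def bernoulli(x):
    """B(x) = x/(exp(x) - 1), with a series fallback near x = 0."""
    if abs(x) < 1e-6:
        return 1.0 - 0.5*x + x*x/12.0
    return x / np.expm1(x)

def log_mean(a, b):
    """Lambda(a, b) = (a - b)/log(a/b); continuous limit a for a -> b."""
    if abs(a - b) < 1e-12 * max(abs(a), abs(b)):
        return 0.5 * (a + b)
    return (a - b) / np.log(a / b)

x = 0.7
print(bernoulli(x), 1.0 / log_mean(np.exp(x), 1.0))   # identity B(x) = 1/Lambda(exp(x), 1)
\end{verbatim}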
In the case of Fermi--Dirac statistics ($g\neq1$), no closed-form
solution exists, so that approximate solutions of the BVP (\ref{eq: two-point boundary value problem})
are required. As the degeneracy factor $g$ depends on both the carrier
density and temperature, the problem is not even separable.
\subsubsection{Modified thermal voltage scheme \label{sec: Modified thermal voltage scheme}}
Following Refs.~\citep{Bessemoulin-Chatard2012,Koprucki2015}, we
solve the BVP (\ref{eq: two-point boundary value problem}) by freezing
the degeneracy factor $g\left(n/N_{c}\left(T\right)\right)\to g_{n,K,L}$
to a carefully chosen average. The resulting problem has the same
structure as in the non-degenerate case (see above), but with a modified
thermal voltage $k_{B}T_{K,L}/q\to k_{B}T_{K,L}g_{n,K,L}/q$ along
the edge, which takes the temperature variation and the degeneration
of the electron gas into account. This yields the \emph{modified thermal
voltage scheme}\begin{subequations}\label{eq: log mean temp Chatard scheme}
\begin{align}
J_{n,K,L}^{g} & =M_{n}k_{B}T_{K,L}g_{n,K,L}\left(n_{L}B\left(X_{n,K,L}^{g}\right)-n_{K}B\left(-X_{n,K,L}^{g}\right)\right),\qquad X_{n,K,L}^{g}=\frac{q\left(\phi_{L}-\phi_{K}\right)}{k_{B}T_{K,L}g_{n,K,L}},\label{eq: generalized Chatard}
\end{align}
where $T_{K,L}$ is the logarithmic mean temperature (\ref{eq: logarithmic mean temperature}).
In order to ensure the consistency with the thermodynamic equilibrium
and boundedness $g_{n,K}\leq g_{n,K,L}\leq g_{n,L}$ (for $\eta_{n,K}\leq\eta_{n,L}$;
otherwise with $K\leftrightarrow L$ interchanged), the edge-averaged degeneracy
factor is taken as \citep{Bessemoulin-Chatard2012,Koprucki2015}
\begin{equation}
g_{n,K,L}=\frac{\eta_{n,L}-\eta_{n,K}}{\log{\left(\mathscr{F}\left(\eta_{n,L}\right)/\mathscr{F}\left(\eta_{n,K}\right)\right)}}=\frac{\mathscr{F}^{-1}\left(n_{L}/N_{c}\left(T_{L}\right)\right)-\mathscr{F}^{-1}\left(n_{K}/N_{c}\left(T_{K}\right)\right)}{\log{\left(n_{L}/N_{c}\left(T_{L}\right)\right)}-\log{\left(n_{K}/N_{c}\left(T_{K}\right)\right)}}.\label{eq: average degeneracy factor}
\end{equation}
\end{subequations}In the limit of $\eta_{n,L}=\eta_{n,K}$, it approaches
the common nodal value
\[
\lim_{\eta_{n,L}\to\eta_{n,K}\equiv\bar{\eta}_{n}}g_{n,K,L}=\frac{\mathscr{F}\left(\bar{\eta}_{n}\right)}{\mathscr{F}^{\prime}\left(\bar{\eta}_{n}\right)}=g\left(\mathscr{F}\left(\bar{\eta}_{n}\right)\right)\equiv g_{\eta}\left(\bar{\eta}_{n}\right).
\]
For constant temperature $T_{L}=T_{K}=T$, the scheme reduces to the
modified Scharfetter--Gummel scheme discussed in Refs.~\citep{Bessemoulin-Chatard2012,Koprucki2015}.
It can thus be regarded as a non-isothermal generalization of this
approach. In the non-degenerate limit $g=1$, it reduces to the non-isothermal,
non-degenerate Scharfetter--Gummel scheme (\ref{eq: non-degenerate, non-isothermal SG}).
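A minimal numerical sketch of the scheme (\ref{eq: log mean temp Chatard scheme}) is given below. It assumes $\mathscr{F}=F_{1/2}$, $N_{c}\propto T^{3/2}$ and non-dimensional units, omits the limiting case $\eta_{n,K}=\eta_{n,L}$ (which requires the nodal value $g_{\eta}$), and uses purely illustrative input values:
\begin{verbatim}
# Minimal sketch of the modified thermal voltage scheme J^g_{n,K,L}
# (assumptions: F = F_{1/2}, Nc ~ T**1.5, non-dimensional units, eta_K != eta_L).
import numpy as np
from math import gamma
from scipy.integrate import quad
from scipy.special import expit

def F(eta):
    val, _ = quad(lambda t: 2.0*t*t*expit(eta - t*t), 0.0, np.inf)
    return val / gamma(1.5)

bernoulli = lambda x: x/np.expm1(x) if abs(x) > 1e-8 else 1.0 - 0.5*x
log_mean  = lambda a, b: (a - b)/np.log(a/b) if a != b else a

def flux_mod_thermal_voltage(eta_K, eta_L, T_K, T_L, dphi, Mn=1.0, Nc0=1.0, kB=1.0, q=1.0):
    """Discrete electron flux; dphi = phi_L - phi_K and Nc(T) = Nc0*T**1.5."""
    n_K, n_L = Nc0*T_K**1.5*F(eta_K), Nc0*T_L**1.5*F(eta_L)
    T_KL = log_mean(T_L, T_K)                          # logarithmic mean temperature
    g_KL = (eta_L - eta_K)/np.log(F(eta_L)/F(eta_K))   # edge-averaged degeneracy factor
    X    = q*dphi/(kB*T_KL*g_KL)                       # modified thermal voltage
    return Mn*kB*T_KL*g_KL*(n_L*bernoulli(X) - n_K*bernoulli(-X))

print(flux_mod_thermal_voltage(eta_K=-1.0, eta_L=2.0, T_K=1.0, T_L=1.1, dphi=0.5))
\end{verbatim}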
\subsubsection{Modified drift scheme \label{sec: Correction factor scheme}}
The traditional approach for the inclusion of degeneration effects
in the Scharfetter--Gummel scheme, that is widely used in commercial
software packages, is based on introducing the correction factors
\citep{Yu1985,Synopsys2010,Silvaco2016}
\begin{equation}
\gamma\left(\eta\right)=\frac{\mathscr{F}\left(\eta\right)}{\exp\left(\eta\right)},\label{eq: correction factor-1-1}
\end{equation}
and rearranging the current density expression with nonlinear diffusion
(\ref{eq: drift-diffusion currents}) (involving the generalized Einstein
relation (\ref{eq: generalized Einstein relations-1})) into a form
with linear diffusion and a modified drift term:
\begin{align}
\mathbf{j}_{n} & =\sigma_{n}\mathbf{E}_{n}+M_{n}k_{B}T\nabla n+\frac{k_{B}}{q}\sigma_{n}\thinspace\rho_{n}\left(T,\eta_{n}\right)\nabla T, & \mathbf{E}_{n} & =-\nabla\left(\phi+\frac{k_{B}T}{q}\log\left(\gamma\left(\eta_{n}\right)\right)\right).\label{eq: drift-diffusion form with linear diffusion}
\end{align}
Here, the degeneration of the electron gas induces a thermodiffusion
term with the coefficient
\begin{equation}
\rho_{n}\left(T,\eta_{n}\right)=\log\left(\gamma\left(\eta_{n}\right)\right)-\frac{TN_{c}^{\prime}\left(T\right)}{N_{c}\left(T\right)}\frac{\gamma^{\prime}\left(\eta_{n}\right)/\gamma\left(\eta_{n}\right)}{1+\gamma^{\prime}\left(\eta_{n}\right)/\gamma\left(\eta_{n}\right)}=\log\left(\gamma\left(\eta_{n}\right)\right)+\frac{TN_{c}^{\prime}\left(T\right)}{N_{c}\left(T\right)}\left(g_{\eta}\left(\eta_{n}\right)-1\right),\label{eq: deviation function}
\end{equation}
that vanishes exactly in the non-degenerate limit ${\gamma\left(\eta\right)\equiv1\equiv g_{\eta}\left(\eta\right)}$.
Hence, the function $\rho_{n}\left(T,\eta_{n}\right)$ quantifies
the difference between the degenerate and the non-degenerate Seebeck coefficient,
see Fig.~\ref{fig: Seebeck coefficient-2}\,(a). In the step from
Eq.~(\ref{eq: drift-diffusion currents}) to (\ref{eq: drift-diffusion form with linear diffusion}),
we have used the relation $g_{\eta}\left(\eta\right)=\left(1+\gamma^{\prime}\left(\eta\right)/\gamma\left(\eta\right)\right)^{-1}$.
A plot of the correction factor (\ref{eq: correction factor-1-1})
is given in Fig.~\ref{fig: distribution function and degeneracy factor}\,(c).
The current density expression (\ref{eq: drift-diffusion form with linear diffusion})
is discretized by projecting the current on the edge $\overline{KL}$,
assuming the effective electric field $\mathbf{E}_{n}$ to be a constant
along the edge, and freezing $\rho_{n}\left(T,\eta_{n}\right)\to\rho_{n,K,L}$
to a constant average value.
Here, different averages can be taken for $\rho_{n,K,L}$, see Fig.\,\ref{fig: SG comparison 1D plots}\,(c). The influence of this choice will be discussed below in Sec.~\ref{sec: Comparison with numerically exact solution}.
Along the same lines as above, one arrives
at the \emph{modified drift scheme}\begin{subequations}\label{eq: modified drift scheme}
\begin{align}
\begin{aligned}J_{n,K,L}^{\gamma}\end{aligned}
& =M_{n}k_{B}T_{K,L}\left(n_{L}B\left(X_{n,K,L}^{\gamma}\right)-n_{K}B\left(-X_{n,K,L}^{\gamma}\right)\right),\label{eq: modified advection scheme-1}
\end{align}
with
\begin{equation}
X_{n,K,L}^{\gamma}=\frac{q\left(\phi_{L}-\phi_{K}\right)}{k_{B}T_{K,L}}+\frac{T_{L}\log\left(\gamma\left(\eta_{n,L}\right)\right)-T_{K}\log\left(\gamma\left(\eta_{n,K}\right)\right)}{T_{K,L}}-\rho_{n,K,L}\log{\left(\frac{T_{L}}{T_{K}}\right)}.\label{eq: modified advection scheme-1 X}
\end{equation}
\end{subequations}Again, $T_{K,L}$ is the logarithmic mean temperature
(\ref{eq: logarithmic mean temperature}). The corresponding non-degenerate
limit \eqref{eq: non-degenerate, non-isothermal SG} is easily recovered by $\gamma\left(\eta_{n,L/K}\right)\to1$ and $\rho_{n,K,L}\to0$.
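Analogously, a minimal sketch of the modified drift scheme (\ref{eq: modified drift scheme}) for $\mathscr{F}=F_{1/2}$ and $N_{c}\propto T^{3/2}$ (so that $TN_{c}^{\prime}/N_{c}=3/2$), using the arithmetic average for $\rho_{n,K,L}$ and illustrative inputs, reads:
\begin{verbatim}
# Minimal sketch of the modified drift scheme J^gamma_{n,K,L}
# (assumptions: F = F_{1/2}, Nc ~ T**1.5, non-dimensional units, arithmetic rho average).
import numpy as np
from math import gamma
from scipy.integrate import quad
from scipy.special import expit

def F(j, eta):
    val, _ = quad(lambda t: 2.0*t**(2.0*j + 1.0)*expit(eta - t*t), 0.0, np.inf)
    return val / gamma(j + 1.0)

bernoulli = lambda x: x/np.expm1(x) if abs(x) > 1e-8 else 1.0 - 0.5*x
log_mean  = lambda a, b: (a - b)/np.log(a/b) if a != b else a
log_gamma_corr = lambda eta: np.log(F(0.5, eta)) - eta                        # log of gamma(eta)
rho = lambda eta: log_gamma_corr(eta) + 1.5*(F(0.5, eta)/F(-0.5, eta) - 1.0)

def flux_mod_drift(eta_K, eta_L, T_K, T_L, dphi, Mn=1.0, Nc0=1.0, kB=1.0, q=1.0):
    """Discrete electron flux; dphi = phi_L - phi_K and Nc(T) = Nc0*T**1.5."""
    n_K, n_L = Nc0*T_K**1.5*F(0.5, eta_K), Nc0*T_L**1.5*F(0.5, eta_L)
    T_KL   = log_mean(T_L, T_K)
    rho_KL = 0.5*(rho(eta_K) + rho(eta_L))
    X = (q*dphi/(kB*T_KL)
         + (T_L*log_gamma_corr(eta_L) - T_K*log_gamma_corr(eta_K))/T_KL
         - rho_KL*np.log(T_L/T_K))
    return Mn*kB*T_KL*(n_L*bernoulli(X) - n_K*bernoulli(-X))

print(flux_mod_drift(eta_K=-1.0, eta_L=2.0, T_K=1.0, T_L=1.1, dphi=0.5))
\end{verbatim}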
\subsection{Limiting cases and structure preserving properties\label{Sec: Limiting cases and structure preserving properties}}
In the following, we investigate some important limiting cases and
structure preserving properties of the generalized Scharfetter--Gummel
schemes (\ref{eq: log mean temp Chatard scheme}) and (\ref{eq: modified drift scheme}).
This includes an analysis of the consistency of the discrete expressions
with fundamental thermodynamical principles (thermodynamic equilibrium,
second law of thermodynamics). To this end, it is convenient to rewrite
both expressions using the identity $B\left(-x\right)=\exp{\left(x\right)}\,B\left(x\right)$
and the logarithmic mean $\Lambda$ (see Eq.~(\ref{eq: logarithmic mean temperature}))
as
\begin{align}
J_{n,K,L}^{g} & =-\sigma_{n,K,L}^{g}\left(\varphi_{n,L}-\varphi_{n,K}+P_{n,K,L}^{g}\left(T_{L}-T_{K}\right)\right) & \text{with}\quad\sigma_{n,K,L}^{g}= & qM_{n}\frac{\Lambda\left(n_{L}\exp{\left(-\frac{1}{2}X_{n,K,L}^{g}\right)},n_{K}\exp{\left(\frac{1}{2}X_{n,K,L}^{g}\right)}\right)}{\mathrm{sinhc}{\left(\frac{1}{2}X_{n,K,L}^{g}\right)}},\label{eq: modified thermal voltage scheme - discrete thermodynamic form}
\end{align}
and
\begin{align}
J_{n,K,L}^{\gamma} & =-\sigma_{n,K,L}^{\gamma}\left(\varphi_{n,L}-\varphi_{n,K}+P_{n,K,L}^{\gamma}\left(T_{L}-T_{K}\right)\right) & \text{with}\quad\sigma_{n,K,L}^{\gamma}= & qM_{n}\frac{\Lambda\left(n_{L}\exp{\left(-\frac{1}{2}X_{n,K,L}^{\gamma}\right)},n_{K}\exp{\left(\frac{1}{2}X_{n,K,L}^{\gamma}\right)}\right)}{\mathrm{sinhc}{\left(\frac{1}{2}X_{n,K,L}^{\gamma}\right)}},\label{eq: modified drift scheme - discrete thermodynamic form}
\end{align}
where $\mathrm{sinhc}\left(x\right)=\sinh{\left(x\right)}/x$. This
representation directly corresponds to the continuous current density
expression in the thermodynamic form (\ref{eq: current densities-2}),
where the conductivity along the edge $\sigma_{n,K,L}^{g/\gamma}$
is determined by a ``tilted'' logarithmic average of the nodal carrier
densities. Both expressions (\ref{eq: modified thermal voltage scheme - discrete thermodynamic form})--(\ref{eq: modified drift scheme - discrete thermodynamic form})
have a common structure, but differ in the discrete conductivity $\sigma_{n,K,L}^{g}\neq\sigma_{n,K,L}^{\gamma}$
(due to $X_{n,K,L}^{g}\neq X_{n,K,L}^{\gamma}$) and the discrete
Seebeck coefficients $P_{n,K,L}^{g}\neq P_{n,K,L}^{\gamma}$ along
the edge, which are implicitly prescribed by the Scharfetter--Gummel discretization
procedure. The latter read
\begin{subequations}\label{eq: edge averaged Seebeck coefficients}
\begin{align}
P_{n,K,L}^{g} & =-\frac{k_{B}}{q}\left[\log{\left(\frac{N_{c}\left(T_{L}\right)}{N_{c}\left(T_{K}\right)}\right)}\frac{g_{n,K,L}}{\log{\left(T_{L}/T_{K}\right)}}-\frac{\left(T_{L}-T_{K,L}\right)\eta_{n,L}-\left(T_{K}-T_{K,L}\right)\eta_{n,K}}{T_{L}-T_{K}}-\frac{1}{k_{B}}\frac{E_{c}\left(T_{L}\right)-E_{c}\left(T_{K}\right)}{T_{L}-T_{K}}\right]\label{eq: edge averaged Seebeck coefficients - mod thermal voltage}
\end{align}
and
\begin{align}
P_{n,K,L}^{\gamma} & =-\frac{k_{B}}{q}\Bigg[\log{\left(\frac{N_{c}\left(T_{L}\right)}{N_{c}\left(T_{K}\right)}\right)}\frac{1}{\log{\left(T_{L}/T_{K}\right)}}+\rho_{n,K,L}-\frac{T_{L}\log\left(\gamma\left(\eta_{n,L}\right)\right)-T_{K}\log\left(\gamma\left(\eta_{n,K}\right)\right)-T_{K,L}\log{\left(\gamma\left(\eta_{n,L}\right)/\gamma\left(\eta_{n,K}\right)\right)}}{T_{L}-T_{K}}\label{eq: edge averaged Seebeck coefficients - correction factor}\\
& \phantom{=-\frac{k_{B}}{q}\Bigg[\log{\left(\frac{N_{c}\left(T_{L}\right)}{N_{c}\left(T_{K}\right)}\right)}\frac{1}{\log{\left(T_{L}/T_{K}\right)}}+\rho_{n,K,L}}-\frac{\left(T_{L}-T_{K,L}\right)\eta_{n,L}-\left(T_{K}-T_{K,L}\right)\eta_{n,K}}{T_{L}-T_{K}}-\frac{1}{k_{B}}\frac{E_{c}\left(T_{L}\right)-E_{c}\left(T_{K}\right)}{T_{L}-T_{K}}\Bigg].\nonumber
\end{align}
\end{subequations}
The discrete Seebeck coefficients (\ref{eq: edge averaged Seebeck coefficients})
enter the discrete Joule heat term (\ref{eq: discrete Joule heating})
and thus the discrete entropy production rate (see Sec.~\ref{sec: Non-negativity-of-the discrete dissipation rate}
below). Away from thermodynamic equilibrium, the discrete Seebeck coefficients
(\ref{eq: edge averaged Seebeck coefficients}) determine the point
of compensating (discrete) chemical and thermal current flow such that $J_{n,K,L}\vert_{\varphi_{n,K}\neq\varphi_{n,L},T_{K}\neq T_{L}}=0$.
In other words, there is a non-equilibrium configuration with $T_K \neq T_L$ and $\varphi_{n,K}\neq\varphi_{n,L}$,
where the discrete Seebeck coefficient $P_{n,K,L}^{g/\gamma} = - \left(\varphi_{n,L} - \varphi_{n,K}\right)/\left(T_L - T_K\right)$
equals the (negative) ratio of both discrete driving forces such that the discrete current density is zero.
This compensation point is in general slightly different between both
schemes (see inset of Fig.~\ref{fig: SG comparison 1D plots}\,(c)).
In the limit of a small temperature gradient and a small difference
in the reduced Fermi energy along the edge, both discrete Seebeck
coefficients approach in leading order the continuous expression (\ref{eq: explicit Seebeck - electrons-1})
\begin{equation*}
P_{n,K,L}^{g/\gamma} =-\frac{k_{B}}{q}\left[\frac{\bar{T}N_{c}^{\prime}(\bar{T})}{N_{c}(\bar{T})}g_{\eta}\left(\bar{\eta}_{n}\right)-\bar{\eta}_{n}-\frac{1}{k_{B}}E_{c}^{\prime}(\bar{T})\right]+\mathcal{O}(\delta\eta_{n}^{2})+\mathcal{O}(\delta\eta_{n}\,\delta\Theta)+\mathcal{O}(\delta\Theta^{2}),
\end{equation*}
where $\delta\Theta=\left(T_{L}-T_{K}\right)/\bar{T}$, $\bar{T}=\frac{1}{2}\left(T_{L}+T_{K}\right)$,
$\delta\eta_{n}=\eta_{n,L}-\eta_{n,K}$ and $\bar{\eta}_{n}=\frac{1}{2}\left(\eta_{n,L}+\eta_{n,K}\right)$.
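This limit is easily checked numerically. The following sketch compares the discrete coefficient (\ref{eq: edge averaged Seebeck coefficients - mod thermal voltage}) with the leading-order expression for $\mathscr{F}=F_{1/2}$, $N_{c}\propto T^{3/2}$, a temperature-independent band edge and $k_{B}=q=1$; all numerical values are illustrative:
\begin{verbatim}
# Minimal sketch: edge-averaged Seebeck coefficient P^g_{n,K,L} vs. its small-gradient
# limit (assumptions: F = F_{1/2}, Nc ~ T**1.5, Ec = const, kB = q = 1).
import numpy as np
from math import gamma
from scipy.integrate import quad
from scipy.special import expit

def F(j, eta):
    val, _ = quad(lambda t: 2.0*t**(2.0*j + 1.0)*expit(eta - t*t), 0.0, np.inf)
    return val / gamma(j + 1.0)

def P_edge(eta_K, eta_L, T_K, T_L):
    T_KL = (T_L - T_K)/np.log(T_L/T_K)                            # logarithmic mean
    g_KL = (eta_L - eta_K)/np.log(F(0.5, eta_L)/F(0.5, eta_K))    # edge degeneracy factor
    term_eta = ((T_L - T_KL)*eta_L - (T_K - T_KL)*eta_K)/(T_L - T_K)
    return -(1.5*g_KL - term_eta)  # log(Nc(T_L)/Nc(T_K))/log(T_L/T_K) = 3/2 for Nc ~ T**1.5

def P_limit(eta_bar):
    return -(1.5*F(0.5, eta_bar)/F(-0.5, eta_bar) - eta_bar)

eta_bar, d_eta, d_theta = 2.0, 0.05, 0.02
print(P_edge(eta_bar - d_eta/2, eta_bar + d_eta/2, 1.0 - d_theta/2, 1.0 + d_theta/2))
print(P_limit(eta_bar))            # the two values agree up to second-order terms
\end{verbatim}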
\subsubsection{Thermodynamic equilibrium \label{sec:Thermodynamic-equilibrium}}
In the thermodynamic equilibrium (thermal equilibrium $T_{K}=T_{L}$
and chemical equilibrium $\varphi_{n,K}=\varphi_{n,L}$), both the
discrete current densities (\ref{eq: log mean temp Chatard scheme})
and (\ref{eq: modified drift scheme}) are exactly zero. This is easily
seen from Eqs.~(\ref{eq: modified thermal voltage scheme - discrete thermodynamic form})--(\ref{eq: modified drift scheme - discrete thermodynamic form}),
where the discrete driving force $\left(\varphi_{n,L}-\varphi_{n,K}+P_{n,K,L}^{g/\gamma}\left(T_{L}-T_{K}\right)\right)$
vanishes under thermodynamic equilibrium conditions.
\subsubsection{Strong electric field (drift-dominated limit) \label{sec:Strong-electric-field}}
Due to the asymptotics of the Bernoulli function $B\left(x\to\infty\right)=0$
and $B\left(x\to-\infty\right)\sim-x$, the modified thermal voltage
scheme (\ref{eq: log mean temp Chatard scheme}) approaches the first-order
upwind scheme
\begin{align}
J_{n,K,L}^{g}\left(\delta\phi_{K,L}\to\pm\infty\right) & \sim-qM_{n}\left(\frac{n_{L}+n_{K}}{2}+\frac{n_{K}-n_{L}}{2}\mathop{\mathrm{sign}}\left(\delta\phi_{K,L}\right)\right)\,\delta\phi_{K,L}=J_{n,K,L}^{\text{{upw}}}\label{eq: upwind scheme-1}
\end{align}
in the limit of a strong electrostatic potential gradient $\delta\phi_{K,L}=\phi_{L}-\phi_{K}\to\pm\infty$.
The upwind scheme is a stable, first-order accurate discretization
for advection-dominated problems, where the coefficient is evaluated
in the ``donor cell'' of the flow \citep{Versteeg2007}. Hence,
this asymptotic feature of the original Scharfetter--Gummel scheme,
which is important for the robustness of the discretization as it
avoids spurious oscillations, is preserved in the degenerate and non-isothermal
case. The modified drift scheme (\ref{eq: modified drift scheme})
approaches the upwind scheme as well
\begin{align*}
J_{n,K,L}^{\gamma}\left(\delta\phi_{K,L}\to\pm\infty\right) & \sim-qM_{n}\left(\frac{n_{L}+n_{K}}{2}+\frac{n_{K}-n_{L}}{2}\mathop{\mathrm{sign}}\left(\delta\phi_{K,L}\right)\right)\left(\delta\phi_{K,L}+\frac{k_{B}}{q}\left(\log\left(\frac{\left[\gamma\left(\eta_{n,L}\right)\right]^{T_{L}}}{\left[\gamma\left(\eta_{n,K}\right)\right]^{T_{K}}}\right)-\left(T_{L}-T_{K}\right)\rho_{n,K,L}\right)\right),
\end{align*}
however, in the case of strong degeneration the convergence is significantly
slowed down if the nodal correction factors $\gamma\left(\eta_{n,L}\right)$ and $\gamma\left(\eta_{n,K}\right)$
as well as the temperatures $T_{L}$ and $T_{K}$ differ strongly. This is shown
in Fig.~\ref{fig: SG comparison 1D plots}\,(c), where the modified
drift scheme shows a constant offset from the numerically exact solution
of the BVP (\ref{eq: two-point boundary value problem}) for $\delta\phi_{K,L}\to-\infty$.
\subsubsection{No electric field (diffusive limit)}
In the case of a vanishing electrostatic potential gradient $\delta\phi_{K,L}=\phi_{L}-\phi_{K}=0$
the schemes take the form
\begin{align}
\lim_{\delta\phi_{K,L}\to0}J_{n,K,L}^{g} & =M_{n}k_{B}T_{K,L}g_{n,K,L}\left(n_{L}-n_{K}\right),\label{eq: central finite difference}\\
\lim_{\delta\phi_{K,L}\to0}J_{n,K,L}^{\gamma} & =M_{n}k_{B}T_{K,L}\frac{1}{\Lambda\left(\frac{T_{L}^{\rho_{n,K,L}}}{\left[\gamma\left(\eta_{n,L}\right)\right]^{T_{L}/T_{K,L}}},\frac{T_{K}^{\rho_{n,K,L}}}{\left[\gamma\left(\eta_{n,K}\right)\right]^{T_{K}/T_{K,L}}}\right)}\left(\frac{T_{L}^{\rho_{n,K,L}}}{\left[\gamma\left(\eta_{n,L}\right)\right]^{T_{L}/T_{K,L}}}n_{L}-\frac{T_{K}^{\rho_{n,K,L}}}{\left[\gamma\left(\eta_{n,K}\right)\right]^{T_{K}/T_{K,L}}}n_{K}\right).\nonumber
\end{align}
The modified thermal voltage scheme (\ref{eq: log mean temp Chatard scheme})
approaches the central finite difference discretization (\ref{eq: central finite difference}),
which is a stable discretization for diffusion-dominated transport
problems \citep{Versteeg2007}. With the edge-averaged degeneracy
factor $g_{n,K,L}$, Eq.~(\ref{eq: central finite difference}) nicely
reflects the structure of the diffusive part of the continuous current
density expression (\ref{eq: drift-diffusion currents}) involving
the generalized Einstein relation (\ref{eq: generalized Einstein relations-1}).
For the modified drift scheme (\ref{eq: modified drift scheme}),
the limiting expression is a weighted finite difference discretization.
Due to the different treatment of the degeneracy of the electron gas
via the correction factors (\ref{eq: correction factor-1-1}), it
does not yield a discrete analogue of the generalized Einstein relation.
\subsubsection{Purely thermally driven currents}
In the chemical equilibrium ($\varphi_{n,L}=\varphi_{n,K}$),
the current is driven only by the temperature gradient. The corresponding
expressions are easily obtained from Eqs.~(\ref{eq: modified thermal voltage scheme - discrete thermodynamic form})--(\ref{eq: modified drift scheme - discrete thermodynamic form}),
which include the discrete Seebeck coefficients (\ref{eq: edge averaged Seebeck coefficients}).
\subsubsection{Non-negativity of the discrete dissipation rate \label{sec: Non-negativity-of-the discrete dissipation rate}}
The continuous entropy production rate (dissipation rate) per volume
(see Eq.~(\ref{eq: entropy production rate-2}))
\begin{align*}
\dot{s}_{\text{tot}} & =\frac{1}{T}\left(\mu_{c}-\mu_{v}\right)R+\frac{\kappa}{T^{2}}\left\Vert \nabla T\right\Vert ^{2}+\frac{1}{T}H_{J},
\end{align*}
has contributions from carrier recombination, heat flux and Joule
heating $H_{J}=-\left(\nabla\varphi_{n}+P_{n}\nabla T\right)\cdot\mathbf{j}_{n}-\left(\nabla\varphi_{p}+P_{p}\nabla T\right)\cdot\mathbf{j}_{p}$.
With the current density expressions (\ref{eq: current densities-2})
and a recombination rate of the form \eqref{eq: recombination rate}, all
terms in $\dot{s}_{\text{tot}}$ (including $H_{J}$) are evidently
non-negative (i.\,e., zero in the thermodynamic equilibrium and positive
otherwise). Therefore, the model obeys the second law of thermodynamics.
In order to rule out unphysical phenomena such as steady state dissipation \citep{Bessemoulin-Chatard2012,Bessemoulin-Chatard2017},
it is highly desirable to preserve this important structural property
of the continuous system in its discrete counterpart. Given the finite
volume discretization described above, this is straightforwardly achieved
for the contributions from the carrier recombination and the heat
flux, however, it is less obvious for the Joule heating term. In fact,
the non-negativity of the discrete Joule heating term is non-trivial
and can be violated in general when using a naive discretization approach
as in Ref.~\citep{Kato1994}.
We show that the discrete dissipation rate is evidently non-negative
for both generalized Scharfetter--Gummel schemes (\ref{eq: log mean temp Chatard scheme})
and (\ref{eq: modified drift scheme}). This follows immediately
from their consistency with the thermodynamic equilibrium (see Sec.~\ref{sec:Thermodynamic-equilibrium})
in conjunction with the discrete form (\ref{eq: discrete Joule heating})
of the heating term. Substituting Eq.~(\ref{eq: modified thermal voltage scheme - discrete thermodynamic form})
in (\ref{eq: discrete Joule heating}), one obtains
\begin{align*}
H_{J,n}^{g} & =-\left(\varphi_{n,L}-\varphi_{n,K}+P_{n,K,L}^{g}\left(T_{L}-T_{K}\right)\right)J_{n,K,L}^{g}=\sigma_{n,K,L}^{g}\left|\varphi_{n,L}-\varphi_{n,K}+P_{n,K,L}^{g}\left(T_{L}-T_{K}\right)\right|^{2}\geq0,
\end{align*}
which is zero only in the (discrete) thermodynamic equilibrium and
positive otherwise. The discrete conductivity $\sigma_{n,K,L}^{g}$ is
positive by construction, see Eq.~(\ref{eq: modified thermal voltage scheme - discrete thermodynamic form}).
Analogous expressions are obtained for the holes' current contribution
and the modified drift scheme \eqref{eq: modified drift scheme}. In conclusion, the consistency of the
discrete system (\ref{eq: discrete energy-drift-diffusion system})
with the second law of thermodynamics relies on using the respective
Seebeck coefficients $P_{n/p,K,L}$ implied by the current discretization
(see Eq.~(\ref{eq: edge averaged Seebeck coefficients})) consistently
in the discretized Joule heating term (\ref{eq: discrete Joule heating}).
Only then does this structural property of the discrete system hold without
any smallness assumption.
\subsection{Comparison with numerically exact solution \label{sec: Comparison with numerically exact solution}}
We investigate the accuracy of the schemes (\ref{eq: log mean temp Chatard scheme})
and (\ref{eq: modified drift scheme}) by comparing them with the
numerically exact solution of the BVP (\ref{eq: two-point boundary value problem}).
In the isothermal case, this has been carried out before in a similar
way by Farrell et al. \citep{Farrell2017a} (for different Scharfetter--Gummel
schemes), which inspired the investigation of highly accurate Scharfetter--Gummel
type discretizations based on the direct numerical integration of
the arising integral equation using quadrature rules in Ref.~\citep{Patriarca2019}.
In the present non-isothermal case, the problem is more complicated
because of the spatially varying temperature distribution along the
edge. It is convenient to recast the problem (\ref{eq: two-point boundary value problem})
into the form
\begin{align}
\begin{aligned}\frac{\mathrm{d}y\left(x\right)}{\mathrm{d}x} & =\frac{\bar{T}}{T\left(x\right)}\left(\delta\Phi+\frac{N_{c}(\bar{T})}{N_{c}(T(x))}\frac{J}{\mathscr{F}\left(y\right)}-\delta\Theta\frac{T(x)N_{c}^{\prime}(T(x))}{N_{c}(T(x))}\frac{\mathscr{F}\left(y\right)}{\mathscr{F}^{\prime}\left(y\right)}\right),\\
y\left(0\right) & =\bar{\eta}_{n}-\frac{1}{2}\delta\eta_{n},\qquad\qquad y\left(1\right)=\bar{\eta}_{n}+\frac{1}{2}\delta\eta_{n},
\end{aligned}
\label{eq: two-point boundary value problem - eta form}
\end{align}
with the notations $T\left(x\right)\equiv\left(1+\left[x-\frac{1}{2}\right]\delta\Theta\right)\bar{T}$,
$\bar{T}=\frac{1}{2}\left(T_{L}+T_{K}\right)$, $\delta T=T_{L}-T_{K}$,
$\bar{\eta}_{n}=\frac{1}{2}\left(\eta_{n,L}+\eta_{n,K}\right)$, $\delta\eta_{n}=\eta_{n,L}-\eta_{n,K}$
and the non-dimensionalized quantities
\begin{align*}
J & =\frac{J_{n,K,L}}{M_{n}k_{B}\bar{T}N_{c}(\bar{T})}, & \delta\Phi & =\frac{q\left(\phi_{L}-\phi_{K}\right)}{k_{B}\bar{T}}, & \delta\Theta & =\frac{\delta T}{\bar{T}}.
\end{align*}
The exact current $J_{\text{exact}}=J_{\text{exact}}(\delta\Phi,\delta\eta_{n},\delta\Theta,\bar{\eta}_{n},\bar{T})$
is a function of five parameters that satisfies the BVP (\ref{eq: two-point boundary value problem - eta form}).
We solve the BVP \eqref{eq: two-point boundary value problem - eta form}
numerically using the shooting method \citep{Keller1976}, where we
combine a 4th order Runge--Kutta method with Brent's root finding
algorithm \citep{Brent1971}. The problem is invariant under
the simultaneous transformation
\begin{align*}
\delta\Phi & \to-\delta\Phi, & \delta\eta_{n} & \to-\delta\eta_{n}, & \delta\Theta & \to-\delta\Theta, & x & \to1-x, & J & \to-J
\end{align*}
(i.\,e., the sign of the current changes when changing the nodes
$K\leftrightarrow L$), such that we can restrict our analysis to
$\delta\Theta\geq0$, when exploring the accuracy of the discrete
current in the $\left(\delta\Phi,\delta\eta_{n}\right)$-plane. The
comparison is carried out for $\mathscr{F}\left(\eta\right)=F_{1/2}\left(\eta\right)$
and $N_{c}=2\left(m_{c}^{\ast}k_{B}T/(2\pi\hbar^{2})\right)^{3/2}$.
The Fermi--Dirac integrals $F_{1/2}\left(\eta\right)$ and $F_{-1/2}\left(\eta\right)$
are evaluated using MacLeod's algorithm \citep{MacLeod1998}.
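A minimal sketch of this procedure is given below; it uses SciPy's adaptive Runge--Kutta integrator instead of the fixed-step fourth-order method, assumes $\mathscr{F}=F_{1/2}$ and $N_{c}\propto T^{3/2}$, and the parameter values as well as the bracket handed to Brent's method are chosen by hand for illustration:
\begin{verbatim}
# Minimal sketch: shooting method for the two-point BVP in eta form
# (assumptions: F = F_{1/2}, Nc ~ T**1.5; parameters and bracket are illustrative).
import numpy as np
from math import gamma
from scipy.integrate import quad, solve_ivp
from scipy.optimize import brentq
from scipy.special import expit

def F(j, eta):
    val, _ = quad(lambda t: 2.0*t**(2.0*j + 1.0)*expit(eta - t*t), 0.0, np.inf)
    return val / gamma(j + 1.0)

dPhi, dTheta, eta_bar, d_eta = 0.5, 0.1, 1.0, 0.5
y0, y1 = eta_bar - d_eta/2, eta_bar + d_eta/2
T = lambda x: 1.0 + (x - 0.5)*dTheta          # T(x)/T_bar along the edge

def rhs(x, y, J):
    """Right-hand side of the BVP; T*Nc'/Nc = 3/2 and Nc(T_bar)/Nc(T(x)) = T(x)**-1.5."""
    eta = y[0]
    dydx = (1.0/T(x))*(dPhi + T(x)**(-1.5)*J/F(0.5, eta)
                       - dTheta*1.5*F(0.5, eta)/F(-0.5, eta))
    return [dydx]

def residual(J):
    sol = solve_ivp(rhs, (0.0, 1.0), [y0], args=(J,), rtol=1e-6, atol=1e-9)
    return sol.y[0, -1] - y1

J_exact = brentq(residual, 0.0, 2.0)          # bracket chosen by inspection for these values
print(J_exact)
\end{verbatim}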
\begin{figure}
\centering
\includegraphics[width=1\columnwidth]{fig4-1Dcompare_new}
\caption{Comparison of the non-isothermal Scharfetter--Gummel schemes for
$\delta\eta_{n}=5$. (a)~In the non-degenerate regime ($\bar{\eta}_{n}=-5$)
all schemes coincide, even in the presence of a temperature gradient.
(b)~At $\bar{\eta}_{n}=2$ degeneration effects become significant.
While $J_{g}$ follows $J_{\text{exact}}$ with an acceptable error
over the whole range of $\delta\Phi$, the modified drift scheme $J_{\gamma}$
has a significant offset at strong electric fields ($\delta\Phi\to-\infty$).
The grey shaded area indicates the analytic bounds of the modified
thermal voltage scheme determined using the nodal values $g_{n,K}$
and $g_{n,L}$ instead of $g_{n,K,L}$. (c)~An additional temperature
gradient increases the error in the degenerate regime. The insets
show the effect of different averages $\rho_{n,K,L}$ of the nodal
values $\rho_{n,K}$ and $\rho_{n,L}$ in the modified drift scheme
and the different behavior of the schemes in the region of vanishing
discrete currents.}
\label{fig: SG comparison 1D plots}
\end{figure}
Figure \ref{fig: SG comparison 1D plots} shows the numerically exact
current $J_{\text{exact}}$ along with the approximations $J_{g}$
(modified thermal voltage scheme (\ref{eq: log mean temp Chatard scheme}))
and $J_{\gamma}$ (modified drift scheme (\ref{eq: modified drift scheme}))
as a function of the normalized electric field $\delta\Phi$ along
the edge. For weak degeneracy, both schemes agree with the numerically
exact solution -- even in the case of non-isothermal conditions.
This is shown in Fig.~\ref{fig: SG comparison 1D plots}\,(a) for
$\bar{\eta}_{n}=-5$ and $\delta\Theta=0.1$. At strong electric fields
$\delta\Phi\to\pm\infty$, all schemes approach the upwind scheme
$J_{\text{upw}}$ (grey dashed line, cf. Eq.~(\ref{eq: upwind scheme-1})).
The schemes (\ref{eq: log mean temp Chatard scheme}) and (\ref{eq: modified drift scheme})
differ in the treatment of degeneration effects, which becomes apparent
for increased $\bar{\eta}_{n}$. Figure~\ref{fig: SG comparison 1D plots}\,(b)
shows the results for $\bar{\eta}_{n}=2$ at isothermal conditions
$\delta\Theta=0$. The modified thermal voltage scheme $J_{g}$ (red
line) yields an acceptable deviation from the exact result $J_{\text{exact}}$
(black line) over the whole range of $\delta\Phi$. The error vanishes
at strong electric fields where both $J_{g}$ and $J_{\text{exact}}$
converge to the upwind scheme $J_{\text{upw}}$. The modified drift
scheme (purple line), however, shows a significant error at large
(negative) $\delta\Phi$, where it overestimates the current density
significantly (about 33~\% relative error at $\delta\Phi=-3$). This
behavior results from the different treatment of the degeneration
effects, that degrades the convergence of the modified drift scheme
in the case of strong degeneration (see Sec.~\ref{sec:Strong-electric-field}).
The plot highlights two other important exceptional points (pure diffusion
and zero current), where both schemes show a similar accuracy. In
the presence of an additional temperature gradient along the edge,
see Fig.~\ref{fig: SG comparison 1D plots}\,(c), the approximation
error of both schemes increases. The upper inset shows that the choice
of the average of $\rho_{n,K,L}$ (see Eq.~(\ref{eq: deviation function}))
has only a minor impact on the modified drift scheme. The lower inset
zooms on the region where the currents become zero. Here, all schemes
provide a satisfying accuracy, but none of them is exact, i.\,e.,
they yield a small spurious discrete current and intersect with the
exact solution only in the vicinity of the exact zero current point.
We observe that the modified drift scheme shows a slightly better
performance in this case, i.\,e., the discrete Seebeck coefficient (\ref{eq: edge averaged Seebeck coefficients - correction factor})
appears to be slightly more accurate than (\ref{eq: edge averaged Seebeck coefficients - mod thermal voltage}).
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{fig5-2Dcompare_new}
\caption{Comparison of the two non-isothermal Scharfetter--Gummel schemes
(\ref{eq: log mean temp Chatard scheme}) and (\ref{eq: modified drift scheme})
(with arithmetic average $\rho_{n,K,L}=\left(\rho_{n,K}+\rho_{n,L}\right)/2$)
in the ($\delta\Phi,\delta\eta_{n}$)-plane at isothermal ($\delta\Theta=0$,
top row (a)--(d)) and non-isothermal ($\delta\Theta=1/6$, bottom
row (e)--(h)) conditions and different levels of degeneration (weak
$\bar{\eta}_{n}=0$ or strong $\bar{\eta}_{n}=5$). The normalized
absolute error $(J-J_{\text{exact}})/\hat{J}$ is color-coded for
the range $[-1,1]$ (dark colored regions correspond to larger errors,
see the level lines). See the text for a discussion.}
\label{fig: SG comparison 2D plots}
\end{figure}
The normalized absolute errors $(J-J_{\text{exact}})/\hat{J}$ (with
$\hat{J}=M_{n}k_{B}\bar{T}N_{c}(\bar{T})$) of the two schemes (\ref{eq: log mean temp Chatard scheme})
and (\ref{eq: modified drift scheme}) are shown in the ($\delta\Phi,\delta\eta_{n}$)-plane
in Fig.~\ref{fig: SG comparison 2D plots} under isothermal ($\delta\Theta=0$,
top row (a)--(d)) and non-isothermal ($\delta\Theta=1/6$, bottom
row (e)--(h)) conditions and for different levels of degeneration
(weak $\bar{\eta}_{n}=0$ and strong $\bar{\eta}_{n}=5$). In the
limit of very fine meshes $(\delta\Phi,\delta\eta_{n},\delta\Theta)\to(0,0,0)$,
both schemes coincide and the deviation from the exact current approaches
zero. One observes that the white regions with a normalized
absolute error below $0.01$ (in the following denoted as ``low error
domain'') are generally larger for the modified thermal voltage scheme
than for the modified drift scheme. Thus, the modified thermal voltage
scheme is expected to yield a higher accuracy on sufficiently fine
meshes. This will be evidenced by the numerical simulation of a p-n-diode
in Sec.~\ref{sec: benchmark simulation}. The plots in Fig.~\ref{fig: SG comparison 2D plots}
feature two additional lines that refer to special limiting cases
where both schemes yield a very high accuracy. The case of a pure
drift current (i.\,e., no diffusion $n_{L}=n_{K}$) is indicated
by a red line; the zero current line (blue) refers to the curve in
the ($\delta\Phi,\delta\eta_{n}$)-plane where the exact current vanishes
($J_{\text{exact}}=0$ in the BVP (\ref{eq: two-point boundary value problem - eta form})).
In the isothermal case ($\delta\Theta=0$), the latter corresponds
to the thermodynamic equilibrium, in the non-isothermal case $(\delta\Theta\neq0)$
it refers to the situation of compensating chemical and thermal driving
forces. Both schemes are exact in the case of a pure drift current,
i.\,e., they asymptotically approach the upwind scheme (\ref{eq: upwind scheme-1}),
which is important for the robustness of the discretization in order
to avoid spurious oscillations. The modified thermal voltage scheme
shows a high accuracy also for slight deviations from the pure drift
line, even in the case of strong degeneration, see Fig.~\ref{fig: SG comparison 2D plots}\,(b,\,f).
In contrast, the modified drift scheme yields significant errors (much
higher than $0.01$) already for tiny deviations from the pure drift
line in the strongly degenerate case, see Fig.~\ref{fig: SG comparison 2D plots}\,(d,\,h).
This behavior has already been observed above in Fig.~\ref{fig: SG comparison 1D plots}\,(b,\,c)
and was predicted analytically in Sec.~\ref{sec:Strong-electric-field}.
Note that in the non-isothermal case, the temperature gradient shifts
the pure drift line from $\delta\eta_{n}^{\text{pure drift}}\vert_{\delta\Theta=0}=0$
(in Fig.~\ref{fig: SG comparison 2D plots}\,(a--d)) to $\delta\eta_{n}^{\text{pure drift}}\approx-\delta\Theta g_{\eta}(\bar{\eta}_{n})\bar{T}N_{c}^{\prime}(\bar{T})/N_{c}(\bar{T})$
(see Fig.~\ref{fig: SG comparison 2D plots}\,$\mbox{(e\textendash h)}$).
A prominent feature of the modified drift scheme is the additional
intersection with the exact solution (see also Fig.~\ref{fig: SG comparison 1D plots}\,(c)
at $\delta\Phi\approx3.5$), which leads to additional ``fingers''
of the low error domain, see Fig.~\ref{fig: SG comparison 2D plots}\,(c,\,d,\,g,\,h),
that are not associated with any special limiting case. The same feature
has been observed for the so-called \emph{inverse activity scheme}
described in Ref.~\citep{Farrell2017a}. Finally, we study the consistency
of the discretization schemes with the zero current line. In the isothermal
case, both schemes are exact and therefore consistent with the thermodynamic
equilibrium, see Fig.~\ref{fig: SG comparison 2D plots}\,(a--d).
In the strongly degenerate case, however, the zero current line is
only partially located within the low error domain, since the schemes
intersect with the exact solution only in the vicinity of the zero
current line and not exactly on it (see also the inset of Fig.~\ref{fig: SG comparison 1D plots}\,(c)).
Nevertheless, the zero current lines of the discrete schemes, which
are plotted as dashed orange lines in Fig.~\ref{fig: SG comparison 2D plots}\,(e--h),
nicely overlap with the exact zero current line (blue). Thus, the
spurious non-zero currents are very small and the slight discrepancy
in this limiting case is only of minor importance.
\subsection{Analytical error estimate \label{sec: Analytical error estimate}}
We compare both schemes (\ref{eq: log mean temp Chatard scheme})
and (\ref{eq: modified drift scheme}) by deriving an upper error
bound. We follow the approach developed by Farrell et al. \citep{Farrell2017a}
and extend it to the non-isothermal case. Using the identities
\begin{align*}
B\left(x\right)-B\left(-x\right) & =-x, & B\left(x\right)+B\left(-x\right) & =x\coth{\left(\frac{x}{2}\right)},
\end{align*}
we obtain the series expansion of the discrete currents (\ref{eq: log mean temp Chatard scheme})
and (\ref{eq: modified drift scheme}) at $\left(\delta\Phi,\delta\eta_{n},\delta\Theta\right)=\left(0,0,0\right)$
up to second order as
\begin{align*}
J_{g} & =-\mathscr{F}\left(\bar{\eta}_{n}\right)\delta\Phi+\mathscr{F}\left(\bar{\eta}_{n}\right)\left(\delta\eta+\frac{\bar{T}N_{c}^{\prime}(\bar{T})}{N_{c}(\bar{T})}g_{\eta}\left(\bar{\eta}_{n}\right)\delta\Theta\right)\frac{\tilde{X}_{g}}{2}\coth{\left(\frac{\tilde{X}_{g}}{2}\right)}+\mathcal{O}(\delta^{3}),\\
J_{\gamma} & =-\mathscr{F}\left(\bar{\eta}_{n}\right)\tilde{X}_{\gamma}+\mathscr{F}^{\prime}\left(\bar{\eta}_{n}\right)\left(\delta\eta+\frac{\bar{T}N_{c}^{\prime}(\bar{T})}{N_{c}(\bar{T})}g_{\eta}\left(\bar{\eta}_{n}\right)\delta\Theta\right)\frac{\tilde{X}_{\gamma}}{2}\coth{\left(\frac{\tilde{X}_{\gamma}}{2}\right)}+\mathcal{O}(\delta^{3}),
\end{align*}
where
\begin{align*}
\tilde{X}_{g} & =\frac{1}{g_{\eta}\left(\bar{\eta}_{n}\right)}\delta\Phi, & \tilde{X}_{\gamma} & =\delta\Phi-\frac{g_{\eta}\left(\bar{\eta}_{n}\right)-1}{g_{\eta}\left(\bar{\eta}_{n}\right)}\left(\delta\eta+\frac{\bar{T}N_{c}^{\prime}(\bar{T})}{N_{c}(\bar{T})}g_{\eta}\left(\bar{\eta}_{n}\right)\delta\Theta\right),
\end{align*}
and $\mathcal{O}(\delta^{3})\equiv\mathcal{O}(\delta\eta_{n}^{3})+\mathcal{O}(\delta\eta_{n}^{2}\,\delta\Theta)+\mathcal{O}(\delta\Theta^{2}\,\delta\eta_{n})+\mathcal{O}(\delta\Theta^{3})+\mathcal{O}(\delta\Phi\,\delta\eta_{n}^{2})+\mathcal{O}(\delta\Phi\,\delta\eta_{n}\,\delta\Theta)+\mathcal{O}(\delta\Phi\,\delta\Theta^{2})$
denotes the third-order corrections. The second-order expansion of
the modified drift scheme is independent of the kind of average used
for $\rho_{n,K,L}\approx\rho_{n}(\bar{\eta}_{n},\bar{T})+\mathcal{O}(\delta\eta_{n}^{2})+\mathcal{O}(\delta\eta_{n}\,\delta\Theta)+\mathcal{O}(\delta\Theta^{2})$,
as only its zeroth-order contribution (where all means coincide) is
relevant here. Using the inequality \citep{Farrell2017a}
\[
1\leq x\coth{\left(x\right)}\leq1+\left|x\right|,
\]
we arrive at the error estimates for the modified thermal voltage
scheme (neglecting third-order terms)
\begin{align}
\left|J_{g}-J_{1}\right| & \leq\frac{1}{2}\mathscr{F}^{\prime}\left(\bar{\eta}_{n}\right)\left(\left|\delta\Phi\,\delta\eta_{n}\right|+\frac{\bar{T}N_{c}^{\prime}(\bar{T})}{N_{c}(\bar{T})}g_{\eta}\left(\bar{\eta}_{n}\right)\left|\delta\Phi\,\delta\Theta\right|\right)\label{eq: Chatard error bound}
\end{align}
and the modified drift scheme
\begin{align}
\left|J_{\gamma}-J_{1}\right| & \leq\frac{1}{2}\mathscr{F}^{\prime}\left(\bar{\eta}_{n}\right)\left(\left|\delta\Phi\,\delta\eta_{n}\right|+\frac{\bar{T}N_{c}^{\prime}(\bar{T})}{N_{c}(\bar{T})}g_{\eta}\left(\bar{\eta}_{n}\right)\left|\delta\Phi\,\delta\Theta\right|+\frac{g_{\eta}\left(\bar{\eta}_{n}\right)-1}{g_{\eta}\left(\bar{\eta}_{n}\right)}\left|\delta\eta_{n}+\frac{\bar{T}N_{c}^{\prime}(\bar{T})}{N_{c}(\bar{T})}g_{\eta}\left(\bar{\eta}_{n}\right)\delta\Theta\right|^{2}\right),\label{eq: correction factor error bound}
\end{align}
where $J_{1}=\mathscr{F}\left(\bar{\eta}_{n}\right)\left(\delta\eta_{n}-\delta\Phi+\frac{\bar{T}N_{c}^{\prime}(\bar{T})}{N_{c}(\bar{T})}g_{\eta}\left(\bar{\eta}_{n}\right)\delta\Theta\right)$
is the first-order exact solution of the BVP (\ref{eq: two-point boundary value problem - eta form}).
Both schemes converge to the exact result as their first-order terms
agree with $J_{1}$. The first two terms of Eq.~(\ref{eq: correction factor error bound})
coincide with Eq.~(\ref{eq: Chatard error bound}). The error bound
for the modified drift scheme has an additional second-order contribution
that becomes significant in the case of strong degeneration $\bar{\eta}_{n}\gg1$
where $(g_{\eta}\left(\bar{\eta}_{n}\right)-1)/g_{\eta}\left(\bar{\eta}_{n}\right)\to1$.
Therefore, the maximum error of the modified thermal voltage scheme
is guaranteed to be smaller than that of the modified drift scheme
in the case of degenerate carrier statistics. This analytical result
is consistent with the numerical results shown in Figs.~\ref{fig: SG comparison 1D plots}
and \ref{fig: SG comparison 2D plots} and holds in both the isothermal
and the non-isothermal case. For non-degenerate carrier statistics,
both error estimates (\ref{eq: Chatard error bound}) and (\ref{eq: correction factor error bound})
coincide, since both schemes reduce to the non-degenerate scheme (\ref{eq: non-degenerate, non-isothermal SG}).
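To make the comparison concrete, the two bounds (\ref{eq: Chatard error bound}) and (\ref{eq: correction factor error bound}) can be evaluated numerically. The following minimal Python sketch (an illustration only, not part of the derivation) uses Fermi--Dirac statistics $\mathscr{F}=F_{1/2}$ and assumes the degeneracy factor $g_{\eta}=\mathscr{F}/\mathscr{F}^{\prime}$ and the band factor $\bar{T}N_{c}^{\prime}(\bar{T})/N_{c}(\bar{T})=3/2$ (parabolic bands); both are assumptions made only for this example.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def F12(eta):
    # Fermi-Dirac integral of order 1/2 (tends to exp(eta) for eta -> -inf);
    # the upper limit eta+40 truncates a negligible tail and avoids overflow
    val, _ = quad(lambda x: np.sqrt(x) / (1.0 + np.exp(x - eta)), 0.0, eta + 40.0)
    return 2.0 / np.sqrt(np.pi) * val

def dF12(eta, h=1e-5):
    return (F12(eta + h) - F12(eta - h)) / (2.0 * h)

dPhi, deta, dTheta = 0.5, 0.5, 1.0 / 6.0   # sample mesh increments
tfac = 1.5                                 # assumed T*Nc'(T)/Nc(T)

for eta_bar in (0.0, 5.0):                 # weak and strong degeneration
    Fp = dF12(eta_bar)
    g = F12(eta_bar) / Fp                  # assumed degeneracy factor F/F'
    # bound for the modified thermal voltage scheme
    common = 0.5 * Fp * (abs(dPhi * deta) + tfac * g * abs(dPhi * dTheta))
    # additional quadratic term in the bound for the modified drift scheme
    extra = 0.5 * Fp * (g - 1.0) / g * (deta + tfac * g * dTheta)**2
    print(eta_bar, common, common + extra)
\end{verbatim}
For $\bar{\eta}_{n}=5$ the quadratic term dominates the bound of the modified drift scheme, in agreement with the discussion above.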
\subsection{Numerical simulation of a p-n-diode \label{sec: benchmark simulation}}
\begin{figure}
\includegraphics[width=1\textwidth]{fig6-convergence_noheating}\caption{Convergence of total current density in the p-n diode problem without
self-heating effects ($H=0$, isothermal case). (a)~Current-voltage
curves obtained by the two schemes (\ref{eq: log mean temp Chatard scheme})
and (\ref{eq: modified drift scheme}) on an equidistant grid with
13 nodes. The reference solution (black line) was obtained on a fine
grid (65535 nodes). (b)~Convergence of the relative error with respect
to the reference solution under mesh refinement at $2\,\text{V}$.
The modified drift scheme (blue squares) shows a monotonic, quadratic
convergence for decreasing $h$. The modified thermal voltage scheme
(red circles) converges non-monotonically as it intersects with the
reference solution at $h\approx4.5\,\text{nm}$. (c)~Convergence
of the absolute error of both schemes.}
\label{fig: convergence isothermal}
\end{figure}
We consider a one-dimensional GaAs-based p-n-diode and compare the
convergence of the total current density ($\mathbf{j}_{\text{tot}}=\mathbf{j}_{n}+\mathbf{j}_{p}$)
under mesh refinement using both discretization schemes. The device
consists of a $1\,\text{\textmu m}$ n-doped section with $C=N_{D}^{+}=2\times10^{18}\,\text{cm}^{-3}$
followed by a $1\,\text{\textmu m}$ long p-doped section with $C=-N_{A}^{-}=-2\times10^{18}\,\text{cm}^{-3}$.
We use Fermi--Dirac statistics $\mathscr{F}\left(\eta\right)=F_{1/2}\left(\eta\right)$
and take Shockley--Read--Hall recombination, spontaneous emission
and Auger recombination into account \citep{Selberherr1984,Palankovski2004}.
The material parameters, mobility models (depending on temperature
and doping density) and the temperature-dependent heat conductivity
model are taken from Ref.~\citep{Palankovski2004}. The mobilities
and thermal conductivity along the edges are taken as the harmonic
average of the respective nodal values. We use Dirichlet boundary
conditions on both ends of the diode, modeling ideal Ohmic contacts
(charge neutrality at the boundary) and ideal heat sinks with $T_{\text{contact}}=300\,\text{K}$.
The simulations are carried out on equidistant grids with varying
number of mesh points $N_{\text{nodes}}$ and mesh size $h=2\,\text{\textmu m}/(N_{\text{nodes}}-1)$.
The nonlinear systems are solved using a Newton iteration method with a fully analytical Jacobian matrix \cite{Farrell2017}.
Figure~\ref{fig: convergence isothermal}\,(a) shows the current-voltage
curves obtained by both discretization schemes on a coarse grid (13
nodes, $h\approx1.7\times10^{-7}\,\text{m}$) under isothermal conditions,
i.\,e., without self-heating and Seebeck effect. For the evaluation
of the error, we use a reference solution that was computed on a fine
grid with 65535 nodes ($h\approx3.1\times10^{-11}\,\text{m}$), where
the relative error between both schemes is about $9.6\times10^{-9}$.
At $2\thinspace\text{V}$ the computed currents differ significantly
from the reference result: The relative error is about $13\,\%$ for
the modified thermal voltage scheme and $15\,\%$ for the modified
drift scheme. The convergence of the computed current densities to
the reference result under mesh refinement is shown in Fig.~\ref{fig: convergence isothermal}\,(b).
The modified drift scheme (Eq.~(\ref{eq: modified drift scheme}),
blue squares) shows a monotonic, quadratic convergence for decreasing
$h$. The modified thermal voltage scheme (Eq.~(\ref{eq: log mean temp Chatard scheme}),
red circles), however, shows a non-monotonic convergence behavior
as it intersects with the reference solution at $h\approx4.5\times10^{-9}\,\text{m}$.
On sufficiently fine meshes ($h<10^{-8}\,\text{m}$), the error of
the modified thermal voltage scheme is almost one order of magnitude
smaller than that of the modified drift scheme. Put differently, the modified
thermal voltage scheme reaches the same accuracy as the modified drift
scheme already on a coarser grid with less than half the number
of nodes. Thus, the modified thermal voltage scheme saves about one
refinement step. The convergence of the absolute error is plotted
in Fig.~\ref{fig: convergence isothermal}\,(c), where the inset
highlights the origin of the non-monotonic convergence behavior of
the modified thermal voltage scheme.
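The observed convergence order can be read off directly from such data via $p\approx\ln(e_{i}/e_{i+1})/\ln(h_{i}/h_{i+1})$. A minimal sketch (with purely hypothetical error values, not the simulation data of this work) is:
\begin{verbatim}
import numpy as np

# mesh sizes for a 2 um device on uniform grids with 13...193 nodes
h = 2e-6 / (np.array([13, 25, 49, 97, 193]) - 1)
# hypothetical relative errors |J - J_ref| / J_ref (placeholders only)
err = np.array([1.5e-1, 3.9e-2, 9.8e-3, 2.5e-3, 6.2e-4])

order = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
for hi, p in zip(h[1:], order):
    print(hi, p)   # p close to 2 indicates quadratic convergence
\end{verbatim}
Note that for a scheme whose error changes sign under refinement (as the modified thermal voltage scheme does here), this estimate is meaningful only away from the intersection point.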
The numerical results for the non-isothermal case, where self-heating
and the Seebeck effect are taken into account, are shown in Fig.~\ref{fig: convergence non-isothermal}.
The results are qualitatively very similar to the isothermal case
shown in Fig.~\ref{fig: convergence isothermal}; quantitatively
the advantage of the modified thermal voltage scheme over the modified
drift scheme is even greater. On a coarse grid, the total current
is underestimated by both schemes with a relative error of about $8\,\%$,
see Fig.~\ref{fig: convergence non-isothermal}\,(a). For sufficiently
fine meshes ($h<0.5\times10^{-8}\,\text{m}$), the error of the modified
thermal voltage scheme is always more than one order of magnitude
smaller than that of the modified drift scheme, see Fig.~\ref{fig: convergence non-isothermal}\,(b).
In other words, the modified thermal voltage scheme reaches the same
accuracy already on an about four times coarser mesh (two refinement
steps), which is a substantial advantage for large problems involving
complex multi-dimensional geometries. Again, we observe a non-monotonic
convergence behavior of the modified thermal voltage scheme, see Fig.~\ref{fig: convergence non-isothermal}\,(b,\,c).
\begin{figure}
\includegraphics[width=1\textwidth]{fig7-convergence_heating}\caption{Convergence of total current density in the p-n diode problem with
self-heating effects. (a)~Current-voltage curves obtained by the
two schemes (\ref{eq: log mean temp Chatard scheme}) and (\ref{eq: modified drift scheme})
on an equidistant grid with 13 nodes. As in Fig.~\ref{fig: convergence isothermal},
the reference solution (black line) was obtained on a grid with 65535
nodes. Self-heating lowers the mobilities such that the total current
density is smaller than in the isothermal case, cf. Fig.~\ref{fig: convergence isothermal}\,(a).
(b)~Convergence of the relative error with respect to the reference
solution under mesh refinement at $2\,\text{V}$. As in the isothermal
case, the modified thermal voltage scheme (red circles) shows a non-monotonic
convergence behavior that is faster than the quadratic convergence
of the modified drift scheme (blue squares). (c)~Convergence of the
absolute error of both schemes. The modified thermal voltage scheme
intersects twice with the reference solution.}
\label{fig: convergence non-isothermal}
\end{figure}
\section{Summary and conclusion}
We discussed the non-isothermal drift-diffusion system for the simulation
of electro-thermal transport processes in semiconductor devices. It
was shown that the model equations take a remarkably simple form
when assuming the Kelvin formula for the Seebeck coefficient. First,
the heat generation rate involves exactly the three classically known
self-heating effects (Joule heating, recombination heating, Thomson--Peltier
effect) without any further transient contributions. Moreover, our
modeling approach immediately yields the correct average kinetic energy
of the carriers in the recombination heating term, independently of any
scattering parameter. Second, the Kelvin formula enables a simple
representation of the electrical current densities in the drift-diffusion
form, where the thermal driving force can be entirely absorbed in
the (nonlinear) diffusion coefficient via the generalized Einstein
relation. The Kelvin formula accounts for the degeneration of the
electron-hole plasma (Fermi--Dirac statistics) and was shown to be
in a good quantitative agreement with experimental data reported for
n-GaAs.
We have derived two non-isothermal generalizations of the finite volume
Scharfetter--Gummel scheme for the discretization of the current
densities, which differ in their treatment of degeneration effects.
The first approach is based on an approximation of the discrete generalized
Einstein relation and implies a specific modification of the thermal
voltage. The second scheme is based on including the degeneration
effects into a modification of the electric field, which is similar
to the conventional method that is widely used in commercial device
simulation software packages \citep{Synopsys2010,Silvaco2016}. We
presented a detailed analysis of both schemes by assessing their accuracy
in comparison to the numerically exact solution of the underlying
two-point boundary value problem. Moreover, we derived analytical
error bounds and investigated important structure-preserving properties
of the discretizations, including the consistency with the thermodynamic
equilibrium, the non-negativity of the discrete dissipation rate (second
law of thermodynamics on the discrete level) and their asymptotic
behavior in the drift- and diffusion-dominated limits. Finally, we
performed a numerical convergence study for a simple example case. Our results indicate a significantly
higher accuracy and faster convergence of the modified thermal
voltage scheme in comparison to the modified drift scheme. This result
holds under both isothermal and non-isothermal conditions. The
higher accuracy --- about one order of magnitude for sufficiently
fine grids in the present case study --- of the modified thermal
voltage scheme makes it a favorable discretization method for problems
exhibiting stiff solutions (internal layers at p-n junctions, boundary
layers at electrical contacts) or devices with a complicated
multi-dimensional geometry, where the number of nodes required to
reach the asymptotic accuracy regime is extremely large and may easily
exhaust the available computational resources.
In more general situations, where the Seebeck coefficient deviates
from the Kelvin formula (e.\,g., due to the phonon drag effect),
we suggest to combine the two discretization techniques by decomposing
the Seebeck coefficient into a Kelvin formula part and an excess contribution:
$P_{n}=P_{n}^{\text{Kelvin}}+P_{n}^{\text{exc}}$. The first part
can be absorbed in the generalized Einstein relation, which allows
for the treatment described in Sec.~\ref{sec: Modified thermal voltage scheme}
and inherits the improved accuracy of the modified thermal voltage
scheme. The excess part $P_{n}^{\text{exc}}$ must be averaged along
the edge and plays a role similar to that of the $\rho_{n,K,L}$ term in Sec.~\ref{sec: Correction factor scheme}
(leading to an additive correction in the argument of the Bernoulli
function).
\section*{Acknowledgements}
This work was funded by the German Research Foundation (DFG) under
Germany's Excellence Strategy -- EXC2046: \textsc{Math}+ (project
AA2-3). The author is grateful to Thomas Koprucki for carefully reading
the manuscript and giving valuable comments.
\section{Introduction}
Braneworld cosmology is based on the scenario in which matter is confined
on a brane moving in the higher dimensional bulk
with only gravity allowed to propagate in the bulk \cite{arkani,antoniadis}.
Among the many braneworld models, particularly important are the two versions of the Randall-Sundrum (RS) model.
The first RS model (RSI) \cite{randall1} was
originally proposed as a solution to the hierarchy problem in particle physics whereas
the second RS model (RSII) \cite{randall2} provides
a mechanism for localizing gravity on the 3+1 dimensional universe embedded in
a 4+1 spacetime without compactification of the extra dimension.
Immediately after the papers \cite{randall1,randall2} appeared, it was realized that the RS model
is immersed in a wider framework of the so called AdS/CFT correspondence
\cite{gubser2} (for a recent retrospective see the appendix of Ref.\ \cite{bilic1}).
At the same time it was realized that the RS model, as well as similar braneworld models,
may have interesting cosmological
implications \cite{binetruy,flanagan}. In particular, owing to the presence of an extra dimension
and the bulk cosmological constant related to the brane tension, the usual
Friedmann equations are modified
\cite{binetruy}
so the model can have predictions different from the standard cosmology
and is therefore subject to cosmological tests \cite{godlowski}.
The RS model, originally proposed with a pure 4+1-dimensional anti-de Sitter (AdS$_5$) bulk,
can be extended to include
matter in the bulk.
A massive scalar field in the bulk was first introduced by Goldberger and Wise \cite{goldberger}
to stabilize the brane separation in RSI.
The RS model with a minimally coupled scalar field in the bulk, referred to as the {\em thick brane} model,
has been constructed for maximally symmetric solutions on the brane
\cite{kobayashi}.
It has been demonstrated that the bulk scalar potential and the corresponding
brane potential can be
derived from a superpotential in which case the solution of the static vacuum geometry reduces to a set
of first-order BPS-like equations \cite{dewolfe}.
A noncanonical scalar field in the bulk with bulk tachyon Lagrangian has been considered
in Ref.\ \cite{german1} where a thick braneworld
has been investigated in the cosmological context. In particular,
the stability under tensor perturbations and gravity localization has been demonstrated.
In a subsequent paper \cite{german2} the stability under scalar perturbation has been studied for a
braneworld with maximally symmetric geometry.
Models where matter in the bulk contains a non-minimally coupled selfinteracting scalar have also been studied.
Some interesting features of these models can be found in Refs.\ \cite{Bogdanos:2006dt,HerreraAguilar:2011jm}
and references therein.
In this paper we study an
RSII-type braneworld cosmology extended to more general warp factors.
As an application, we study in particular
a braneworld scenario based on this
extended RSII with an effective tachyon field on
the brane\footnote{Note that our tachyon field is located on the observer brane
in contrast to the bulk tachyon of
Refs.\ \cite{german1,german2}.}.
What distinguishes the tachyon from the canonical
scalar field is
the Lagrangian of the Dirac-Born-Infeld (DBI) form \cite{sen}:
\begin{equation}
\mathcal{L}=V(\varphi) \sqrt{1-g^{\mu\nu}\varphi_{,\mu}\varphi_{,\nu}} .
\end{equation}
A similar Lagrangian appears in the so called DBI
inflation models \cite{shandera}. In these models the inflation
is driven by the motion of a D3-brane in a warped throat
region of a compact space and the
DBI field corresponds to the position of the D3-brane.
As shown by Abramo and Finelli \cite{abramo} in a standard cosmological scenario,
for the class of tachyon models
with inverse power-law potentials $V(\varphi) \propto \varphi^{-n}$,
the power $n=2$ divides two subclasses with distinct behaviors in the asymptotic regimes.
For $n<2$
in the limit $\varphi \rightarrow \infty$, the pressure
$p\rightarrow -\rho$ and the universe
behaves as quasi-de Sitter.
For $n>2$ and large $\varphi$, the pressure $p\rightarrow 0^{-}$ very quickly
yielding asymptotically a cold dark matter (CDM) domination.
In the context of tachyon inflation,
in both cases the tachyon will remain a dominant component after the inflationary epoch
unless, at the end of inflation, it
decays into inhomogeneous fluctuations and
other particles. This period, known as reheating
\cite{kofman2,kofman3,bassett}, links
the inflationary epoch with the subsequent thermalized
radiation era.
A simple tachyon model can be realized in the framework of RSII.
The original RSII consists of two 3-branes in
the AdS$_5$ bulk
with line element
\begin{equation}
ds^2_{(5)}=G_{ab} dX^a dX^b=e^{-2|y|/\ell} \eta_{\mu\nu}dx^\mu dx^\nu
-dy^2 ,
\label{eq3000}
\end{equation}
with the observer's brane placed at $y=0$ and the negative-tension brane pushed off to $y=\infty$.
It may be easily shown \cite{bilic} that one additional 3-brane moving in the AdS$_5$ bulk
behaves effectively as a tachyon with a potential
$V(\varphi) \propto \varphi^{-4}$ and hence
drives a dark matter attractor.
A more general tachyon potential could be obtained from
more general bulk geometry.
This can be achieved if one assumes the presence of matter in the bulk, e.g.,
in the form of a selfinteracting scalar field.
The bulk scalar would change the bulk
geometry depending on the scalar field potential.
In addition, the braneworld cosmology would differ from that of the original RSII model.
A straightforward approach would be to start from a given bulk field potential and
derive the bulk geometry which would, in turn, yield a tachyon potential
of the effective tachyon field induced by the dynamical 3-brane.
A more empirical approach would be to
go the other way round: starting from a given, phenomenologically interesting tachyon potential
one would fix the warp factor and one could, in principle,
construct the bulk scalar-field selfinteraction potential.
In this paper we will start from a warp factor of a general form
and use the tachyon model to study the cosmology on the brane.
In particular, we will study the effects of the warp factor of the power-law form which may be linked to
the exponential superpotential of the bulk field.
With this warp the tachyon will have an inverse power-law potential.
We will analyze this type of tachyon
potentials in four different cosmological scenarios: the standard tachyon cosmology, low
and high density regimes of our braneworld model, and the high density Gauss-Bonnet braneworld cosmology.
The remainder of the paper is organized as follows.
In the next section we introduce the extended RSII
and derive the corresponding braneworld cosmology. In Sec. \ref{dynamical}
we introduce the tachyon as a dynamical brane and derive the field equations
in a covariant Hamiltonian formalism.
The asymptotic solutions to the field equations for
the inverse power law potential are presented in Sec.\ \ref{solutions}.
In Sec. \ref{conclude} we give the concluding remarks.
Finally, in Appendix \ref{scalar} we outline the derivation of
the generalized RSII model with a scalar field in the bulk.
\section{Braneworld cosmology}
\label{braneworld}
Our curvature conventions are as follows:
$R^{a}{}_{bcd} = \partial_c \Gamma_{db}^a -
\partial_d \Gamma_{cb}^a + \Gamma_{db}^e \Gamma_{ce}^a - \Gamma_{cb}^e \Gamma_{de}^a$
and $R_{ab} = R^s{}_{asb}$,
so Einstein's equations are $R_{ab} - \frac{1}{2}R G_{ab} = +8\pi G T_{ab}.$
Here, we
derive the braneworld cosmology assuming that the bulk spacetime is
given by the metric
\begin{equation}
ds^2_{(5)}=G_{ab} dX^a dX^b=\psi(y)^2 \eta_{\mu\nu}dx^\mu dx^\nu
-dy^2 ,
\label{eq3006}
\end{equation}
and the cosmology is determined by the motion of the brane.
It will be sometimes advantageous to work in conformal coordinates with line element
\begin{equation}
ds^2_{(5)}=\frac{1}{\chi(z)^2}( g_{\mu\nu}dx^\mu dx^\nu
-dz^2).
\label{eq4112}
\end{equation}
The two metrics are related by the coordinate transformation
\begin{equation}
dz= \frac{dy}{\psi(y)}
\label{eq3011}
\end{equation}
and
\begin{equation}
\chi(z)=\frac{1}{{\psi(y(z))}}.
\label{eq3010}
\end{equation}
The observer brane is placed at $y=y_{\rm br}$ ($z=z_{\rm br}$) and,
as in the original RSII model, we assume
the $Z_2$ orbifold symmetry $y-y_{\rm br}\leftrightarrow y_{\rm br}-y$ ($z\leftrightarrow z_{\rm br}^2/z$), so
the region $-\infty<y\leq y_{\rm br}$ ($0<z\leq z_{\rm br}$) is identified with
$ y_{\rm br}\leq y <\infty$ ($z_{\rm br} \leq z <\infty$).
Next, we assume
that the observer's brane carries additional matter
represented by the Lagrangian $\mathcal{L}$,
i.e., the brane action is
\begin{equation}
S_{\rm br}[h] =\int_\Sigma d^{4}x\sqrt{-h} (-\lambda(y) + \mathcal{L}[h]) ,
\end{equation}
with a $y$-dependent brane tension $\lambda$.
Then, we allow the brane
to move in the bulk along the fifth coordinate $y$.
In other words, the brane hypersurface $\Sigma$
is time dependent and may be
defined by
\begin{equation}
\psi(y)^2-a(t)^2=0 ,
\label{eq027}
\end{equation}
where $a=a(t)$ is an arbitrary function.
The normal to $\Sigma$ is then given by
\begin{equation}
n_\mu \propto\partial_\mu (\psi(y)-a(t))=(-\dot{a},0,0,0,\psi')
\label{eq013}
\end{equation}
and, using the normalization $g^{\mu\nu}n_\mu n_\nu=-1$, one finds the nonvanishing components
\begin{equation}
n_t =\frac{\dot{a}}{\psi'}\left(1 -\frac{\dot{a}^2}{\psi^2\psi^{\prime 2}}\right)^{-1/2},
\label{eq014}
\end{equation}
\begin{equation}
n_y=-\left(1 -\frac{\dot{a}^2}{\psi^2\psi^{\prime 2}}\right)^{-1/2} .
\label{eq0141}
\end{equation}
Using this, we find the induced line element on the brane
\begin{equation}
ds_{\rm ind}^2=(G_{ab} +n_an_b) dx^adx^b=n(t)^2dt^2 -a(t)^2 d{\bf x}^2 ,
\label{eq028}
\end{equation}
where
\begin{equation}
n^2 =a^2-\frac{\dot{a}^2}{\psi^{\prime 2}} .
\label{eq016}
\end{equation}
It is understood that the argument $y$ of $\psi'(y)$
is implicitly time dependent through (\ref{eq027}).
The Friedmann equations on the brane follow directly from
the junction conditions \cite{israel}
\begin{equation}
\left[ \left[ K^{\mu}_{\nu} - \delta^{\mu}_{\nu} K_{\alpha}^{\alpha} \right] \right]
= 8 \pi G_5 (\lambda(y)\delta^{\mu}_{\nu}+ T^{\mu}_{\nu}) ,
\label{eq099}
\end{equation}
where $ K^{\mu}_{\nu}$ is the pullback of the extrinsic
curvature tensor
defined in Appendix (Eq.\ (\ref{eq109})).
The energy momentum tensor $T^{\mu}_{\nu}={\rm diag} (\rho,-p,-p,-p)$ corresponds to the Lagrangian
$\mathcal{L}$. From (\ref{eq099}) together with
(\ref{eq014})-(\ref{eq016}) we obtain
\begin{equation}
\frac{(\partial_t a)^2}{n^2 a^2} +\frac{\psi^{\prime 2}}{\psi^2}=\left(\frac{4\pi G_5}{3}\right)^2
(\lambda+\rho)^2 .
\label{eq020}
\end{equation}
The first term on the left-hand side of (\ref{eq020}) is the square of
the Hubble expansion rate for the metric (\ref{eq028}) on the brane
\begin{equation}
H^2=\frac{(\partial_t a)^2}{n^2 a^2}.
\label{eq021}
\end{equation}
Then, the first Friedmann equation takes the form
\begin{equation}
H^2= \frac{8\pi G_{\rm N}}{3}\left(\frac{4\pi G_5}{3k}\lambda\right) \rho+
\left(\frac{4\pi G_5}{3}\right)^2\rho^2 +
\left(\frac{4\pi G_5}{3}\lambda\right)^2 -\frac{\psi^{\prime 2}}{\psi^2},
\label{eq022}
\end{equation}
where $G_{\rm N}$ is the four-dimensional Newton constant and $k$ is a mass scale related to $G_5$,
\begin{equation}
k= \frac{G_{\rm N}}{G_5} .
\label{eq023}
\end{equation}
Generally, $\lambda$, $\psi$ and $\psi'$ are implicit functions of $a(t)$ through their dependence
on $y$ which in turn is a function of $a$ via (\ref{eq027}).
For a pure AdS$_5$ bulk with curvature radius $\ell=1/k$ we have
$ \psi^{\prime}/\psi=-k$, $k=1/\ell$, $\lambda$ is constant and with the RSII fine tuning condition
we recover the usual RSII expressions (see, e.g., Ref. \cite{bilic1}).
Henceforth we will assume $|\psi^{\prime}/\psi|\lesssim k$ and the average of $|\psi^{\prime}/\psi|$
will effectively represent a warp compactification scale.
A modified Friedmann equation similar to (\ref{eq022})
was derived by P.~Brax, C.~van de Bruck and A.~C.~Davis \cite{brax2}
for a braneworld with a scalar field $\Phi$ in the bulk
with a time dependent geometry.
The full expression in our notation reads
\begin{equation}
H^2=\frac{4\pi G_5}{9}W\rho+\left( \frac{8\pi G_5}{3}\rho\right)^2-\frac{4\pi G_5}{9a^4}
\int d\tau\frac{dW}{d\tau} a^4\rho -\frac{1}{6a^4}\int d\tau\frac{da^4}{d\tau}
\left(\frac12 (\partial_\tau\Phi)^2-U_{\rm eff}\right),
\label{eq315}
\end{equation}
where $\tau$ is the synchronous time and $W=W(\Phi)$ is the superpotential. The
effective potential $U_{\rm eff}=U_{\rm eff}(\Phi)$
is defined as
\begin{equation}
U_{\rm eff}=\frac{W^2}{6}-\frac18 \left(\frac{dW}{d\Phi}\right)^2+U
\label{eq316}
\end{equation}
and $U=U(\Phi)$ is the bulk-field potential.
It is understood that $\Phi$ and its derivative $\partial_\tau\Phi$
are functions of $\tau$ only.
The contribution of the last two terms in (\ref{eq315}) is referred to as the {\em retarded effect}
\cite{brax2}.
In deriving (\ref{eq315}) the contribution of dark radiation has not been taken into account.
Eq.\ (\ref{eq315}) reduces to a simpler equation similar to our
(\ref{eq022}) if one assumes
that the bulk scalar field is evolving much slower than the scale factor.
On this assumption we have
\begin{equation}
\frac{1}{W}\frac{dW}{d\tau} \ll \frac{1}{a^4\rho}\frac{da^4\rho}{d\tau}
\label{eq317}
\end{equation}
and
\begin{equation}
\frac{da^4}{d\tau}
\left(\frac12 (\partial_\tau\Phi)^2-U_{\rm eff}\right)\simeq
\frac{d}{d\tau}
\left(\frac{a^4}{2} (\partial_\tau\Phi)^2-a^4U_{\rm eff}\right),
\label{eq333}
\end{equation}
so the third term on the right-hand side of (\ref{eq315}) can be neglected compared with
the first term and
the integration in the last term is trivially performed.
The constant of integration can be set to zero as it
would only contribute to
a dark radiation term of the form $1/a^4$ which has been ignored anyway. Hence, up to
a dark radiation term, Eq. (\ref{eq315}) reduces to
\begin{equation}
H^2=\frac{4\pi G_5}{9}W\rho+\left( \frac{8\pi G_5}{3}\rho\right)^2+\frac{1}{6}
\left(U_{\rm eff}-\frac12 (\partial_\tau\Phi)^2\right).
\label{eq318}
\end{equation}
Now we show explicitly that our equation (\ref{eq022}) is equivalent to (\ref{eq318}).
First we identify the superpotential and bulk potential
\begin{equation}
W(\Phi)\equiv 8\pi G_5 \lambda(y),
\label{eq320}
\end{equation}
\begin{equation}
U(\Phi)\equiv\frac12 \Phi^{\prime 2} -6\frac{\psi^{\prime 2}}{\psi^2},
\label{eq330}
\end{equation}
where $\Phi=\Phi(y)$ is a yet unspecified function of $y$.
Next, using
Eqs.\ (\ref{eq316}), (\ref{eq320}), and (\ref{eq330}) as the defining equations for $U_{\rm eff}$,
we can express (\ref{eq022})
in the form
\begin{equation}
H^2=\frac{4\pi G_5}{9}W\rho+\left( \frac{8\pi G_5}{3}\rho\right)^2+\frac{1}{6}U_{\rm eff}-
\frac{1}{12}\left(\Phi^{\prime 2}-\frac14\left(\frac{\partial W}{\partial\Phi}\right)^2\right).
\label{eq331}
\end{equation}
Finally, by demanding that the function $\Phi$ satisfies
\begin{equation}
\Phi^{\prime 2}-\frac14\left(\frac{\partial W}{\partial\Phi}\right)^2=\left(\partial_\tau\Phi\right)^2,
\label{eq332}
\end{equation}
our equation (\ref{eq022}) takes the form identical to (\ref{eq318}).
Hence, we have demonstrated that equations (\ref{eq022}) and (\ref{eq318}) are equivalent.
By manipulating Eq. (\ref{eq332}) with the help of (\ref{eq013}), (\ref{eq016}), and (\ref{eq021})
we find that $\Phi$ satisfies another equation equivalent to (\ref{eq332}):
\begin{equation}
\Phi^{\prime}=\frac12 \frac{\partial W}{\partial\Phi}\left(1-H^2\frac{\psi^2}{\psi^{\prime 2}}\right)^{-1/2}.
\label{eq319}
\end{equation}
Given the functions $\lambda(y)$ and $\psi(y)$, equations (\ref{eq320}), (\ref{eq330}), and
(\ref{eq319}) together with (\ref{eq022}) determine parametrically $W$ and $U$ as functions of $\Phi$.
Note that Eq.\ (\ref{eq319})
is consistent with the first equation in (\ref{eq4007}) in the static limit $H\rightarrow 0$.
In this way, the scale dependence of the brane tension $\lambda(y)$, which has not been specified yet,
can be attributed to scalar field dynamics in the bulk.
Due to the retarded effect, equation (\ref{eq315}), even in its reduced form (\ref{eq022}),
is rather complicated.
However, we can simplify our braneworld cosmology by assuming that the contribution of
the retarded effects in (\ref{eq022}) is negligible compared to $H^2$, i.e., we will assume
\begin{equation}
\left(\frac{4\pi G_5}{3}\lambda\right)^2\approx\frac{\psi^{\prime 2}}{\psi^2}.
\label{eq335}
\end{equation}
This assumption is motivated by the junction condition (\ref{eq303})
(as a generalization of the fine tuning condition of RSII)
which is exact in the static case.
Then our approximated braneworld cosmology is defined by the Friedmann equation
\begin{equation}
H^2= \frac{8\pi G}{3} \rho+\left(\frac{4\pi G_{\rm N}}{3k}\rho\right)^2 .
\label{eq005}
\end{equation}
Here $G=G(a)$ is a scale dependent effective gravitational constant defined as
\begin{equation}
G(a) = \frac{G_{\rm N}}{k}\left.\frac{d\chi}{dz}\right|_{z=\chi^{-1}(1/a)} ,
\label{eq007}
\end{equation}
where $\chi^{-1}$ denotes the inverse function of $\chi$.
The second Friedmann equation is easily obtained by combining the time derivative of (\ref{eq005})
with energy-momentum conservation yielding
\begin{equation}
\dot{H}= -\left(4\pi G+3\left(\frac{4\pi G_{\rm N}}{3k}\right)^2 \rho\right) (p+\rho)+
\frac{4\pi}{3}\frac{dG}{da}a\rho .
\label{eq4205}
\end{equation}
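As a consistency check (a sketch, not needed for the derivation), Eq.~(\ref{eq4205}) can be verified symbolically by differentiating (\ref{eq005}) and substituting $\dot{\rho}=-3H(\rho+p)$:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
GN, k = sp.symbols('G_N k', positive=True)
a = sp.Function('a')(t)
rho = sp.Function('rho')(t)
p = sp.Function('p')(t)
G = sp.Function('G')(t)        # G(a(t)), treated as a function of t
Gp = sp.Symbol('Gp')           # shorthand for dG/da

H = sp.sqrt(sp.Rational(8, 3) * sp.pi * G * rho
            + (4 * sp.pi * GN / (3 * k) * rho)**2)

# energy-momentum conservation and the chain rule dG/dt = (dG/da) * a_dot
Hdot = sp.diff(H, t).subs({sp.Derivative(rho, t): -3 * H * (rho + p),
                           sp.Derivative(G, t): Gp * H * a})

rhs = (-(4 * sp.pi * G + 3 * (4 * sp.pi * GN / (3 * k))**2 * rho) * (p + rho)
       + sp.Rational(4, 3) * sp.pi * Gp * a * rho)

print(sp.simplify(Hdot - rhs))   # expected output: 0
\end{verbatim}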
As a promising future research topic
it would be of interest to extend our approach along the lines of Ref.\
\cite{bernardini}
where the warp factor was allowed to depend on the radial braneworld coordinate $r$
in addition to the usual $y$ and $t$ dependence.
A natural extension worth of investigating would be including $r$
dependence of the bulk scalar field in addition to its
$y$ dependence.
In the following we shall abbreviate by BWC
the braneworld cosmology described by (\ref{eq005}) and (\ref{eq4205}).
Thus, the Friedmann equations of BWC differ from those of the original RSII
in the scale dependence of the effective gravitational constant $G$ and
in one additional term in the second equation that depends on the derivative
of $G$. This term will be suppressed if the scale dependence of $G$ is weak.
Equation (\ref{eq007}) imposes certain restrictions on the function $\chi$.
First, we need $G(a)$ to be positive which restricts
$\chi(z)$ to the class of monotonically increasing functions of $z$ and,
as a consequence of (\ref{eq3011}), $\psi$ must be a monotonically decreasing function of $y$.
Second, the variation of $G(a)$ is constrained by the big bang nucleosynthesis \cite{accetta}
and other cosmological and astrophysical
observations \cite{uzan}. The observations are roughly consistent with the constraint
\begin{equation}
\left|\frac{\dot{G}}{G}\right|_{\rm today} \lesssim 10^{-12} {\rm yr}^{-1}
\end{equation}
which, in turn, implies
\begin{equation}
\left|\frac{a}{G}\frac{dG}{da}\right|_{\rm today}=\frac{1}{H_0}
\left|\frac{\dot{G}}{G}\right|_{\rm today}
\lesssim 1.43\times10^{-2},
\label{eq006}
\end{equation}
where we have used the value $H_0^{-1}=14.3$ Gyr corresponding to
the Planck 2015 estimate of the Hubble constant $H_0=68$ km/s/Mpc \cite{planck2015}.
Using the definition (\ref{eq007}), we find a relation
\begin{equation}
\frac{a}{G}\frac{dG}{da}=-\left.
\frac{\chi \chi_{,zz}}{\chi_{,z}^2}\right|_{z=\chi^{-1}(1/a)}.
\label{eq009}
\end{equation}
Thus, equation (\ref{eq006}) imposes a constraint also on $\chi(z)$.
For our purpose the warp factor of the power-law form
$\chi\propto z^{n/4}$ is of particular interest.
In this case
Eq. (\ref{eq009}) reads
\begin{equation}
\frac{a}{G}\frac{dG}{da}=-\frac{n-4}{n}
\label{eq314}
\end{equation}
and Eq. (\ref{eq006}) imposes a constraint on the power
\begin{equation}
|n-4| \lesssim 0.057,
\label{eq1004}
\end{equation}
where the central value $n=4$ corresponds to the original RSII setup with constant $G$.
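The algebra behind (\ref{eq314}) and the numerical value in (\ref{eq1004}) are easily reproduced; a short sketch (illustrative only) is:
\begin{verbatim}
import sympy as sp

z, n, c = sp.symbols('z n c', positive=True)
chi = c * z**(n / 4)

# right-hand side of (a/G) dG/da = -chi * chi'' / chi'^2 for the power-law warp
print(sp.simplify(-chi * sp.diff(chi, z, 2) / sp.diff(chi, z)**2))  # = (4 - n)/n

# bound on |G_dot / G| of 1e-12 per year, converted with 1/H_0 = 14.3 Gyr
bound = 1e-12 * 14.3e9            # ~ 1.43e-2
# |(n - 4)/n| < bound with n close to 4  =>  |n - 4| < 4*bound/(1 - bound)
print(4 * bound / (1 - bound))    # ~ 0.058, consistent with the quoted 0.057
\end{verbatim}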
As demonstrated in Appendix \ref{scalar}, the power-law warp $\chi\propto z^{n/4}$ with $n\geq 4$ can be
attributed to a selfinteracting scalar field $\Phi$ in the bulk with
exponential superpotential $W\propto \exp \gamma \Phi$.
Then, the constraint (\ref{eq1004}) implies a constraint on the
parameter $\gamma$ (see also Ref. \cite{amarilla})
\begin{equation}
\gamma^2 \lesssim 0.477\times 10^{-2} ,
\label{eq1008}
\end{equation}
which is less stringent than the order of magnitude estimate \cite{davis}
$\gamma\lesssim 0.01$ based on the solar system bounds on the Edington parameter \cite{bertotti}.
The constraint (\ref{eq006}) yielding (\ref{eq1004}) and (\ref{eq1008})
is obtained from astrophysical and cosmological
bounds on variation of $G$ for the period between BBN
(corresponding roughly to the scales $a\simeq 10^{-9}$) and today.
It is conceivable that the bounds on variations of $G$ with $a$ are less restrictive
for the early cosmology prior to BBN.
\section{Dynamical brane as a tachyon}
\label{dynamical}
In this section we introduce a tachyon via the dynamical brane.
In addition to the observer brane at $y=y_{\rm br}$ we place a non-BPS positive-tension
brane at $y>y_{\rm br}$ in the bulk with
metric (\ref{eq3006}). Our setup is similar to that of Lykken and Randall \cite{lykken}.
The action of the 3+1 dimensional brane in the five dimensional bulk is equivalent
to the Dirac-Born-Infeld description of a Nambu-Goto 3-brane \cite{bordemann,jackiw}.
Consider a 3-brane
moving in the 4+1-dimensional bulk spacetime
with coordinates $X^{a}$,
$a=0,1,2,3,4$. The points on the brane are parameterized by
$X^a (x^{\mu})$, $\mu=0,1,2,3$, where $x^{\mu}$
are the coordinates on the brane.
The brane action is given by
\begin{equation}
S_{\rm br}= - \sigma
\int d^4x\, \sqrt{-\det (g^{(\rm ind)})} \, ,
\label{eq0001}
\end{equation}
where $\sigma$ is the brane tension and $g_{\mu\nu}^{(\rm ind)}$ is the induced metric
or the ``pull back" of the bulk space-time metric
$G_{ab}$ to the brane,
\begin{equation}
g^{(\rm ind)}_{\mu\nu}=G_{ab}
\frac{\partial X^a}{\partial x^\mu}
\frac{\partial X^b}{\partial x^\nu} \, .
\label{eq0002}
\end{equation}
Taking the Gaussian normal parameterization
$X^a(x^\mu)=\left(x^\mu, z(x^\mu)\right)$,
we find
\begin{equation}
g^{(\rm ind)}_{\mu\nu}=\frac{1}{\chi(z)^2}
\left( g_{\mu\nu}
-z_{,\mu}z_{,\nu}\right).
\label{eq2002}
\end{equation}
A straightforward calculation of the determinant yields
the brane action in the form
\begin{equation}
S_{\rm br}
=-\sigma\int d^4 x\sqrt{- g}\:
\chi^{-4}
\sqrt{(1-X)} ,
\label{eq0006}
\end{equation}
where we have introduced the abbreviation
\begin{equation}
X=g^{\mu\nu} z_{,\mu} z_{,\nu}.
\end{equation}
Hence, we have obtained a
tachyon Lagrangian with potential
\begin{equation}
V(z)=\sigma\chi(z)^{-4},
\label{eq1007}
\end{equation}
where the fifth conformal coordinate $z=z(x)$
has become a dynamical tachyon field.
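As a quick cross-check of this reduction (a sketch restricted to the comoving case $z=z(t)$ on the flat FRW background introduced below; squares are compared to avoid sign assumptions):
\begin{verbatim}
import sympy as sp

zdot, a, chi = sp.symbols('zdot a chi', positive=True)
g = sp.diag(1, -a**2, -a**2, -a**2)      # flat FRW metric on the brane
dz = sp.Matrix([zdot, 0, 0, 0])          # z_{,mu} for a comoving field z(t)

g_ind = (g - dz * dz.T) / chi**2         # induced metric

lhs2 = -g_ind.det()                            # minus the determinant of the induced metric
rhs2 = (a**3 / chi**4)**2 * (1 - zdot**2)      # (sqrt(-g) chi^-4 sqrt(1-X))^2 with sqrt(-g)=a^3
print(sp.simplify(lhs2 - rhs2))                # expected output: 0
\end{verbatim}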
An attempt was made to generalize the action (\ref{eq0006}) by replacing
$(1-X)^{1/2}$ by $(1-X)^q$ with $q$ being an arbitrary positive power \cite{choudhury}.
However, only the action with $q=1/2$ stems from $d$-brane dynamics
in a $d+1+1$ bulk.
Next, we derive the tachyon field equations
from the action (\ref{eq0006}).
The tachyon Lagrangian takes the form
\begin{equation}
{\cal{L}} =
-\frac{\sigma}{\chi(z)^4}\sqrt{1-g^{\mu\nu}z_{,\mu}z_{,\nu}} \,,
\label{eq000}
\end{equation}
and in the following we assume that the function $\chi(z)$ is known.
Note that for a pure AdS$_5$ bulk
we have $\chi=kz$
and we reproduce the brane action of Ref. \cite{bilic}
if we identify the scale $k$ with the inverse of the AdS$_5$ curvature radius $\ell$.
It is important to stress that the cosmology in section \ref{braneworld} was derived assuming
that the observer brane is moving in the bulk with time independent geometry,
and Eq.\ (\ref{eq027}) relates the position of
the observer brane to the cosmological scale $a$.
However, in this section
we work in the gauge where the observer brane is at a fixed position and the cosmology
will reflect the time dependence of the bulk metric in addition to
the time dependence of the dynamical brane position.
We consider a spatially flat FRW spacetime on the observer brane
with four dimensional line element in the standard form
\begin{equation}
ds^2=g_{\mu\nu}dx^\mu dx^\nu=dt^2-a(t)^2(dr^2+r^2 d\Omega^2)
\label{eq0012}
\end{equation}
where, unlike in Sec.\ \ref{braneworld}, the time $t$ is synchronous.
In the cosmological context it is natural to assume that the
tachyon condensate is comoving, i.e., the velocity components are $u_\mu=(1,0,0,0)$
and $X$
becomes simply
\begin{equation}
X=\dot{z}^2 .
\label{eq2036}
\end{equation}
The treatment of our system is conveniently performed in the
covariant Hamiltonian formalism
based on earlier works on symplectic formalism of De Donder \cite{dedonder} and Weyl \cite{weyl}
(for recent reviews see Refs. \cite{struckmeier,cremaschini};
for details and application in cosmology see Ref.\ \cite{bilic}).
For this purpose we first define
the conjugate momentum field as
\begin{equation}
\pi_z^\mu=
\frac{\partial{\cal{L}}}{\partial z_{,\mu}}.
\end{equation}
In the cosmological context $\pi_z^\mu$ is time-like so we may also define
its magnitude as
\begin{equation}
\pi_z=\sqrt{g_{\mu\nu}\pi_z^\mu \pi_z^\nu}.
\label{eq2118}
\end{equation}
The Hamiltonian density may be derived from the stress tensor corresponding to the
Lagrangian (\ref{eq000}) or by the Legendre transformation.
Either way one finds
\begin{equation}
{\cal{H}} =\frac{\sigma}{\chi^4}\sqrt{1+\pi_{z}^2\chi^8/\sigma^2}.
\label{eq001}
\end{equation}
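As a quick symbolic check of this Legendre transformation (a sketch; squares are compared to avoid branch assumptions on the square roots):
\begin{verbatim}
import sympy as sp

zdot, sigma, chi = sp.symbols('zdot sigma chi', positive=True)
L = -sigma / chi**4 * sp.sqrt(1 - zdot**2)   # comoving DBI Lagrangian

pi_z = sp.diff(L, zdot)                      # conjugate momentum
Ham = sp.simplify(pi_z * zdot - L)           # Legendre transform

# Hamiltonian density expressed through the momentum, as in the text
target = sigma / chi**4 * sp.sqrt(1 + pi_z**2 * chi**8 / sigma**2)
print(sp.simplify(Ham**2 - target**2))       # expected output: 0
\end{verbatim}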
Then, we can write Hamilton's equations in the form \cite{bilic}
\begin{eqnarray}
\dot{z} = \frac{\partial{\cal{H}}}{\partial\pi_z},\label{eqHam2} \\
\dot{\pi}_z + 3H\pi_z=-\frac{\partial{\cal{H}}}{\partial z}.\label{eqHam4}
\end{eqnarray}
In the spatially flat cosmology
the Hubble expansion rate $H$ is related to the Hamiltonian via the
modified Friedmann equation (\ref{eq022}).
As the cosmological scale is no longer related to the observer's brane position,
the function $\chi(a)$ need not satisfy the condition (\ref{eq027}).
Nevertheless, the brane cosmology is governed by the same
(approximate) Friedmann equation (\ref{eq005})
in which the scale dependent gravitational constant will have a
functional dependence on the warp factor as dictated by equation (\ref{eq335}), i.e., $G\propto \chi_{,z}$.
However, the functional dependence on the cosmological scale
will be subject to the field equations (\ref{eqHam2}) and (\ref{eqHam4})
together with Eq.\ (\ref{eq005})
which can be written as
\begin{equation}
H\equiv\frac{\dot{a}}{a}=\sqrt{\frac{8 \pi G_{\rm N}}{3} \mathcal{H}\left(\frac{\chi_{,z}}{k}
+ \frac{2 \pi G_{\rm N}}{3k^2} \mathcal{H}\right) }.
\label{scale_a}
\end{equation}
To solve the system of equations (\ref{eqHam2})-(\ref{scale_a}) it is convenient to
rescale the time as $t=\tau/k$ and express
the system in terms of dimensionless quantities.
In addition,
by appropriately rescaling
the tachyon field $z$ and its
conjugate field $\pi_{z}$ we can eliminate the brane tension $\sigma$ from the equations.
To this end we introduce the dimensionless functions
\begin{eqnarray}
h = H/k,
\quad
\varphi=k z, \quad
\pi_\varphi = \pi_z/\sigma ,
\label{eq002}
\end{eqnarray}
and we rescale the Lagrangian and Hamiltonian to obtain the
rescaled dimensionless
pressure and energy density:
\begin{equation}
p= \frac{\mathcal{L}}{\sigma}=-\frac{1}{\chi^4\sqrt{1+\chi^8\pi_{\varphi}^2}}=
-\frac{1}{\chi^4}\sqrt{1-\dot{\varphi}^2} ,
\label{eq0081}
\end{equation}
\begin{equation}
\rho= \frac{\mathcal{H}}{\sigma}=\frac{1}{\chi^4}\sqrt{1+\chi^8\pi_{\varphi}^2}=
\frac{1}{\chi^4}\frac{1}{\sqrt{1-\dot{\varphi}^2}}.
\label{eq008}
\end{equation}
In these equations and from now on the overdot denotes a derivative with respect to $\tau$.
Then we introduce a combined dimensionless coupling
\begin{equation}
\kappa^2=\frac{8\pi G_5}{k}\sigma=\frac{8\pi G_{\rm N}}{k^2}\sigma
\label{eq102}
\end{equation}
and from (\ref{eqHam2})-(\ref{scale_a})
we obtain the following set of equations
\begin{equation}
\dot \varphi=\frac{\chi^4\pi_{\varphi}}
{\sqrt{1+\chi^8\pi_{\varphi}^2}}
=\frac{\pi_{\varphi}}{\rho} ,
\label{eq003}
\end{equation}
\begin{equation}
\dot \pi_\varphi=-3h\pi_\varphi
+\frac{4\chi_{,\varphi}}{\chi^5
\sqrt{1+\chi^8\pi_\varphi^2}}.
\label{eq004}
\end{equation}
Here
\begin{eqnarray}
\label{h}
h=\sqrt{\frac{\kappa^2}{3}\rho\left(\chi_{,\varphi}+\frac{\kappa^2}{12}\rho \right)} ,
\label{eq4305}
\end{eqnarray}
where
$\chi_{,\varphi}$
is an abbreviation for $\partial\chi/\partial\varphi$.
Obviously, the explicit dependence on $\sigma$ and $k$ in Eqs.\ (\ref{eq003})-(\ref{eq4305}) is eliminated
leaving one dimensionless
free parameter $\kappa$ which could, in principle, be fixed from phenomenology.
\section{Cosmological solutions to the field equations}
\label{solutions}
Next, we analyze in detail the tachyon with potential
\begin{equation}
V(\varphi)=\varphi^{-n}, \quad \mbox{or} \quad
\chi(\varphi)=\varphi^{n/4}.
\label{eq1006}
\end{equation}
As shown in Appendix, this inverse power-law dependence for $n>4$
can be derived from the exponential superpotential (\ref{eq307})
in the braneworld model with a scalar in the bulk.
According to (\ref{eq007}) and (\ref{eq1007}),
the potential (\ref{eq1006}), being a monotonically decreasing function of $\varphi$,
is consistent with the positivity requirement for $G(a)$.
We will assume that the bounds on variations of $G$ with $a$ discussed in Sec.\ \ref{braneworld}
do not apply to the pre-BBN cosmology, in particular during the inflationary epoch,
so we will ignore the constraint (\ref{eq1004}).
With (\ref{eq1006}) equations (\ref{eq003}) and (\ref{eq004}) become
\begin{equation}
\dot \varphi=\frac{\varphi^{n}\pi_\varphi}{\sqrt{1+\pi_\varphi^2\varphi^{2n}}},
\label{eq003a}
\end{equation}
\begin{equation}
\dot \pi_\varphi=-3h\pi_\varphi
+\frac{n}{\varphi^{n+1}\sqrt{1+\pi_\varphi^2\varphi^{2n}}}.
\label{eq004a}
\end{equation}
We will try to solve these equations by the ansatz
\begin{equation}
\pi_\varphi=c_n\varphi^{m-n},
\label{eq4001}
\end{equation}
where the constants $c_n$ and $m$ are to be fixed by the field equations.
With this ansatz the density is given by
\begin{equation}
\rho=\varphi^{-n}(1+\pi_\varphi^2\varphi^{2n})^{1/2}=\varphi^{-n}(1+c_n^2\varphi^{2m})^{1/2}
\label{eq401}
\end{equation}
and
Eq.\ (\ref{eq003a}) becomes
\begin{equation}
\dot{\varphi}=\frac{c_n\varphi^m}{\sqrt{1+c_n^2\varphi^{2m}}} .
\label{eq4003}
\end{equation}
Furthermore, the time derivative of (\ref{eq4001}) together with (\ref{eq4003})
yields
\begin{equation}
\dot{\pi_\varphi}=c_n^2(m-n)\varphi^{2m-n-1}\left(1+c_n^2\varphi^{2m}\right)^{-1/2}.
\label{generalpi}
\end{equation}
We will look for solutions in the low and high energy density
regimes of BWC characterized by the conditions
$\kappa^2\rho/12\ll\chi_{,\varphi}$ and $\kappa^2\rho/12\gg\chi_{,\varphi}$,
respectively.
In these regimes the Hubble rate is respectively given by
\begin{equation}
h=\frac{\kappa}{\sqrt{3}}\rho^{1/2}\chi_{,\varphi}^{1/2}, \quad\quad
h=\frac{\kappa^2}{6}\rho.
\label{h1}
\end{equation}
It is advantageous to analyze these two particular cosmologies by
making use of a more general equation
\begin{equation}
h=h_0 \rho^\alpha \chi_{,\varphi}^\beta ,
\label{eq1001}
\end{equation}
where the constants $h_0$ and $\alpha$ are positive and $\beta<1$.
In particular, $h_0$, $\alpha$, and $\beta$, are
respectively equal to $\kappa^2/6$, 1, and 0, at high density and $\kappa/\sqrt3$,
$1/2$, and $1/2$
at low density. Equation (\ref{eq1001}) also includes the standard cosmology analyzed
by Abramo and Finelli \cite{abramo} and the Gauss-Bonnet braneworld (GBB)
at high density \cite{lidsey,tsujikawa,calcagni1,calcagni2}.
In the standard cosmology we have
$h_0=\kappa/\sqrt3$, $\alpha=1/2$, and $\beta=0$ whereas
in the high density limit of GBB we have \cite{lidsey,calcagni2}
$h_0=\kappa^{2/3}$, $\alpha=1/3$, and $\beta=0$.
In the latter scenario the coupling $\kappa^2$ is defined by the first equation (\ref{eq102})
where we have identified the mass scale $(4k)^2$ with the inverse of the GB coupling, i.e.,
we set $1/k =4\sqrt{\alpha_{\rm GB}}$.
Applying (\ref{eq4001})-(\ref{generalpi}) and (\ref{eq1001})
we obtain an identity
\begin{equation}
c_n^2(m-n)\varphi^{2m}+3 h_0 c_n\left(\frac{n}{4}\right)^\beta
\varphi^{m+1-\alpha n+\beta n/4-\beta}\left(1+c_n^2\varphi^{2m}\right)^{\alpha/2+1/2}
-n=0 ,
\label{eq97}
\end{equation}
which must hold for any $\varphi$.
It is easily seen that this identity will be satisfied if and only if $m=0$ and
$n=n_{\rm cr}$, where the critical power $n_{\rm cr}$ depends on $\alpha$ and
$\beta$,
\begin{equation}
n_{\rm cr}=\frac{1-\beta}{\alpha-\beta/4} .
\end{equation}
In particular
$n_{\rm cr}$ equals 2 in the standard cosmology \cite{abramo},
3 in the high density GBB,
and, respectively, 4/3 and 1
in the low and high density regimes of BWC.
For $n=n_{\rm cr}$ the derivative $\dot{\varphi}$ is simply a constant yielding
\begin{equation}
\varphi=\frac{c_{\rm cr}}{\sqrt{1+c_{\rm cr}^2}} \tau ,
\label{eq86}
\end{equation}
where, for simplicity, we have set the integration constant to zero and
$c_{\rm cr}$ stands for $c_{n_{\rm cr}}$.
As a consequence of (\ref{eq97}), the constant $c_{\rm cr}$ satisfies the equation
\begin{equation}
c_{\rm cr}^2(1+c_{\rm cr}^2)^{\alpha-1}=\left(\frac{4^\beta n_{\rm cr}^{1-\beta}}{3 h_0}\right)^2,
\label{eq1002}
\end{equation}
which, in general, cannot be solved for $c_{\rm cr}$.
However, for each of the four cases mentioned above, equation (\ref{eq1002}) becomes relatively simple
with solutions
\begin{equation}
c_{\rm cr}=\left\{
\begin{array}{ll}
8\sqrt2/ (3\kappa)^2
\left(1+\sqrt{1+4\left(3\kappa/4\right)^4}\right)^{1/2}& \mbox{BWC, low density}\\
2/\kappa^2 &\mbox{BWC, high density}\\
2\sqrt2/(3\kappa^2)\left(1+\sqrt{1+9\kappa^4/4}\right)^{1/2} & \mbox{standard cosmology}\\
1/(3\kappa^2)\left(1+u^{1/3}+u^{-1/3}\right)& \mbox{GBB, high density} ,
\end{array}
\right.
\end{equation}
where, in the last line
\begin{equation}
u=1+\frac{27\kappa^4}{2}+\frac{\sqrt{27} \kappa^2}{2}\sqrt{4+27\kappa^4} .
\end{equation}
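These closed-form expressions can be checked against a direct numerical solution of (\ref{eq1002}); the following sketch (with the arbitrary choice $\kappa=1$, for illustration only) does so for all four scenarios:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

kappa = 1.0
scenarios = {
    # name: (alpha, beta, h0, n_cr)
    "BWC low density":  (0.5, 0.5, kappa / np.sqrt(3.0), 4.0 / 3.0),
    "BWC high density": (1.0, 0.0, kappa**2 / 6.0, 1.0),
    "standard":         (0.5, 0.0, kappa / np.sqrt(3.0), 2.0),
    "GBB high density": (1.0 / 3.0, 0.0, kappa**(2.0 / 3.0), 3.0),
}

for name, (alpha, beta, h0, ncr) in scenarios.items():
    rhs = (4.0**beta * ncr**(1.0 - beta) / (3.0 * h0))**2
    f = lambda c: c**2 * (1.0 + c**2)**(alpha - 1.0) - rhs
    c_cr = brentq(f, 1e-6, 1e3)      # the left-hand side is monotonic in c
    print(name, c_cr)
\end{verbatim}
For the BWC high density case, for instance, this reproduces $c_{\rm cr}=2/\kappa^{2}$.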
From (\ref{eq0081}) and (\ref{eq008}) it follows that the equation of state is a
negative constant
\begin{equation}
w\equiv \frac{p}{\rho}=-\frac{1}{1+c_{\rm cr}^2}
\end{equation}
and hence describes a dark energy fluid. Note that in the strong coupling limit, i.e., $\kappa\rightarrow \infty$,
corresponding to large brane tensions,
we have $c_{\rm cr}\rightarrow 0$ and the fluid
approaches the cosmological constant.
From (\ref{eq401})
and (\ref{eq86}) we obtain the density as a function of $\tau$
\begin{equation}
\rho(\tau)=\rho_0\tau^{-n_{\rm cr}},
\end{equation}
where
\begin{equation}
\rho_0=\frac{\left(1+c_{\rm cr}^2\right)^{n_{\rm cr}/2+1/2}}{c_{\rm cr}^{n_{\rm cr}}} .
\end{equation}
Furthermore, from (\ref{h1}) we find that the cosmological scale
behaves as a power of $\tau$
\begin{equation}
a(\tau)=a_0\tau^q ,
\end{equation}
where
\begin{equation}
q=\frac{n_{\rm cr}\left(1+c_{\rm cr}^2\right)}{3c_{\rm cr}^2} .
\end{equation}
Finally $\rho$ can be expressed as a function of the cosmological scale as
\begin{equation}
\rho(a)=\rho_0\left(\frac{a}{a_0}\right)^{-n_{\rm cr}/q}=
\rho_0\left(\frac{a}{a_0}\right)^{-3c_{\rm cr}^2/(1+c_{\rm cr}^2)}.
\end{equation}
Obviously, in the limit $\kappa\rightarrow \infty$
we have $c_{\rm cr} \rightarrow 0$,
and the universe approaches de Sitter.
In contrast, in the weak coupling limit $c_{\rm cr} \rightarrow \infty$,
the universe behaves as dust.
For $n\neq n_{\rm cr}$ equation (\ref{eq97}) admits no solution.
Nevertheless, it can be solved in the asymptotic regimes of large and small $\varphi$.
In these regimes we distinguish two cases:
a) $\varphi\rightarrow \infty$ and $m>0$ or $\varphi\rightarrow 0$ and $m<0$,
and b) $\varphi\rightarrow \infty$ and $m<0$ or $\varphi\rightarrow 0$ and $m>0$.
\subsection*{a) $\varphi\rightarrow \infty$ and $m>0$ or $\varphi\rightarrow 0$ and $m<0$}
Keeping only the dominant terms in (\ref{eq97}) in the limit $\varphi\rightarrow\infty$ for $m>0$
or $\varphi\rightarrow 0$ for $m<0$
we find
\begin{equation}
c_n^2(m-n)+3h_0\left(n/4\right)^\beta c_n^{\alpha+2}\varphi^{m\alpha-s}=0,
\label{eq3002}
\end{equation}
where
\begin{equation}
s=n\left(\alpha-\frac{\beta}{4}\right)+\beta-1=\left(\frac{n}{n_{\rm cr}}-1\right)(1-\beta) .
\end{equation}
Then, to satisfy (\ref{eq3002}) for any $\varphi$ we must have
\begin{equation}
m=\frac{s}{\alpha} .
\label{eq3001}
\end{equation}
From this it follows $n\lessgtr n_{\rm cr}$ for $m\lessgtr 0$.
The constant $c_n$ is given by
\begin{equation}
c_n=\left(\frac{1-\beta+n\beta/4}{3h_0\alpha}\right)^{1/\alpha}\left(\frac{4}{n}\right)^{\beta/\alpha}
\end{equation}
and we have
\begin{equation}
\rho=\pi_\varphi=c_n\varphi^{-(1-\beta+n\beta/4)/\alpha}.
\end{equation}
The equation of state
\begin{equation}
w=-(1-\dot{\varphi}^2)=-\frac{1}{1+c_n^2\varphi^{2s/\alpha}}
\end{equation}
approaches $0$ for large $\varphi$ with $n>n_{\rm cr}$ or small $\varphi$ with $n<n_{\rm cr}$. Hence, the system
(\ref{eq003a})-(\ref{eq004a})
has a dark-matter attractor at $\dot{\varphi}^2=1$ or $\varphi=\tau$
in both limits.
The large and small $\varphi$ limits correspond to the large and small $\tau$ limits, respectively.
In these limits we obtain the following behavior of the density and cosmological scale
as functions of time
\begin{equation}
\rho(\tau)=c_n\tau^{-(1-\beta+n\beta/4)/\alpha},
\end{equation}
\begin{equation}
a(\tau)=a_0\tau^{(1-\beta+n\beta/4)/(3\alpha)},
\end{equation}
so that the density
as a function of the cosmological scale
\begin{equation}
\rho(a)=c_n\frac{a_0^3}{a^3}
\end{equation}
clearly demonstrates the dust behavior.
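The dark-matter attractor can also be exhibited by integrating the exact system (\ref{eq003a})--(\ref{eq004a}) with the full Hubble rate (\ref{eq4305}); the following sketch (with arbitrarily chosen $\kappa$, $n$, and initial data, for illustration only) shows the approach $\dot{\varphi}\to 1$ for $n>n_{\rm cr}$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

kappa, n = 1.0, 4.0          # n = 4 exceeds n_cr in both BWC regimes

def rhs(tau, y):
    phi, pi = y
    root = np.sqrt(1.0 + pi**2 * phi**(2 * n))
    rho = phi**(-n) * root
    chi_phi = (n / 4.0) * phi**(n / 4.0 - 1.0)
    h = np.sqrt(kappa**2 / 3.0 * rho * (chi_phi + kappa**2 / 12.0 * rho))
    dphi = phi**n * pi / root
    dpi = -3.0 * h * pi + n / (phi**(n + 1) * root)
    return [dphi, dpi]

sol = solve_ivp(rhs, (0.0, 200.0), [1.0, 0.1], rtol=1e-8, atol=1e-10)
phi, pi = sol.y
phidot = phi**n * pi / np.sqrt(1.0 + pi**2 * phi**(2 * n))
print(phidot[-1])   # approaches 1, i.e. w -> 0 (dust)
\end{verbatim}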
\subsection*{b) $\varphi\rightarrow \infty$ and $m<0$ or $\varphi\rightarrow 0$ and $m>0$}
This case is relevant for an inflationary scenario.
Namely, as we shall shortly see,
the slow roll condition $\dot{\varphi} \ll 1$ for the tachyon inflation
\cite{steer,bilic3} is met in both $\varphi\rightarrow \infty$ and $\varphi\rightarrow 0$
limits.
Keeping the dominant terms in (\ref{eq97}) in the limit $\varphi\rightarrow\infty$ for $m<0$
or $\varphi\rightarrow 0$ for $m>0$
the equation
reduces to
\begin{equation}
3h_0\left(n/4\right)^\beta c_n
\varphi^{m-s}=n
\end{equation}
yielding
\begin{equation}
m=s\equiv\left(\frac{n}{n_{\rm cr}}-1\right)(1-\beta) ,
\end{equation}
so $m\lessgtr 0$ implies $n\lessgtr n_{\rm cr}$ as before.
The coefficient $c_n$ is now given by
\begin{equation}
c_n=\frac{4^\beta n^{1-\beta}}{3h_0}.
\end{equation}
Then, using
\begin{equation}
\rho=\varphi^{-n}, \quad\quad \pi_\varphi=c_n\varphi^{s-n} ,
\label{eq3005}
\end{equation}
we obtain
\begin{equation}
\dot{\varphi}= \pi_\varphi/\rho=c_n\varphi^s ,
\label{eq3003}
\end{equation}
from which it follows $\dot{\varphi}\rightarrow 0$ in both
$\varphi\rightarrow \infty$ (with $s<0$) and
$\varphi\rightarrow 0$ (with $s>0$) limits. Hence, the equation of state
$w\rightarrow-1$ in both limits.
However, the solution to (\ref{eq3003}) critically depends on
whether $s$ is equal, greater or smaller than 1:
\begin{equation}
\varphi(\tau)=\left\{
\begin{array}{ll}
\left[c_n(1-s)(\tau-\tilde{\tau})\right]^{1/(1-s)}& \left\{
\begin{array}{l}
s<1, \tau>\tilde{\tau}\\
s>1, \tau<\tilde{\tau}
\end{array}
\right.
\\
\varphi_0\exp c_n\tau & s=1 ,
\end{array}
\right.
\label{eq3004}
\end{equation}
where $\tilde{\tau}$ and $\varphi_0>0$ are arbitrary constants of integration.
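For $s\neq 1$ this follows by separating variables in (\ref{eq3003}),
\begin{equation}
\int\varphi^{-s}\,\mathrm{d}\varphi=c_n\int\mathrm{d}\tau
\quad\Longrightarrow\quad
\frac{\varphi^{1-s}}{1-s}=c_n(\tau-\tilde{\tau}),
\end{equation}
while for $s=1$ the integral is logarithmic and gives the exponential branch.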
Obviously, the limits $\varphi\rightarrow\infty$ (with $s<0$)
and $\varphi\rightarrow 0$ (with $0< s<1$) correspond to
the limits $\tau\rightarrow\infty$ and $\tau\rightarrow \tilde{\tau}$, respectively.
The limit $\varphi\rightarrow 0$ (with $s\geq 1$) corresponds to
$\tau\rightarrow -\infty$.
As a consequence of (\ref{eq3004}), the cosmological scale factor evolves as
\begin{equation}
\frac{a(\tau)}{a_0}=\left\{
\begin{array}{lcl}
\exp\left\{(\tau-\tilde{\tau})^r/\tau_0^r\right\}
& s<0, & \tau>\tilde{\tau}\\
\exp\left\{-|\tau-\tilde{\tau}|^r/\tau_0^r\right\}
&\left\{
\begin{array}{r}
0<s<1,\\
s>1,
\end{array}
\right. &
\begin{array}{l}
\tau>\tilde{\tau}\\
\tau<\tilde{\tau}
\end{array}
\\
\exp\left\{-b_n e^{-2c_n\tau}\right\}
& s=1 ,&
\end{array}
\right.
\label{eq3008}
\end{equation}
where
\begin{equation}
r=\frac{2s}{s-1}=\frac{2(n-n_{\rm cr})(1-\beta)}{(n-n_{\rm cr})(1-\beta)-n_{\rm cr}},
\end{equation}
\begin{equation}
b_n=\frac{3h_0^2}{2\varphi_0^2 n}\left(\frac{n}{4}\right)^{2\beta} ,
\end{equation}
and
\begin{equation}
\tau_0=\left(\frac{|r|}{h_0}\right)^{1/r}
\left(\frac{4}{n}\right)^{\beta/r}
\left(c_n|s-1|\right)^{1/r-1}.
\end{equation}
Note that the limits $\varphi\rightarrow \infty$ and $\varphi\rightarrow 0$ correspond to
$a\rightarrow \infty$ and $a\rightarrow 0$, respectively.
From (\ref{eq3005}) together with (\ref{eq3004}),
using the inverted relations (\ref{eq3008}), we find the density
as a function of the cosmological scale:
\begin{equation}
\rho(a)=\left\{
\begin{array}{ll}
\left[c_n|s-1|\tau_0\right]^{n/(s-1)}\left|\ln(a/a_0)\right|^{n/(2s)}&
\left\{
\begin{array}{ll}
s<0, & a>a_0 \\
s>0,\ s\neq 1, &a<a_0
\end{array}
\right. \\
\left(\varphi_0^2b_n\right)^{-n/2}\left|\ln(a/a_0)\right|^{n/2}&
s=1, \quad a<a_0.
\end{array}
\right.
\end{equation}
Hence, in the asymptotic regimes of small and large $a$ the density varies logarithmically,
thus demonstrating a quasi de Sitter behavior.
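As a consistency check of the first line of the expression for $\rho(a)$, inverting (\ref{eq3008}) gives $|\tau-\tilde{\tau}|=\tau_0\left|\ln(a/a_0)\right|^{1/r}$, and inserting this into
\begin{equation}
\rho=\varphi^{-n}=\left[c_n|s-1|\,|\tau-\tilde{\tau}|\right]^{n/(s-1)},
\end{equation}
which follows from (\ref{eq3004}), reproduces the quoted exponent, since $r(s-1)=2s$.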
\section{Conclusions}
\label{conclude}
We have studied a braneworld cosmology (BWC) scenario based on
the second RS model
extended to more general warp factors.
We have shown how our BWC is related to the cosmology of the braneworld in
the bulk with a self-interacting scalar field minimally coupled to gravity.
In the high density regime of our BWC the modified Friedmann equation is identical to that of the original RSII
cosmology
and can be relevant for the early stages of inflation.
Within a reasonable approximation
in the low density regime the modified Friedmann equation
remains of the same form as in the standard cosmology
except that the effective gravitational constant $G$ is scale dependent.
As an application we have investigated a class of tachyon models in the framework of BWC.
Assuming no restrictions on variations of $G$ in a pre-BBN cosmology we have analyzed
a power-law variation
corresponding to the inverse power-law tachyon potential $V\propto \varphi^{-n}$.
We have demonstrated a universal critical behavior for the cosmologies described by (\ref{eq1001})
for the tachyon field theory with inverse power-law potential:
there exists a critical power $n=n_{\rm cr}$ that separates
a dust universe for $n>n_{\rm cr}$ from a quasi de Sitter universe for
$0<n<n_{\rm cr}$ in both asymptotic regimes of large and small tachyon field $\varphi$,
with $n_{\rm cr}$ depending on the details of the cosmological scenario.
In particular we have analyzed three different scenarios:
the standard tachyon cosmology and low and high energy-density regimes of the
braneworld cosmology. For these three cosmologies, we have found $n_{\rm cr}$ to be equal to
2, 4/3, and 1, respectively.
\section*{Acknowledgments}
This work has been supported by the Croatian
Science Foundation under the project IP-2014-09-9582
and partially supported by
ICTP - SEENET-MTP project PRJ-09 Cosmology and Strings.
The work of N.\ Bili\'c and S.\ Domazet
has been partially supported by the H2020 CSA Twinning project No.\ 692194, “RBI-T-WINNING”.
G.\ Djordjevic acknowledges support by the Serbian Ministry for Education,
Science and Technological Development under the projects No.\ 176021
and No. 174020.
\section{Introduction}
The Fermi liquid \cite{lifshitz2013statistical, pines2018theory} is a conventional and ubiquitous phase of matter in condensed matter physics, modeling the universal low-energy features of electrons in metals. Despite its long history of study, there has been renewed interest in the Fermi liquid, motivated by the quest to understand the surprising stability \cite{Pomeranchuk1958, Gholizade2012, Watanabe1404.3728} of gapless fermions on the Fermi surface. An emerging paradigm in condensed matter theory is to understand all gapless quantum phases of matter from the perspective of emergent symmetries and quantum anomalies \cite{Moon1503.05199, Wang1703.02426, Wen1812.02517, Ji1912.13492, Yang2203.15791, Wen2208.09001}. This paradigm has led to significant progress in understanding the Fermi liquid as a gapless state of fermions protected by an emergent quantum anomaly known as the \emph{Fermi surface anomaly} \cite{Watanabe1505.04193, Cheng1511.02263, Lu1705.09298, Cho1705.03892, Bultinck1808.00324, Song1909.08637, Else2007.07896, Else2010.10523, Wen2101.08772, Ma2110.09492, Wang2110.10692, Darius-Shi2204.07585, Lu2210.16304, Cheng2211.12543}.
The boundary-bulk correspondence between quantum anomalies and symmetry-protected topological (SPT) orders has been a key area of study in condensed matter physics in the past decade \cite{Ryu1010.0936, Tiwari1710.04730, Wen1303.1803, Kapustin1403.0617, Kapustin1404.3230, Else1409.5436, Hsieh1403.6902, Wang1405.7689, Witten1508.04715, Hsieh1503.01411}. There is a growing consensus \cite{Horava2005FSTopo, Zhao2013FSTopo, Shinsei2013FSTopo, Bulmash1410.4202, Zhang2017FSTopo}
that the gapless fermions on the Fermi surface can be viewed as the topological boundary modes of a bulk fermionic SPT state and that the Fermi surface anomaly is related to the bulk SPT order. So what on earth should be the ``bulk'' of a Fermi surface? The most honest answer is the Fermi sea --- a region in the momentum space enclosed by the Fermi surface. Then what is ``topological'' about the Fermi sea? \refcite{Bulmash1410.4202} made a key observation that a $d$-dimensional Fermi sea could be viewed as a quantum Hall insulator (or, equivalently, a Chern insulator) in the $2d$-dimensional \emph{phase space} (i.e.~position-momentum space). This sets the basis for classifying Fermi surface anomaly by classifying topological insulators in the phase space.
The main goal of this work is to provide a comprehensive and rigorous classification of the Fermi surface anomaly along the above line of thought. We will only consider codimension-1 Fermi surface \cite{Ma2110.09492} (i.e., the Fermi surface is one dimension less than the momentum space dimension). Our key result is that the classification of the Fermi surface anomaly in any spacetime dimension is universally equivalent to the classification of interacting fermionic SPT phases in (0+1)-dimensional spacetime. This might not be too surprising as many thermodynamic and transport properties of Fermi liquids remain identical across different dimensions already. The proposed equivalence is established through a careful analysis of the non-commutative geometry \cite{Connes2014, Seiberghep-th/9908142, Dong2006.01282} in phase space, the synthetic dimension reduction \cite{Teo1006.0690, Jian1804.03658} of a phase-space Dirac fermion field theory, and the use of cobordism classification \cite{Kapustin1403.1467, Kapustin1404.6659, Kapustin1406.7329, Freed1604.06527, Guo1711.11587, Wan1812.11967, Yonekura1803.10796, Witten1909.08775, Wan1912.13504, Guo1812.11959} for interacting fermionic SPT states.
We also provide a non-perturbative definition \cite{Cheng2211.12543} of the Fermi surface anomaly protected by the internal symmetry $G$ and the translation symmetry. When $G=\mathrm{U}(1)$, our results match known results such as the Luttinger theorem \cite{LuttingerRP1960, Paramekanticond-mat/0406619, Haldanecond-mat/0505529} for conventional Fermi liquids. When the $\mathrm{U}(1)$ symmetry is broken down to $G=\mathbb{Z}_4$ (both contain the fermion parity symmetry ${\mathbb{Z}_2^F}$ as a subgroup), we discover non-trivial examples of Fermi surface symmetric mass generation (SMG) \cite{Lu2210.16304}, where the Fermi surface can be gapped out by multi-fermion interactions and deformed to a trivial product state without breaking any symmetry. These novel gapping mechanisms may shed light on the understanding of pseudo-gap physics in cuprates \cite{Zhang2001.09159, Zhang2006.01140}.
The article is organized as follows. In \secref{sec: effective}, we analyze the non-commutative geometry in the phase space to establish a mathematical foundation for defining quantum field theory in the phase space. We propose a phase-space Dirac fermion field theory as the bulk regularization for the Fermi surface and demonstrate that it reproduces the expected phase space Chern-Simons response theory of the Fermi liquid, as well as the Fermi surface gapless modes as topological boundary modes. This sets the stage for our argument. We then provide a non-perturbative definition of the Fermi surface anomaly and connect it to the recently proposed emergent loop group anomaly in \secref{sec: definition}. Using dimension reduction techniques of synthetic dimensions, we prove our key result: the equivalence between Fermi surface anomaly and (0+1)-dimensional fermionic SPT order in \secref{sec: classification}. We use cobordism tools to classify a list of unitary and anti-unitary symmetries and provide physical insights into our classification results. The article concludes with a summary in \secref{sec: summary}.
\section{Effective Descriptions of Fermi Liquids}\label{sec: effective}
\subsection{Non-Commutative Phase Space Geometry}\label{sec: ncg}
Given the spacetime manifold $M_d\times\mathbb{R}$ of a $(d+1)$-dimensional physical system (where $M_d$ is the $d$-dimensional spatial manifold and $\mathbb{R}$ is the time axis), for each position $\vect{x}=(x_1,x_2,\cdots,x_d)\in M_d$ in the space, the conjugate momentum $\vect{k}=(k_1,k_2,\cdots,k_d)$ generates infinitesimal translations on the manifold $M_d$ in the vicinity of $\vect{x}$ and hence lives in the $d$-dimensional cotangent space $T_\vect{x}^*M_d$. Thus the phase space is represented by the cotangent bundle $T^*M_d:=\{(\vect{x},\vect{k})|\vect{x}\in M_d,\vect{k}\in T_\vect{x}^*M_d\}$,
equipped with a canonical commutator (setting $\hbar=1$)
\eq{\label{eq: [x,k]}
[x_i,k_i]=\mathrm{i}\quad(i=1,2,\cdots,d),}
with $\mathrm{i}$ being the imaginary unit.
Unlike in a classical space where all coordinates commute, the phase space coordinates obey non-trivial commutation relations \eqnref{eq: [x,k]}, which makes the phase space $T^*M_d$ a non-commutative manifold.
There are two strategies to deal with the non-commutative phase space coordinates:
\begin{itemize}
\setlength\itemsep{0pt}
\item[(I)] \emph{Phase-space background Berry curvature}. Treat both $\vect{x}$ and $\vect{k}$ as ordinary commuting coordinates at the price of introducing a uniform background magnetic field (Berry curvature) in each $(x_i,k_i)$-plane, such that any unit-charged particle moving in such a background magnetic field will accumulate the same Berry phase as required by the commutation relation \eqnref{eq: [x,k]}.
\item[(II)] \emph{Canonical quantization}. Represent the position operator $\vect{x}=\mathrm{i}\partial_{\vect{k}}$ as a gradient operator in the eigenbasis of the momentum operator $\vect{k}$, or vice versa $\vect{k}=-\mathrm{i}\partial_{\vect{x}}$, such that the commutation relation \eqnref{eq: [x,k]} is satisfied on the operator level as in quantum mechanics.
\end{itemize}
The strategy (I) of phase-space background Berry curvature has been used in many works \cite{Bulmash1410.4202, Else2007.07896, Else2010.10523, Ma2110.09492, Wang2110.10692} to formulate the Fermi liquid as a phase-space quantum Hall insulator. The phase-space Berry curvature is also responsible for the Berry phase term in Wen's effective theory of Fermi liquid \cite{Wen2101.08772}, or the Wess-Zumino-Witten term in the recently proposed nonlinear bosonization of Fermi surfaces by the coadjoint orbit method \cite{Delacretaz2203.05004}. In this work, we will explore more of the strategy (II) of canonical quantization and hope to gain different insights.
For simplicity, we will always restrict our scope to a \emph{translation invariant} Fermi liquid in the Euclidean position space $M_d=\mathbb{R}^d$; the momentum space is then also Euclidean, $T_\vect{x}^*M_d=\mathbb{R}^d$, and is identical among all points $\vect{x}$. The phase space reduces to a trivial bundle as a product of the position and the momentum spaces
\eq{T^*M_d = \mathbb{R}^d\Bowtie\mathbb{R}^d.}
We use the symbol $\Bowtie$ instead of $\times$ to indicate the non-commutative nature between the position and momentum space coordinates.
\subsection{Bulk Description: Fermi Sea = Phase-Space Chern Insulator}\label{sec: blk}
A Chern insulator in the phase space $T^*M_d$ can be formally described by a low-energy effective Hamiltonian of massive Dirac fermions \cite{Bulmash1410.4202}
\eq{\label{eq: H Dirac}
H=\int_{T^*M_d}\mathrm{d}^d\vect{x}\mathrm{d}^d\vect{k}\;\psi^\dagger(\mathrm{i}\partial_{\vect{x}}\cdot\vect{\Gamma}_x+\mathrm{i}\partial_{\vect{k}}\cdot\vect{\Gamma}_k+m(\vect{k})\Gamma^0)\psi,}
where $\psi:=\psi(\vect{x},\vect{k})$ is a $2^d$-component fermion operator defined at each ``point'' of the
$2d$-dimensional
phase space $T^*M_d = \mathbb{R}^d\Bowtie\mathbb{R}^d$ (let us not worry about the non-commutativity between $\vect{x}$ and $\vect{k}$ for now, which will be resolved later). Let $\Gamma^\mu$ (for $\mu=0,1,2,\cdots, 2d$) be a set of $2^d\times 2^d$ anti-commuting Hermitian matrices, satisfying $\{\Gamma^\mu,\Gamma^\nu\}=2\delta^{\mu\nu}$ and $\Gamma^0=\mathrm{i}^d\prod_{\mu=1}^{2d}\Gamma^\mu$. These $\Gamma$ matrices can be grouped into the temporal $\Gamma^0$, the position spatial $\vect{\Gamma}_x=(\Gamma^1,\cdots,\Gamma^d)$, and the momentum spatial $\vect{\Gamma}_k=(\Gamma^{d+1},\cdots,\Gamma^{2d})$ components. Here $\mathrm{i}\partial_\vect{x}\cdot\vect{\Gamma}_x=\sum_{i=1}^{d}\mathrm{i}\partial_{x_i}\Gamma^i$ denotes the dot product between the differential operator $\mathrm{i}\partial_\vect{x}$ and the set of matrices $\vect{\Gamma}_x$, and similarly for $\mathrm{i}\partial_\vect{k}\cdot\vect{\Gamma}_k$. A few comments on this theory are as follows:
\begin{itemize}
\setlength\itemsep{0pt}
\item \emph{Locality}. Without interaction, \eqnref{eq: H Dirac} looks like a valid local theory of the fermion field $\psi$ in the phase space. However, once fermion interaction is introduced, \eqnref{eq: H Dirac} is no longer a local field theory because the interaction is generally non-local in the momentum space. Therefore, \eqnref{eq: H Dirac} should only be viewed as a ``formal'' description of the phase-space Chern insulator. One way to regularize the theory is to invoke strategy (II) in \secref{sec: ncg} to resolve the non-commutative phase space geometry by replacing $\mathrm{i}\partial_\vect{k}\to\vect{x}$, and rewrite \eqnref{eq: H Dirac} as
\eq{\label{eq: H Dirac reg}
H=\int_{M_d}\mathrm{d}^d\vect{x}\;\psi^\dagger(\mathrm{i}\partial_{\vect{x}}\cdot\vect{\Gamma}_x+\vect{x}\cdot\vect{\Gamma}_k+m(-\mathrm{i}\partial_\vect{x})\Gamma^0)\psi,}
which is solely defined in the position space and respects the position space locality such that local interactions can be introduced if needed.
\item \emph{Mass profile}. The bulk Dirac mass $m(\vect{k})$ is supposed to be a polynomial function of $\vect{k}$, which specifies the shape of the Chern insulator in the phase space. For example, given the Fermi momentum $k_F$, $m(\vect{k})=\vect{k}^2-k_F^2$ is one possible choice of the mass profile. Supposing that the Fermi sea occupies a region $\Omega\subset \mathbb{R}^d$ in the momentum space enclosed by the $(d-1)$-dimensional Fermi surface $\partial \Omega$, the Dirac fermion mass profile should satisfy
\eq{\label{eq: m profile}
m(\vect{k})\left\{
\begin{array}{cc}
\leq 0 & \text{if }\vect{k}\in \Omega,\\
>0 & \text{if }\vect{k}\notin \Omega.
\end{array}\right.}
This describes a phase-space Chern insulator in the Fermi sea region $\Omega$, such that the Fermi surface $\partial\Omega$ (as the boundary of the phase-space Chern insulator) corresponds to the mass domain wall at $m(\vect{k})=0$.
The fermions are gapped everywhere in the phase space except on the Fermi surface, where the fermion mass vanishes. This is consistent with the physical intuition that the gapless fermions on the Fermi surface are the only non-trivial low-energy feature of the Fermi liquid. We will study these boundary fermion modes in more detail in \secref{sec:bdy} to show that they travel in the directions perpendicular to the Fermi surface as expected.
\item \emph{Particle-hole symmetry}. Under the particle-hole transformation $\mathbb{Z}_2^C$, the inside and outside of the Fermi surface will interchange, corresponding to flipping the fermion mass $\mathbb{Z}_2^C: m\to -m$, or equivalently, conjugating the fermion operator
\eq{\mathbb{Z}_2^C: \psi\to \mathcal{K} \Gamma^0\psi^*,}
where $\mathcal{K}$ denotes the complex conjugate operator, such that $\mathbb{Z}_2^C: \psi^\dagger\Gamma^0\psi\to -\psi^\dagger\Gamma^0\psi$. Note that $\mathbb{Z}_2^C$ is \emph{not} a symmetry of the Hamiltonian $H$ in \eqnref{eq: H Dirac}, as the mass term $m$ explicitly breaks this symmetry. However, it is useful in defining the Fermi surface. We propose that the Fermi surface should be more generally defined as the \emph{particle-hole symmetric} sub-manifold in the phase space, specified by the locus of $\langle \psi^\dagger \Gamma^0\psi \rangle =0$. This definition applies to the case of interacting fermions.
\item \emph{Phase-space $\mathrm{U}(1)$ symmetry}. The Hamiltonian $H$ in \eqnref{eq: H Dirac} has a 0-form $\mathrm{U}(1)$ symmetry in the phase space, generated by the charge operator
\eq{\label{eq: Q}
Q=\int_{T^*M_d}\mathrm{d}^d\vect{x}\mathrm{d}^d\vect{k}\;\psi^\dagger \psi.}
The symmetry transformation $\mathrm{e}^{\mathrm{i} \phi Q}$ forms the $\mathrm{U}(1)$ symmetry group,
where $\phi \in [0, 2 \pi)$ and $Q \in \mathbb{Z}$.
The fermion field transforms as $\psi\to\mathrm{e}^{\mathrm{i}\phi}\psi$ under the symmetry transformation.
\end{itemize}
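To make the $\Gamma$-matrix construction concrete, consider the minimal case $d=1$, where one possible (by no means unique) choice is $\Gamma^1=\sigma^2$, $\Gamma^2=\sigma^1$, and $\Gamma^0=\mathrm{i}\Gamma^1\Gamma^2=\sigma^3$ in terms of Pauli matrices. With this choice, the regularized Hamiltonian \eqnref{eq: H Dirac reg} takes the explicit form
\eq{H=\int\mathrm{d} x\;\psi^\dagger\big(\mathrm{i}\partial_{x}\sigma^2+x\,\sigma^1+m(-\mathrm{i}\partial_x)\sigma^3\big)\psi,}
acting on a two-component fermion field $\psi(x)$.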
The essential bulk topological response of the Fermi liquid is captured by a phase-space Chern-Simons theory \cite{Bulmash1410.4202, Else2007.07896, Else2010.10523, Ma2110.09492, Wang2110.10692} of the phase-space $\mathrm{U}(1)$ symmetry. To show that the effective Hamiltonian in \eqnref{eq: H Dirac} indeed reproduces the desired topological response, we first gauge the 0-form $\mathrm{U}(1)$ symmetry of the fermion $\psi$ (under which $\psi\to\mathrm{e}^{\mathrm{i}\phi}\psi$) by introducing a 1-form gauge field $A$ in the phase spacetime
\eq{A=A_0\mathrm{d} t + \vect{A}_x\cdot\mathrm{d} \vect{x}+\vect{A}_k\cdot\mathrm{d} \vect{k},}
where $A_0$, $\vect{A}_x=(A_1,\cdots,A_d)$, $\vect{A}_k=(A_{d+1},\cdots,A_{2d})$ are respectively the components of the $\mathrm{U}(1)$ gauge connection in the time, position, and momentum spaces. We will treat $A$ as a background gauge field that does not have dynamics. Let $F:=\mathrm{d} A$ be the $\mathrm{U}(1)$ gauge curvature. Following the strategy (I) mentioned in \secref{sec: ncg}, we must set $F_{i,d+i}=1$ for $i=1,2,\cdots,d$ to reproduce the position-momentum commutator in \eqnref{eq: [x,k]}. This background gauge curvature effectively replaces the non-commutative $2d$-dimensional
phase space geometry, and the effective Hamiltonian \eqnref{eq: H Dirac} becomes \cite{Bulmash1410.4202}
\eq{\label{eq: H Dirac gauged}
H=\int_{T^*M_d}\mathrm{d}^d\vect{x}\mathrm{d}^d\vect{k}\;\psi^\dagger(\mathrm{i} D_\vect{x}\cdot\vect{\Gamma}_x+\mathrm{i} D_\vect{k}\cdot\vect{\Gamma}_k+m\Gamma^0-A_0)\psi,}
where $\mathrm{i} D_\mu:=\mathrm{i} \partial_\mu-A_\mu$ are gauge covariant derivatives. Now, in \eqnref{eq: H Dirac gauged}, $\vect{x}$ and $\vect{k}$ are ordinary \emph{commuting} coordinates, as the background Berry curvature $F_{i,d+i}=1$ has been implemented in the $\mathrm{U}(1)$ gauge configuration to resolve the non-commutativity. Therefore, we can use conventional field theory approaches to deal with \eqnref{eq: H Dirac gauged}.
Integrating out the fermion field in \eqnref{eq: H Dirac gauged} generates the following Chern-Simons action in the $(2d+1)$-dimensional
phase spacetime \cite{Bulmash1410.4202, Hayata1701.04012} (assuming the Dirac fermion $\psi$ is regularized such that $m>0$ corresponds to a trivial insulator)
\eq{\label{eq: CS}
S=\frac{1}{(d+1)!(2\pi)^d}\int_{T^*M_d\times\mathbb{R}}\frac{1-\operatorname{sgn} m}{2}A\wedge(\mathrm{d} A)^{\wedge d}.}
This is the defining bulk topological field theory whose inflow generates the Fermi surface anomaly \cite{Else2007.07896, Else2010.10523, Darius-Shi2204.07585}. \footnote{Our discussion here is unrelated to the previous study of chiral and gravitational anomalies on Fermi surfaces \cite{Basar1307.2234}, which is about the non-trivial Berry curvature on Fermi surfaces purely defined in the momentum space.} In particular, if we plug in the phase-space background gauge configuration $F_{i,d+i}=1$ (i.e., $\mathrm{d} A=F=\sum_{i=1}^d\mathrm{d} x_i\wedge \mathrm{d} k_i$), take the fermion mass profile in \eqnref{eq: m profile}, and finish the momentum space integration, \eqnref{eq: CS} will reduce to
\eq{S=\frac{1}{(2\pi)^d}\int_{M_d\times\mathbb{R}}\mathrm{d} t\,\mathrm{d}^d\vect{x}\;A_0 \operatorname{vol} \Omega,}
which indicates that the fermion charge density $\nu$ (filling fraction) is related to the Fermi volume $\operatorname{vol}\Omega$ by
\eq{\nu=\frac{\delta S}{\delta A_0}=\frac{\operatorname{vol}\Omega}{(2\pi)^d},}
where $\operatorname{vol}\Omega:=\int\mathrm{d}^d\vect{k}\;(1-\operatorname{sgn} m(\vect{k}))/2$ is by definition the momentum-space volume where $m(\vect{k})\leq 0$.
This is precisely the Luttinger theorem --- a hallmark of the Fermi surface anomaly. Thus we have confirmed that the effective bulk Hamiltonian \eqnref{eq: H Dirac} can produce the correct anomaly inflow to describe a $(d+1)$-dimensional unit-charged Fermi liquid with a single Fermi surface. The extension to cases of generic fermion charges and multiple Fermi surfaces is straightforward (see \refcite{Else2007.07896} for example) and will not be elaborated further here.
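As the simplest illustration, in $d=1$ with a single Fermi sea $\Omega=[-k_F,k_F]$ (in units where the lattice constant is one), the above relation reduces to the familiar one-dimensional Luttinger count
\eq{\nu=\frac{2k_F}{2\pi}=\frac{k_F}{\pi},}
i.e., the filling per site is fixed by the Fermi momentum.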
\subsection{Boundary Description: Fermi Surface = Phase-Space Chiral Boundary Fermions}\label{sec:bdy}
How do we see more explicitly that the effective Hamiltonian \eqnref{eq: H Dirac} reproduces the low-energy fermions on a Fermi surface? Since the Fermi surface is interpreted as the boundary of the phase-space Chern insulator, the gapless fermions should arise as the topological boundary modes, which can be analyzed as follows.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.65]{fig_boundary}
\caption{Illustration of a point $\vect{k}_F$ on the Fermi surface $\partial\Omega$ with the normal vector $\vect{n}$ and the tangent vector(s) $\vect{\tau}_j$, for the case where the Fermi sea $\Omega$ is two-dimensional ($d=2$).}
\label{fig: boundary}
\end{center}
\end{figure}
As shown in \figref{fig: boundary}, we consider a point $\vect{k}_F\in\partial\Omega$ on the Fermi surface at which the normal vector is specified by $\vect{n}$. This means that the fermion mass will cross zero in the phase space at $\vect{k}_F$ with a gradient along the $\vect{n}$ direction:
\eq{\label{eq: m dw}
m(\vect{k}_F)=0,\quad \partial_{\vect{k}}m(\vect{k}_F)\propto \vect{n}.}
Such a mass domain wall at $\vect{k}_F$ will trap gapless fermion modes in the eigenspace specified by the projection $P_0=(1+\mathrm{i}(\vect{n}\cdot\vect{\Gamma}_k)\Gamma^0)/2$. Under this projection, only those terms that commute with $P_0$ can remain, so the effective Hamiltonian \eqnref{eq: H Dirac} reduces to
\eqs{\label{eq: H bdy1}
H&=\int_{M_d\Bowtie T_{\vect{k}_F}\partial\Omega}\mathrm{d}^d\vect{x}\mathrm{d}^{d-1}\vect{k}\;\psi^\dagger P_0\Big(\mathrm{i}(\vect{n}\cdot\partial_\vect{x})(\vect{n}\cdot\vect{\Gamma}_x)\\
&+\sum_{j=1}^{d-1}\big(\mathrm{i}(\vect{\tau}_j\cdot\partial_\vect{x})(\vect{\tau}_j\cdot\vect{\Gamma}_x)+\mathrm{i}(\vect{\tau}_j\cdot\partial_{\vect{k}})(\vect{\tau}_j\cdot\vect{\Gamma}_k)\big)\Big)P_0 \psi,}
where $T_{\vect{k}_F}\partial\Omega$ denotes the $(d-1)$-dimensional tangent space of the Fermi surface $\partial \Omega$ at the base point $\vect{k}_F$, and $\vect{\tau}_j$ (for $j=1,2,\cdots,d-1$) denote an orthonormal basis of the tangent space $T_{\vect{k}_F}\partial\Omega$.
To resolve the non-commutativity between the $\vect{x}$ and $\vect{k}$ coordinates, we invoke strategy (II) as in \secref{sec: blk}. Given that $\vect{x}=\mathrm{i}\partial_\vect{k}$ realizes the canonical commutation relation in \eqnref{eq: [x,k]}, we can simply replace the gradient operator $\mathrm{i}\partial_\vect{k}$ by $\vect{x}$, and fall back to the standard quantum mechanical description in the position space $M_d$ alone. Under this replacement, \eqnref{eq: H bdy1} becomes
\eqs{\label{eq: H bdy2}
H&=\int_{M_d}\mathrm{d}^d\vect{x}\;\psi^\dagger P_0\Big(\mathrm{i}(\vect{n}\cdot\partial_\vect{x})(\vect{n}\cdot\vect{\Gamma}_x)\\
&+\sum_{j=1}^{d-1}\big(\mathrm{i}(\vect{\tau}_j\cdot\partial_\vect{x})(\vect{\tau}_j\cdot\vect{\Gamma}_x)+(\vect{\tau}_j\cdot\vect{x})(\vect{\tau}_j\cdot\vect{\Gamma}_k)\big)\Big)P_0 \psi.}
Now the terms $(\vect{\tau}_j\cdot\vect{x})(\vect{\tau}_j\cdot\vect{\Gamma}_k)$ in the Hamiltonian \eqnref{eq: H bdy2} can be interpreted as a new set of perpendicular domain walls of fermion masses (each one is normal to a $\vect{\tau}_j$ direction). They will further localize the fermions to the origin in all tangent directions $\vect{\tau}_j$ (for $j=1,\cdots,d-1$). The localized fermion modes are specified by a sequence of further projections $P_j=(1+\mathrm{i}(\vect{\tau}_j\cdot\vect{\Gamma}_x)(\vect{\tau}_j\cdot\vect{\Gamma}_k))/2$ (the factor of $\mathrm{i}$ renders $P_j$ a Hermitian projector, consistent with \eqnref{eq: PGP=P}), such that the total projection is
\eq{
P=P_0\prod_{j=1}^{d-1}P_j.}
Under the total projection $P$, only one fermion mode survives. This can be seen by a simple counting argument: the fermion field $\psi$ has $2^d$ components to start with; given that $P_0,\cdots, P_{d-1}$ are $d$ commuting projectors, each reducing the number of fermion components by half, the remaining component number is $2^d/2^d=1$.
The only term in the Hamiltonian that commutes with the total projection $P$ is $\mathrm{i}(\vect{n}\cdot\partial_\vect{x})(\vect{n}\cdot\vect{\Gamma}_x)$, which will survive in the low-energy theory. Moreover, $(\vect{n}\cdot\vect{\Gamma}_x)$ becomes an identity operator in the projected subspace, because
\eqs{\label{eq: PGP=P}
&P(\vect{n}\cdot\vect{\Gamma}_x)P\\
=&P(\vect{n}\cdot\vect{\Gamma}_x)\mathrm{i}(\vect{n}\cdot\vect{\Gamma}_k)\Gamma^0\prod_{j=1}^{d-1}\big(\mathrm{i}(\vect{\tau}_j\cdot\vect{\Gamma}_x)(\vect{\tau}_j\cdot\vect{\Gamma}_k)\big)P\\
=&P\Big(\mathrm{i}^d\prod_{\mu=0}^{2d}\Gamma^\mu \Big)P=P\mathds{1} P=P.}
The first equality in \eqnref{eq: PGP=P} relies on the fact that we can insert between projection operators $P$ matrices like $\mathrm{i}(\vect{n}\cdot\vect{\Gamma}_k)\Gamma^0$ or $\mathrm{i}(\vect{\tau}_j\cdot\vect{\Gamma}_x)(\vect{\tau}_j\cdot\vect{\Gamma}_k)$, as they all behave like identity operators in the projected subspace. If we denote the projected fermion mode as $\psi_{\vect{k}_F}=P\psi$ (the low-energy fermion localized on the intersection of mass domain walls at the Fermi momentum $\vect{k}_F$), the effective Hamiltonian for this fermion mode reads
\eq{\label{eq: H chiral bdy}
H=\int\mathrm{d}(\vect{n}\cdot\vect{x})\;\psi_{\vect{k}_F}^\dagger \mathrm{i}(\vect{n}\cdot\partial_\vect{x})\psi_{\vect{k}_F},}
which describes a single chiral fermion at momentum $\vect{k}_F\in\partial\Omega$ on the Fermi surface moving along the normal direction $\vect{n}$, matching the low-energy physics of the Fermi liquid precisely. Therefore, the phase-space Chern insulator effective Hamiltonian $H$ in \eqnref{eq: H Dirac} indeed provides a bulk regularization for the Fermi liquid, reproducing all the expected low-energy behaviors of gapless fermions on the Fermi surface. This is an alternative bulk regularization of the Fermi liquid compared to the Weyl fermion regularization proposed recently by Ma and Wang \cite{Ma2110.09492}. To compare our regularization with that of \refcite{Ma2110.09492}:
\begin{itemize}
\setlength\itemsep{0pt}
\item We use the canonical quantization approach to resolving the non-commutative phase space geometry, while \refcite{Ma2110.09492} uses the phase-space background Berry curvature approach.
\item The low-energy chiral fermions are realized as domain-wall fermions in our approach, compared to Landau-level Weyl fermions in \refcite{Ma2110.09492}. The directional nature of the chiral fermions (i.e., they always move along the normal direction at each point on the Fermi surface) is more explicit in our regularization.
\end{itemize}
\section{Definition of Fermi Surface Anomaly}\label{sec: definition}
\subsection{Emergent Loop Group Symmetry and Perturbative Fermi Surface Anomaly}\label{sec: loop group}
The chiral boundary fermion effective Hamiltonian \eqnref{eq: H chiral bdy} has a rather large emergent symmetry, described by the loop-$\partial\Omega$ group of $\mathrm{U}(1)$ \cite{Else2007.07896,Else2010.10523} or the mapping space from the Fermi surface $\partial\Omega$ to $\mathrm{U}(1)$, denoted as $\mathrm{L}_{\partial\Omega}\mathrm{U}(1):=\mathrm{Map}(\partial\Omega, \mathrm{U}(1))$ \footnote{For codimension-1 Fermi surface, $\partial\Omega$ is a $(d-1)$-dimensional closed manifold. In the case that $\partial\Omega$ is diffeomorphic to a $S^{d-1}$ sphere, the loop group is also denoted as $\mathrm{L}^{d-1}\mathrm{U}(1)$}. Under the group action, fermion operators transform as
\eq{
\mathrm{L}_{\partial\Omega}\mathrm{U}(1):\psi_{\vect{k}_F}\to\mathrm{e}^{\mathrm{i}\phi(\vect{k}_F)}\psi_{\vect{k}_F}\quad(\forall\vect{k}_F\in\partial\Omega)}
where $\phi(\vect{k}_F)$ is a continuous function on the Fermi surface $\partial\Omega$, subject to the equivalence $\phi(\vect{k}_F)\sim \phi(\vect{k}_F)+2\pi$. Mathematically, the loop group $\mathrm{L}_{\partial\Omega}\mathrm{U}(1)$ is the group of all continuous maps from the closed manifold $\partial\Omega$ to $\mathrm{U}(1)$, with the group multiplication defined pointwise.
In contrast, for a conventional real-space $\mathrm{U}(1)$-symmetric Chern insulator, the boundary theory only has the same $\mathrm{U}(1)$ symmetry inherited from the bulk. In this case, the boundary symmetry is not enlarged because the gapless fermion mode can propagate (along tangent directions) throughout the boundary, locking point-wise $\mathrm{U}(1)$ transformations together into a global $\mathrm{U}(1)$ transformation on the boundary manifold. However, for the phase-space Chern insulator, due to the non-commutative nature between the position and momentum coordinates, the boundary fermion mode is localized in all tangent directions of the Fermi surface and only propagates along the normal direction $\vect{n}$. Therefore, the $\mathrm{U}(1)$ transformations at different momentum points $\vect{k}_F$ on the Fermi surfaces are not locked together, giving rise to the enlarged loop group symmetry $\mathrm{L}_{\partial\Omega}\mathrm{U}(1)$.
Our argument establishes the loop group symmetry $\mathrm{L}_{\partial\Omega}\mathrm{U}(1)$ on the Fermi surface as an emergent symmetry, originating from the $\mathrm{U}(1)$ symmetry in the phase-space bulk. Therefore, the Fermi surface anomaly, which was proposed \cite{Else2007.07896} to be a perturbative anomaly of $\mathrm{L}_{\partial\Omega}\mathrm{U}(1)$, can be described by the bulk topological field theory of a $\mathrm{U}(1)$ connection $A$ of
the $\mathrm{U}(1)$ bundle in the phase spacetime, as derived in \eqnref{eq: CS} already,
\eq{\label{eq: CS bulk_restricted}
S=\frac{k}{(d+1)!(2\pi)^d}\int_{M_d\times\Omega\times\mathbb{R}}A\wedge(\mathrm{d} A)^{\wedge d}.}
Here we have added in the Chern-Simons level $k\in\mathbb{Z}$ for generality, which should correspond to the multiplicity (degeneracy) of the Fermi surface. We set $k=1$ for a single Fermi surface. Various physical consequences of this theory have been discussed in the literature \cite{Bulmash1410.4202, Else2007.07896, Else2010.10523, Ma2110.09492, Wang2110.10692}, which we will not reiterate. This description sets the basis to classify the loop group $\mathrm{L}G$ anomaly on the $(d-1)$-dimensional Fermi surface by the $G$-symmetric invertible topological phases in the $2d$-dimensional phase space, which will be our key strategy in \secref{sec: classification}.
\subsection{Interstitial Defect and Non-Perturbative Fermi Surface Anomaly}\label{sec: defect}
One drawback of using the phase-space Chern-Simons theory \eqnref{eq: CS bulk_restricted} to characterize the Fermi surface anomaly is that it is not straightforward to extend the description to Fermi liquids with a more general symmetry group $G$, such as $G=\mathbb{Z}_{2n}$. We propose to define the Fermi surface anomaly in a lattice fermion system by the projective representation of the internal symmetry $G$ in the presence of an \emph{interstitial defect} that adds an extra site to the lattice \cite{Cheng1804.10122, Cheng2211.12543}, as illustrated in \figref{fig: defect} (a).
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.65]{fig_defect}
\caption{(a) Characterize the Fermi surface anomaly by the projective representation of the internal symmetry in the presence of an interstitial defect. (b) On the lattice, an interstitial defect (the red dot) is created by translating a semi-infinite line of sites along the line direction. (c) In the phase space, this creates extra Berry curvature (in the shaded plaquettes) along a line of momenta at the defect position.}
\label{fig: defect}
\end{center}
\end{figure}
Consider a lattice fermion system in $d$-dimensional space with global internal symmetry $G$ and lattice translation symmetry $\mathbb{Z}^d$. Let $T_i$ be the generator of translation symmetry in the $i$-th spatial direction. In the phase space, the lattice translation symmetry $\mathbb{Z}^d$ acts as an emanant momentum-space dipole symmetry $\mathrm{U}(1)^d$ (i.e.~the dipole moment conservation in the momentum space)
\eq{\label{eq: dipole sym} \psi(\vect{x},\vect{k})\to T_i\psi(\vect{x},\vect{k}) T_i^{-1}=\mathrm{e}^{\mathrm{i} k_i}\psi(\vect{x},\vect{k}).}
An emanant symmetry \cite{Cheng2211.12543} is an exact IR symmetry that only acts on low-energy degrees of freedom. Its action on high-energy degrees of freedom is not well-defined. However, it arises from a UV symmetry in that any low-energy operator charged under the emanant symmetry must also be charged under the corresponding UV symmetry. The momentum-space dipole symmetry $\mathrm{U}(1)^d$ in \eqnref{eq: dipole sym} emanates from the lattice translation symmetry $\mathbb{Z}^d$ in the sense that any low-energy operator violating the momentum-space dipole symmetry will also break the lattice translation symmetry \cite{Cheng2211.12543, Metlitski1707.07686}, even though there is no group homomorphism between these two symmetry groups. A similar discussion also appeared in \refcite{Wen2101.08772}, where the emanant symmetry was proposed to be $\mathbb{R}^d$ (as a non-compact version of our proposed $\mathrm{U}(1)^d$).
An interstitial defect is a point defect that adds one extra site (or unit cell) to the lattice. It can be created by translating a semi-infinite line of sites along the line direction as shown in \figref{fig: defect}(b) on the lattice level. The choice of direction for this semi-infinite line does not matter. We may choose it to be along the positive axis of $x_1$. The twist operator $U_\mathrm{tw}$ creates the interstitial defect at the origin $\vect{x}=0$,
\eq{U_\mathrm{tw}=T_1^{\Theta(x_1)\prod_{i=2}^{d}\delta(x_i)},}
where $\Theta$ is the Heaviside step function and $\delta$ is the Kronecker delta function:
\eq{\Theta(x)=\left\{\begin{array}{ll}1 & \text{if }x>0,\\ 0& \text{if }x<0,\end{array}\right.\quad\delta(x)=\partial_x\Theta(x).}
They together ensure that the translation is only implemented along the positive axis of $x_1$.
For any field or operator $\mathcal{O}$, we defined the twisted version $\mathcal{O}_\mathrm{tw}$ as $\mathcal{O}_\mathrm{tw}:=U_\mathrm{tw}\mathcal{O} U_\mathrm{tw}^{-1}$. In particular, the fermion field is twisted to
\eq{\label{eq: psi twist}
\psi_\mathrm{tw}(\vect{x},\vect{k})=\mathrm{e}^{\mathrm{i} k_1\Theta(x_1)\prod_{i=2}^{d}\delta(x_i)}\psi(\vect{x},\vect{k}).}
This allows us to define the twisted Hamiltonian $H_\mathrm{tw}$ and the twisted representation of symmetry operation $\rho_\mathrm{tw}(g)$ for any group element $g\in G$ of the internal symmetry group $G$ by replacing all operators in $H$ or $\rho(g)$ with their twisted version.
We say that the fermion system has a Fermi surface anomaly, if there exists a cyclic subgroup of $G$ (generated by $g\in G$ and $g^n=1$) such that the twisted partition function accumulates a non-trivial phase $\mathrm{e}^{\ii2\pi\nu}\neq 1$ (or equivalently, a non-trivial index $\nu\neq 0\mod 1$) under the cyclic symmetry action:
\eq{\label{eq: FSA}\operatorname{Tr} (\mathrm{e}^{-\beta H_\mathrm{tw}} \rho_\mathrm{tw}(g)^n)=\mathrm{e}^{\ii2\pi\nu} \operatorname{Tr} \mathrm{e}^{-\beta H_\mathrm{tw}}.}
This indicates that the interstitial defect transforms projectively under the internal symmetry $G$, which provides a non-perturbative definition of the Fermi surface anomaly. From this perspective, the Fermi surface anomaly may also be viewed as the mixed anomaly between the internal symmetry $G$ and the emanant symmetry $\mathrm{U}(1)^d$, which is a straightforward generalization of the mixed $\mathrm{U}(1)\times\mathbb{R}^d$ anomaly proposed by Wen \cite{Wen2101.08772}.
To demonstrate the validity of the general definition of the Fermi surface anomaly by \eqnref{eq: FSA}, we consider the special case of $G=\mathrm{U}(1)$ and show that it reproduces the known filling constraints by the Luttinger theorem.
When $G=\mathrm{U}(1)$, for $g=\mathrm{e}^{\mathrm{i} \phi}\in G$, we have $\rho(g)_\mathrm{tw}=\mathrm{e}^{\mathrm{i} \phi Q_\mathrm{tw}}$, where $Q_\mathrm{tw}$ is twisted from the charge operator $Q$ in \eqnref{eq: Q}. The twisted partition function can be defined as
\eq{\label{eq: Z_tw U1}
Z_\mathrm{tw}(\beta,\phi)=\operatorname{Tr} (\mathrm{e}^{-\beta H_\mathrm{tw}}\mathrm{e}^{\mathrm{i} \phi Q_\mathrm{tw}}).}
The Fermi surface anomaly is manifested by
\eq{\label{eq: FSA U1}
Z_\mathrm{tw}(\beta,\phi+2\pi)=\mathrm{e}^{\mathrm{i} 2 \pi \nu}Z_\mathrm{tw}(\beta,\phi),}
where $\nu$ (mod 1) serves as the anomaly index, and $\mathrm{e}^{\mathrm{i} 2 \pi \nu}$ is the same non-trivial phase factor that appeared in \eqnref{eq: FSA}.
To compute the anomaly index $\nu$, we notice that the transformation of the fermion field in \eqnref{eq: psi twist} induces a $\mathrm{U}(1)$ gauge transformation in the phase space \cite{Metlitski1707.07686, Song1909.08637}, such that
\eq{A_\mathrm{tw}=A+\Theta(x_1)\prod_{i=2}^{d}\delta(x_i)\,\mathrm{d} k_1.}
This means that the background $\mathrm{U}(1)$ gauge field component $A_{k_1}$ in the phase space is shifted by a uniform amount over the half-plane of $x_1>0$, as shown in \figref{fig: defect}(c). As a result, this leads to additional $\mathrm{U}(1)$ gauge curvature $F:=\mathrm{d} A$ in the phase space along the interface of $x_1=0$,
\eq{F_\mathrm{tw}=F+\delta(\vect{x})\,\mathrm{d} x_1\wedge \mathrm{d} k_1,}
where $\delta(\vect{x})=\prod_{i=1}^{d}\delta(x_i)$. Substituting this into the bulk topological response theory in \eqnref{eq: CS bulk_restricted} and taking a phase-space uniform configuration for the temporal gauge field, $A_0(\vect{x},\vect{k})=\varphi\delta(t)$, at the $t=0$ time slice, we have
\eq{S_\mathrm{tw}=S+k \varphi\frac{\operatorname{vol}\Omega}{(2\pi)^d},}
hence the twisted charge operator is given by
\eq{\label{eq: Q_tw}
Q_\mathrm{tw}=\frac{\partial S_\mathrm{tw}}{\partial \varphi}=Q+k\frac{\operatorname{vol}\Omega}{(2\pi)^d}.}
Substituting \eqnref{eq: Q_tw} into \eqnref{eq: Z_tw U1}, we can compute the anomalous phase factor in \eqnref{eq: FSA U1}. Given that the total charge $Q\in \mathbb{Z}$ is quantized, the Fermi surface anomaly index $\nu$ is associated with the fractional charge of the global $\mathrm{U}(1)$ symmetry induced by the interstitial defect
\eq{\nu=k\frac{\operatorname{vol}\Omega}{(2\pi)^d}\mod 1.}
The level $k\in \mathbb{Z}$ is integer classified in this case. For a generic Fermi volume $\operatorname{vol}\Omega$ that is not a rational fraction of the Brillouin zone volume $(2\pi)^d$, the Fermi surface anomaly is non-vanishing as long as the level $k\neq 0$. This reproduces the known results about the Fermi liquid with $\mathrm{U}(1)$ symmetry and demonstrates that our non-perturbative definition of the Fermi surface anomaly in \eqnref{eq: FSA} reduces to the perturbative $\mathrm{L}_{\partial\Omega}\mathrm{U}(1)$ anomaly proposed in \refcite{Else2007.07896, Else2010.10523} for the case of $G=\mathrm{U}(1)$.
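As a simple example, a single unit-charged band ($k=1$) at half filling has $\operatorname{vol}\Omega/(2\pi)^d=1/2$, so the interstitial defect binds half a unit of $\mathrm{U}(1)$ charge and the anomaly index is $\nu=1/2$; more generally, the fractional defect charge equals the filling fraction modulo 1.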
For a more general internal symmetry $G$, we propose that the Fermi surface anomaly should be defined via \eqnref{eq: FSA}, following the general idea of the twist defect construction by Cheng and Seiberg \cite{Cheng2211.12543}. The major difference is that they twist the translation symmetry in time and the internal symmetry in space, while we twist the translation symmetry in space and the internal symmetry in time. This modification allows us to define the Fermi surface anomaly in general dimensions (beyond $(1+1)$D).
\section{Classification of Fermi Surface Anomaly}\label{sec: classification}
\subsection{Synthetic Dimension Reduction Argument}\label{sec: dim}
The remaining objective is to classify the Fermi surface anomaly for a general internal symmetry group $G$. According to \secref{sec: defect}, the anomaly is defined by the fractionalized representation of $G$ carried by interstitial defects in the fermionic system, indicating that the anomaly classification can be mapped to the classification of $(0+1)$-dimensional phase transitions between $G$-symmetric invertible topological phases of fermions, which is equivalent to the classification of $(0+1)$-dimensional fermionic SPT states. However, in \secref{sec: loop group}, the bulk topological field theory described by \eqnref{eq: CS bulk_restricted} suggests a different conclusion, namely that classifying the Fermi surface anomaly of a $(d+1)$-dimensional Fermi liquid is equivalent to classifying the $(d+d+1)$-dimensional fermionic SPT states in phase spacetime. This raises a paradox, as the two different ways of counting dimensions seem to be inconsistent with each other.
The paradox can be resolved by considering the non-trivial dimension counting in the phase space. Because position and momentum are non-commuting coordinates, their dimensions should not be simply added together. Instead, the correct classification should consider the momentum dimensions as ``negative'' spatial dimensions \cite{Teo1006.0690}, effectively defining the bulk SPT phase in a $(d-d+1)=(0+1)$-dimensional spacetime, aligning with the view from the interstitial defect.
To understand this unusual dimension counting, we revisit the effective bulk Hamiltonian in \eqnref{eq: H Dirac}, which describes a phase-space Chern insulator (or, equivalently, a $d$-dimensional Fermi sea). Following the strategy (II) of canonical quantization to regularize the bulk Hamiltonian by replacing $\mathrm{i}\partial_\vect{k}\to\vect{x}$ as \eqnref{eq: H Dirac reg}, we have
\eq{\label{eq: H synthetic}
H=\int_{M_d}\mathrm{d}^d\vect{x}\;\psi^\dagger(\mathrm{i}\partial_{\vect{x}}\cdot\vect{\Gamma}_x+\vect{x}\cdot\vect{\Gamma}_k+m\Gamma^0)\psi.
}
This describes a series of perpendicular mass domain walls (one in each independent direction) that intersect at $\vect{x}=0$, trapping a single fermion mode at the intersection point, which is described by the following effective Hamiltonian:
\eq{\label{eq: H0d}
H=m(\psi^\dagger\psi-1/2),}
where $m$ plays the role of the chemical potential. This single fermion mode can also be understood as the topological zero mode of the Dirac operator $\mathrm{i} D=\mathrm{i} D_\vect{x}\cdot\vect{\Gamma}_x+\mathrm{i} D_\vect{k}\cdot\vect{\Gamma}_k$ in the phase space $T^*M_d$, as required by the index theorem (assuming $M_d=\mathbb{R}^d$ is Euclidean):
\eq{
\text{index}(D)=\int_{T^*M_d}\text{ch}(D)=\frac{1}{d!}\int_{\mathbb{R}^d\times\mathbb{R}^d}\Big(\frac{\mathrm{d} A}{2\pi}\Big)^d=1.}
Therefore, regardless of the spatial dimension $d$ of a Fermi sea, its corresponding bulk description as a phase-space Chern insulator is always equivalent to a $(0+1)$-dimensional fermion mode at low energy under dimension reduction. As a result, the classification of the Fermi surface anomaly for a Fermi liquid in $(d+1)$-dimensional spacetime is equivalent to the classification of fermionic SPT phases in $(0+1)$-dimensional spacetime.
The above statement holds true even in the presence of fermion \emph{interactions}. It was originally realized by Teo and Kane \cite{Teo1006.0690} that momentum space (or parameter space) dimensions should be treated as negative dimensions in classifying topological defects in free fermion SPT states. The argument is recently generalized by Jian and Xu \cite{Jian1804.03658} to classify interacting fermionic SPT phases with synthetic dimensions, which is relevant to our discussion here as the Hamiltonian in \eqnref{eq: H synthetic} precisely describes a fermionic SPT system with physical dimension $d$ and synthetic dimension $\delta=d$. According to \refcite{Jian1804.03658}, the key criterion to distinguish the physical and synthetic dimensions relies on the locality of fermion interactions: the interactions must be local in the physical coordinate space and the synthetic momentum space, while non-local in the physical momentum space and the synthetic coordinate space. Based on this principle, the physical momentum is equivalent to the synthetic coordinate in dimension counting; thus, the momentum space dimension should be treated as the synthetic dimension. The main result of \refcite{Jian1804.03658} is that the classification of interacting fermionic SPT states in $(d,\delta)$ physical-synthetic dimension is the same as that in $d_\text{eff}$-dimensional physical space with
\eq{d_\text{eff}=d-\delta.}
Applying this result to our case, we conclude that the classification of interacting phase-space Chern insulators (or phase-space fermionic SPT states more generally) in any spatial dimension $d$ is equivalent to the classification of real-space interacting fermionic SPT states in $(d-\delta)=(d-d)=0$-dimensional space (or, correspondingly, in $(0+1)$-dimensional spacetime).
\subsection{Cobordism Classification Results}
Using the cobordism classification \cite{Kapustin1403.1467, Kapustin1404.6659, Kapustin1406.7329, Freed1604.06527, Guo1711.11587, Wan1812.11967, Yonekura1803.10796, Witten1909.08775, Wan1912.13504, Guo1812.11959} of interacting fermionic SPT states, we propose:
\begin{quote}
The classification of the Fermi surface anomaly associated with the loop group symmetry $\mathrm{L}G$ is equivalent to the classification of $(0+1)$-dimensional interacting fermionic SPT phases with symmetry $G$, which is given by $\text{TP}_1(\mathrm{Spin}\ltimes G)$.
\end{quote}
Here $G$ is the global internal symmetry group, and $\mathrm{Spin}\ltimes G$ denotes the total spacetime-internal symmetry group given by the extension $1\to G\to \mathrm{Spin}\ltimes G\to \mathrm{Spin}\to 1$, with $\mathrm{Spin}$ being the spin group of the spacetime manifold. Although we start with the Dirac fermion theory \eqnref{eq: H Dirac} in the $2d$-dimensional phase space, the effective Euclidean spacetime manifold is only $(0+1)$-dimensional after the synthetic dimension reduction, so the Euclidean spacetime rotation symmetry of the fermionic spinor field is described by the $\mathrm{Spin}(1)$ group. In the presence of time-reversal symmetry, the $\mathrm{Spin}$ structure can be further extended to $\mathrm{Pin}^{\pm}$ structures \cite{Wan1912.13504}. The Fermi surface $\partial\Omega$ with symmetry $G$ can have an emergent loop-$\partial\Omega$ group of $G$ symmetry denoted as $\mathrm{L}G$ in general. The notion of loop group symmetry is more subtle when $G$ is discrete, which will be discussed case by case later.
\begin{table}[htp]
\caption{Cobordism classification of the Fermi surface anomaly of the loop group symmetry $\mathrm{L}G$ by $\text{TP}_1(\mathrm{Spin}\ltimes G)$. In the table, $n\in\mathbb{N}$ stands for any natural number, and $\mathrm{Spin}\times_H G:=(\mathrm{Spin}\times G)/H$ denotes the quotient of the group product by their shared normal subgroup. $\mathbb{Z}_2^F$ denotes the Fermion parity symmetry.}
\begin{center}
\begin{tabular}{c|cc|c}
\hline\hline
$\mathrm{L}G$ & $G$ & $\mathrm{Spin}\ltimes G$ & $\text{TP}_1$\\
\hline
$\mathrm{L}\mathrm{U}(1)$ & $\mathrm{U}(1)$ & $\mathrm{Spin}^c$ & $\mathbb{Z}$ \\
$\mathrm{L}\mathrm{U}(n)$ & $\mathrm{U}(n)$ & $\mathrm{Spin}\times_{\mathbb{Z}_2^F}\mathrm{U}(n)$ & $\mathbb{Z}$\\
$\mathrm{L}\mathrm{SU}(2n)$ & $\mathrm{SU}(2n)$ & $\mathrm{Spin}\times_{\mathbb{Z}_2^F}\mathrm{SU}(2n)$ & $0$\\
$\tilde{\mathrm{L}}\mathrm{U}(1)\times\mathbb{Z}_{2n}$ & $\mathbb{Z}_{2n}$ & $\mathrm{Spin}\times_{\mathbb{Z}_2^F}\mathbb{Z}_{2n}$ & $\mathbb{Z}_{2n}$ \\
$\mathrm{L}\mathrm{SU}(2n+1)$ & $\mathrm{SU}(2n+1)$ & $\mathrm{Spin}\times\mathrm{SU}(2n+1)$ & $\mathbb{Z}_2$\\
$\tilde{\mathrm{L}}\mathrm{U}(1)\times\mathbb{Z}_{2n+1}$ & $\mathbb{Z}_{2n+1}$ & $\mathrm{Spin}\times\mathbb{Z}_{2n+1}$ & $\mathbb{Z}_{4n+2}$ \\
$\mathrm{LU}(1)\rtimes\mathbb{Z}_2^T$ & $\mathrm{U}(1)\rtimes\mathbb{Z}_2^T$ & $\mathrm{Pin}^{-}\ltimes_{\mathbb{Z}_2^F}\mathrm{U}(1)$ & $\mathbb{Z}$ \\
$\mathrm{LU}(1)\rtimes_{\mathbb{Z}_2^F}\mathbb{Z}_4^{TF}$ & $\mathrm{U}(1)\rtimes_{\mathbb{Z}_2^F}\mathbb{Z}_4^{TF}$ & $\mathrm{Pin}^{+}\ltimes_{\mathbb{Z}_2^F}\mathrm{U}(1)$ & $\mathbb{Z}$ \\
$\mathrm{LU}(1)\times\mathbb{Z}_2^T$ & $\mathrm{U}(1)\times\mathbb{Z}_2^T$ & $\mathrm{Pin}^c$ & 0 \\
\hline\hline
\end{tabular}
\end{center}
\label{tab: class}
\end{table}
In $(0+1)$-dimensional spacetime, SPT phases protected by the total symmetry $\mathrm{Spin}\ltimes G$ are classified by the cobordism group $\text{TP}_1(\mathrm{Spin}\ltimes G)$ \cite{Freed1604.06527} and their topological invariants are given by the cobordism group generators (i.e., the cobordism invariants). Here $\text{TP}$ is shorthand for the topological phase \cite{Freed1604.06527,Guo1711.11587,Wan1812.11967,Wan1912.13504}. \tabref{tab: class} summarizes a few examples of the cobordism classification of Fermi surface anomalies.
The cobordism group element $k\in\text{TP}_1(\mathrm{Spin}\ltimes G)$ is always an integer index given by
\eq{k=\pm q N,}
where $q$ is the symmetry charge carried by the fermion, $N$ is the multiplicity (flavor degeneracy) of the Fermi surface and the sign depends on whether the Fermi surface is electron-like ($+$) or hole-like ($-$). If there are multiple Fermi surfaces in the system, each one can have an independent integer-valued cobordism index $k_\alpha\in\text{TP}_1(\mathrm{Spin}\ltimes G)$. The total Fermi surface anomaly is characterized by a $\mathrm{U}(1)$-valued index $\nu$,
\eq{\nu=\sum_{\alpha}k_\alpha\frac{\operatorname{vol}\Omega_\alpha}{(2\pi)^d}\mod 1.}
Each cobordism index $k_\alpha$ is multiplied by the fraction of Fermi volume $\operatorname{vol}\Omega_\alpha$ in the Brillouin zone. The Fermi surface anomaly can vanish in the following cases:
\begin{itemize}
\setlength\itemsep{0pt}
\item $\operatorname{vol}\Omega_\alpha/(2\pi)^d\in\mathbb{Z}$. The Fermi volume is an integer multiple of the Brillouin zone volume, i.e., the fermion filling is an integer per unit cell for every fermion flavor. In this case, there is no Fermi surface anomaly regardless of the cobordism index $k$.
\item $k_\alpha\sim 0$ (meaning $k_\alpha=0$ when the classification is $\mathbb{Z}$, or $k_\alpha=0\mod 2n$ when the classification is $\mathbb{Z}_{2n}$). When the cobordism index $k_\alpha$ is trivial, there is no Fermi surface anomaly, regardless of the filling. This scenario becomes particularly noteworthy when the cobordism group is $\mathbb{Z}_{2n}$, as in this case $2n$ copies of the (unit-charged) Fermi surface can collectively cancel the anomaly and become deformable to a symmetric product state (see the example following this list).
\item Multiple Fermi surfaces of different cobordism indices $k_\alpha$ and Fermi volumes $\operatorname{vol}\Omega_\alpha$ can cancel the anomaly collectively, if $\nu$ adds up to an integer. Examples of such have been recently studied in the context of Fermi surface symmetric mass generation (SMG) \cite{Lu2210.16304}.
\end{itemize}
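As a concrete example of the second scenario, take $G=\mathbb{Z}_4$ (the $\mathbb{Z}_{2n}$ row of \tabref{tab: class} with $2n=4$), for which the classification is $\mathbb{Z}_4$. Four identical copies of a unit-charged Fermi surface then carry the total cobordism index $k=4\equiv 0\mod 4$, so the Fermi surface anomaly vanishes for \emph{any} Fermi volume, and the four Fermi surfaces can in principle be deformed into a symmetric product state by interactions; this is the setting of the Fermi surface symmetric mass generation discussed below.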
More generally, the Fermi surface SMG refers to the phenomenon that the Fermi surface anomaly vanishes $\nu\sim 0$. Still, no symmetric fermion bilinear operator can gap out the Fermi surface into a symmetric product state. Then the symmetric gapping of the Fermi surface can only be achieved through non-trivial interaction effects. It generalizes the concepts of the interaction-reduced SPT classification \cite{Fidkowski0904.2197,Fidkowski1008.4138,Turner1008.4346,Ryu1202.4484,Qi1202.3983,Yao1202.5805,Gu1304.4569,Wang1401.1142,Metlitski1406.3032,You1409.0168,Cheng1501.01313,Yoshida1505.06598,Gu1512.04919,Song1609.07469,Queiroz1601.01596,Witten1605.02391,Wang1703.10937,Kapustin1701.08264} and symmetric mass generation \cite{Wang1307.7480,Ayyar1410.6474,Slagle1409.7401,BenTov1412.0154,Catterall1510.04153,Ayyar1511.09071,Catterall1609.08541,Ayyar1606.06312,Ayyar1611.00280,He1603.08376,DeMarco1706.04648,Ayyar1709.06048,You1705.09313,Schaich1710.08137,You1711.00863,Catterall1708.06715,Butt1811.01015,Butt1810.06117,Kikukawa1710.11618,Kikukawa1710.11101,Catterall2002.00034,Wang1809.11171,Xu2103.15865,Tong2104.03997,Catterall2010.02290,Butt2101.01026,Butt2111.01001,Zeng2202.12355,Wang2204.14271} to the case of finite fermion filling. We will explore more examples of such in the next subsection.
\subsection{Examples and Comments}\label{sec: cases}
In the following, we will provide some physical understanding of the cobordism classifications in several different cases. To focus our discussion on the \emph{discrete} aspect of the Fermi surface anomaly (as characterized by the integer-valued cobordism index $k$), we will restrict our scope to a unit-charged ($q=1$) single Fermi surface of multiplicity $N$ (such that the cobordism index is $k=N$) with a generic Fermi volume $\operatorname{vol} \Omega$ (e.g.~$\operatorname{vol} \Omega$ is some irrational fraction of the Brillouin zone volume), such that the Fermi surface anomaly index $\nu=k\operatorname{vol} \Omega/(2\pi)^d$ is only trivialized when the cobordism index $k\sim0$ belongs to the trivial class.
Our starting point will be the dimension-reduced $(0+1)$-dimensional effective bulk theory of the Fermi liquid, as described by the single-mode fermion Hamiltonian \eqnref{eq: H0d}. The objective is to understand the interacting fermionic SPT classification in this $(0+1)$-dimensional quantum system and make connections to the classification of Fermi surface anomaly. After the case by case discussions, we summarize the anomaly-free condition in \tabref{tab: summary}.
\subsubsection{$G=\mathrm{U}(1)$ and $\mathbb{Z}$ Classification}
$G=\mathrm{U}(1)$ is the most common symmetry in the conventional discussion of Fermi liquids, under which the fermion operator $\psi$ transforms as $\psi\to\mathrm{e}^{\mathrm{i}\phi}\psi$ for $\phi\in[0,2\pi)$. The dimension-reduced bulk effective Hamiltonian $H=m(\psi^\dagger\psi-1/2)$ has only two eigenstates: $\ket{n_\psi=0}$ and $\ket{n_\psi=1}$, labeled by the two distinct eigenvalues of the fermion number operator $n_\psi:=\psi^\dagger\psi$. The excitation gap closes at $m=0$ as the ground state switches from one to the other, which is also the point where the particle-hole symmetry $\mathbb{Z}_2^C$ is restored. The gap closing signifies a ``quantum phase transition'' in the $(0+1)$-dimensional system. Therefore, $m<0$ and $m>0$ should be identified as two different SPT phases. If there are many copies of such a system, each copy can undergo the SPT transition separately, leading to $\mathbb{Z}$-classified SPT phases.
In the presence of the $\mathrm{U}(1)$ symmetry, this gap closing cannot be avoided even in the presence of interactions. This is because the $\mathrm{U}(1)$ symmetry enforces that the interaction can only take the form of a polynomial in $n_\psi$, which does not change the fact that $\ket{n_\psi=0}$ and $\ket{n_\psi=1}$ remain eigenstates of the interacting Hamiltonian. The two states then have to become degenerate on the locus $\langle n_\psi\rangle=1/2$ where the particle-hole symmetry $\mathbb{Z}_2^C$ is restored, resulting in the unavoidable gap closing. So the $\mathbb{Z}$ classification is robust against fermion interactions, confirming the cobordism calculation.
As discussed previously in \secref{sec: blk}, the Fermi surface should be defined as the particle-hole symmetric sub-manifold in the phase space. Tuning the mass parameter $m$ across 0 in the effective theory can be viewed as going across the Fermi surface in the momentum space. The inevitable gap closing at $m=0$ (or at the particle-hole symmetric point) corresponds to the protected gapless fermions on the Fermi surface. The cobordism index $k\in\mathbb{Z}$ labels the number of gapless fermion modes (assuming fermions are unit-charged under $\mathrm{U}(1)$) both at the SPT transition in the effective theory and on the Fermi surface in the Fermi liquid system.
In this case, the emergent symmetry on the Fermi surface is $\mathrm{L}_{\partial\Omega}\mathrm{U}(1):\psi_{\vect{k}_F}\to\mathrm{e}^{\mathrm{i}\phi(\vect{k}_F)}\psi_{\vect{k}_F}$, which is defined for any smooth phase function $\mathrm{e}^{\mathrm{i}\phi(\vect{k}_F)}$ on the Fermi surface $\partial\Omega$. There is no further constraint on the choice of the function $\phi(\vect{k}_F)$. The loop group symmetry is denoted as $\mathrm{L}\mathrm{U}(1)$ for short in \tabref{tab: class}.
\subsubsection{$G=\mathrm{U}(n)$ and $\mathbb{Z}$ Classification}
Apart from carrying $\mathrm{U}(1)$ charge, the fermions may also have internal degrees of freedom. For example, electrons also carry the $\mathrm{SU}(2)$ spin freedom, such that for electronic Fermi liquid in a metal, the total internal symmetry is $\mathrm{U}(1)\times_{\mathbb{Z}_2^F}\mathrm{SU}(2)=\mathrm{U}(2)$. More generally, we may consider a $\mathrm{U}(n)$ symmetry, under which an $n$-component fermion field $\psi$ transforms as $\psi_a\to U_{ab}\psi_b$ for $U\in\mathrm{U}(n)$. The classification of Fermi surface anomaly for $G=\mathrm{U}(n)$ is the same as that of $G=\mathrm{U}(1)$, which is $\mathbb{Z}$, because the protecting symmetry is only the $\mathrm{U}(1)=\mathrm{U}(n)/\mathrm{SU}(n)$ quotient group. In this case, the emergent symmetry on the Fermi surface is $\mathrm{L}_{\partial\Omega}\mathrm{U}(n):\psi_{\vect{k}_F}\to U(\vect{k}_F)\psi_{\vect{k}_F}$ with $U(\vect{k}_F)\in \mathrm{U}(n)$, denoted as $\mathrm{LU}(n)$ in \tabref{tab: class}.
\subsubsection{$G=\mathrm{SU}(2n)$ and Trivial Classification}
However, once the internal symmetry is reduced from $\mathrm{U}(2n)$ to $\mathrm{SU}(2n)$, the classification collapses, and there is no Fermi surface anomaly for any Fermi volume. From the perspective of the $(0+1)$-dimensional effective theory, an $\mathrm{SU}(2n)$ fundamental fermion $\psi$ (which contains $2n$ flavor components $\psi_a$ for $a=1,2,\cdots,2n$) can always be gapped by the following multi-fermion interaction,
\eq{\label{eq: H_int}
H_\text{int}=\prod_{a=1}^{2n}\psi_a +\text{h.c.}.}
This interaction always stabilizes a unique $\mathrm{SU}(2n)$ singlet ground state. In the presence of this interaction, the $m<0$ and $m>0$ phases can be smoothly tuned to each other without gap closing. Therefore, the $(0+1)$-dimensional interacting fermionic SPT states have only a trivial class under the $\mathrm{SU}(2n)$ symmetry.
The vanishing Fermi surface anomaly implies that the $\mathrm{SU}(2n)$ symmetric Fermi liquid at any filling level (of any Fermi volume) can always be deformed into a gapped product state without breaking the $\mathrm{SU}(2n)$ and translation symmetry. For $n=1$, this gapping term is simply the $s$-wave spin-singlet pairing. For $n>1$, the gapping will be achieved by uniform $\mathrm{SU}(2n)$-singlet multi-fermion condensation. Such multi-fermion condensation can happen independently on each site (or in each unit cell), resulting in a gapped symmetric product state.
\subsubsection{$G=\mathbb{Z}_{2n}$ and $\mathbb{Z}_{2n}$ Classification}
When reducing the symmetry from $\mathrm{U}(2n)$ to $\mathrm{SU}(2n)$, what essentially happens is that the $\mathrm{U}(1)=\mathrm{U}(2n)/\mathrm{SU}(2n)$ quotient group is broken to its $\mathbb{Z}_{2n}$ subgroup, which is also the $\mathbb{Z}_{2n}$ center of $\mathrm{SU}(2n)$. In fact, we only need to keep this essential $\mathbb{Z}_{2n}$ center symmetry, under which the fermion operator transforms as $\psi\to\mathrm{e}^{\frac{2\pi\mathrm{i}}{2n}m}\psi$ for $m=0,1,\cdots,2n-1$. The multi-fermion condensation interaction \eqnref{eq: H_int} is still the gapping interaction to trivialize the SPT phase (or to gap out the SPT phase transition). However, since the $\mathbb{Z}_{2n}$ group has only 1-dimensional representations, the fermionic SPT root state (the generator state) only contains one fermion flavor. Therefore, the trivialization is achieved at $2n$ copies of the root state so that the classification is $\mathbb{Z}_{2n}$.
In particular, for $n=1$, a $\mathbb{Z}_2$ symmetric Fermi liquid allows the opening of a pairing gap by superconductivity. In this case, the $\mathbb{Z}_2$ classification of the Fermi surface anomaly indicates that for generic Fermi volume, the deformation of the Fermi liquid to a symmetric product state is only achievable when there are two fermion flavors (like spin-1/2 electrons) with the cobordism index $k=2\sim 0$, which enables the $s$-wave spin-singlet pairing. One may object that even when the fermion flavor number is one (like spinless fermions in condensed matter language) with the cobordism index $k=1$, it is still possible to fully gap the Fermi surface by $p_x+\mathrm{i} p_y$ pairing in $(2+1)$D, even though the Fermi surface anomaly is not canceled for general Fermi volume. However, one should note that the $p_x+\mathrm{i} p_y$ superconductor is not a trivial gapped state, as it is not deformable to a product state due to its chiral edge mode. The non-vanishing Fermi surface anomaly at $k=1$ enforces a non-trivial invertible topological order in the gapped state. This is related to many discussions about filling-enforced SPT states in the literature \cite{Lu1705.04691, Lu1705.09298, Jiang1907.08596}.
Another case worth discussing is the $n=2$ case, which is the simplest case where Fermi surface SMG \cite{Lu2210.16304} can occur. In this case, the fermions have a $\mathbb{Z}_4$ internal symmetry that forbids any pairing gap from opening on the fermion bilinear level. The $\mathbb{Z}_4$ classification indicates that every four copies of the Fermi surface (with generic Fermi volume) can be deformed to a gapped product state by interaction. A simple lattice model to demonstrate this phenomenon is described by the following Hamiltonian,
\eq{\label{eq: H Z4}
H=\sum_{a=1}^{4}\sum_{ij}t_{ij}\psi_{ia}^\dagger \psi_{ja}+g \sum_{i}\psi_{i1}\psi_{i2}\psi_{i3}\psi_{i4}+\text{h.c.}.}
There are four fermion modes $\psi_{ia}$ ($a=1,2,3,4$) on each site $i$. The $t_{ij}$ term describes a generic fermion hopping model on the lattice. Without fine-tuning the chemical potential, the fermion system will generally fall in the Fermi liquid phase with a generic Fermi volume. Gapping of the Fermi surface can be achieved by the $\mathbb{Z}_4$-symmetric interaction $g$, which drives four-fermion condensation on each site, leading to a gapped symmetric product state in the $g\to\infty$ limit. This gapping mechanism applies to lattice fermions in any spatial dimension. So the $\mathbb{Z}_4$-symmetric Fermi liquid is universally $\mathbb{Z}_4$ classified in any dimension.
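To make the on-site gapping mechanism concrete, the following minimal exact-diagonalization sketch (our own illustration; mode labels and the value of $g$ are arbitrary) checks the single-site ($t_{ij}=0$) limit of \eqnref{eq: H Z4}: in the 16-dimensional Fock space of four fermion modes, the four-fermion term alone selects a unique ground state separated from all other states by a gap of order $g$.
\begin{verbatim}
import numpy as np
from itertools import product

n_modes = 4        # four fermion flavors psi_1..psi_4 on a single site
g = 1.0

def annihilate(bits, a):
    """Apply c_a to an occupation basis state |n_1..n_4>.
    Returns (sign, new_bits) or None if the mode is empty."""
    if bits[a] == 0:
        return None
    sign = (-1) ** sum(bits[:a])          # fermionic (Jordan-Wigner) sign
    new = list(bits); new[a] = 0
    return sign, tuple(new)

basis = list(product([0, 1], repeat=n_modes))
index = {b: i for i, b in enumerate(basis)}
H = np.zeros((len(basis), len(basis)))

for b in basis:                            # apply psi_1 psi_2 psi_3 psi_4
    state, amp, ok = b, 1.0, True
    for a in reversed(range(n_modes)):     # rightmost operator acts first
        res = annihilate(state, a)
        if res is None:
            ok = False
            break
        s, state = res
        amp *= s
    if ok:
        H[index[state], index[b]] += g * amp   # g * psi_1 psi_2 psi_3 psi_4
        H[index[b], index[state]] += g * amp   # + h.c.

evals = np.linalg.eigvalsh(H)
print(evals[:3])   # unique ground state at -g, then a gap of size g
\end{verbatim}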
A key feature of our dimension counting argument is that the classification of the Fermi surface anomaly does not depend on the spacetime dimension. In contrast, if we naively considered the $(d+1)$D Fermi liquid as a quantum Hall insulator in the $2d$-dimensional phase space, we might mistakenly classify the Fermi surface anomaly by fermionic SPT states in $(2d+1)$-dimensional spacetime. The problem may not be exposed if the symmetry is $\mathrm{U}(1)$, because the classification is always $\mathbb{Z}$ and never gets reduced by interaction effects, so we would not notice any difference. However, once the $\mathrm{U}(1)$ symmetry is broken to its $\mathbb{Z}_4$ subgroup, the discrepancy becomes manifest. Take $d=2$ for example: the phase space is a 4-dimensional space, and the $\mathbb{Z}_4$-symmetric fermionic SPT states in $(4+1)$-dimensional spacetime are $\mathbb{Z}_{16}$ classified, which clearly deviates from the $\mathbb{Z}_{4}$-classified Fermi surface anomaly predicted by our theory. We know that $\mathbb{Z}_4$ should be the correct answer because the lattice model \eqnref{eq: H Z4} explicitly trivializes the Fermi surface in multiples of four (not sixteen). This speaks for the correctness of our dimension counting approach: the momentum space should be treated as negative dimensions, and Fermi liquids in any dimension are topologically equivalent to $(0+1)$-dimensional fermionic SPT states (with boundaries).
Finally, we would like to comment on the emergent loop group symmetry on the Fermi surface when the $\mathrm{U}(1)$ symmetry is broken to $\mathbb{Z}_{2n}$. With the multi-fermion condensation term $g$, the low-energy theory takes the form of
\eqs{&H=\sum_{\vect{k}_F\in\partial\Omega}\epsilon_{\vect{k}_F}\psi_{\vect{k}_F}^\dagger\psi_{\vect{k}_F}+\cdots\\&+g\sum_{\{\vect{k}_F^{(a)}\}\in\partial\Omega}\delta_{\sum_{a=1}^{2n}\vect{k}_F^{(a)}}\prod_{a=1}^{2n}\psi_{\vect{k}_F^{(a)}}+\text{h.c.},}
which is symmetric under
\eq{\psi_{\vect{k}_F}\to\mathrm{e}^{\mathrm{i}\frac{2\pi p}{2n}}\mathrm{e}^{\mathrm{i}\phi(\vect{k}_F)}\psi_{\vect{k}_F},}
with $p=0,1,\cdots,2n-1$ labeling a $\mathbb{Z}_{2n}$ group element and $\phi(\vect{k}_F)\sim\phi(\vect{k}_F)+2\pi$ being a smooth function of $\vect{k}_F$ subject to the following constraint:
\eq{\label{eq: constraint}\forall \sum_{a=1}^{2n}\vect{k}_F^{(a)}=0:\sum_{a=1}^{2n}\phi(\vect{k}_F^{(a)})=0\mod 2\pi.}
All the $\mathrm{U}(1)$ functions $\mathrm{e}^{\mathrm{i}\phi(\vect{k}_F)}$ satisfying the constraint in \eqnref{eq: constraint} form a group under pointwise multiplication. We denote this constrained loop group as $\tilde{\mathrm{L}}_{\partial\Omega}\mathrm{U}(1)$. The emergent symmetry on the Fermi surface is then $\tilde{\mathrm{L}}_{\partial\Omega}\mathrm{U}(1)\times\mathbb{Z}_{2n}$, abbreviated as $\tilde{\mathrm{L}}\mathrm{U}(1)\times\mathbb{Z}_{2n}$ in \tabref{tab: class}.
\subsubsection{$G=\mathrm{SU}(2n+1)$ and $\mathbb{Z}_2$ Classification}
We have discussed the case of $\mathrm{SU}(2n)$ flavor symmetry with an even number of fermion flavors. Now we turn to the case when the fermion flavor number is odd and the flavor symmetry is $\mathrm{SU}(2n+1)$. The major difference here is that the $\mathrm{SU}(2n+1)$ flavor symmetry group no longer contains the $\mathbb{Z}_2^F$ fermion parity symmetry as a subgroup. In this case, the Fermi surface anomaly is $\mathbb{Z}_2$ classified. The physical argument is that with a single copy of the $\mathrm{SU}(2n+1)$ fundamental fermion $\psi$ (which contains $2n+1$ flavor components $\psi_a$ for $a=1,2,\cdots,2n+1$), it is no longer possible to write down the $\mathrm{SU}(2n+1)$-singlet multi-fermion gapping term of the form $\prod_{a=1}^{2n+1}\psi_a+\text{h.c.}$ in the $(0+1)$-dimensional effective theory, because such a term contains an odd number of fermion operators and does not respect the $\mathbb{Z}_2^F$ fermion parity symmetry. Therefore, one has to double the system and introduce two $\mathrm{SU}(2n+1)$ fundamental fermions $\psi_1$ and $\psi_2$, such that the following gapping interaction becomes possible
\eq{\label{eq: Hint 2n+1}
H_\text{int}=\prod_{a=1}^{2n+1}\psi_{1a}\psi_{2a}+\text{h.c.}.}
A similar multi-fermion interaction can be applied to gap out the Fermi surface at a generic Fermi volume if there are two copies of $\mathrm{SU}(2n+1)$ fundamental fermions on the Fermi surface, which explains the $\mathbb{Z}_2$ classification. This is also an example of the Fermi surface SMG.
\subsubsection{$G=\mathbb{Z}_{2n+1}$ and $\mathbb{Z}_{4n+2}$ Classification}
If the $\mathrm{SU}(2n+1)$ flavor symmetry is broken to its center $\mathbb{Z}_{2n+1}$ symmetry group, under which the fermion operator transforms as $\psi\to\mathrm{e}^{\frac{2\pi\mathrm{i}}{2n+1}m}\psi$ for $m=0,1,\cdots,2n$, the Fermi surface anomaly classification will be $\mathbb{Z}_{4n+2}$. The physics is essentially the same as in the $G=\mathrm{SU}(2n+1)$ case, which relies on the same multi-fermion interaction \eqnref{eq: Hint 2n+1} to drive the SMG in the $(0+1)$-dimensional effective theory. A similar interaction also drives the Fermi surface SMG. The SMG gapping mechanism only works when the fermion flavor number is $4n+2$, which is consistent with the $\mathbb{Z}_{4n+2}$ classification.
\subsubsection{$G=\mathrm{U}(1)\rtimes\mathbb{Z}_2^T$ and $\mathbb{Z}$ Classification}
We can extend our discussion to anti-unitary symmetries \cite{Wigner1960a,Wigner1960b}, which will be generally denoted as time-reversal symmetries $\mathbb{Z}_2^T$. There are different ways that an anti-unitary symmetry can be combined with the $\mathrm{U}(1)$ charge conservation symmetry of the fermion. Let us first consider the case of $G=\mathrm{U}(1)\rtimes\mathbb{Z}_2^T$, where the $\mathrm{U}(1)$ rotation does not commute with the anti-unitary symmetry action $\mathcal{T}\in\mathbb{Z}_2^T$ and $\mathcal{T}^2=+1$. More specifically, the fermion operator transforms as
\eqs{\mathrm{U}(1)&:\psi\to\mathrm{e}^{\mathrm{i}\phi}\psi,\\
\mathbb{Z}_2^T&:\psi\to\mathcal{K}\psi, \psi^\dagger\to\mathcal{K}\psi^\dagger,}
where $\mathcal{K}\mathrm{i}\mathcal{K}^{-1}=-\mathrm{i}$ denotes the complex conjugation operator that acts on all complex coefficients in the operator algebra.
In this scenario, the presence of the anti-unitary symmetry does not alter the anomaly classification. The $(0+1)$-dimensional effective theory, characterized by the Hamiltonian $H=m(\psi^\dagger\psi-1/2)$, still includes the mass term $m$ which is symmetric under $\mathbb{Z}_2^T$. As the anti-unitary symmetry does not impose additional restrictions on the Hamiltonian, the SPT classification remains unchanged from the case with $G=\mathrm{U}(1)$, which is $\mathbb{Z}$. As a result, the Fermi surface anomaly is still classified as $\mathbb{Z}$.
\subsubsection{$G=\mathrm{U}(1)\rtimes_{\mathbb{Z}_2^F}\mathbb{Z}_4^{TF}$ and $\mathbb{Z}$ Classification}
Another way to combine the anti-unitary symmetry with $\mathrm{U}(1)$ is to consider $G=\mathrm{U}(1)\rtimes_{\mathbb{Z}_2^F}\mathbb{Z}_4^{TF}$, meaning that the $\mathrm{U}(1)$ rotation does not commute with the generator $\mathcal{T}\in \mathbb{Z}_4^{TF}$ of the anti-unitary symmetry, but $\mathcal{T}^2=-1$ (or more precisely, $\mathcal{T}$ squares to the fermion parity operator, hence the anti-unitary symmetry is four-fold and shares the $\mathbb{Z}_2^F$ subgroup with $\mathrm{U}(1)$). This is actually the standard time-reversal symmetry of electrons that enforces a Kramers doublet \cite{Kramers1930}. The fermion operator $\psi=(\psi_\uparrow, \psi_\downarrow)^\intercal$ is a doublet, which transforms under the symmetry as
\eqs{\mathrm{U}(1)&:\psi\to\mathrm{e}^{\mathrm{i}\phi}\psi,\\
\mathbb{Z}_4^{TF}&:\psi_\uparrow\to\mathcal{K}\psi_\downarrow, \quad \psi_\downarrow\to\mathcal{K}\psi_\uparrow.}
The time-reversal symmetry is denoted as a $\mathbb{Z}_4$ group because its two-fold action is non-trivial and corresponds to the fermion parity operation ($\psi\to-\psi$) that falls in the $\mathbb{Z}_2^F$ subgroup of $\mathrm{U}(1)$.
The mass term $H=m(\psi^\dagger\psi-1/2)$ is still allowed in the effective Hamiltonian under the $\mathbb{Z}_4^{TF}$ symmetry. As the anti-unitary symmetry does not introduce new restrictions, the SPT classification remains the same as the $G=\mathrm{U}(1)$ case, which is $\mathbb{Z}$. Therefore, the Fermi surface anomaly is also $\mathbb{Z}$ classified in this case.
\subsubsection{$G=\mathrm{U}(1)\times\mathbb{Z}_2^T$ and Trivial Classification}
We further consider $G=\mathrm{U}(1)\times\mathbb{Z}_2^T$ where the $\mathbb{Z}_2^T$ anti-unitary symmetry operation commutes with the $\mathrm{U}(1)$ symmetry operation. The symmetry action can be realized on the fermion operator as
\eqs{\mathrm{U}(1)&:\psi\to\mathrm{e}^{\mathrm{i}\phi}\psi,\\
\mathbb{Z}_2^T&:\psi\to\mathcal{K}\psi^\dagger, \psi^\dagger\to\mathcal{K}\psi.}
The anti-unitary symmetry $\mathbb{Z}_2^T$ here should be interpreted as a particle-hole symmetry, which maps $\psi$ and $\psi^\dagger$ to each other.
In the presence of this symmetry, the original mass term $H=m(\psi^\dagger\psi-1/2)$ is forbidden in the effective Hamiltonian. A symmetry-allowed mass term can only be realized in the doubled system, where the fermion operator $\psi=(\psi_+,\psi_-)^\intercal$ must contain two components, and the symmetry $G=\mathrm{U}(1)\times\mathbb{Z}_2^T$ acts as
\eqs{\mathrm{U}(1)&:\psi_\pm\to\mathrm{e}^{\mathrm{i}\phi}\psi_\pm,\\
\mathbb{Z}_2^T&:\psi_\pm\to\mathcal{K}\psi_\mp^\dagger, \psi_\pm^\dagger\to\mathcal{K}\psi_\mp,}
such that two anti-commuting mass terms are allowed
\eq{H=m(\psi_{+}^\dagger\psi_{+} - \psi_{-}^\dagger\psi_{-})+m'(\mathrm{i}\psi_{-}^\dagger \psi_{+}+\text{h.c.}).}
It is possible to tune smoothly from $m<0$ to $m>0$ without closing the excitation gap of this $(0+1)$-dimensional system in the presence of $m'\neq 0$. Therefore, all gapped states belong to the same SPT phase and the SPT classification is trivial.
Mapping to the Fermi surface, imposing the particle-hole symmetry enforces the Fermi surface to be perfectly nested \cite{Virosztek1990PRB}. Tuning $m$ from the inside ($m<0$) to the outside ($m>0$) of the Fermi surface, two bands cross at the Fermi level. In this case, a band hybridization term (similar to $m'$) is sufficient to gap out the Fermi surface fully without symmetry breaking (note that the nesting momentum is already zero in this case). Therefore, the system is free of Fermi surface anomaly, consistent with the trivial classification.
\section{Summary}\label{sec: summary}
In this work, we propose an approach to classify the Fermi surface anomaly by leveraging the correspondence between the Fermi liquid and the Chern insulator in the phase space. Specifically, we suggest using the classification of interacting fermionic symmetry-protected topological (SPT) states in the phase space to determine the Fermi surface anomaly. The non-commutative geometry of the phase space implies that the phase-space SPT states follow unusual dimension counting, where the momentum space dimensions are treated as negative dimensions. As a result, the effective spacetime dimension for the classification problem is reduced to $(0+1)$D. To support our argument, we analyze a phase-space Dirac fermion field theory of fermionic SPT states and apply the dimension reduction technique after resolving the non-commutative geometry. Our proposed approach offers a comprehensive and rigorous way to classify the Fermi surface anomaly, providing valuable insights into the universal low-energy features of electrons in metals.
\begin{table}[hbtp]
\caption{Summary of the Fermi surface anomaly-free condition, listing the number of copies of the system required for a symmetric gapping. The system is anomaly-free if and only if $\nu = k \frac{\operatorname{vol}\Omega}{(2\pi)^d} =0 \mod 1$. The integer-valued index $k$ is classified by cobordism in \tabref{tab: class}. The case that the normalized Fermi volume $\frac{\operatorname{vol}\Omega}{(2\pi)^d}$ is irrational is discussed in \secref{sec: cases} and summarized in the fourth column. The case of a rational normalized Fermi volume $p/q$ with $p,q\in \mathbb{Z}$ is summarized in the third column.}
\begin{center}
\begin{tabular}{c|c|c|c}
\hline \hline
\multirow{2}{*}{$\mathrm{L}G$} & integer & \multicolumn{2}{c}{Number of copies to trivialize} \\ \cline{3-4}
& index $k$ & $\frac{\operatorname{vol}\Omega}{(2\pi)^d}=p/q$ & $\frac{\operatorname{vol}\Omega}{(2\pi)^d}$ is irrational \\ \hline
$\mathrm{L}\mathrm{U}(1)$ & $\mathbb{Z}$ & $q$ & Never \\
$\mathrm{L}\mathrm{U}(n)$ & $\mathbb{Z}$ & $q$ & Never\\
$\mathrm{L}\mathrm{SU}(2n)$ & $0$ & $1$ & 1\\
$\tilde{\mathrm{L}}\mathrm{U}(1)\times\mathbb{Z}_{2n}$ & $\mathbb{Z}_{2n}$ & $\min(q,2n)$ & $2n$ \\
$\mathrm{L}\mathrm{SU}(2n+1)$ & $\mathbb{Z}_2$ & $\min(q,2)$ & 2\\
$\tilde{\mathrm{L}}\mathrm{U}(1)\times\mathbb{Z}_{2n+1}$ & $\mathbb{Z}_{4n+2}$ & $\min(q,4n+2)$ & $4n+2$ \\
$\mathrm{LU}(1)\rtimes\mathbb{Z}_2^T$ & $\mathbb{Z}$ & $q$ & Never\\
$\mathrm{LU}(1)\rtimes_{\mathbb{Z}_2^F}\mathbb{Z}_4^{TF}$ & $\mathbb{Z}$ & $q$ & Never \\
$\mathrm{LU}(1)\times\mathbb{Z}_2^T$ & 0 & $1$ & $1$\\
\hline \hline
\end{tabular}
\end{center}
\label{tab: summary}
\end{table}
To summarize, the Fermi surface anomaly can be defined by the projective representation of the internal symmetry $G$ on the interstitial defect in the fermion system. It is characterized by a $\mathrm{U}(1)$-valued anomaly index
\eq{\nu=\sum_{\alpha}k_\alpha\frac{\operatorname{vol}\Omega_\alpha}{(2\pi)^d}\mod 1,}
which is a sum of contributions from each Fermi surface labeled by $\alpha$. Each term in the summation contains an integer-valued index $k_\alpha$ multiplied with a real-valued fraction $\operatorname{vol}\Omega_\alpha/(2\pi)^d$. The ratio $\operatorname{vol}\Omega_\alpha/(2\pi)^d$ describes the fraction of Fermi volume $\operatorname{vol}\Omega_\alpha$ in the Brillouin zone. The integer $k_\alpha=\pm q_\alpha N_\alpha$ is given by the fermion charge $q_\alpha$ and multiplicity (flavor degeneracy) $N_\alpha$ of the Fermi surface and classified by the cobordism group $\text{TP}_1(\mathrm{Spin}\ltimes G)$. Assuming a generic Fermi volume for each Fermi surface (i.e.~$\operatorname{vol}\Omega_\alpha/(2\pi)^d$ is not a rational number), the Fermi surface anomaly is determined by the cobordism index $k_\alpha\in \text{TP}_1(\mathrm{Spin}\ltimes G)$. The classification result for a list of internal symmetries $G$ is shown in \tabref{tab: class}.
The complete gapping of the Fermi surface into a product state is feasible if and only if the Fermi surface anomaly vanishes, i.e.~$\nu\sim0$. This can occur through the opening of a superconducting gap (when $G=\mathbb{Z}_2$) or a perfectly-nested band hybridization gap (when $G=\mathrm{U}(1)\times\mathbb{Z}_2^T$) at the free fermion level, when the fermion flavor number falls in the trivial cobordism class. Nevertheless, unconventional gapping mechanisms exist, referred to as the Fermi surface symmetric mass generation (SMG) \cite{Lu2210.16304}, which can only be realized via interaction effects when the Fermi surface anomaly vanishes but no fermion bilinear gapping term is allowed due to symmetry constraints. One informative example is the quartet (charge-4e) fermion condensation \cite{KivelsonPRB1990, Kameicond-mat/0505468, Berg0810.1564, Radzihovsky0812.3945, Berg0904.1230, Moon1202.5389, Jiang1607.01770} on Fermi surfaces with internal $G=\mathbb{Z}_4$ symmetry, where the Fermi surface anomaly is $\mathbb{Z}_4$ classified. In this scenario, every four copies of the Fermi surface can be collectively gapped via four-fermion interactions. The fact that this gapping mechanism is feasible in all dimensions aligns with our assertion that the Fermi surface anomaly is universally categorized by $(0+1)$-dimensional fermionic SPT phases. More cases of Fermi surface trivialization are summarized in \tabref{tab: summary}.
The classification of Fermi surface anomalies can help us understand the possible ways a Fermi surface can be gapped and the role of interactions in this process. The recent proposal of the ancilla qubit approach \cite{Zhang2001.09159, Zhang2006.01140} for pseudo-gap physics draws a connection between the pseudo-gap metal to Fermi liquid transition and the Fermi surface SMG transition in the ancilla layers, as both transitions are described by field theories of fermionic deconfined quantum critical points \cite{You1705.09313, You1711.00863, Zou2002.02972, Zou2004.14391, Hou2212.13364}. The Fermi surface anomaly constrains the dynamical behavior of such field theories and can potentially shed light on the open problem of the pseudo-gap transition in correlated materials.
\begin{acknowledgments}
We acknowledge the discussions with Xiao-Liang Qi, Cenke Xu, Chao-Ming Jian, Chong Wang, Meng Cheng, Nathan Seiberg, Dominic Else, Ryan Thorngren, Zhen Bi, Umang Mehta, Ashvin Vishwanath, Charles Kane, Ya-Hui Zhang, Subir Sachdev, John McGreevy. DCL and YZY are supported by the National Science Foundation (NSF) Grant DMR-2238360 ``Theoretical and Numerical Investigation of Symmetric Mass Generation''. JW is supported by the Center for Mathematical Sciences and Applications at Harvard University and
NSF Grant DMS-1607871 ``Analysis, Geometry and Mathematical Physics.''
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
\begin{figure*}
\setlength{\fboxsep}{0pt}
\centering
~\hfill \fbox{\includegraphics[height=4cm]{arrangement1-croppedSmall.JPG}}
\fbox{\includegraphics[height=4cm]{arrangement2-croppedSmall.JPG}}
\fbox{\includegraphics[height=4cm]{arrangement3-croppedSmall.JPG}}\hfill
\fbox{\includegraphics[height=4cm]{placingRice1Small.JPG}}\hfill~
\caption{Left: different ways of organizing a set of grocery objects on shelves
according to varying user preferences. Right: our approach enables a service
robot to tidy up objects by predicting and following such subjective
preferences. We predict pairwise preferences between objects with respect
to placing them on the same shelf. We then assign objects to different
shelves by maximally satisfying these preferences.}
\label{fig:motivation}
\end{figure*}
One of the key goals of robotics is to develop autonomous service
robots that assist humans in their everyday life. One envisions smart
robots that can undertake a variety of tasks including tidying up,
cleaning, and attending to the needs of disabled people. For
performing such tasks effectively, each user should teach her robot
\emph{how} she likes those tasks to be performed. However, learning
user preferences is an intricate problem. In a home scenario, for
example, each user has a preferred way of sorting and storing
groceries and kitchenware items in different shelves or
containers. Many of our preferences stem from factors such as personal
taste, cultural background, or common sense, which are hard to
formulate or model a priori. At the same time, it is highly
impractical for the robot to constantly query users about their
preferences.
In this work, we provide a novel solution to the problem of learning
user preferences for arranging objects in tidy-up tasks. Our method is
based on the framework of collaborative filtering, which is a popular
paradigm from the data-mining community. Collaborative filtering is
generally used for learning user preferences in a wide variety of
practical applications including suggesting movies on Netflix or
products on Amazon. Our method predicts user preferences of pairwise
object arrangements based on partially-known preferences, and then computes the
best subdivision of objects in shelves or boxes. It is able
to encode multiple user preferences for each object and it does not
require that all user preferences are specified for all object-pairs.
Our approach is even able to make predictions when novel objects,
unknown to previous users, are presented to the robot. For this, we
combine collaborative filtering with a mixture of experts that compute
similarities between objects by using object hierarchies. These
hierarchies consist of product categories downloaded from online
shops, supermarkets, etc. Finally, we organize objects in different
containers by finding object groups that maximally satisfy the
predicted pairwise constraints. For this, we solve a minimum $k$-cut
problem by efficiently applying self-tuning spectral clustering. Our
prediction model is easy to update and simultaneously offers the
possibility for lifelong learning and improvement.
To discover patterns in user preferences, we first bootstrap our learning by
collecting many user preferences, e.g., through crowdsourcing surveys. Using
this data, we learn a model for object-pair preferences for a certain tidy-up
task. Given partial knowledge of a new user's preferences (e.g., by querying the
user or observing how the user has arranged some objects in the
environment), the robot can then use this model to predict unknown
object-pair preferences of the new user, and sort objects accordingly.
To summarize, we make the following contributions:
\begin{itemize}
\item We model the problem of organizing objects in different
containers using the framework of collaborative filtering for
predicting personalized preferences;
\item We present an approach by which a service robot can easily learn
the preferences of a new user using observations from the
environment and a model of preferences learned from several previous
users;
\item We present a novel method to complement standard collaborative
filtering techniques by leveraging information from the Web in cases
where there are not enough ratings to learn a model;
\item We present an extensive experimental evaluation using
crowdsourcing data that demonstrates that our approach is suitable
for lifelong learning of user preferences with respect to organizing
objects.
\end{itemize}
Our evaluation covers two relevant tidy-up scenarios, arranging toys
in different boxes and grocery items on shelves, as well as a
real-robot experiment. For training, we collected preferences from
over 1,200 users through different surveys.
This paper incorporates the approach and initial results from our
previous conference publication~\cite{abdo15icra}, and extends our
work in the following ways: \emph{i}) we present a more thorough
review of related work, \emph{ii}) we present a new extension of our
approach for inferring the preferences of new users in an efficient
manner, and \emph{iii}) we conduct a more extensive experimental
evaluation of all aspects of our method, presenting new results and
insights.
\section{Related Work}
\label{sec:related}
Equipping service robots with the knowledge and skills needed to attend to
complex chores in domestic environments has been the aim of researchers for
years. Indeed, recent advances in perception, manipulation, planning, and
control have enabled robots to perform a variety of chores that range from
cleaning and tidying up to folding
laundry~\cite{saxena2008robotic,hess12iros,miller2012geometric,doumanoglou2014autonomous}.
However, as highlighted by a number of researchers, service robots should also
be able to attend to such tasks in a manner that corresponds to the personal
preferences of end users~\cite{TakayamaChores13,Forlizzi2006,
Dautenhahn2005IROS,PantofaruHRI12,Ray2008, Smarr14IJSR, Cha2015HRI}. For
example, the results of \citeauthor{PantofaruHRI12} show that people exhibit
strong feelings with respect to robots organizing personal items, suggesting
the need for the robot to ask humans to make decisions about where to store
them~\cite{PantofaruHRI12}. In this work, we present a novel approach by which
robots can discover patterns in organizing objects from a corpus of user
preferences in order to achieve preferred object arrangements when tidying up
for a specific user. This allows a robot to predict the preferred location
(e.g., a specific shelf) to store an object by observing how the user has
previously arranged other objects in the same environment. Several researchers
have leveraged the fact that our environments are rich with cues that can
assist robots in various tasks that require reasoning about objects and their
locations. For example, different works have addressed object classification
or predicting the locations of objects using typical 3D structures in indoor
environments or object-object relations such as co-occurrences in a
scene~\cite{joho11ras, aydemir2012exploiting,lorbach_object_search_2014,
icra14ensmln, Kunze14}. However, our work is concerned with learning
pairwise object preferences to compute preferred arrangements when tidying
up. In the remainder of this section, we discuss prior work in the
literature that is most relevant to the problem we address and the
techniques we present.
\paragraph{Learning Object Arrangements and Placements} Recently,
\citeauthor{schuster2010perceiving} presented an approach for distinguishing
clutter from clutter-free areas in domestic environments so that a robot can
reason about suitable surfaces for placing
objects~\cite{schuster2010perceiving}. Related to that, the work of
\citeauthor{jiang2012learning} targets learning physically stable and
semantically preferred poses for placing objects given the 3D geometry of the
scene~\cite{jiang2012learning}. \citeauthor{joho12rss} developed a novel
hierarchical nonparametric Bayesian model for learning scenes consisting of
different object constellations~\cite{joho12rss}. Their method can be used to
sample missing objects and their poses to complete partial scenes based on
previously seen constellations. Other approaches have targeted synthesising
artificial 3D object arrangements that respect constraints like physical
stability or that are semantically plausible~\cite{xu2002constraint,
Fisher:2012:ESO:2366145.2366154}. We view such works as complementary to
ours, as we address the problem of learning preferred groupings of objects
in different containers (e.g., shelves) for the purpose of tidying up.
After predicting the preferred container for a specific object, our approach
assumes that the robot is equipped with a suitable technique to compute a
valid placement or pose of the object in that location. Moreover, as we
explicitly consider sorting objects when tidying up, we do not reason about
object affordances associated with various human poses and activities in the
scene (e.g., cooking, working at a desk, etc) when computing arrangements,
as in the work of \citeauthor{jiang2012humancontext} and
\citeauthor{savva2014scenegrok}~\cite{jiang2012humancontext,
savva2014scenegrok}.
Related to our work, previous approaches have addressed learning
organizational patterns from surveys conducted with different users.
\citeauthor{Schuster12} presented an approach for predicting the location
for storing different objects (e.g., cupboard, drawer, fridge, etc) based on
other objects observed in the environment~\cite{Schuster12}. They consider
different features that capture object-related properties (e.g., the purpose of
an object or its semantic similarity to other objects) and train classifiers
that predict the location at which an object should be stored. Similarly,
\citeauthor{Cha2015HRI} explored using different features describing both
objects and users to train classifiers for predicting object locations in
user homes~\cite{Cha2015HRI}. Similar to \citeauthor{Schuster12}, we
also make use of a similarity measure based on hierarchies
mined from the Web to make predictions for
objects for which we have no training data. However, in contrast to these
works, our approach learns latent organizational patterns across different
users in a collaborative manner and without the need for designing features
that describe objects or users. Recently, \citeauthor{toris2015unsupervised}
presented an approach to learn placing locations of objects based on
crowdsourcing data from many users~\cite{toris2015unsupervised}. Their
approach allows for learning multiple hypotheses for placing the same object,
and for reasoning about the most likely frame of reference when learning the
target poses. They consider different pick-and-place tasks such as setting a
table or putting away dirty dishes, where the aim is to infer the final object
configuration at the goal. Our approach is also able to capture multiple
modes with respect to the preferred location for placing a certain object. In
contrast to \citeauthor{toris2015unsupervised}, we explicitly target learning
patterns in user preferences with respect to sorting objects in different
containers. Moreover, in contrast to the above works, our method allows the
robot to adapt to the available number of containers in a certain environment
to sort the objects while satisfying the user's preferences as much as
possible. Note that in this work, we assume the robot is equipped with a map
of the environment where relevant containers are already identified. Previous
work has focused on constructing such semantic maps that are useful for robots
when planning to solve complex household
tasks~\cite{Vasudevan2007,zender2008conceptual,iros12semantic_mapping}.
\paragraph{Service Robots Leveraging the Web} Recently, several researchers
have leveraged the Web as a useful source of information for assisting
service robots in different tasks~\cite{tenorth11www, kehoe2013ICRA}.
To cope with objects that are not in the robot's database, our method combines
collaborative filtering with a mixture of experts approach based on object
hierarchies we mine from online stores. This allows us to compute the semantic
similarity of a new object to previously known objects to compensate for missing
user ratings. The work by~\citeauthor{Schuster12} has also utilized such
similarity measures as features when training classifiers for predicting
locations for storing objects~\cite{Schuster12}. \citeauthor{irosws11germandeli}
also leverage information from online stores but in the context of object
detection~\cite{irosws11germandeli}. \citeauthor{Kaiser2014} recently presented
an approach for mining texts obtained from the Web to extract common sense
knowledge and object locations for planning tasks in domestic
environments~\cite{Kaiser2014}. Moreover, \citeauthor{icra14ensmln} presented an
ensemble approach where different perception techniques are combined in the
context of detecting everyday objects~\cite{icra14ensmln}.
\paragraph{Collaborative Filtering}
We predict user preferences for organizing objects based on the
framework of collaborative filtering, a successful paradigm from the data mining
community for making personalized user
recommendations of
products~\cite{CannyCF2002,bennett2007netflix,koren2008factorization,koren2010factor,sarwar2001item}.
Such techniques are known for their scalability and suitability for life-long
learning settings, where the quality of the predictions made by the
recommender system improves with more users providing their ratings. Outside
the realm of customers and products, factorization-based collaborative filtering
has recently been successfully applied to other domains including
action-recognition in videos \cite{Matikainen2012ModelRecom} and
predicting drug-target interactions~\cite{Temerinac-Ott2015}. Recently,
\citeauthor{Matikainen2013Bandits} combined a recommender system
with a multi-armed bandit formulation for suggesting good floor
coverage
strategies to a vacuum-cleaning robot by modeling different room layouts
as users~\cite{Matikainen2013Bandits}. To the best of our knowledge, we
believe we are the first work to use collaborative filtering for
predicting personalized user preferences in the context of service robotics.
\paragraph{Crowdsourcing for Robotics}
To learn different user preferences, we collect data from many non-expert users
using a crowdsourcing platform. Prior work has also leveraged crowdsourcing for
data labeling or as an efficient platform for transferring human knowledge to
robots, e.g.,~\cite{DengKrauseFei-Fei_CVPR2013,kent2015icra}. For example,
\citeauthor{Sorokin2010} utilized crowdsourcing to teach robots how to grasp
new objects~\cite{Sorokin2010}. Moreover, several researchers have used
crowdsourcing to facilitate learning manipulation tasks from large numbers of
human
demonstrations~\cite{chungaccelerating,toris2014robot,toris2015unsupervised,ratner2015web}.
In the context of learning user preferences, \citeauthor{jain2015icra}
recently presented a new crowdsourcing platform where non-experts can label
segments in robot trajectories as desirable or not~\cite{jain2015icra}. This
is then used to learn a cost function for planning preferred robot
trajectories in different indoor environments.
\section{Collaborative Filtering for Predicting \\
Pairwise Object Preferences}
\label{sec:CF}
Our goal is to enable a service robot to reason about the preferred way to sort
a set of objects into containers when tidying up in the environment of a
specific user. To achieve this, we aim at predicting the preferences of the user
with respect to grouping different objects together. As the types of objects
(e.g., grocery items) and number of containers (e.g., shelves) typically vary
across environments, we aim to learn user preferences for object-object
combinations, rather than directly learning an association between an object and
a specific container. The problem of predicting an object-object preference for
a user closely resembles that of suggesting products to customers based on their
tastes. This problem is widely addressed by employing recommender systems,
commonly used by websites and online stores such as Amazon and Netflix. The
key idea there is to learn to make recommendations based on the purchasing
histories of different users collaboratively.
In the same spirit of reasoning about products and users, our method relates
pairs of objects to users. We predict a user preference, or \textit{rating}, for
an object-pair based on two sources of information: \emph{i}) known preferences
of the user, e.g., how the user has previously organized other objects, and
\emph{ii}) how other users have organized these objects in their environments.
\subsection{Problem Formulation}
\label{sec:probForm}
More formally, let $\mathcal{O} = \{o_1, o_2, \dots, o_O\}$ be a set of objects,
each belonging to a known class, e.g., book, coffee, stapler, etc.
Accordingly, we define $\mathcal{P}=\{p_1, p_2, \dots, p_M\}$ as the set of all
pairs of objects from $\mathcal{O}$. We assume to have a finite number of
\emph{containers} $\mathcal{C}=\{c_1, c_2, \dots, c_C\}$, which the robot
can use to organize the objects, e.g., shelves, drawers, boxes, etc. We
model each container as a set which could be $\oldemptyset$ or could
contain a subset of $\mathcal{O}$. Given a set of users $\mathcal{U} = \{u_1,
\dots, u_N\}$, we assign a rating $r_{ij}$ to a pair $p_i = \{o_l, o_k\}$
to denote the preference of user $u_j$ for placing $o_l$ and $o_k$ in the
same container. Each rating takes a value between 0 and 1, where 0 means
that the user prefers to place the objects of the corresponding pair into
separate containers, and 1 means that the user prefers placing them
together. For convenience, we use $r(o_l, o_k)$ to denote the rating for
the pair consisting of objects $o_l$ and $o_k$ when the user is clear from
the context. We can now construct a ratings matrix $\mathbf{R}$ of size $M
\times N$, where the rows correspond to the elements in $\mathcal{P}$ and the
columns to the users, see \figref{fig:ratingsMatrix}. We use $R$ to denote
the number of known ratings in $\mathbf{R}$. Note that typically, $R \ll
MN$, i.e., $\mathbf{R}$ is missing most of its entries. This is due to the
fact that each user typically ``rates'' only a small subset of
object-pairs. In this work, we denote the set of indices of object-pairs
that have been rated by user $u_j$ by $\mathcal{I}_j \subseteq \{1, \dots,
M\}$. Analogously, $\mathcal{J}_i \subseteq \{1, \dots, N\}$ is the set of
indices of users who have rated object-pair $p_i$.
Given a set of objects $\mathcal{O}'\subseteq\mathcal{O}$ that the robot has to
sort for a specific user $u_j$, and the set of containers $\mathcal{C}$
available for the robot to complete this task, our goal is to: \emph{i})
predict the unknown preference $\hat{r}_{ij}$ of the user for each of the
object-pairs $\mathcal{P}'$ over $\mathcal{O}'$ and, accordingly, \emph{ii}) assign
each object to a specific container such that the user's preferences are
maximally satisfied.
\begin{figure}[t]
\centering
\includegraphics[height=4.5cm]{ratingsMatrix.pdf}
\caption{The ratings matrix $\mathbf{R}$. Each entry $r_{ij}$
corresponds to the rating of a user $u_j$ for an object-pair
$p_i=\{o_k, o_l\}$, a value between 0 and 1 denoting whether the two objects
should be placed in the same container or not. Our goal is to predict the
missing ratings denoted by * and compute a partitioning of the objects in
different containers that satisfies the user preferences.}
\label{fig:ratingsMatrix}
\end{figure}
\subsection{Collaborative Learning of User Preferences}
\label{sec:cfLearning}
We aim to discover latent patterns in the ratings matrix $\mathbf{R}$ that
enable us to make predictions about the preferences of users. For this, we borrow
from factorization-based collaborative
filtering~\cite{koren2008factorization,koren2010factor}.
First, we decompose $\mathbf{R}$ into a bias matrix $\mathbf{B}$ and a residual
ratings matrix $\overline{\mathbf{R}}$:
\begin{equation}
\label{eq:rDecomp}
\mathbf{R} = \mathbf{B} + \overline{\mathbf{R}}.
\end{equation}
Each entry $b_{ij}$ in $\mathbf{B}$ is formulated as follows:
\begin{equation}
b_{ij} = \mu + b_i + b_j,
\end{equation}
where $\mu$ is a global bias term, $b_i$ is the bias of the pair
$p_i$, and $b_j$ is the bias of user $u_j$. We compute $\mu$ as the
mean rating over all users and object-pairs in $\mathbf{R}$, i.e.,
\begin{equation}
\mu = \frac{1}{R} \sum_{i=1}^M \sum_{j\in \mathcal{J}_i} r_{ij}.
\end{equation}
The bias $b_j$ describes how high or low a certain user $u_j$ tends to
rate object-pairs compared to the average user. Similarly,
$b_i$ captures the tendency of a pair $p_i$ to receive high or low
ratings. For example, the pair $\{$\emph{salt}, \emph{pepper}$\}$
tends to receive generally high ratings compared to the pair $\{$\emph{candy},
\emph{vinegar}$\}$.
After removing the bias, the residual ratings matrix
$\overline{\mathbf{R}}$ captures the fine, subjective user preferences that we
aim to learn by factorizing the matrix to uncover latent patterns. Due to the
large amount of missing ratings in $\overline{\mathbf{R}}$, it is infeasible to
apply classical factorization techniques such as singular value decomposition.
Instead, we learn a data-driven factorization based only on the \emph{known
entries} in $\overline{\mathbf{R}}$. This approach has been shown to lead to
better results in matrix completion or factorization problems compared to
imputation of the missing values~\cite{CannyCF2002, koren2008factorization}.
We express $\overline{\mathbf{R}}$ as the product of an object-pair factors
matrix $\mathbf{S}^T$, and a user factors matrix $\mathbf{T}$ of sizes $M
\times K$ and $K \times N$, respectively. Each column $\mathbf{s}_i$ of
$\mathbf{S}$ is a $K$-dimensional factors vector corresponding to an
object-pair $p_i$. Similarly, each column $\mathbf{t}_j$ in $\mathbf{T}$ is a
$K$-dimensional factors vector associated with a user $u_j$. We compute the
residual rating $\overline{r}_{ij}$ as the dot product of the factor vectors
for object-pair $p_i$ and user $u_j$, i.e.,
\begin{equation}
\label{eq:ratingDecomp}
\overline{r}_{ij} \,=\, \mathbf{s}_i^T \cdot \mathbf{t}_j.
\end{equation}
The vectors $\mathbf{s}$ and $\mathbf{t}$ are low-dimensional projections of the
pairs and users, respectively, capturing latent characteristics of both. Pairs
or users that are close to each other in that space are similar with respect to
some property. For example, some users could prefer to group objects together
based on their shape, whereas others do so based on their function.
Accordingly, our prediction $\hat{r}_{ij}$ for the rating of an object-pair
$p_i$ by a user $u_j$ is expressed as
\begin{equation}
\label{eq:ratingDetailed}
\begin{aligned}
\hat{r}_{ij} &= b_{ij} + \overline{r}_{ij}\\
&= \mu + b_i + b_j + \mathbf{s}_i^T \cdot \mathbf{t}_j.
\end{aligned}
\end{equation}
We learn the biases and factor vectors from all available ratings in
$\mathbf{R}$ by formulating an optimization problem. The goal is to
minimize the difference between the observed ratings $r_{ij}$ made by
users and the predictions $\hat{r}_{ij}$ of the system over all known
ratings. Let the error associated with rating $r_{ij}$ be
\begin{equation}
\label{eq:predError}
e_{ij} = r_{ij} - (\mu + b_i + b_j + \mathbf{s}_i^T \cdot \mathbf{t}_j).
\end{equation}
We jointly learn the biases and factors that minimize the error over all
known ratings, i.e.,
\begin{equation}
\begin{aligned}
b_*, \mathbf{S}, \mathbf{T} &= \argmin_{b_*,\mathbf{S},\mathbf{T}} \sum_{i=1}^M
\sum_{j\in \mathcal{J}_i} (e_{ij})^2 +\\
&\frac{\lambda}{2}(b_i^2 + b_j^2 + \|\mathbf{s}_i\|^2 +
\|\mathbf{t}_j\|^2),
\end{aligned}
\label{eq:optimization}
\end{equation}
where $b_*$ denotes all object-pair and user biases and $\lambda$ is a
regularizer.
To do so, we use L-BFGS optimization with a random initialization for all
variables~\cite{nocedal1980lbfgs}. At every step of the optimization, we update
the value of each variable based on the error gradient with respect to that
variable, which we derive from~\eqref{eq:optimization}.
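The following sketch shows one possible implementation of this learning step (an illustrative simplification; function names and the default values of $K$ and $\lambda$ are ours, and SciPy's L-BFGS-B routine is used as one off-the-shelf choice for the L-BFGS optimization). The observed entries of $\mathbf{R}$ are passed as a list of (pair index, user index, rating) triples.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_factorization(ratings, M, N, K=3, lam=0.1, seed=0):
    """ratings: list of observed entries (i, j, r_ij) of the matrix R.
    Learns the global bias, pair/user biases and K-dim factor vectors."""
    mu = np.mean([r for _, _, r in ratings])              # global bias
    rng = np.random.RandomState(seed)
    x0 = 0.01 * rng.randn(M + N + K * (M + N))            # [b_pair, b_user, S, T]

    def unpack(x):
        b_p, b_u = x[:M], x[M:M + N]
        S = x[M + N:M + N + K * M].reshape(M, K)           # rows are s_i
        T = x[M + N + K * M:].reshape(N, K)                # rows are t_j
        return b_p, b_u, S, T

    def loss_grad(x):
        b_p, b_u, S, T = unpack(x)
        g_bp, g_bu = np.zeros(M), np.zeros(N)
        g_S, g_T = np.zeros((M, K)), np.zeros((N, K))
        loss = 0.0
        for i, j, r in ratings:
            e = r - (mu + b_p[i] + b_u[j] + S[i].dot(T[j]))
            loss += e ** 2 + 0.5 * lam * (b_p[i] ** 2 + b_u[j] ** 2
                                          + S[i].dot(S[i]) + T[j].dot(T[j]))
            g_bp[i] += -2 * e + lam * b_p[i]               # error gradients
            g_bu[j] += -2 * e + lam * b_u[j]
            g_S[i] += -2 * e * T[j] + lam * S[i]
            g_T[j] += -2 * e * S[i] + lam * T[j]
        return loss, np.concatenate([g_bp, g_bu, g_S.ravel(), g_T.ravel()])

    res = minimize(loss_grad, x0, jac=True, method='L-BFGS-B')
    return (mu,) + unpack(res.x)

def predict(mu, b_p, b_u, S, T, i, j):
    """Predicted rating for object-pair i and user j."""
    return mu + b_p[i] + b_u[j] + S[i].dot(T[j])
\end{verbatim}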
\subsection{Probing and Predicting for New Users}
\label{sec:cfNewUsers}
After learning the biases and factor vectors for all users and object-pairs as
in \secref{sec:cfLearning}, we can use \eqref{eq:ratingDetailed} to predict the
requested rating $\hat{r}_{ij}$ of a user $u_j$ for an object-pair $p_i$ that
she has not rated before. However, this implies that we have already learned the
bias $b_j$ and factor vector $\mathbf{t}_j$ associated with that user. In other
words, at least one entry in the $j$-th column of $\mathbf{R}$ should be known.
The set of known preferences for a certain user, used for learning her model,
are sometimes referred to as \emph{probes} in the recommender system
literature. In this work, we use \emph{probing} to refer to the process of
eliciting knowledge about a new user.
\subsubsection{Probing}
\label{sec:probing}
In the context of a tidy-up service robot, we envision two strategies to do
so. In the first probing approach, the robot infers some preferences of the
user based on how she has previously sorted objects in the containers
$\mathcal{C}$ in the environment. By detecting the objects it encounters there,
the robot can infer the probe rating for a certain object-pair based on
whether the two objects are in the same container or not:
\begin{equation}
r_{ij} = \begin{cases}
1,& \text{if } o_l, o_k \in c_m\\
0,& \text{if } o_l \in c_m, o_k\in c_n, m\neq n.
\end{cases}
\label{eqn:probing}
\end{equation}
We do this for all object-pairs that the robot observes in the environment and
fill the corresponding entries in the user's column with the inferred ratings,
see~\figref{fig:probingExample}.
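A minimal sketch of this inference step (with hypothetical container and object names) is:
\begin{verbatim}
def probe_from_arrangement(containers):
    """containers: dict mapping a container id to the set of objects
    observed in it.  Returns probe ratings for all observed pairs:
    1 if the two objects share a container, 0 otherwise."""
    placed = [(c, o) for c, objs in containers.items() for o in objs]
    ratings = {}
    for a in range(len(placed)):
        for b in range(a + 1, len(placed)):
            (c1, o1), (c2, o2) = placed[a], placed[b]
            if o1 != o2:
                ratings[frozenset((o1, o2))] = 1.0 if c1 == c2 else 0.0
    return ratings

# e.g., rice and pasta observed on one shelf, coffee on another
print(probe_from_arrangement({'shelf_1': {'rice', 'pasta'},
                              'shelf_2': {'coffee'}}))
\end{verbatim}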
In the second probing approach, we rely on actively querying the user
about her preferences for a set of object-pairs. For this, we use simple,
out-of-the-box user interface solutions such as a text interface where the
user can provide a rating. Let $P$ be the
maximum number of probe ratings for which the robot queries the user.
One naive approach is to acquire probes by randomly querying the user about $P$
object-pairs. However, we aim at making accurate predictions with as few probes
as possible. Thus, we propose an efficient strategy based on insights into the
factorization of \secref{sec:cfLearning}. The columns of the matrix $\mathbf{S}$
can be seen as a low dimensional projection of the rating matrix capturing the
similarities between object-pairs; object-pairs that are close in that space
tend to be treated similarly by users. We therefore propose to cluster the
columns of $\mathbf{S}$ in $P$ groups, randomly select one column as a
representative from each cluster, and query the user about the associated
object-pair. For clustering, we use $k$-means with $P$ clusters. In this way,
the queries to the users are selected to capture the complete spectrum of
preferences.
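This selection step can be sketched as follows (scikit-learn's $k$-means is used as one off-the-shelf clustering routine; the representative of each cluster is chosen at random, as described above):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def select_probe_pairs(S, P, seed=0):
    """S: array of shape (M, K) whose rows are the object-pair factor
    vectors s_i.  Clusters the pairs into P groups and returns one
    randomly chosen representative pair index per cluster."""
    rng = np.random.RandomState(seed)
    labels = KMeans(n_clusters=P, n_init=10, random_state=seed).fit_predict(S)
    return [int(rng.choice(np.flatnonzero(labels == c))) for c in range(P)]
\end{verbatim}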
Note that the nature of a collaborative filtering system allows us to
continuously add probe ratings for a user in the ratings matrix, either
through observations of how objects are organized in the environment, or by
active querying as needed. This results in a life-long and flexible approach
where the robot can continuously update its knowledge about the user.
\begin{figure}
\centering
\includegraphics[]{probing.pdf}
\caption{A simple illustration of the probing process by which the robot can
infer some preferences for a new user. We set a rating of 0 for a pair of
objects that the robot observes to be in different containers, and a rating
of 1 for those in the same container. Using these ratings, we can learn a
model of the user to predict her preferences for other object-pairs.}
\label{fig:probingExample}
\end{figure}
\begin{figure*}[t]
\centering
~\hfill\includegraphics[width=0.4\textwidth]{expert2.pdf}\hfill
\includegraphics[width=0.38706\textwidth]{expert3.pdf}\hfill~
\caption{Two examples of expert hierarchies used to compute the semantic
similarities between object classes. For example, expert~$\mathcal{E}_1$ on
the left assigns a similarity $\rho$ of 0.4 to the pair $\mathit{\{canned\
corn, canned\ tuna\}}$, whereas $\mathcal{E}_2$ on the right assigns a
similarity of 0.33 to the same pair, see~\eqref{eq:wup}.}
\label{fig:groceryExperts}
\end{figure*}
\subsubsection{Inferring a New User's Preferences}
\label{sec:onlineLearning}
After acquiring probes for the new user, we can now append her column to the
ratings matrix and learn her biases and factors vector along with those of all
object-pairs and other users in the system as in \eqref{eq:optimization}. In
practice, we can avoid re-learning the model for all users and object-pairs
known in the system. Note that the computation of the factorization will
require more time as the number of known ratings in $\mathbf{R}$ increases or
for higher values of $K$. Here, we propose a more efficient technique suitable
for inferring a new user's preferences given a previously learned
factorization. After learning with all object-pairs and users in the database,
we assume that all object-pair biases $b_i$ and factor vectors
$\mathbf{S}$ are fixed, and can be used to model the preferences of new users.
We can then formulate a smaller problem to learn the bias $b_j$ and
factors vector $\mathbf{t}_j$ of the new user $u_j$ based on the probe ratings
we have for this user, i.e.,
\begin{equation}
\begin{aligned}
b_j, \mathbf{t}_j &= \argmin_{b_j,\mathbf{t}_j} \sum_{i\in \mathcal{I}_j}
(e_{ij})^2 + \frac{\lambda}{2}(b_j^2 + \|\mathbf{t}_j\|^2),\\
&= \argmin_{b_j,\mathbf{t}_j} \sum_{i\in \mathcal{I}_j} (r_{ij} - (\mu + b_i +
b_j + \mathbf{s}_i^T \cdot \mathbf{t}_j))^2 +\\
&\ \ \ \ \ \frac{\lambda}{2}(b_j^2 + \|\mathbf{t}_j\|^2).
\end{aligned}
\label{eq:newUserLearning}
\end{equation}
Note that, in general, the inclusion of the ratings of a new user in
$\mathbf{R}$ will affect the biases and factor vectors of the object-pairs.
Whereas \eqref{eq:optimization} represents the batch learning problem to update
the model for all users and object-pairs, \eqref{eq:newUserLearning} assumes
that the object-pair biases and factor vectors have already been learned from a
sufficiently-large set of users that is representative of the new user. This can
be useful in a lifelong learning scenario where the robot can efficiently make
predictions for a new user when solving a tidy-up task. With more knowledge
accumulated about the new users, we can update the factorization model and
biases for all object-pairs and users in a batch manner.
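A sketch of this reduced optimization, reusing the conventions of the earlier factorization sketch (the pair biases and factor vectors are kept fixed, and only the new user's parameters are learned from her probes), is:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_new_user(probes, mu, b_p, S, lam=0.1, seed=0):
    """probes: list of (i, r_ij) probe ratings of the new user u_j.
    The pair biases b_p and factor matrix S (rows s_i) stay fixed;
    only the user's bias b_j and factor vector t_j are learned."""
    K = S.shape[1]
    rng = np.random.RandomState(seed)
    x0 = 0.01 * rng.randn(1 + K)                          # [b_j, t_j]

    def loss_grad(x):
        b_j, t_j = x[0], x[1:]
        loss, g_b, g_t = 0.0, 0.0, np.zeros(K)
        for i, r in probes:
            e = r - (mu + b_p[i] + b_j + S[i].dot(t_j))
            loss += e ** 2 + 0.5 * lam * (b_j ** 2 + t_j.dot(t_j))
            g_b += -2 * e + lam * b_j
            g_t += -2 * e * S[i] + lam * t_j
        return loss, np.concatenate([[g_b], g_t])

    res = minimize(loss_grad, x0, jac=True, method='L-BFGS-B')
    return res.x[0], res.x[1:]
\end{verbatim}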
\section{Mixture of Experts for Predicting Preferences of Unknown Objects}
\label{sec:wup}
Thus far, we presented how our approach can make predictions for object-pairs
that are known to the robot. In this section, we introduce our approach for
computing predictions for an object-pair that no user has rated before, for
example when the robot is presented with an object $o_*$ that is not in
$\mathcal{O}$. There, we cannot rely on standard collaborative filtering since we
have not learned the similarity of the pair (through its factors vector) to
others in $\mathcal{P}$.
Our idea is to leverage the known ratings in $\mathbf{R}$ as well as
prior information about object similarities that we mine from the internet. The
latter consists of object hierarchies provided by popular websites, including
online supermarkets, stores, dictionaries, etc. \figref{fig:groceryExperts}
illustrates parts of two example experts for a grocery scenario. Formally,
rather than relying on one source of information, we adopt a
\emph{mixture of experts} approach where each expert $\mathcal{E}_i$
makes use of a mined hierarchy that provides information about
similarities between different objects. The idea is to query the
expert about the unknown object $o_*$ and retrieve all the
object-pair preferences related to it. The hierarchy is a graph or a
tree where a node is an object and an edge represents an ``is-a''
relation.
When the robot is presented with a set of objects to organize that includes a
new object $o_*$, we first ignore object-pairs involving $o_*$ and follow our
standard collaborative filtering approach to estimate preferences for all other
object-pairs, i.e., \eqref{eq:ratingDetailed}. To make predictions for
object-pairs related to the new object, we compute the similarity $\rho$ of
$o_*$ to other objects using the hierarchy graph of the expert. For that, we
employ the $\mathit{wup}$ similarity~\cite{wup94}, a measure between 0 and 1
used to find semantic similarities between concepts:
\begin{equation}
\label{eq:wup}
\rho_{lk} =
\frac{\mathit{depth}(\mathit{LCA}(o_l,o_k))}{0.5(\mathit{depth}(o_l)+\mathit{depth}(o_k))},
\end{equation}
where $\mathit{depth}$ is the depth of a node, and $\mathit{LCA}$ denotes the
lowest common ancestor. In the example of expert $\mathcal{E}_1$ in
\figref{fig:groceryExperts}-left, the lowest common ancestor of \emph{canned
corn} and \emph{canned tuna} is Canned Foods. Their $\mathit{wup}$ similarity
based on $\mathcal{E}_1$ and $\mathcal{E}_2$
(\figref{fig:groceryExperts}-right) is 0.4 and 0.33, respectively.
Note that in general, multiple paths could exist between two object
classes in the same expert hierarchy. For example, $\mathit{coffee}$ could
be listed under both Beverages and Breakfast Foods. In such cases, we take
the path ($\mathit{LCA}$) that results in the highest $\mathit{wup}$
measure for the queried pair.
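For concreteness, the following sketch computes \eqref{eq:wup} on a hierarchy stored as a child-to-parent mapping of a tree, taking the root to have depth~1; the data structure and names are assumptions made for this example, and handling multiple paths would additionally require keeping the highest-scoring lowest common ancestor as described above.
\begin{verbatim}
def ancestors(node, parent):
    """Path from a node up to the root (node first), following parent links."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def wup(o_l, o_k, parent):
    """Wu-Palmer similarity of two nodes in a tree (root has depth 1)."""
    anc_l, anc_k = ancestors(o_l, parent), ancestors(o_k, parent)
    depth = {n: len(anc_l) - d for d, n in enumerate(anc_l)}
    # Lowest common ancestor: first ancestor of o_k lying on o_l's root path.
    lca = next(n for n in anc_k if n in depth)
    return depth[lca] / (0.5 * (len(anc_l) + len(anc_k)))
\end{verbatim}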
Given this similarity measure, our idea is to use the known ratings of objects
similar to $o_*$ in order to predict the ratings related to it. For example, if
$\mathit{salt}$ is the new object, we can predict a rating for $\mathit{\{salt,
coffee\}}$ by using the rating of $\mathit{\{pepper, coffee\}}$ and the
similarity of $\mathit{salt}$ to $\mathit{pepper}$. We compute the expert
rating $\hat{r}_{\mathcal{E}_i}(o_*, o_k)$ for the pair $\{o_*,o_k\}$ as the
sum of a baseline rating, taken as the similarity $\rho_{*k}$, and a weighted
mean of the residual ratings for similar pairs, i.e.,
\begin{equation}
\label{eq:expertPred}
\hat{r}_{\mathcal{E}_i}(o_*, o_k) = \rho_{*k} + \eta_1\sum_{l \in \mathcal{L}}
\rho_{*l} \, \, (r(o_l, o_k) - \rho_{lk}),
\end{equation}
where $\eta_1 = 1/\sum_{l \in \mathcal{L}} \rho_{*l}$ is a normalizer, and
$\mathcal{L}$ is the set of object indices such that the user's rating of pair
$\{o_l,o_k\}$ is known. In other words, we rely on previous preferences of the
user ($r(o_l, o_k)$) combined with the similarity measure extracted from the
expert. The expert hierarchy captures one strategy for organizing the objects by
their similarity. If this perfectly matches the preferences of the user, then
the sum in \eqref{eq:expertPred} will be zero, and we simply take the expert's
baseline $\rho_{*k}$ when predicting the missing rating. Otherwise, we correct
the baseline based on how much the similarity measure deviates from the known
ratings of the user.
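A minimal sketch of \eqref{eq:expertPred} is given below, assuming the user's known ratings of pairs $\{o_l, o_k\}$ are stored in a dictionary keyed by $o_l$ and that the expert's $\mathit{wup}$ similarity is available as a function; both interfaces are assumptions made for this example.
\begin{verbatim}
def expert_rating(o_new, o_k, known_ratings, wup_sim):
    """Predict the rating of the pair {o_new, o_k} for a single expert.

    known_ratings : dict mapping objects o_l to the user's rating of {o_l, o_k}
    wup_sim       : similarity function derived from the expert's hierarchy
    """
    baseline = wup_sim(o_new, o_k)
    weights, weighted_residuals = [], []
    for o_l, r_lk in known_ratings.items():
        rho_l = wup_sim(o_new, o_l)
        weights.append(rho_l)
        weighted_residuals.append(rho_l * (r_lk - wup_sim(o_l, o_k)))
    if sum(weights) == 0:
        return None   # the expert cannot relate o_new to any rated object
    return baseline + sum(weighted_residuals) / sum(weights)
\end{verbatim}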
Accordingly, each of our experts predicts a rating using its associated
hierarchy. We compute a final prediction $\hat{r}_{\mathcal{E}_*}$ as a
combined estimate of all the expert ratings:
\begin{equation}
\label{eq:expertsMerged}
\hat{r}_{\mathcal{E}_*}(o_*, o_k) = \eta_2\sum_i w_i \, \hat{r}_{\mathcal{E}_i}
(o_*, o_k),
\end{equation}
where $w_i \in [0,1]$ represents the confidence of $\mathcal{E}_i$,
$\mathcal{E}_*$ denotes the mixture of experts, and $\eta_2 = 1/\sum_i
w_i$ is a normalizer. We compute the confidence of expert
$\mathcal{E}_i$ as $w_i = \exp(-e_i)$, where $e_i$ is the mean error in the
expert predictions when performing a leave-one-out cross-validation on the
known ratings of the user as in \eqref{eq:expertPred}. We set this score to
zero if it is below a threshold, which we empirically set to 0.6 in our work.
Moreover, we disregard the rating of an expert if $o_*$ cannot be found in its
hierarchy, or if all associated similarities $\rho_{*l}$ to any relevant
object $o_l$ are smaller than 0.4.
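The combination step in \eqref{eq:expertsMerged}, including the confidence weighting and abstention rules above, can be sketched as follows; the list-based interface is an assumption made for this example.
\begin{verbatim}
import math

def combine_experts(expert_preds, expert_errors, threshold=0.6):
    """Merge per-expert predictions into a single rating.

    expert_preds  : per-expert predictions (None if an expert abstains)
    expert_errors : mean leave-one-out errors e_i of each expert on the
                    user's known ratings
    """
    num, den = 0.0, 0.0
    for pred, e_i in zip(expert_preds, expert_errors):
        if pred is None:
            continue                    # expert could not relate the new object
        w_i = math.exp(-e_i)            # confidence of the expert
        if w_i < threshold:
            w_i = 0.0                   # discard unreliable experts
        num += w_i * pred
        den += w_i
    return num / den if den > 0 else None
\end{verbatim}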
Note that in general, both objects in a new pair could have been previously
encountered by the robot separately, but no rating is known for them
together. When retrieving similar pairs to the new object-pair, we consider
the similarities of both objects in the pair to other objects. For example,
we can predict the rating of $\{\mathit{sugar}, \mathit{coffee}\}$ by
considering the ratings of both $\{\mathit{flour}, \mathit{coffee}\}$ and
$\{\mathit{sugar}, \mathit{tea}\}$.
\section{Grouping Objects Based on Predicted Preferences}
\label{sec:spectralClustering}
\begin{figure}[t]
\centering
\vspace{2mm}
\includegraphics[width=0.9\columnwidth]{graphPartitioning.pdf}\\
\vspace{5mm}
\includegraphics[height=3.2cm]{partitioning.pdf}
\caption{Top: a graph depicting the relations between objects. Each node
corresponds to an object, and the weights (different edge thickness)
correspond to the pairwise ratings. We partition the graph into subgraphs
using spectral clustering. Bottom: we assign objects in the same subgraph to
the same container.}
\label{fig:graph}
\end{figure}
Now that it is possible to compute pairwise object preferences about
known or unknown objects, we aim to sort the objects into different
containers. In general, finding a partitioning of objects such that
all pairwise constraints are satisfied is a non-trivial task. For
example, the user can have a high preference for $\mathit{\{pasta,
rice\}}$ and for $\mathit{\{pasta, tomato\ sauce\}}$, but a low
preference for $\mathit{\{rice, tomato\ sauce\}}$. Therefore, we aim
at satisfying as many of the preference constraints as possible when
grouping the objects into $C'\leq C$ containers, where $C$ is the
total number of containers the robot can use.
First, we construct a weighted graph where the nodes represent the
objects, and each edge weight is the rating of the corresponding
object-pair, see~\figref{fig:graph}. We formulate the subdivision of objects
into $C'$ containers as a problem of partitioning of the graph into
$C'$ subgraphs such that the cut (the sum of the weights between the
subgraphs) over all pairs of subgraphs is minimized. This is called
the minimum $k$-cut problem~\cite{minKCut94}. Unfortunately, finding
the optimal partitioning of the graph into $C'\leq C$ subgraphs is
NP-hard. In practice, we efficiently solve this problem by using a
spectral clustering approach~\cite{spectral96}. The main idea is to
partition the graph based on the eigenvectors of its Laplacian matrix,
$L$, as this captures the underlying connectivity of the graph.
Let $V$ be the matrix whose columns are the first $C'$ eigenvectors of
$L$. We represent each object by a row of the matrix $V$, i.e., a
$C'$-dimensional point, and apply $k$-means clustering using $C'$
clusters to get a final partitioning of the objects. To estimate the best number
of clusters, we implement a self-tuning heuristic that sets the number of
clusters $C'$ based on the location of the biggest eigen-gap in the spectrum
of $L$, which typically indicates a reliable way to partition the
graph based on the similarities of its nodes. A good estimate for $C'$ is the
number of eigenvalues of $L$ that are approximately
zero~\cite{luxburgTutorial07,zelnik2004self}. If fewer containers are available
in the environment than this estimate, we use all $C$ containers for
partitioning the objects.
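To make the procedure concrete, the following sketch outlines one possible implementation using the unnormalized graph Laplacian and the eigen-gap estimate; matrix and function names are assumptions made for this example rather than a description of our exact implementation.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def group_objects(W, num_containers):
    """Partition objects into containers by spectral clustering on the graph
    whose edge weights W[i, j] are the (probed or predicted) pairwise ratings."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    # Self-tuning heuristic: place the cut at the largest eigen-gap among the
    # smallest eigenvalues, capped by the number of available containers.
    gaps = np.diff(vals[:num_containers + 1])
    C_prime = min(max(int(np.argmax(gaps)) + 1, 1), num_containers)
    V = vecs[:, :C_prime]                   # one C'-dimensional point per object
    labels = KMeans(n_clusters=C_prime, n_init=10, random_state=0).fit_predict(V)
    return labels                           # container index for each object
\end{verbatim}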
\begin{figure*}[t]
\setlength{\fboxsep}{0pt}
\centering
~\hfill\fbox{\includegraphics[height=4cm]{toysCroppedSmall.JPG}}\hfill
\includegraphics[height=4cm]{userFacScatterPlot.pdf}\hfill~
\caption{Left: we considered a scenario of organizing toys in boxes. Right: a
visualization of user tastes with respect to organizing toys, where we plot
the user factor vectors projected to the first two dimensions. For example,
the cluster $\mathcal{U}_1$ corresponds to users who grouped all building
blocks together in one box. Cluster $\mathcal{U}_2$ corresponds to users
who separated building blocks into standard bricks, car-shaped blocks,
and miscellaneous.}
\label{fig:toys}
\end{figure*}
\section{Experimental Evaluation}
\label{sec:experiments}
In this section, we present the experimental evaluation of our approach by
testing it on two tidy-up scenarios. We first demonstrate different aspects of
our approach for a simple scenario of organizing toys in boxes based on a small
dataset with 15 survey participants. In the second scenario, we address sorting
grocery items on shelves, and provide an extensive evaluation based on ratings
we collected from over 1,200 users using crowdsourcing.
We demonstrate that:
\emph{i}) users indeed have different preferences with respect to sorting
objects when tidying up, \emph{ii}) our approach can accurately predict personal
user preferences for organizing objects (\secref{sec:cfLearning}), \emph{iii})
we are able to efficiently and accurately learn a model for a new user's
preferences based on previous training users~(\secref{sec:cfNewUsers}),
\emph{iv}) our mixture of experts approach enables making reasonable
predictions for previously unknown objects (\secref{sec:wup}), \emph{v}) our
approach is suitable for lifelong learning of user preferences, improving
with more knowledge about different users, \emph{vi}) our object partitioning
approach based on spectral clustering can handle conflicting pairwise
preferences and is flexible with respect to the number of available containers
(\secref{sec:spectralClustering}), and \emph{vii}) our approach is applicable
on a real tidy-up robot scenario.
In the following experiments, we evaluate our approach using two different
methods for acquiring probe ratings, and compare our results to different
baselines. For that, we use the following notation:
\begin{itemize}
\item CF refers to our collaborative filtering approach for learning user
preferences, as described in \secref{sec:cfLearning}. When selecting probes to
learn for a new user, we do so by clustering the object-pairs based on their
learned factor vectors in order to query the user for a range of preferences,
see~\secref{sec:probing}.
\item CF-rand selects probes randomly when learning for a new user and then uses
our collaborative filtering approach to make predictions as in
\secref{sec:cfLearning}.
\item CF-rand$'$ selects probes randomly and learns the preferences of a new
user based on the object-pair biases and factor vectors learned from previous
users as in~\secref{sec:onlineLearning}.
\item Baseline-I uses our probing approach as in CF, and then predicts each
unknown pair rating as the mean rating over all users who rated it.
\item Baseline-II selects probes randomly and then predicts each
unknown pair rating as the mean rating over all users.
\end{itemize}
In all experiments, unless stated otherwise, we set the number of factor
dimensions to $K=3$ and the regularizer to~$\lambda=0.01$. As part of
our implementation of \eqref{eq:optimization} and \eqref{eq:newUserLearning},
we rely on the L-BFGS implementation by
\citeauthor{liblbfgs}~\cite{liblbfgs}. Note that in our work, we assume that
the robot is equipped with suitable techniques for recognizing the objects of
interest. In our experiments, we relied on fiducial markers attached to the
objects, and also implemented a classifier that recognizes grocery items by
matching visual features extracted from the scene to a database of product
images.
\subsection{Task 1: Organizing Toys}
\label{sec:expToy}
In this experiment, we asked 15 people to sort 26 different toys in
boxes, see~\figref{fig:toys}-left. This included some plush
toys, action figures, a ball, cars, a flashlight, books, as well as different
building blocks. Each participant could use \emph{up to} six boxes to
sort the toys. Overall, four people used four boxes, seven people used
five boxes, and four people used all six available boxes to sort
the toys.
We collected these results in a ratings matrix with 15 user columns
and 325 rows representing all pairs of toys. Each entry in a user's
column is based on whether the user placed the corresponding objects in
the same box or not, see~\secref{sec:probing}. For a finer quantitative evaluation, we
used these ratings to bootstrap a larger ratings matrix representing a noisy
version of the preferences with 750 users. For this, we randomly selected 78
ratings out of 325 from each column. We repeated this operation 50 times for
each user and constructed a ratings matrix of size 325$\times$750 where 76$\%$
of the ratings are missing.
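A small sketch of this bootstrapping step, with array names chosen for the example, is given below.
\begin{verbatim}
import numpy as np

def bootstrap_ratings(R_full, per_column=78, copies_per_user=50, seed=0):
    """Build a sparse, bootstrapped ratings matrix from a fully rated one by
    keeping a random subset of each user's ratings in every copied column.

    R_full : (num_pairs, num_users) fully observed ratings matrix.
    """
    rng = np.random.default_rng(seed)
    num_pairs, num_users = R_full.shape
    columns = []
    for j in range(num_users):
        for _ in range(copies_per_user):
            col = np.full(num_pairs, np.nan)        # unknown ratings
            keep = rng.choice(num_pairs, size=per_column, replace=False)
            col[keep] = R_full[keep, j]
            columns.append(col)
    return np.column_stack(columns)                 # (num_pairs, users * copies)
\end{verbatim}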
As a first test, we computed a factorization of the ratings matrix as described
in~\secref{sec:cfLearning}. \figref{fig:toys}-right shows the user factors
$\mathbf{T}$ projected to the first two dimensions, giving a visualization of
the user tastes. For example, the cluster of factors labeled $\mathcal{U}_1$
corresponds to users who grouped all building blocks together in one box.
\subsubsection{Predicting User Preferences for Pairs of Toys}
\label{sec:toysFactors}
We evaluated our approach for predicting the preferences of the 15
participants by using the partial ratings in the matrix we constructed
above. For each of the participants, we queried for the ratings of
$P$ probes. We hid all other ratings from the user's column and predicted them
using the ratings matrix and our approach. We rounded each prediction to the
nearest integer on the rating scale [0,1] and compared it to the ground truth
ratings. We evaluated our results by computing the precision, recall,
and F-score of our predictions with respect to the two rating classes:
\emph{no} ($r=0$), and \emph{yes} ($r=1$). We set the number of probes to $P$ =
50, 100, \dots, 300 known ratings, and repeated the experiment 20 times for each
value, selecting different probes in each run. The mean F-scores of both rating
classes, averaged over all runs, are shown in~\figref{fig:resultsToys}-top.
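For reference, the per-class evaluation described above can be sketched as follows; the function names and the use of scikit-learn are assumptions made for this example.
\begin{verbatim}
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def evaluate_predictions(pred, truth, classes=(0.0, 1.0)):
    """Round predictions to the nearest rating class and report per-class
    precision, recall, and F-score ('no' = 0, 'yes' = 1)."""
    pred = np.asarray(pred, dtype=float)
    classes = np.asarray(classes, dtype=float)
    # Snap each prediction to the closest class on the rating scale.
    rounded = classes[np.argmin(np.abs(pred[:, None] - classes[None, :]), axis=1)]
    return precision_recall_fscore_support(np.asarray(truth, dtype=float),
                                           rounded, labels=classes,
                                           zero_division=0)
\end{verbatim}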
\begin{figure}[t]
\centering
\includegraphics[width=0.8\columnwidth]{toysProbingFScore.pdf}\\[5mm]
\includegraphics[width=0.8\columnwidth]{toysProbingNumShelves.pdf}
\caption{Top: the mean F-score of the predictions of our
approach (CF) in the toys scenario for different numbers of known
probe ratings. We achieve an F-score of 0.98-0.99 on average over
all predicted ratings. CF-rand selects probes randomly and then
uses our approach for predicting. It is able to achieve an F-score of 0.98. On
the other hand, baselines I and II are unable to adapt to multimodal user
preferences. Bottom: the percentage of times our approach is able to predict
the correct arrangement of boxes for sorting different toys. We outperform
both baselines and improve with more probe ratings as expected, reaching a
success rate of 80$\%$. By selecting probes based on object-pair factor
vectors, we are able to achieve higher success rates with less probes compared
to CF-rand.}
\label{fig:resultsToys}
\end{figure}
Both collaborative filtering techniques outperform baselines I and II. On
average, CF and CF-rand maintain an F-score around 0.98 over all predicted pair
ratings. On the other hand, Baseline-I and Baseline-II achieve an F-score of
0.89 on average. By employing the same strategy for all users, these baselines
are only able to make good predictions for object-pairs that have a unimodal
rating distribution over all users, and cannot generalize to multiple tastes
for the same object-pair.
\subsubsection{Sorting Toys into Boxes}
We evaluated our approach for grouping toys into different boxes based on the
predicted ratings in the previous experiment. For each user, we partitioned the
objects into boxes based on the probed and predicted ratings as described
in~\secref{sec:spectralClustering}, and compared that
to the original arrangement. We computed the success rate, i.e.,
the percentage of cases where we achieve the same number and content of boxes,
see~\figref{fig:resultsToys}-bottom. Our approach has a success rate
of 80$\%$ at $P=300$. As expected, the performance improves with the
number of known probe ratings. On the other hand, even with $P=300$
known ratings, Baseline-I and Baseline-II achieve success rates of only
56$\%$ and 58$\%$, respectively. Whereas CF-rand achieves a success rate of 82$\%$
at $P=300$, it requires at least 200 known probe ratings on average to
achieve a success rate above 50$\%$. In contrast, CF achieves a success rate of
55$\%$ with only 100 known probe ratings. The probes chosen by our
approach capture a more useful range of object-pairs based on the distribution
of their factor vectors, which is valuable information for distinguishing a user's
taste.
\subsubsection{Predicting Preferences for New Objects}
\label{sec:toysRUs}
We evaluated the ability of our approach to make predictions for
object-pairs that no user has rated before~(\secref{sec:wup}). For
each of the 26 toys, we removed all ratings related to that toy from
the ratings of the 15 participants. We predicted those pairs using a
mixture of three experts and the known ratings for the remaining
toys. We evaluated the F-scores of our predictions as before by
averaging over both \emph{no} and \emph{yes} ratings. We based our
experts on the hierarchy of an online toy store (toysrus.com),
appended with three different hierarchies for sorting the building
blocks (by size, color, or function).
The expert hierarchies contained between 165 and 178 nodes. For one of the
toys (flashlight), our approach failed to make predictions since the
experts found no similarities to other toys in their hierarchies. For
all other toys, we achieved an average F-score of 0.91 and predicted
the correct box to place a new toy 83$\%$ of the time.
\begin{figure*}
\centering
\includegraphics[width=0.68\textwidth]{groceriesExampleDistributions.pdf}
\caption{Example distributions of the ratings given by users for different
object-pairs. Each user could answer with \emph{no} ($r=0$), \emph{maybe}
($r=0.5$), or \emph{yes} ($r=1$) to indicate the preference for placing the
two objects on the same shelf. The three possible rating classes, as well as
the noise inherent to crowdsourcing surveys, resulted in multi-modal taste
distributions. This highlights the difficulty of manually designing rules to
guide the robot when sorting objects into different containers.}
\label{fig:ratingExamplesGroceries}
\end{figure*}
\subsection{Task 2: Organizing Groceries}
\label{sec:groceries}
In this scenario, we considered the problem of organizing different grocery
items on shelves. We collected data from over 1,200 users using a crowdsourcing
service~\cite{crowdFlower}, where we considered a set of
22 common grocery item types, e.g., cans of beans, flour, tea, etc.
We asked each user about her preferences for a subset of pairs related
to these objects. For each pair, we asked the user if she would
place the two objects together on the same shelf. Each user could
answer with \emph{no}, \emph{maybe}, or \emph{yes}, which we
translated to ratings of 0, 0.5, and 1, respectively. We aggregated the answers
into a ratings matrix $\mathbf{R}$ of size 179$\times$1,284. Each of the user
columns contains between 28 and 36 known ratings, and each of the 179
object-pairs was rated between 81 and 526 times. Overall, only around $16\%$ of
the matrix is filled with ratings, with the ratings distributed as in
\tabref{tab:NoMaybeYes}. Due to the three possible ratings and the noise
inherent to crowdsourcing surveys, the ratings we obtained were largely
multi-modal, see~\figref{fig:ratingExamplesGroceries} for some examples.
\subsubsection{Predicting User Preferences for Pairs of Grocery Items}
\label{sec:GroceriesProbing}
\begin{table}
\centering
\normalsize
\caption{The distribution of ratings for the groceries scenario obtained through
crowdsourcing. Overall, we gathered 37,597 ratings about 179 object-pairs from
1,284 users. For each object-pair, users indicated whether they would place
the two objects on the same shelf or not.}
\label{tab:NoMaybeYes}
\begin{tabular}{c|c|c|c|}
\cline{2-4} & \multicolumn{3}{ c| }{Rating Classes} \\
\cline{2-4} & \emph{no} & \emph{maybe} & \emph{yes}\\
& ($r=0$) & ($r=0.5$)& ($r=1$)\\
\cline{1-4} \multicolumn{1}{ |c| } {Rating Percentage} & $47.9\%$ & $29.2\%$ &
$22.9\%$ \\
\cline{1-4}
\end{tabular}
\end{table}
We show that our approach is able to accurately predict user ratings of
object-pairs using the data we gathered from crowdsourcing. For this, we tested
our approach through 50 runs of cross-validation. In each run, we selected 50
user columns from $\mathbf{R}$ uniformly at random, and queried them for $P$ of
their known ratings. We hid the remaining ratings from the matrix and predicted
them using our approach. We rounded each prediction to the closest rating
(\emph{no}, \emph{maybe}, \emph{yes}) and evaluated our results by
computing the precision, recall, and F-score. Additionally, we
compared the predictions of our approach (CF) to CF-rand, Baseline-I,
and Baseline-II described above. The average F-scores
over all runs and rating classes are shown in~\figref{fig:resultsGroceries}-top
for $P=$ 4, 8, \dots, 20. Both collaborative filtering approaches outperform the
baseline approaches, reaching a mean F-score of 0.63 at $P=$ 20 known probe
ratings. Baseline-I and Baseline-II are only able to achieve an
F-score of 0.45 by using the same rating of a pair for all users. Note
that by employing our probing strategy, our technique is able to
achieve an F-score of 0.6 with only 8 known probe ratings. On the
other hand, CF-rand needs to query a user for the ratings of at least 12
object-pairs on average to achieve the same performance.
For a closer look at the performance with respect to the three rating classes,
we select the results at $P=12$ and show the per-class precision, recall, and
F-score values for both CF and Baseline-I in \figref{fig:groceriesP12}-top.
Note that the baseline achieves its highest recall value for the \emph{maybe}
class since it uses the mean rating received by a specific object-pair to
predict its rating for new users. On the other hand, we are able to achieve a
similar recall (0.63) for the \emph{maybe} class, as well as higher recall
values for the \emph{no} and \emph{yes} classes despite the large degree of
noise and the variance in people's preferences in the training data. Our
approach is able to achieve higher F-scores over all rating classes compared to
the baseline. Out of the three classes, we typically achieved better scores
for predicting the \emph{no} class compared to \emph{maybe} or \emph{yes}. This
is expected due to the distribution of the training ratings we gathered from
the crowdsourcing data, see~\tabref{tab:NoMaybeYes}.
Additionally, we computed the prediction error \eqref{eq:predError} averaged
over all experimental runs for each value of $P$,
see~\figref{fig:resultsGroceries}-bottom. The baselines are unable to cope
with the different modes of user preferences, and consistently result in a
prediction error of around $0.27$ irrespective of the number of probes. On
the other hand, the mean prediction error using CF and CF-rand drops from
$0.24$ to $0.18$ and from $0.25$ to $0.19$ as $P$ increases from
4 to 20, respectively. Note that, using our probing technique, we
are able to achieve a lower error with fewer probes compared to
CF-rand. This illustrates the importance of selecting more
intelligent queries for users to learn their preferences. For a
closer inspection of the prediction error,
\figref{fig:groceriesP12}-bottom shows the distribution of the
error for our approach and Baseline-I given $P=12$ probes. Our
approach achieves an error of $0$ for $64.62\%$ of the predictions
we make, compared to only $49.78\%$ for Baseline-I. Moreover,
Baseline-I results in an absolute error of $0.5$ (confusing
\emph{no}/\emph{yes} with \emph{maybe}) for $47.60\%$ of the
predictions, compared to only $32.88\%$ for our approach.
Finally, our approach and the baseline result in a prediction
error of $1.0$ (misclassifying \emph{no} as \emph{yes} or vice
versa) for only $2.49\%$ and $2.62\%$ of the predictions,
respectively.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{groceriesProbingFScore.pdf}\\[5mm]
\includegraphics[width=0.7\columnwidth]{groceriesProbingError.pdf}
\caption{Results for the scenario of organizing
grocery items on different shelves. Top: the
mean F-score of our predictions averaged over
all rating classes \emph{no}, \emph{maybe},
and \emph{yes}. Despite the large degree of
multi-modality and noise in the user
preferences we collected through
crowdsourcing, our approach (CF) is able to
achieve an F-score of 0.63 with 20 known
probes and to outperform the
baselines. Moreover, our performance improves
with more knowledge about user preferences as
expected. Bottom: the mean prediction error for
different numbers of probes, $P$. The baselines are
unable to cope with different modes of user
preferences. They consistently result in a prediction
error of around $0.27$ irrespective of the number of
probes. On the other hand, the mean prediction error
using CF drops from $0.24$ to $0.18$ as $P$ increases from
4 to 20. Using our probing technique, we are able to achieve a lower
error with fewer probes compared to CF-rand.}
\label{fig:resultsGroceries}
\end{figure}
\begin{figure}[t]
\centering
{
\footnotesize
\renewcommand{\arraystretch}{1.5}
\begin{tabular}[b]{cc|c|c|c|}
\cline{3-5}
& &\emph{no}&\emph{maybe} &\emph{yes}\\ \cline{3-5}
\cline{1-5}
\multicolumn{1}{ |c }{\multirow{3}{*}{Baseline-I} } &
\multicolumn{1}{ |c| }{Precision} & $0.71$ & $0.34$ &$0.79$ \\ \cline{2-5}
\multicolumn{1}{ |c }{} &
\multicolumn{1}{ |c| }{Recall} & $0.52$ & $0.69$ & $0.19$ \\ \cline{2-5}
\multicolumn{1}{ |c }{} &
\multicolumn{1}{ |c| }{F-score} & $0.60$ & $0.46$ & $0.31$ \\ \cline{1-5}
\multicolumn{1}{ |c }{\multirow{3}{*}{CF} } &
\multicolumn{1}{ |c| }{Precision} & $0.80$ & $0.45$ & $0.72$ \\ \cline{2-5}
\multicolumn{1}{ |c }{} &
\multicolumn{1}{ |c| }{Recall} & $0.72$ & $0.63$ & $0.49$ \\ \cline{2-5}
\multicolumn{1}{ |c }{} &
\multicolumn{1}{ |c| }{F-score} & $0.76$ & $0.53$ & $0.58$ \\ \cline{1-5}
\end{tabular}}\\[5mm]
\includegraphics[width=0.72\columnwidth]{groceriesErrorDistributionP12.pdf}
\caption{Top: the detailed evaluation for the groceries scenario with $P=12$
probes. Our approach results in higher F-scores across all rating classes
compared to the baseline. \figref{fig:resultsGroceries}-top shows the mean
F-score for different values of $P$. Bottom: the detailed distribution of
the prediction errors using $P=12$ probes,
see~\figref{fig:resultsGroceries}-bottom for the mean error for
different values of $P$.}
\label{fig:groceriesP12}
\end{figure}
\subsubsection{The Effect of the Number of Latent Dimensions}
\label{sec:varyingNumFactors}
In this experiment, we investigated the effect of varying the number of latent
dimensions $K$ used when learning the factorization of $\mathbf{R}$ on the
quality of the learned model. We repeated the experiment in
\secref{sec:GroceriesProbing} for $K= 3, 6, 9, 15$. For each setting of $K$, we
conducted 50 runs where, in each run, we selected 50 random user columns,
queried them for $P$ random probe ratings, and learned the
factorization in \secref{sec:cfLearning} to predict the remaining
ratings. As in the previous experiment, we evaluated the quality of
predicting the unknown ratings by computing the average F-score for
the \emph{no}, \emph{maybe}, and \emph{yes} classes. Additionally, we
computed the root mean square error (RMSE) for reconstructing the
\emph{known} ratings in $\mathbf{R}$ used in training, i.e.,
\begin{equation*}
\label{eq:rmse}
\text{RMSE} = \sqrt{\frac{1}{R} \sum_{i}\sum_{j\in\mathcal{J}_i}\Big(r_{ij}
- (\mu + b_i + b_j + \mathbf{s}_i^T \cdot \mathbf{t}_j)\Big)^2}.
\end{equation*}
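For illustration, this training error can be computed as in the following sketch, where the list of observed entries and the array layout are assumptions made for the example.
\begin{verbatim}
import numpy as np

def training_rmse(known, mu, b_pairs, b_users, S, T):
    """RMSE over the known training ratings.

    known : list of (i, j, r_ij) triples of observed entries of R.
    S, T  : factor matrices of object-pairs and users (one row per entity).
    """
    sq_errors = [(r_ij - (mu + b_pairs[i] + b_users[j] + S[i] @ T[j])) ** 2
                 for i, j, r_ij in known]
    return float(np.sqrt(np.mean(sq_errors)))
\end{verbatim}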
The results are shown in~\figref{fig:resultsGroceriesRMSE}. When using
larger values of $K$, we are able to reconstruct the known ratings in
$\mathbf{R}$ with lower RMSE values. This is expected since we are
computing a more accurate approximation of $\mathbf{R}$ when factorizing it
into higher-dimensional matrices ($\mathbf{S}$ and $\mathbf{T}$), thus
capturing finer details in user preferences. However, this results in
over-fitting (lower F-scores) when predicting unknown ratings, especially
for lower values of $P$. In other words, for higher values of $K$, we need
more probes per user when predicting unknown ratings, since we need to
learn more factors for each user and object-pair. In general, the more
known ratings we have in the user columns, the more sophisticated are
the models that we can afford to learn.
Furthermore, we found interesting similarities between object-pairs when
inspecting their learned biases and factor vectors. For example (for $K=3$),
users tend to rate $\mathit{\{coffee, honey\}}$ similarly to
$\mathit{\{tea, sugar\}}$ based on the similarity of their factor
vectors. Also, the closest pairs to $\{\mathit{pasta},
\mathit{tomato\ sauce}\}$ included $\mathit{\{ pancakes, maple\
syrup\}}$ and $\mathit{\{cereal, honey\}}$, suggesting that people often
consider whether objects can be used together or not. With respect to the
biases ($b_i$) learned, object-pairs with the largest biases (rated above
average) included $\mathit{\{pepper, spices\}}$, $\mathit{\{pasta,
rice\}}$ and $\mathit{\{cans\ of\ corn, cans\ of\ beans\}}$. Examples of
object-pairs with the lowest biases (rated below average) included
$\mathit{\{candy, olive\ oil\}}$, $\mathit{\{cereal, vinegar\}}$, and
$\mathit{\{cans\ of\ beans, cereals\}}$. On the other hand, object-pairs
like $\mathit{\{cans\ of\ corn, pasta\}}$ and $\mathit{\{pancakes, honey\}}$
had a bias of almost 0.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{groceriesVaryingKRMSE.pdf}\\[5mm]
\includegraphics[width=0.7\columnwidth]{groceriesVaryingKFScore.pdf}
\caption{We learned different factorizations of the
ratings matrix $\mathbf{R}$ by varying the number of
latent dimensions, $K$. For each learned model, we
evaluated the RMSE when reconstructing the known
ratings in $\mathbf{R}$ (top), and the F-score for
predicting unknown ratings in $\mathbf{R}$ given
different numbers of probes $P$ for randomly selected
user columns (bottom). Learning factors
($\mathbf{S} $ and
$\mathbf{T}$) with larger dimensionality leads to
reconstructing the known ratings in $\mathbf{R}$ with
a higher fidelity (lower RMSE). However, this comes
at the expense of over-fitting to the known ratings
for users, leading to lower F-scores with larger $K$
when predicting new ratings given the same number
of probes, $P$.}
\label{fig:resultsGroceriesRMSE}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=4cm]{groceriesOnlineVsOfflineFScore.pdf}\\[5mm]
\includegraphics[height=4cm]{groceriesOnlineVsOfflineErrorP12.pdf}
\caption{Top: we tested our approach for learning the preferences of
new users based on the object-pair biases and factors vectors learned from
rating matrices $\mathbf{R}_{N}$ of different sizes. The results are shown
for predicting with a set of 100 test users based on $P$ random probe
ratings each, averaged over 50 runs. The performance improves with more
training users as expected, approaching the performance when training
with all users, see~\figref{fig:resultsGroceries}-top for comparison.
Given sufficient users in the robot's database ($\geq 750$), we can
infer the preferences of new users by assuming fixed object-pair
biases and factor vectors without loss in prediction accuracy. Bottom:
the prediction error given $P=12$ probe ratings when inferring the
preferences of test users given a previously-learned factorization
(CF-rand$'$) compared to batch learning with the training and test
users combined (CF-rand). As expected, the error for both approaches
drops given more training users, converging to 0.20 for
$\mathbf{R}_{750}$ and $\mathbf{R}_{1000}$, i.e., approaching the
performance when training with the full $\mathbf{R}$, see
\figref{fig:resultsGroceries}-bottom.}
\label{fig:onlineVsOffline}
\end{figure}
\subsubsection{Learning of New User Preferences}
\label{sec:continualExp}
In this experiment, we show that our approach is able to learn the preferences
of new users based on the object-pair biases and factor vectors learned from
previous users, see~\secref{sec:onlineLearning}. We conducted an experiment
similar to that in \secref{sec:GroceriesProbing} using a random probing
strategy. However, we first learned the biases $b_i$ and factor vectors
$\mathbf{S}$ using rating matrices $\mathbf{R}_{100}$, $\mathbf{R}_{250}$,
$\dots$, $\mathbf{R}_{1000}$, corresponding to 100, 250, $\dots$, 1000
training users, respectively, see \eqref{eq:optimization}. We then used this
model to compute the biases $b_j$ and factor vectors $\mathbf{T}$ for a
set of 100 (different) test users, see \eqref{eq:newUserLearning}, and predict
their missing ratings. As before, we repeated this experiment for different
values $P$ of known probe ratings for the test users.
The prediction F-score averaged over 50 runs is shown
in~\figref{fig:onlineVsOffline}-top. As expected, the performance
improves given more training users for learning the $b_i$'s and
$\mathbf{S}$, converging to the performance when training with all
user columns (compare to CF-rand
in~\figref{fig:resultsGroceries}-top).
This validates that, given enough users in the robot's database, we
can decouple learning a projection for the object-pairs from the
problem of learning the new users' biases and factor vectors.
Moreover, we compared the predictions using this approach (CF-rand$'$)
to the standard batch approach that first appends the
100 new user columns to the training matrix and learns all biases and
factor vectors collaboratively (CF-rand). The prediction error,
averaged over the 50 runs, is shown in
\figref{fig:onlineVsOffline}-bottom for $P=12$ probe ratings. As
expected, the error for both approaches drops given more training
users, converging to 0.20 for $\mathbf{R}_{750}$ and
$\mathbf{R}_{1000}$, i.e., approaching the performance when
training with the full $\mathbf{R}$ (compare to CF-rand
in~\figref{fig:resultsGroceries}-bottom for $P=12$).
Furthermore, with smaller training matrices, we observed a slight
advantage in performance for CF-rand$'$. In other words, given
fewer ratings, it might be advantageous to solve the smaller
optimization problem in~\eqref{eq:newUserLearning}.
\paragraph*{Probing and Learning for New Users}
\begin{figure}[]
\setlength{\fboxsep}{0pt}
\centering%
\hfill\fbox{\includegraphics[width=0.73\columnwidth]{Config3Cam.jpg}}\hfill~~\\[3mm]
~\hfill\fbox{\includegraphics[width=0.73\columnwidth]{Config3RVIZCropped.png}}\hfill~
\caption{An application of our approach that demonstrates how our
predictions for a new user change based on how (probing) objects are
arranged on the shelves. Top: the camera image of the scene. To label
the objects, we relied on matching SIFT features from a database of
images for the used products. Bottom: a visualization of the predicted
preferred arrangement of other objects based on the corresponding
learned model. Our method is able to infer the user's preferences by
adapting to the perceived arrangement. For example, by moving
$\mathit{coffee}$ from the shelf containing $\mathit{tea}$ to the one
containing $\mathit{flour}$, the predicted arrangement separates
$\mathit{cake\ mix}$ and $\mathit{sugar}$ and moves them to different
shelves.}\label{fig:shelfPerception}
\end{figure}
Using our method, the time for computing the model for one new user (based on
a previously-learned factorization) on a consumer-grade notebook was
10-20\,ms on average, compared to about 4\,s for batch learning with all 1248
user columns ($K=3$). To demonstrate the applicability of our approach
(\secref{sec:cfNewUsers}) in a real-world scenario, we conducted an
experiment where we used a Kinect camera to identify a set of objects that we
placed on shelves and used the perceived pairwise ratings as probes for
inferring a user's preference. For perception, we clustered the perceived
point cloud to segment the objects, and relied on SIFT feature matching
against a database of product images to label each object. We learned the bias
and factors vector for the user associated with this scene using the
object-pairs model that we learned with our crowdsourcing data. Accordingly,
we predicted the pairwise ratings related to objects that are not in the
scene and computed the preferred shelves to place them on.
\figref{fig:shelfPerception} shows an example where the top image shows the
camera image of the scene, and the bottom image shows the corresponding
computed shelf arrangement in the rviz visualization environment. Due to
physical space constraints, we assume that each shelf is actually divided
into two shelves. A video demonstrating how the predicted arrangement changes
as the configuration on the shelves varies can be seen at \url{http://www.informatik.uni-freiburg.de/\%7Eabdon/task_preferences.html}.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{newGroceries.pdf}\\[5mm]
\includegraphics[width=0.8\columnwidth]{groceriesMatrixSize.pdf}
\caption{Top: we predict preferences related to new objects by using a mixture
of experts approach. The experts $\mathcal{E}_1$-$\mathcal{E}_3$ are based on
the hierarchies of three online grocery stores. The mixture of experts
$\mathcal{E}_*$ is a merged prediction of all three experts based on their
confidence for a specific user. Therefore, it is able to recover if
a certain expert cannot find similarities for a new object, as in
the case of \textit{rice}. The baselines $\mathcal{E}_1'$-$\mathcal{E}_3'$
make predictions based only on the semantic $\mathit{wup}$ similarity of two
objects without considering the ratings of similar pairs rated by the user,
see~\secref{sec:wup}. Bottom: the mean F-score for predicting the
ratings for a new object vs. the number of training user columns
who have rated pairs related to it. As soon as some users have
rated pairs related to a new object, our collaborative filtering
approach is able to make predictions about it. The performance
improves with more users rating pairs related to the object.}
\label{fig:newGroceries}
\end{figure}
\subsubsection{Predicting Preferences for New Objects}
\label{sec:newGroceries}
In this experiment, we demonstrate that our mixture of experts approach is able
to make reasonable predictions for previously unrated object-pairs. For this,
we defined three experts by mining the hierarchies of the groceries section of
three large online stores (amazon.com, walmart.com,
target.com). This includes up to 550 different nodes in the object
hierarchy. For each of the 22 grocery objects, we removed ratings
related to all of its pairs from $\mathbf{R}$, such that the typical
collaborative filtering approach cannot make predictions related to that object.
We used the mixture of experts to predict those ratings using the remaining
ratings in each column and the expert hierarchies as explained
in~\secref{sec:wup}. The mean F-score over all users for three grocery
objects is shown in~\figref{fig:newGroceries}-top, where the mixture
of experts is denoted by $\mathcal{E}_*$. We also show the individual
expert results ($\mathcal{E}_1$-$\mathcal{E}_3$) and their
corresponding baseline predictions
($\mathcal{E}_1'$-$\mathcal{E}_3'$). The baselines take only the
$\mathit{wup}$ similarity of two objects as the rating of the pair but
do not consider the ratings of similar pairs made by the same user as our
approach does. As we can see, the results of each individual expert outperform
the baseline predictions. Note that $\mathcal{E}_*$ is able to overcome the
shortcomings of the individual experts, as in the case of \textit{rice}. There,
$\mathcal{E}_1$ is unable to find similarities between \textit{rice}
and any of the rated objects, whereas $\mathcal{E}_2$ and
$\mathcal{E}_3$ are able to relate it to \textit{pasta} in their
hierarchies. For two of the objects (\textit{bread} and
\textit{candy}), we were unable to make any predictions, as none of
the experts found similarities between them and other rated
objects. For all other objects, we achieve an average F-score of 0.61.
\paragraph*{Predicting Ratings for New Object-Pairs}
Furthermore, we applied the same mixture of experts based on the three online
stores above to extend our ratings matrix $\mathbf{R}$ with rows for
object-pairs that no user rated in our crowdsourcing surveys. We created a new
ratings matrix $\mathbf{R}'$ of size 214$\times$1284, i.e., with
35 additional object-pair rows. These included pairs related to the new object
$\mathit{cake\ mix}$, as well as other object combinations. For each user
column in the original $\mathbf{R}$, the mixture of experts used
already-rated object-pairs to infer ratings for the new pairs.
\figref{fig:ratingExamplesGroceriesExperts} shows examples of rating
distributions in the resulting ratings matrix for two object-pairs:
$\{\mathit{cans\ of\ beans, sugar}\}$ and $\{\mathit{cake\ mix, flour}\}$.
Additionally, for each of them, we show the rating distributions of two of
their most similar object-pairs that the experts used when making their
predictions. In a later experiment, we use the resulting $\mathbf{R}'$ with
this combination of crowdsourcing and expert-generated ratings to train a
factorization model for predicting preferred arrangements of survey
participants, see~\secref{sec:shelving}.
\begin{figure*}
\centering
\includegraphics[width=0.68\textwidth]{groceriesExampleDistributionsExperts.pdf}
\caption{The rating distributions depicted in red correspond to example
object-pairs that no user had rated in the original crowdsourcing data. We
generated those ratings using our mixture of experts approach based on the
hierarchies of three online stores. In the case of $\{\mathit{cans\ of\
beans, sugar}\}$, the experts relied on how each user rated similar
object-pairs such as $\{\mathit{cans\ of\ beans, flour}\}$ and
$\{\mathit{cans\ of\ tuna, sugar}\}$. The rating distributions of those
pairs (over all user columns who rated them) are depicted in blue on the
same row. Similarly, in the case of $\{\mathit{cake\ mix, flour}\}$, the
experts relied on the known ratings of $\{\mathit{flour, sugar}\}$ and
$\{\mathit{pancake\ mix, flour}\}$.}
\label{fig:ratingExamplesGroceriesExperts}
\end{figure*}
\subsubsection{Improvement with Number of Users}
\label{sec:bigData}
We conducted an experiment to show that the performance of our approach improves
with more users in the system. For each object, we removed from $\mathbf{R}$ all
columns with ratings related to that object. Over 20 runs, we randomly sampled
ten different user columns (test users) from these and hid their ratings for
pairs related to the object. We predicted those ratings using our approach
(\secref{sec:cfLearning}) by incrementally adding more columns of other
(training) users who rated that object to the ratings matrix in increments
of 25. We evaluated the mean F-score for the predictions for the test
users. The results (CF - overall) are shown in~\figref{fig:newGroceries}-bottom
averaged over 20 different types of objects (those where we had at
least 300 user ratings). We also show the improvement with respect to
two of the objects individually. The performance of our approach
improves steadily with the number of users who rate pairs related to a
new object, as opposed to a baseline that updates the mean rating over
all users and uses that for predicting. This shows that collaborative
filtering is suitable for lifelong and continual learning of user
preferences.
\subsubsection{Assigning Objects to Shelves Based on Pairwise Preferences}
\label{sec:shelving}
The goal of this experiment is to show that our approach is able to group
objects into containers to satisfy pairwise preferences,
see~\secref{sec:spectralClustering}. We evaluated our approach in two
settings. In the first, we compute the object groupings given ground
truth pairwise ratings from users. In the second, we predict the
pairwise ratings according to our approach and use those when grouping
the objects on different shelves.
\paragraph{Arrangements Based on Ground Truth Ratings}
We conducted a qualitative evaluation of our approach for grouping objects into
different containers based on \emph{known} object-pair preferences. We asked a
group of 16 people to provide their ratings (0 or 1) for 55 object-pairs,
corresponding to all pairs for a set of 11 objects. For each participant,
we then computed an object arrangement allowing our spectral clustering
approach to use up to 6 shelves. We showed each participant the shelf
arrangement we computed for them in the rviz visualization environment. We
asked each of them to indicate whether they think the arrangement
represents their preferences or not. They then had the choice to make
changes to the arrangement by indicating which objects they would move
from one shelf to another.
The results are shown in~\figref{fig:shelvingSurveyDistribution}.
In five out of 16 cases, the participants accepted the arrangements without
any modifications. Overall, the participants modified only two objects on
average. Even given ground truth object-pair ratings from the users, there
are often several inconsistencies in the preferences that can make it
challenging for the eigen-gap heuristic to estimate the best number of
shelves to use. Nonetheless, we are able to compute reasonable groupings of
objects. Moreover, the nature of our approach allows the robot to observe
arrangements given by users in order to modify its knowledge about their
preferences and use this when making new predictions in the future.
\begin{figure}
\centering
\includegraphics[height=4.5cm]{groceriesShelvingSurveyDistributionCumulative.pdf}
\caption{In a small survey with 16 participants, we computed groupings of
11 objects based on each participant's ground truth object-pair ratings. We
presented the arrangement to each participant by visualizing it in a 3D
visualization environment. Each participant then indicated which objects
they would like to move in order to achieve a better arrangement.
Despite inconsistencies in the pairwise preferences of the participants,
we are able to compute reasonable object groupings that correspond
to their preferences. On average, the participants modified the
locations of only two objects.
}
\label{fig:shelvingSurveyDistribution}
\end{figure}
\paragraph{Arrangements Based on Predicted Ratings}
\label{sec:shelvingSurvey15}
\begin{figure}
\setlength{\fboxsep}{0pt}
\centering
\fbox{\includegraphics[height=3.5cm]{allGroceriesSmall.JPG}}
\caption{We asked survey participants to organize different types of grocery
objects using up to six shelves in order to test our approach for predicting
their preferences.}
\label{fig:allGroceries}
\end{figure}
The goal of this experiment is to evaluate the performance of our approach for
grouping objects based on \emph{predicted} object-pair ratings. To collect
ground truth data for arrangements, we asked 15 people to organize 17 different
grocery items according to their preferences, using \emph{up to} six shelves,
see~\figref{fig:allGroceries}. Four people grouped the items on four
shelves, three people used five shelves, and eight people used all six
shelves. \figref{fig:motivation}-left shows examples of arrangements
produced by the survey participants. We translated these arrangements to
user rating columns with 0 or 1 ratings as described
in~\secref{sec:probing}. \figref{fig:shelfAdapting} shows the
arrangements our method computes for one of the participants (who used
four shelves) when given all ground truth object-pair ratings of
that participant. Given four or more shelves to use, we are able to
reproduce the original object grouping of this user with $C' = 4$
shelves. The figure also shows how our approach adapts by merging some
object groups together when given only two or three shelves to sort the
objects.
Our goal is to evaluate the arrangements we compute for the 15 participants
given only partial knowledge of their ratings, and based on a previously-learned
model of object-pair biases $b_i$ and factor vectors $\mathbf{S}$ from
training users. To learn the model, we used the ratings matrix $\mathbf{R}'$
described in \secref{sec:newGroceries} above, as this covers all object-pairs
relevant for the objects in this experiment. For each of the 15 participants, we
then simulated removing $O$ random objects from their arrangement, and hid all
ratings related to those objects. Using the remaining ratings as probes, we
learned the bias $b_j$ and factors vector $\mathbf{t}_j$ of each
participant (\secref{sec:onlineLearning}), and predicted the missing ratings
accordingly. Finally, we used those ratings to compute an object arrangement
for each participant, and compared it to their ground truth arrangement. We
conducted this experiment for $O = 1, 2, 3, \dots, 12$, repeating it 100 times
for each setting.
We evaluated the results by computing an ``edit'' distance~$d$ between the
computed and ground truth arrangements. We compute $d$ as the minimum number
of objects we need to move from one shelf to another to achieve the correct
arrangement, divided by $O$. This gives a normalized error measure between
0 and 1 capturing the ratio of misplaced objects. The results (averaged over
all runs) are shown in \figref{fig:resultsShelving}-top, where we denote our
method by CF-rand$'$. We compared our results to two baselines. The first is
Baseline-II, which predicts the missing object-pair preferences using the mean
ratings over all users in $\mathbf{R}'$ and then computes the object groupings
using our approach. The second is Baseline-III, which makes no predictions
but simply assigns each object to a random shelf.
\figref{fig:resultsShelving}-bottom also shows the mean F-score of our
approach and Baseline-II averaged over the 0 and
1 rating categories.
Our approach outperforms both baselines for $O\le10$, and results in a mean
error from 0.19 to 0.41, and an F-score from 0.80 to 0.78, as $O$ changes from
1 to 10. The model we learned from $\mathbf{R}'$ (noisy crowdsourcing data
from 1284 users augmented by ratings from a mixture of experts) is able to
accurately predict the preferences of the new 15 survey participants. For the
same values of $O$, Baseline-II achieves a mean error ranging between
0.36 and 0.54, and an F-score of 0.77. As $O$ increases, the error in our
predictions increases as expected, since we have fewer probe ratings based on
the remaining objects on the shelves to infer the preferences of a new user.
For $O>11$, Baseline-II results in less error than our approach when
computing arrangements. On the other hand, using a random strategy for
assigning objects to shelves (Baseline-III) resulted in an error above 0.72
for all values of $O$.
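For completeness, one way to compute the edit distance $d$ introduced above, assuming container labels are interchangeable so that computed groups must first be matched to ground-truth shelves, is sketched below; the interface is an assumption made for this example.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def arrangement_error(pred_labels, true_labels, O):
    """Ratio of misplaced objects between a computed and a ground-truth
    arrangement, with interchangeable container labels."""
    pred = np.asarray(pred_labels)
    true = np.asarray(true_labels)
    # Contingency table: overlap[c, s] counts objects in computed group c
    # that belong on ground-truth shelf s.
    overlap = np.zeros((pred.max() + 1, true.max() + 1), dtype=int)
    for c, s in zip(pred, true):
        overlap[c, s] += 1
    # Best assignment of computed groups to shelves (Hungarian algorithm).
    rows, cols = linear_sum_assignment(-overlap)
    in_place = overlap[rows, cols].sum()
    return (len(pred) - in_place) / O       # minimum moves, normalized by O
\end{verbatim}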
\begin{figure*}[t]
\centering
\hfill\includegraphics[width=0.3\columnwidth]{2shelves.pdf}\hfill
\includegraphics[width=0.3\columnwidth]{3shelves.pdf}\hfill
\includegraphics[width=0.3\columnwidth]{4shelves.pdf}\hfill
\includegraphics[width=0.3\columnwidth]{5shelves.pdf}\hfill~
\flushleft
\centering
\begin{tabular}{llllll}
\centering
$o_1:\mathit{cake\ mix}$ & $o_4:\mathit{olive\ oil}$ & $o_7:\mathit{spices}$ &
$o_{10}:\mathit{coffee}$ & $o_{13}: \mathit{corn}$ & $o_{16}: \mathit{tomato\
sauce}$\\
$o_2:\mathit{flour}$ & $o_5:\mathit{pepper}$ & $o_{8}:\mathit{vinegar}$ &
$o_{11}:\mathit{tea}$&$o_{14}:\mathit{pasta}$&$o_{17}: \mathit{tuna}$\\
$o_3:\mathit{sugar}$ & $o_6:\mathit{salt}$& $o_{9}:\mathit{cereal}$ &
$o_{12}:\mathit{beans}$&$o_{15}:\mathit{rice}$&\\
\end{tabular}
\caption{Our approach is able to adapt to the number of containers $C$ available
for organizing the objects. In this example, the self-tuning
heuristic correctly estimates the best number of shelves to use ($C' = 4$),
matching the original user preferences. Given more shelves in the
scene ($C=5$), our approach still prefers grouping the objects
on four shelves, as this maximally satisfies the user's
preferences. With only two or three shelves, our method attempts
to satisfy the preferences of the user as much as possible by
grouping the objects differently.}
\label{fig:shelfAdapting}
\end{figure*}
\paragraph{Predictions by Humans}
Finally, we conducted a qualitative evaluation to gauge the difficulty of the
above task for humans. We asked 15 new participants (who did not take part in
the previous surveys) to complete partial arrangements by predicting the
preferences of the 15 users above, whom they do not know. Each participant
solved six tests. In each test, we manually reconstructed an arrangement from
the surveys on the six shelves, then we removed $O$ objects randomly and placed
them on a nearby table. We asked each participant to predict the preference of
another user by inspecting the arrangement in front of them, and to finish
sorting the remaining $O$ objects accordingly. The six tests per participant
used $O = \{2, 4, \dots, 12\}$, each corresponding to a different user. As
before, we computed the error $d$, the ratio of objects that were placed on
wrong shelves. The results are shown in~\tabref{tab:HumanPreds}.
When presented with only two objects to place, most participants were able to
predict the correct shelf for them based on the arrangement of the remaining
objects. However, given four to twelve objects, the participants misplaced
between one fourth and one third of the objects on average.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{groceriesShelving3FactorsExperts35.pdf}\\[5mm]
\includegraphics[width=0.7\columnwidth]{groceriesShelving3FactorsExperts35FScores.pdf}
\caption{We evaluated our approach (CF-rand$'$) for predicting the correct
shelves for $O$ randomly-selected objects (out of 17) given the shelves that
the remaining objects have been assigned to. CF-rand$'$ predicts the missing
object-pair ratings using our approach and then partitions the objects using
our spectral clustering method. Baseline-II predicts the missing ratings as
the mean rating over all training users, and uses our method for
partitioning the objects. Baseline-III makes no predictions, but randomly
assigns each object to one of the six shelves. Top: the mean error in the
computed arrangements (ratio of misplaced objects). Bottom: the prediction
F-score of the object-pair ratings averaged over the 0 and 1 categories. Our
approach outperforms both baselines for $O\le10$. As $O$ increases, the
error in our predictions increases since there are fewer probe ratings based
on the remaining objects on the shelves to infer the preferences of a user.
}
\label{fig:resultsShelving}
\end{figure}
\begin{table}
\centering
\normalsize
\caption{The error $d$ in the final arrangement produced by 15~participants when
we asked them to sort $O$ objects by predicting the preferences of users they
do not know.}
\begin{tabular}{|c|c|}
\cline{1-2} Number of objects, $O$ & Error in arrangement, $d$\\
\cline{1-2} 2 & $0.07\pm0.17$\\
\cline{1-2} 4 & $0.27\pm0.25$\\
\cline{1-2} 6 & $0.24\pm0.22$\\
\cline{1-2} 8 & $0.27\pm 0.18$\\
\cline{1-2} 10 & $0.33\pm0.18$\\
\cline{1-2} 12 & $0.34\pm0.17$\\
\cline{1-2}
\end{tabular}
\label{tab:HumanPreds}
\end{table}
\subsubsection{Real Robot Experiments}
We conducted an experiment to illustrate the applicability of our approach
(\secref{sec:cfLearning}) on a real tidy-up robot scenario using our PR2 robot
platform, see~\figref{fig:pr2Exp}-left. We executed 25 experimental runs where
the task of the robot was to fetch two objects from the table and return them to
their preferred shelves as predicted by our approach. In each run, we arranged
15 random objects on the shelves according to the preferences of a random user
from the survey we conducted in \secref{sec:shelvingSurvey15}, and provided
this information as probes for the robot to learn a model of that user
(\secref{sec:probing}). The robot used this to predict the pairwise
preferences related to the two objects on the table, which it recognized with
its Kinect camera using unique fiducial markers we placed on the objects. It
then computed an assignment of the objects to one of the shelves
(\secref{sec:spectralClustering}), and expressed those assignments as
planning goals in the form of logical predicates in PDDL
(e.g., $\mathit{on(coffee,\ shelf_2)}$). To achieve those goals, we used a
state-of-the-art planner \cite{dornhege13aaaiss} to generate a plan for the
robot to navigate to the table, grasp the detected objects, navigate to the
shelves, and place the objects on their corresponding shelves (that may be
empty or have objects on them) after detecting free space on them. For
manipulation, we relied on an out-of-the-box motion planner. Additionally,
we provided the robot with a 3D map of the environment for localizing
itself, where we labeled the table and the six shelves. Overall, the robot
predicted the correct shelf assignment for 82$\%$ of the objects using our
approach, see~\figref{fig:pr2Exp}-right for an example where the robot
successfully placed \textit{coffee} on the same shelf as \textit{tea}.
Video excerpts from the experimental runs can be found at \url{http://www.informatik.uni-freiburg.de/\%7Eabdon/task_preferences.html}.
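For illustration, the following minimal Python sketch (a hypothetical helper written
for this description, not the code deployed on the robot) shows how such a predicted
shelf assignment can be turned into PDDL goal predicates of the kind quoted above:
\begin{verbatim}
# Minimal illustration (hypothetical helper, not the deployed robot code):
# turn a predicted object-to-shelf assignment into PDDL goal predicates.
def assignment_to_pddl_goal(assignment):
    # assignment: dict mapping object name -> shelf index in {1, ..., 6}
    predicates = ["(on {} shelf_{})".format(obj, shelf)
                  for obj, shelf in sorted(assignment.items())]
    return "(:goal (and {}))".format(" ".join(predicates))

print(assignment_to_pddl_goal({"coffee": 2, "tea": 2}))
# (:goal (and (on coffee shelf_2) (on tea shelf_2)))
\end{verbatim}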
\begin{figure*}
\setlength{\fboxsep}{0pt}
\centering
~\hfill\fbox{\includegraphics[height=4cm]{pr2SetupSmall.JPG}}\hfill
\fbox{\includegraphics[height=4cm]{PR2_placing2Small.JPG}}\hfill~
\caption{Left: the robot has to assign the two objects that are on the table to
shelves according to predicted user preferences. In this example, the robot
places \textit{coffee} on the same shelf as \textit{tea}, and \textit{rice}
next to \textit{pasta}. Right: an example where the robot places
\textit{coffee} next to \textit{tea} and \textit{sugar} next to
\textit{salt}.}
\label{fig:pr2Exp}
\end{figure*}
\section{Conclusions}
\label{sec:conclusion}
In this work, we presented a novel approach that enables robots to predict user
preferences with respect to tidying up objects in containers such as shelves or
boxes. To do so, we first predict pairwise object preferences of the user by
formulating a collaborative filtering problem. Then, we partition the objects into
containers by modeling and solving a spectral clustering problem. Our approach
is able to make predictions for new users based on partial knowledge of their
preferences and a model that we learn collaboratively from several users.
Furthermore, our technique allows for easily updating knowledge about user
preferences, does not require complex modeling of objects or users, and improves
with the amount of user data, allowing for lifelong learning of user
preferences. To deal with novel objects that the robot encounters, our
approach complements collaborative filtering with a mixture of
experts based on object hierarchies from the Web.
We trained the system by using surveys from over 1,200 users through
crowdsourcing, and thoroughly evaluated the effectiveness of our approach for
two tidy-up scenarios: sorting toys in boxes and arranging groceries on shelves.
Additionally, we demonstrated the applicability of our approach in a real
service robot scenario. Our results show that our technique is accurate and is
able to sort objects into different containers according to user preferences.
\section*{Acknowledgment}
\label{sec:ack}
This work has partly been supported by the German Research Foundation under
research unit FOR 1513 (HYBRIS) and grant number EXC 1086.
\bibliographystyle{abbrvnat}
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\it}}
\renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\bf}}
\renewcommand\subparagraph{\@startsection{subparagraph}{5}{\z@}%
{-1.25ex\@plus -1ex \@minus -.2ex}%
{0ex \@plus .2ex}%
{\normalfont\normalsize\it}}
\numberwithin{equation}{section}
\long\def\@makecaption#1#2{%
\vskip\abovecaptionskip
\sbox\@tempboxa{{\bf #1:} #2}%
\ifdim \wd\@tempboxa >\hsize
{\small\bf #1:} {\small #2}\par
\else
\global \@minipagefalse
\hb@xt@\hsize{\hfil\box\@tempboxa\hfil}%
\fi
\vskip\belowcaptionskip}
\setcounter{tocdepth}{3}
\renewcommand*\l@section[2]{%
\ifnum \c@tocdepth >\z@
\addpenalty\@secpenalty
\addvspace{.5em \@plus\p@}%
\setlength\@tempdima{1.5em}%
\begingroup
\parindent \z@ \rightskip \@pnumwidth
\parfillskip -\@pnumwidth
\leavevmode \bfseries
\advance\leftskip\@tempdima
\hskip -\leftskip
#1\nobreak\hfil \nobreak\hb@xt@\@pnumwidth{\hss #2}\par
\endgroup
\fi}
\renewcommand*\l@subsection{\addvspace{.0em \@plus\p@}\@dottedtocline{2}{1.5em}{2.3em}}
\renewcommand*\l@subsubsection{\addvspace{-.2em \@plus\p@}\@dottedtocline{3}{3.8em}{3.2em}}
\def\hepth#1{\href{http://xxx.arxiv.org/abs/hep-th/#1}{{arXiv:hep-th/#1}}}
\def\astroph#1{\href{http://xxx.arxiv.org/abs/astro-ph/#1}{{arXiv:astro-ph/#1}}}
\def\hepph#1{\href{http://xxx.arxiv.org/abs/hep-ph/#1}{{arXiv:hep-ph/#1}}}
\def\grqc#1{\href{http://xxx.arxiv.org/abs/gr-qc/#1}{{arXiv:gr-qc/#1}}}
\def\mathcv#1{\href{http://xxx.arxiv.org/abs/math.CV/#1}{{arXiv:math.cv/#1}}}
\def\mathsg#1{\href{http://xxx.arxiv.org/abs/math.SG/#1}{{arXiv:math.sg/#1}}}
\def\mathag#1{\href{http://xxx.arxiv.org/abs/math.AG/#1}{{arXiv:math.ag/#1}}}
\def\alggeom#1{\href{http://xxx.arxiv.org/abs/alg-geom/#1}{{arXiv:alg-geom/#1}}}
\definecolor{refcol}{rgb}{0.2,0.2,0.8}
\definecolor{eqcol}{rgb}{.6,0,0}
\definecolor{purple}{cmyk}{0,1,0,0}
\gdef\@citecolor{refcol}
\gdef\@linkcolor{eqcol}
\def\ie{{\it i.e.}}
\def\eg{{\it e.g.}}
\def\revise#1 {\raisebox{-0em}{\rule{3pt}{1em}
\marginpar{\raisebox{.5em}{\vrule width3pt\
\vrule width0pt height 0pt depth0.5em
\hbox to 0cm{\hspace{0cm}
\parbox[t]{4em}{\raggedright\footnotesize{#1}}}\hss}}}}
\newcommand\fnxt[1] {\raisebox{.12em}{\rule{.35em}{.35em}}\mbox{\hspace{0.6em}}#1}
\newcommand\nxt[1] {\\\fnxt#1}
\def\cala {{\cal A}}
\def\calA {{\mathfrak A}}
\def\calAbar {{\underline \calA}}
\def\calb {{\cal B}}
\def\calc {{\cal C}}
\def\cald {{\cal D}}
\def\cale {{\cal E}}
\def\calf {{\cal F}}
\def\calg {{\cal G}}
\def\calG {{\mathfrak G}}
\def\calh {{\cal H}}
\def\cali {{\cal I}}
\def\calj {{\cal J}}
\def\calk {{\cal K}}
\def\call {{\cal L}}
\def\calm {{\cal M}}
\def\caln {{\cal N}}
\def\calo {{\cal O}}
\def\calp {{\cal P}}
\def\calq {{\cal Q}}
\def\calr {{\cal R}}
\def\cals {{\cal S}}
\def\calt {{\cal T}}
\def\calu {{\cal U}}
\def\calv {{\cal V}}
\def\calw {{\cal W}}
\def\complex {{\mathbb C}}
\def\naturals {{\mathbb N}}
\def\projective {{\mathbb P}}
\def\rationals {{\mathbb Q}}
\def\reals {{\mathbb R}}
\def\zet {{\mathbb Z}}
\def\del {\partial}
\def\delbar {\bar\partial}
\def\ee {{\it e}}
\def\ii {{\it i}}
\def\chain {{\circ}}
\def\tr {{\rm Tr}}
\def\Re {{\rm Re\hskip0.1em}}
\def\id {{\rm id}}
\def\const {{\it const.\,}}
\def\de#1#2{{\rm d}^{#1}\!#2\,}
\def\De#1{{\cald}#1\,}
\def\half{{\frac12}}
\newcommand\topa[2]{\genfrac{}{}{0pt}{2}{\scriptstyle #1}{\scriptstyle #2}}
\def\undertilde#1{{\vphantom#1\smash{\underset{\widetilde{\hphantom{\displaystyle#1}}}{#1}}}}
\def\prodprime{\mathop{{\prod}'}}
\def\gsq#1#2
{\scriptstyle #1}\square\limits_{\scriptstyle #2}{\,}}
\def\sqr#1#2{{\vcenter{\vbox{\hrule height.#2pt
\hbox{\vrule width.#2pt height#1pt \kern#1pt
\vrule width.#2pt}\hrule height.#2pt}}}}
\def\square{%
\mathop{\mathchoice{\sqr{12}{15}}{\sqr{9}{12}}{\sqr{6.3}{9}}{\sqr{4.5}{9}}}}
\def\DD{{\bf D}}
\def\MCM{{\rm MCM}}
\def\free{{\rm DG}}
\def\mf{{\rm MF}}
\def\MF{\mathfrak{MF}}
\catcode`\@=12
\begin{document}
\title{Opening Mirror Symmetry on the Quintic}
\pubnum{%
hep-th/0605162}
\date{May 2006}
\author{
Johannes Walcher \\[0.2cm]
\it School of Natural Sciences, Institute for Advanced Study\\
\it Princeton, New Jersey, USA
}
\Abstract{
Aided by mirror symmetry, we determine the number of holomorphic disks
ending on the real Lagrangian in the quintic threefold. The tension of
the domainwall between the two vacua on the brane, which is the
generating function for the open Gromov-Witten invariants, satisfies
a certain extension of the Picard-Fuchs differential equation governing
periods of the mirror quintic. We verify consistency of the
monodromies under analytic continuation of the superpotential
over the entire moduli space. We reproduce the first few instanton
numbers by a localization computation directly in the A-model, and
check Ooguri-Vafa integrality. This is the first exact result
on open string mirror symmetry for a compact Calabi-Yau manifold.
}
\makepapertitle
\body
\version dream
\vskip 1em
\section{Introduction and Summary}
It has long been suspected that the enumerative results about
holomorphic curves obtained by mirror symmetry \cite{cdgp} could be
extended to open Riemann surfaces, provided appropriate boundary
conditions are imposed. In the A-model, and at lowest order in the string
coupling expansion, the counting of holomorphic disks ending on
Lagrangian submanifolds is the central ingredient in the definition of
Floer homology and the Fukaya category \cite{fooo}, which appears
on one side of the homological mirror symmetry conjecture
\cite{icm}. From the physics perspective, the chief interest is
to determine the superpotential on the worldvolume of D-branes
wrapping the Lagrangian, with many applications in studies of $\caln=1$
compactifications of string theory.
Until now, the program of extending mirror symmetry to the open string
sector has been successfully implemented only in a rather limited set of
examples with special, toric, symmetries \cite{av1,akv}. While certain
general structures could be extracted from the results obtained
\cite{mayr,lmw,lmw2}, and of course much is known in lower-dimensional
situations \cite{poza,bhlw}, it has remained unclear whether and how these
ideas could be implemented for more general, in particular compact,
Calabi-Yau threefolds. This is precisely what we do in this paper.
The Calabi-Yau manifold $X$ we will consider is the most popular
quintic in $\complex\projective^4$, and our Lagrangian $L$ will be the most
canonical real locus inside of it. This Calabi-Yau-Lagrangian pair
has been contemplated many times in the literature, starting with
\cite{wcs}. First exact results were obtained in \cite{bdlr},
where D-branes wrapping $L$ were identified with certain
RS boundary states at the Gepner point \cite{resc} (see also
\cite{bhhw} for a complementary derivation of this result). In
\cite{howa,strings}, the continuation of these boundary states
over the moduli space was analyzed using matrix factorizations
\cite{kali1,bhls} in the mirror B-model Landau-Ginzburg description.
In particular, it was explained in \cite{strings} that the
singularity in the D-brane moduli space at the Gepner point
could be interpreted as a degeneration of the Morse-Witten-Floer
complex that computes Floer homology. Living in the A-model, the
Floer differential differs from the classical Morse differential
by corrections from holomorphic disks ending on $L$ \cite{horietal,fooo},
which suggested that one should be able to turn these results
into a computation of the number of holomorphic disks as coefficients
in the appropriate large-volume expansion. We will fulfill this
promise in the present work, although following a slightly different
route.
The central technical discovery is that the spacetime superpotential
on the brane worldvolume, which is the generating function capturing
the open string instanton information \cite{extending,oova,kklm},
satisfies a differential equation which is a simple extension of
the standard Picard-Fuchs differential equation whose solutions
are the periods of the holomorphic three-form on the mirror of the
quintic. The possible origin of such differential equations is
discussed in special circumstances in \cite{lmw,lmw2} (see also
\cite{indians,lema}). But for a general brane configuration, or when
the ambient Calabi-Yau is compact, the existence of this differential
equation is, to the very least, surprising. Perhaps the most novel
aspect of the equation that we introduce in this paper is that large
complex structure is not a singular point of maximal unipotent
monodromy. However, this has excellent reasons for being so,
as we will explain below.
\begin{table}[t]
\begin{tabular}{|l|l|l|}
\hline
$d$ & number of disks $n_d$ & number of spheres \\\hline
1 & 30 & 2875 \\
3 & 1530 & 317206375 \\
5 & 1088250 & 229305888887625 \\
7 & 975996780 & 295091050570845659250 \\
9 & 1073087762700 & 503840510416985243645106250 \\
11 & 1329027103924410 & 1017913203569692432490203659468875 \\
13 & 1781966623841748930 & 229948856813626664832516010477226554$\ldots$ \\
15 & 2528247216911976589500 & 562465682466848327417948393837157975$\ldots$ \\
17 & 3742056692258356444651980 & 146020747145890338745688881159596996$\ldots$ \\
19 & 5723452081398475208950800270 & 397016669854518762338361058844977288$\ldots$ \\
\hline
\end{tabular}
\caption{The number (integral invariants) of holomorphic disks in
$X$ ending on $L$, of degree $d$ (only odd $d$ are shown, for
reasons explained in the text), and, for comparison, the number
of holomorphic spheres in $X$, according to \cite{cdgp}.}
\label{open}
\end{table}
Equipped with the differential equation, it is
straightforward to extract the open string instanton numbers,
and we can check the integrality property conjectured in \cite{oova}.
We do all this in section \ref{supo}, and display, for amusement, the
results in table \ref{open}.
It is then also of interest to study the analytic properties of
the brane superpotential over the entire Calabi-Yau moduli space,
and not just around large volume. Referring to section
\ref{analytic} for details, we would like to point out two salient
features here. Firstly, the domainwall tension is invariant under
monodromy around the conifold singularity in the moduli space. To
appreciate the consistency of this result, one has to remember that
the cycle that shrinks to zero volume at the conifold singularity
in K\"ahler moduli space is a holomorphic cycle which can be
wrapped by a B-brane, and it would be somewhat non-obvious why
an A-brane would feel this singularity.
The second interesting feature is that the domainwall tension is
not invariant under monodromy around the small-volume Gepner point.
This is more surprising because, based on the worldsheet results of
\cite{strings}, one would have naively expected the domainwall tension
to vanish at that point where the two vacua on the brane become
degenerate. Instead, what happens is that the tension of the
domainwall, when analytically continued from large volume, becomes
asymptotically equal to a particular closed string period, which
measures flux superpotentials. In other words, this domainwall
only mediates a transition between different flux sectors, and
this is still consistent with the degeneracy of the open string
vacua. What it tells us, however, is that it could be much more
delicate to understand our results from the worldsheet perspective,
which is purportedly insensitive to the flux. It also indicates
that it might be appropriate to include some of the flux data into
the definition of Floer homology and the Fukaya category.
While this derivation of the superpotential and the instanton series
can perfectly well stand alone, the confidence in the enumerative
results of table \ref{open} of course increases dramatically if at
least some of those numbers can be verified mathematically directly
in the A-model. We will do this in section \ref{localization}.
The mathematical definition of open Gromov-Witten invariants in
general still appears lacking \cite{fooo}, although several special
cases have been treated in the literature. Studies of the local
toric situation include \cite{katzliu,grza,peter,liu}. Recently,
Solomon \cite{jakethesis,jakecolumbia} has performed a rigorous
study of open Gromov-Witten invariants in the situation in which
the Lagrangian providing the boundary conditions arises as the
fixed point set of an anti-holomorphic involution. This covers
the situation of our interest, so we can be confident that the
numbers we are claiming are well-defined.
To go ahead with the direct computation of those open Gromov-Witten
invariants, one can exploit the fact that, at least in our situation,
any holomorphic mapping from the disk into $X$ with boundary on $L$
factors through a holomorphic sphere in $X$ meeting $L$ in a circle.
In other words, we can relate the enumeration of holomorphic disks
to the enumeration of holomorphic spheres which are invariant under
the anti-holomorphic involution. For this problem, we have at our
disposal the powerful graph combinatorial method introduced
in \cite{kontsevich}. This technique computes the Euler characteristic
of a particular bundle on the moduli space $\calm(\complex\projective^4)$ of
holomorphic curves in $\complex\projective^4$ by using Atiyah-Bott localization
with respect to the action of the torus $(S^1)^5\subset U(5)$ inside
the symmetry group of $\complex\projective^4$. The anti-holomorphic involution then
acts in a natural way on this moduli space and the bundle over it,
and one can identify the open Gromov-Witten invariant as the Euler
characteristic of the resulting real bundle over the real locus
in $\calm(\complex\projective^4)$ \cite{jake}.
There are then two key points to appreciate in order to proceed. The
first one is that while the anti-holomorphic involution breaks
some of the symmetries of the ambient space, it still leaves an
$(S^1)^2\subset O(5)$ unbroken. In particular, the fixed points on
the real slice with respect to this torus coincide with the real
fixed points of the torus in the complex case. The second point is
that the Euler class of a real bundle is the squareroot of the
Euler class of its complexification, where the sign is determined
by the choice of orientation. With these two ingredients, it is
straightforward to adapt the methods of \cite{kontsevich} to
develop a graphical calculus which computes the open Gromov-Witten
invariants of our interest. We have checked that up to degree 7,
these numbers coincide with those obtained using mirror symmetry.
The number (30) of holomorphic disks of degree 1 was first computed
(without using localization) by Solomon \cite{jake,jakecolumbia}.
We have also checked the number (1530) of holomorphic disks in
degree $3$ by taking a real slice of the localization computation
of \cite{es} on the space of curves (instead of the space of maps).
Besides the many possible applications and extensions of these
results that spring to mind, we would like to mention that the
numbers we get in this paper can also be viewed as providing
lower bounds in real enumerative geometry in the sense of,
\eg, \cite{sottile,welschinger}.
\section{The problem and its solution}
\label{supo}
We consider in $\complex\projective^4$ the Calabi-Yau hypersurface given as the vanishing
locus of a polynomial of degree $5$ in the homogeneous coordinates of
$\complex\projective^4$:
\begin{equation}
X = \{ P(z_1,\ldots,z_5) = 0 \} \subset \complex\projective^4
\end{equation}
The choice of $P$ determines the complex structure of $X$, and to define
a $\sigma$-model with target space $X$, we need to pick a choice of
complexified K\"ahler form $B+\ii J = t\omega$, where we denote by
$\omega$ the integral generator of $H^2(X,\zet)=\zet$, and $t$ is
the K\"ahler parameter.
\subsection{On the real quintic}
We want to identify in $X$ a particular Lagrangian submanifold as the
fixed-point locus of an anti-holomorphic involution which
acts on the ambient $\complex\projective^4$ as complex conjugation on the homogeneous
coordinates
\begin{equation}
\eqlabel{conjugate}
[z_1:z_2:\cdots:z_5] \mapsto [\bar z_1:\bar z_2:\cdots:\bar z_5]
\end{equation}
The complex structure on $X$ will be (anti-)invariant under this involution
if the defining polynomial $P$ is {\it real}, in the sense that all its
coefficients are real (up to a common phase). The fixed point locus, $L$,
where $z_i=x_i$ is real is then given by the corresponding real equation
$P(x_1,\ldots,x_5)=0$ inside of $\reals\projective^4\subset \complex\projective^4$. Straightforwardly,
$L$ is a Lagrangian submanifold of $X$. In fact, $L$ is even special
Lagrangian with respect to the holomorphic three-form on $X$.
Now while the topology of $X$ is well-known and independent of the complex
structure, the real locus $L$ can have various topologies and singularities,
with interesting transitions between them as $P$ is varied. We will not
attempt to discuss all the possibilities here, but wish to comment on the
consequences. To fix ideas, let us consider the Fermat quintic
\begin{equation}
\eqlabel{fermat}
P = z_1^5+z_2^5+z_3^5+z_4^5+z_5^5
\end{equation}
Over the reals, $z_i=x_i$, we can solve for $x_5$ uniquely in terms of
$x_1,\ldots,x_4$, not all of which can be zero, since otherwise $x_5$ would be zero
too. This identifies $L$ with a copy of $\reals\projective^3$. However, this
identification depends on the fact that $z_5^5=a$ for real $a$ has
only one real root, which will not be useful for a generic $P$.
There are at least two things that can happen to the real locus as
we vary the complex structure. The first one is familiar from studies
of stability conditions on Lagrangian submanifolds, and happens along
a real codimension one locus in complex structure moduli space. When
crossing such a wall of marginal stability, the special Lagrangian $L$
develops a singularity and reconnects on the other side, changing its
topological type (but not its homology class). The second effect is
a remnant of the standard conifold singularity in the complex structure
moduli space. (It might seem that since the discriminant locus is
complex codimension one, it would generically be missed by the
half-dimensional real subspace. But this is untrue.) It was shown
in \cite{hhprw} using a local model that when crossing such a conifold
singularity, the homology class of the real locus always changes by
the homology class of the vanishing cycle.
The second phenomenon is known to happen on the quintic \cite{unpublished},
for example when crossing the standard conifold locus $\psi=1$ along the
one parameter family $P\to P-5\psi z_1z_2z_3z_4z_5$. Since the Lagrangian
is connected at $\psi=0$, it implies that we must also be crossing a line
of marginal stability somewhere between $\psi=0$ and $\psi=\infty$.
In this paper, we are studying $L$ in the A-model, and those aspects
should be independent of the complex structure of $X$, and only depend
on the Hamiltonian deformation class of $L$. Namely, we would
expect the answer to depend only on $L$ being Lagrangian, and not on the special
Lagrangian property. On the other hand, the available definitions
of Floer homology for Lagrangians clearly depend on the underlying
topology. (For instance, they depend on $b_1(L)$.) It is therefore not
a priori clear why there should be a well-defined and invariant notion
of Floer homology or of the ``number of disks'' ending on ``the real locus
$L$'' which is independent of the complex structure of $X$. One might
worry slightly less about this in regard to the first phenomenon (marginal
stability) because at least the homology class is preserved. In this paper,
in any case, we will ignore this complication, and just pretend that
$L\cong \reals\projective^3$. The number of disks we will quote can then be understood
as referring to ``the generic quintic in a neighborhood of the Fermat
point''.
For the rest of the paper, we will be concerned with the dependence
on the K\"ahler parameter, $t$, or its exponentiated version
$q=\ee^{2\pi \ii t}$. We begin in the large volume limit $q\to 0$.
\subsection{Vacuum structure at large volume}
\label{vacua}
Recall that to wrap an A-brane on $L$, we also need to specify a
$U(1)$ bundle with a flat connection. Since $H_1(L;\zet)=\pi_1(L)=
\zet_2$ we have two possible choices which are distinguished by
a ``discrete Wilson line'', $W=\epsilon=\pm1$. In fact, these two
choices correspond to topologically distinct bundles on $\reals\projective^3$, as
measured by the first Chern class $c_1\in H^2(L;\zet)$. The latter is
equal to $H_1(L;\zet)$ by Poincar\'e duality. On the other hand, the
K-theory of the quintic does not contain any torsion elements, and
the two choices of flat connection can therefore not be distinguished
by any topological charge \cite{bdlr}.
As a consequence, when wrapping a D6-brane of type IIA string theory
on $L$, the brane worldvolume will support an $\caln=1$ gauge theory
with two vacua corresponding to the two possible discrete Wilson
lines, which are not distinguished by any conserved charge. We can then
ask about the existence of a BPS domainwall that communicates between
these two vacua.
To represent this domainwall in string theory, it is helpful to
understand why the two bundles are topologically equivalent
after inclusion in the quintic. Let us consider the situation with
``non-trivial'' Wilson line $\epsilon=-$ (we will see in a moment that this
isn't really an invariant notion). The non-trivial first Chern class
of the bundle on $L$ can be viewed as resulting from dissolving into
the D6-brane a D4-brane wrapping the non-trivial one-cycle in
$H_1(L;\zet)$. But since the quintic does not contain any non-trivial
one-cycles, we can also contract it away to nothing.
Clearly, then, the BPS domainwall that mediates between the two choices
of Wilson line on the D6-brane wrapping on $L$ is a D4-brane wrapping
a holomorphic disk $D$ in $X$ with boundary on the non-trivial one-cycle
in $L$ and extended along a (2+1)-dimensional subspace of Minkowski
space. This D4-brane is a magnetic source on the D6-brane and hence
changes the (discrete) magnetic flux on $L$. The topological
classification of $D$ is as a non-trivial relative cohomology class
in $H_2(X,L;\zet)$ with non-trivial image in $H_1(L;\zet)$.
It is not difficult to get a first approximation to the tension, $\calt$, of
this domainwall in the large volume limit (here and throughout the paper,
we will refer to the tension as the holomorphic quantity whose absolute
value gives the physical tension). Since $L$ is defined as the fixed
point locus of an anti-holomorphic involution of $X$, any holomorphic
disk ending on $L$ can be complex conjugated to a second holomorphic
disk, and thereby completed to a holomorphic sphere. From the exact
sequence
\begin{equation}
\eqlabel{exact}
H_2(X;\zet) \to H_2(X,L;\zet) \to H_1(L;\zet)
\end{equation}
we see that in fact also a brane wrapped on twice the generator
of $H_2(X,L;\zet)$ will not change the vacuum on the brane, and hence
be equivalent to a holomorphic sphere. The tension of that sphere being
$t$ (the K\"ahler parameter), we infer $2\calt\sim t$.
To see that this argument was in fact quite incomplete, we need another
fact about the relation between the cohomology of $X$ and that of $L$.
Namely, when intersecting a hyperplane in $\complex\projective^4$ with the Lagrangian $L$
(the hyperplane has to be represented by a complex linear equation in
order to intersect $L$ transversely), we can see that the intersection
locus is a non-trivial one-cycle in $L$. The Poincar\'e dual statement
is that the integral generator of $H^2(X;\zet)$ restricts on $L$ to the
non-trivial element of $H^2(L;\zet)$. Since the gauge invariant gauge
field on the brane is $B-F$, this means that changing the flat $B$-field
on $X$ by one unit is equivalent to exchanging the two flat gauge
fields on the brane.
A more elementary way to see this is to note that the path-integral
contribution of a disk worldsheet wrapped on $D$ has a contribution
$\ee^{2\pi\ii t/2}=q^{1/2}$ from its area and a contribution $\epsilon=\pm 1$
from its boundary, so changing $B\to B+1$ is equivalent to changing
$\epsilon\to -\epsilon$. Taking $B\to B+2$ does nothing on the brane. In this sense,
we can specify the Wilson line on the brane only after fixing the
sign of $q^{1/2}$.
Now claiming that $\calt\sim t/2$ raises a puzzle because it is not
invariant under $t\to t+2$. To resolve this, we have to note that
the D4-brane wrapped on $D$ is a magnetic source not only for the
gauge field on $L$, but also for the Ramond-Ramond 3-form field
(we actually used this above to derive $2\calt\sim t$). The change of
$\calt$ under $t\to t+2$ is then explained by the non-invariance of
RR flux under $B$-field monodromies.
So to make the formula for $\calt$ more precise, and work out the spectrum
of domainwalls, we have to include the RR flux quantum numbers in our
labeling of the vacua. For the time being, 4-form flux, $N_4$, and
6-form flux, $N_6$, (around the unique four and 6-cycle of $X$) will
suffice, so our vacua are labeled as $(N_4,N_6,\epsilon)$.
We then require that a domainwall represented by a D4 wrapping an
elementary disk $D$ connects $\epsilon$ to $-\epsilon$, and that by juxtaposing
two such disks we obtain a sphere across which the only change is
$N_4\to N_4+1$, that the B-field monodromy $B\to B+1$ changes $N_6
\to N_6+N_4$, and also $\epsilon\to -\epsilon$, but is otherwise a symmetry
of the spectrum. We also wish to keep 4- and 6-form flux integrally
quantized to avoid concluding with fractional D0-branes.
It then turns out that, up to parity, there is only one consistent
solution to these constraints. The change in 4-form flux across a
D4-brane wrapped on $D$ is zero when $\epsilon=-$ on the left of the
domainwall and it is equal to $+1$ when $\epsilon=+$ on the left, and,
we have to let the $B$-field monodromy {\it change the 4-form flux},
in a way depending on $\epsilon$:
\begin{equation}
\eqlabel{spectrum}
B\to B+1:\qquad
\begin{array}{rcl}
(N_4,N_6,-)&\to& (N_4,N_6+N_4,+) \\
(N_4,N_6,+)&\to& (N_4+1,N_6+N_4,-)
\end{array}\qquad\quad
\end{equation}
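Applying \eqref{spectrum} twice, $B\to B+2$ maps $(N_4,N_6,-)\to(N_4+1,N_6+2N_4,-)$:
the Wilson line indeed returns to itself, while the flux quantum numbers shift, in
line with the non-invariance of the RR flux under $B$-field monodromies noted above.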
Let us denote the tension of a domainwall between vacuum $(N_4,N_6,\epsilon)$
on the left and vacuum $(N_4',N_6',\epsilon')$ on the right by
$\calt_{(N_4,N_6,\epsilon)|(N_4',N_6',\epsilon')}$. The above constraints are enough
to determine all $\calt$'s as a function of $t$.
For example, let us consider the most basic $\calt_- \equiv
\calt_{(0,0,-)|(0,0,+)}$ and $\calt_+\equiv \calt_{(0,0,+)|(1,0,-)}$. Since $\calt_-(t+1)
= \calt_+(t)$ and $\calt_++\calt_-=t$, we conclude
\begin{equation}
\eqlabel{classten}
\calt_- = \frac t2-\frac14 \qquad\qquad
\calt_+ = \frac t2+\frac14
\end{equation}
Finally, we can write down the spacetime superpotential, which follows
from \eqref{classten} together with
\begin{equation}
\calt_{(N_4,N_6,\epsilon)|(N_4',N_6',\epsilon')}(t) =
\calw_{N_4',N_6',\epsilon'}(t) - \calw_{N_4,N_6,\epsilon}(t)
\end{equation}
We find
\begin{equation}
\eqlabel{classsupo}
\calw_{N_4,N_6,+}(t) = \frac {t^2}4 + N_4 t + N_6
\qquad\quad
\calw_{N_4,N_6,-}(t) = \frac{t^2}4 - \frac t2+\frac 14 + N_4 t + N_6
\end{equation}
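As a quick check, \eqref{classsupo} reproduces \eqref{classten}:
$\calw_{0,0,+}-\calw_{0,0,-}=\frac t2-\frac 14=\calt_-$ and
$\calw_{1,0,-}-\calw_{0,0,+}=\frac t2+\frac 14=\calt_+$.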
Of course, in this section, the discussion has been entirely classical
and restricted to the large volume limit $t\to\ii\infty$. We now proceed
to study the corrections $\calw^{\rm quant.}$ from worldsheet instantons.
\subsection{Worldsheet instanton corrections}
According to general philosophy \cite{kklm,oova,extending,av1,bcov}, the
spacetime superpotential on the worldvolume of a particular supersymmetric
brane wrapping a cycle in a Calabi-Yau manifold, $X$, when expressed in the
A-model, and expanded in the appropriate variables, becomes the generating
function counting worldsheet instanton corrections from holomorphic disks
ending on the Lagrangian, $L$. Such a statement is in line with the role that
holomorphic disks play in the definition of Fukaya's $A_\infty$
category \cite{fooo}, and the relationship between $A_\infty$ algebras
and D-brane superpotentials \cite{calin,tomasiello}.
More precisely, the spacetime superpotential can be identified with the
topological disk partition function and is conjectured to admit an
expansion of the general form
\begin{equation}
\eqlabel{expansion}
\calw(t,u) = F_{\rm disk} (t,u)
= \sum_{d,e} {\tilde n}_{d,e} q^d y^e = \sum_{d,e}\sum_{k\ge 1}
\frac{n_{d,e}}{k^2} q^{kd} y^{ke}
\end{equation}
Here, the sum is over relative cohomology classes in $H_2(X,L)$, $q=\ee^{2\pi\ii t}$
is the (collection of) closed string K\"ahler parameters of $X$ and
$y=\ee^{2\pi\ii u}$ is the (collection of) exponentiated classical open string
deformation parameters. The latter come from non-Hamiltonian deformations
of the Lagrangian. They are $b_1(L)$ in number and are complexified by the
Wilson line of the gauge field around the corresponding one-cycles
of $L$. The final transformation in \eqref{expansion} is a resummation
of multi-cover contributions and the central part of the conjecture
is that the resulting expansion coefficients $n_{d,e}$ are integers
\cite{oova} (whereas the $\tilde n_{d,e}$ are in general rational
numbers). These integers have a spacetime interpretation as counting
the ``degeneracy of BPS domainwalls'' in the class $(d,e)$.
The existence and integrality of such an expansion has been checked in
many examples involving local toric Calabi-Yau manifolds. Our goal in
this paper is to make sense of and evaluate the formula \eqref{expansion}
for the Calabi-Yau-Lagrangian pair $(X,L)= \text{(quintic, real locus)}$.
At first sight, the fact that we only have a discrete open string
modulus at our disposal is a deficiency because \eqref{expansion}
makes explicit only rational cohomology. On second thought, however,
it's a blessing.
For example, as we have discussed above, domainwalls arising from
D4-branes wrapping holomorphic disks are sources for both the
Ramond-Ramond field and the gauge field on the brane. But if the
disk ends in a rational cycle of $L$, the gauge flux is non-zero
as a differential form. This raises a puzzle because according to
the standard worldsheet analysis, gauge fields on Lagrangian A-branes
should be flat. From the spacetime perspective, this might well be
repaired by a careful analysis of the couplings of the brane to the
Ramond-Ramond fields. But it is clearly not obvious to see that from
the TFT on the worldsheet. In the cases discussed in the literature
(see \cite{av1,akv} and follow-up work), this problem is avoided
because the Lagrangians considered there are non-compact and hence
the flux can disperse to infinity.
A second advantage of having $H_1(L,\zet)=\zet_2$ being torsion
has to do with certain puzzlements \cite{jake} about the multi-cover
formula as well as the integral ``framing'' ambiguity of open string
amplitudes discovered in \cite{akv}. We do not understand either of
those issues sufficiently well enough to usefully discuss here, but
the consistency of our results indicates that both problems are
absent for $H_1(L)=\zet_2$.
Finally, because our Lagrangian is compact, we can also discuss
the classical contributions to the superpotential, as we have
done in the previous subsection. The structure of these classical
terms (which are absent from \eqref{expansion}) will help us to
normalize the computation by imposing consistency of the monodromies
around the various singular loci in the K\"ahler moduli space (see
section \ref{analytic}).
So what is the possible structure of worldsheet instanton corrections
to our formulas \eqref{classten} for the domainwall tensions?
Clearly, the first non-trivial term will arise from worldsheet disks
wrapped in the class $D$ generating $H_2(X,L;\zet)=\zet$, and will
contribute at order $q^{1/2}$. Then there will be higher order terms.
Let us call disks contributing at order $q^{d/2}$ ``of degree $d$''.
It is easy to see that the conditions $\calt_-(t+1)=\calt_+$, $\calt_++\calt_-=t$
that we have used to derive \eqref{classten} hold also after inclusion
of non-perturbative worldsheet corrections. This is because
$t$ is essentially {\it defined} to be the parameter measuring the
tension of the domainwall wrapped on a degree $1$ rational curve.
The only form of the instanton expansion that is consistent with those
constraints is that there are no contributions from even degree disks.
This is in fact not unexpected, because disks of even degree have
trivial boundary on the Lagrangian, and even though we can contemplate
holomorphic disks of even degree ending on $L$, the triviality of their
boundary makes it difficult to keep them there as we vary the complex
structure of the quintic. In other words, we do not expect any invariant
to exist for even degree.
So we expect a result of the form
\begin{equation}
\eqlabel{expected}
\calt_\pm = \frac t2 \pm \frac 14 \pm {\it const.} \sum_{d\; {\rm odd}} \tilde n_d q^{d/2}
\end{equation}
where the $\tilde n_d$ are certain rational numbers such that rewriting
them as in \eqref{expansion},
\begin{equation}
\tilde n_d = \sum_{k|d} \frac{n_{d/k}}{k^2}
\end{equation}
the $n_d$ turn out integer.
\subsection{Mirror Symmetry and open Picard-Fuchs equation}
The easiest way to get an expansion of the form \eqref{expected} is to
make use of mirror symmetry. What this means concretely is that we
should first identify an object in the D-brane category which appears on
the B-model side of the homological mirror symmetry conjecture, and
which, via the equivalence of categories and up to auto-equivalences,
corresponds to the object of the (derived) Fukaya category that is
defined by $L$. We should then compute the appropriate
superpotential/domainwall tension quantity as a function of the mirror
parameter $\psi$ and reexpress it in terms of the flat coordinate $t$.
The Calabi-Yau mirror, $Y$, to the quintic is of course well-known. It
is the resolution of a $(\zet_5)^3$ quotient of the one-parameter family
of quintics $\sum z_i^5-5\psi\prod z_i=0$ in $\complex\projective^4$. Equivalently,
we can consider a Landau-Ginzburg orbifold model with superpotential
$W=\sum z_i^5-5\psi \prod z_i$ and orbifold group $(\zet_5)^4$. The
corresponding B-model category which is conjectured \cite{stability}
to be equivalent to the derived category of $Y$ is the category
of $(\zet_5)^4$ equivariant matrix factorizations of the superpotential
$W$. (The corresponding equivalence was proven for the quintic itself
by Orlov \cite{orlov}.)
And in fact, as we have mentioned in the introduction, the matrix
factorization which is mirror to the Lagrangian $L$ is known explicitly
(see \cite{howa,strings} for details). Given this identification of
the matrix factorization and the equivalence with the derived category,
it should be possible in principle to also describe explicitly a coherent
sheaf on $Y$ corresponding to $L$. This would in fact be very interesting,
because it would allow making use of some of the well-known machinery of
holomorphic vector bundles that applies to problems of this type.
In particular, there is an explicit formula for the superpotential,
namely, the holomorphic Chern-Simons functional \cite{wcs}
\begin{equation}
\eqlabel{hcs}
\calw^B = S_{\rm hCS}(A,A_0) = \int\Omega\wedge \tr \bigl[ A\wedge
\delbar_{A_0} A + \frac 23 A\wedge A\wedge A\bigr]
\end{equation}
No such expression is known in the matrix factorization formulation,
and although there are formulas for TFT correlators \cite{kali2,hela},
they do not appear sufficient to determine the full superpotential.
(See, however \cite{hln} for recent progress in making the
$A_\infty$ constraints of \cite{hll} useful for this type of question.)
Leaving these explicit B-models for future investigations, we will instead
obtain sufficient guidance from the non-compact examples of open mirror
symmetry introduced in \cite{av1}, and studied in depth in
\cite{akv,mayr,indians,lema,lmw,lmw2}.
The main simplification that occurs in these examples is that the B-model
contains only D5-branes wrapped on curves in the Calabi-Yau. For such
a brane configuration, the holomorphic Chern-Simons action \eqref{hcs}
reduces to a ``partial period'' integral of the type
\begin{equation}
\eqlabel{mina}
\calw(C,C_*) = \int_\gamma \Omega
\end{equation}
where $\gamma$ is a three-chain in $X$ with boundary $\del\gamma=C-C_*$
equal to the difference of two possible positions of the D5-branes.
(If $C$ and $C_*$ are holomorphic, \eqref{mina} is literally the tension
of the domainwall between the two vacua.) In the toric case, one can
then further reduce the integral \eqref{mina} to take place on a
Riemann surface, so one has essentially a one-dimensional problem.
This structure was exploited in \cite{lmw,lmw2} to show that the
differential equations obtained in \cite{mayr,lema} could be viewed
as resulting from a certain variation of mixed Hodge structure on
a certain relative cohomology. Explicitly, one retains the boundary
terms arising in the derivation of the GKZ differential system and
converts them into appropriate boundary variations. The upshot is
that the open string mirror computations in the local toric case can
be cast in a form very similar to the standard, closed string computations,
involving Picard-Fuchs differential equations, maximal unipotent
monodromy, mirror map, {\it etc.}. This is called $\caln=1$ special geometry.
We do not know at present whether such considerations make sense for the
general B-model situation. The case at hand, however, is sufficiently well
constrained by our results so far that assuming the existence of a
differential equation with properties as in \cite{lmw,lmw2}, there
is essentially a unique candidate. This moreover turns out to produce
excellent results.
The central idea of $\caln=1$ special geometry is to extend the standard
period vector by certain ``partial periods'' encoding information about
the open string sector. We recall that in standard ($\caln=2$) special
geometry, we have two periods for every closed string modulus, plus one
or two extra ones related to the holomorphic three-form. In $\caln=1$
special geometry, we gain one ``partial period'' for every classical
open string modulus, plus one for every brane vacuum included in the
background. Schematically,
\begin{equation}
\eqlabel{period}
\Pi(t_{\rm closed},u_{\rm open}) = (1, t_{\rm closed}, \del_t
\calf_{\rm closed}, u_{\rm open}, \calw_{\rm brane}, \ldots )^T
\end{equation}
where $\calf_{\rm closed}$ is the standard prepotential and the
$u_{\rm open}$ are the flat coordinates of the open string sector. The
important point is that the period vector \eqref{period} satisfies a
certain extension of the Picard-Fuchs differential equations. This
differential system has all of the closed periods as solutions, plus
extra ones related to $u_{\rm open}$ and $\calw_{\rm brane}$. The
latter gives the open string instanton expansion according to
\eqref{expansion}.
In the case that we have discussed in the previous subsections, we are
not adding any classical open string modulus because $b_1(\reals\projective^3)=0$,
so the only modulus is the K\"ahler parameter $t$ of $X$, or equivalently,
the mirror variable, $z=z(t)$. Moreover, according to \eqref{expected},
we need exactly one non-trivial domainwall tension as function of $t$
to encode the desired open string expansion. Let us call
$\tau \sim q^{1/2} + \cdots$ the quantum part of the expansion \eqref{expected}.
Since to leading order $z = q=\ee^{2\pi\ii t}$, we will also have $\tau(z)
\sim z^{1/2}+\cdots$ when expressed as a function of $z$.
Thus, we are simply seeking an ordinary linear differential equation in
$z$, which, in addition to the four known periods of the mirror quintic,
has exactly one additional linearly independent solution, $\tau$, with a
squareroot behavior at $z=0$. The Picard-Fuchs equation governing periods
of the mirror quintic being
\begin{equation}
\eqlabel{picard}
\call\varpi = \bigl[\theta^4 - 5 z (5\theta+1)(5\theta+2)(5\theta+3)(5\theta+4)
\bigr]\varpi = 0
\end{equation}
where $\theta = z\del_z$, and $z=(5\psi)^{-5}$, virtually the only possible
extension that satisfies our constraints is the differential operator
\begin{equation}
\eqlabel{opf}
(2\theta-1)\call = (2\theta-1)\theta^4 -
5 z(2\theta+1)(5\theta+1)(5\theta+2)(5\theta+3)(5\theta+4)
\end{equation}
We will now analyze this differential equation and show that it satisfies
all the other desirable properties as well.
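Note that, since the first-order factor $(2\theta-1)$ annihilates $z^{1/2}$, the
additional solution of \eqref{opf}, normalized below as $\tau=z^{1/2}+\cdots$,
satisfies the inhomogeneous Picard-Fuchs equation
\begin{equation}
\call\,\tau = \frac1{16}\, z^{1/2}
\end{equation}
while the closed string periods are annihilated by $\call$ itself.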
\subsection{The instanton sum}
We follow conventions of \cite{cdgp}. The differential equation
$\call\varpi=0$ has one distinguished solution, called the fundamental
period, which has a power series expansion around the large complex
structure point $z=0$,
\begin{equation}
\eqlabel{fundamental}
-w^2(z) \equiv \varpi_0(z) = \sum_{m=0}^\infty \frac{(5m)!}{(m!)^5} z^m
\end{equation}
All other solutions contain logarithms as $z\to 0$, large complex
structure being a point of maximal unipotent monodromy. The period
with a single logarithm, $w^1(z)$, has the information about
the mirror map via $t=w^1/w^2$, $q\equiv \ee^{2\pi\ii t}$.
\begin{equation}
\eqlabel{mirrormap}
- 2\pi\ii w^1(z) = \varpi_0(z) \log z + 5 \sum_{m=1}^\infty
\frac{(5m)!}{(m!)^5} z^m\bigl[\Psi(1+5m)-\Psi(1+m)\bigr]
\end{equation}
Under large complex structure monodromy, $z\to \ee^{2\pi\ii} z$,
$w^1\to w^1+w^2$ and $t\to t+1$.
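For orientation, the first few terms are $\varpi_0(z)=1+120\,z+113400\,z^2+\cdots$ and
$q=z+770\,z^2+\cdots$, so that $z(q)=q-770\,q^2+\cdots$.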
There are then two further solutions of \eqref{picard}, both of which
contain the closed string instanton information, in slightly different
forms. Specifically, the solution of \eqref{picard} called $\calf_1$ in
\cite{cdgp} is characterized by the boundary conditions
\begin{equation}
(2\pi\ii)^2 \calf_1 = -5\cdot (2\pi\ii) w^1(z) \log z + \frac 52 w^2(z)(\log z)^2
-\frac{21}2\cdot (2\pi\ii)^2 w^1(z) + \calo(z)
\end{equation}
It transforms under large complex structure monodromy as
$\calf_1\to \calf_1 - 5 w^1-8w^2$. Finally, the solution called
$\calf_2$ in \cite{cdgp} is characterized by $\calf_2\to
\calf_2 -\calf_1 - 3 w^1 + 5w^2$ as $t\to t+1$.
These periods $(\calf_1,\calf_2,w^1,w^2)$ can be interpreted as the quantum
corrected masses of D4, D6, D2 and D0-brane on the quintic, respectively
\cite{bdlr}. They therefore also give the tension of domainwalls mediating
between various flux sectors, including the corrections from worldsheet
instantons. For example, in the proper K\"ahler normalization $w^2=1$, one
obtains after inverting \eqref{mirrormap} and expanding in $q=\ee^{2\pi\ii t}$,
\begin{equation}
\eqlabel{unconv}
\frac{\calf_1}{w^2} = - \frac 52 t^2 -\frac{21}2 t +\frac{1}{4\pi^2}\Bigl[
2875 q + \frac{4876875}4 q^2 +\cdots \Bigr]
\end{equation}
The polynomial in $t$ is the classical tension from the geometric volume of
the cycles and the power series in $q$ gives the quantum corrections. The
rational coefficient $\tilde N_d$ of $q^d$ in this expansion gives the
contribution from holomorphic spheres of degree $d$. They satisfy the
property that when reexpressed in terms of $N_d$ via
\begin{equation}
\tilde N_d = \sum_{k|d} \frac{d N_{d/k}}{k^3} \,,
\end{equation}
the $N_d$ are integers. Note that we have here slightly unconventionally
expanded the first derivative of the prepotential instead of the prepotential
itself or the Yukawa coupling as in \cite{cdgp}. Since periods and
brane superpotentials are on equal footing in $\caln=1$ special geometry,
this will make the comparison with the open string version \eqref{opinst}
more natural.
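For instance, the degree two coefficient in \eqref{unconv} gives
$\tilde N_2=\frac{4876875}4=2N_2+\frac{2875}4$, \ie, the familiar $N_2=609250$.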
Turning now to the equation \eqref{opf}, it has, by construction, exactly
one additional solution, which we normalize to $\tau(z) = z^{1/2}+\cdots$.
We find,
\begin{equation}
\eqlabel{tau}
\tau(z) = \frac{\Gamma(3/2)^5}{\Gamma(7/2)}\;
\sum_{m=0}^\infty \frac{\Gamma(5m+7/2)}{\Gamma(m+3/2)^5}\; z^{m+1/2}
\end{equation}
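Indeed, inserting the ansatz $\tau=\sum_{m\ge0}c_m z^{m+1/2}$ into \eqref{opf} leaves
$c_0$ unconstrained, while for $m\ge1$
\begin{equation}
c_m=\frac{5\,(5m-\frac32)(5m-\frac12)(5m+\frac12)(5m+\frac32)}{(m+\frac12)^4}\,c_{m-1}\,,
\end{equation}
which is solved by \eqref{tau} with $c_0=1$.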
In the next section, we will determine from monodromy calculations on
the K\"ahler moduli space that $\tau$ enters the domainwall tension
in the normalization
\begin{equation}
\eqlabel{quantumten}
\calt_\pm (t) = \frac{w^1}{2} \pm \frac {w^2}{4} \pm \frac{15}{\pi^2} \tau(z)
\end{equation}
This then has exactly the expected form \eqref{expected}. Consulting
\eqref{classten} and its relation with \eqref{classsupo}, we then conclude
that the contribution of worldsheet disk instantons to the spacetime
superpotential is
\begin{equation}
\calw^{\rm quant.} = \frac{30}{4\pi^2} \tau(z)
\end{equation}
Dividing by $w^2$ to go to the canonical normalization of the holomorphic
three-form, multiplying by $4\pi^2$ as in \eqref{unconv}, inverting the mirror
map, and doing the expansion, we obtain the open string instanton sum
\begin{equation}
\eqlabel{opinst}
\hat\tau(q) = 30\frac{\tau(z(q))}{\varpi_0(z(q))} =
30 q^{1/2} + \frac{4600}3 q^{3/2} + \frac{5441256}5 q^{5/2} +\cdots
\end{equation}
We can then plug in to $\hat\tau(q)$ the Ooguri-Vafa multi-cover formula
\eqref{expansion}
\begin{equation}
\hat\tau(q) =
\sum_{\topa{d\;{\rm odd}}{k\;{\rm odd}}} \frac{n_d}{k^2} q^{d k/2}
=
\sum_{d\;{\rm odd}} n_d \frac{q^{d/2}}{4} \Phi(q^d,2,1/2)
\end{equation}
where $\Phi$ is the Lerch Transcendent. For reasons explained in a previous
subsection, we only consider disks of odd degree and their odd multi-covers.
The first few $n_d$ are indeed integer and displayed in table \ref{open}
in the introduction.
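As an independent cross-check of the expansion \eqref{opinst} and of the multi-cover
subtraction, the first few $n_d$ can be reproduced by a short computer algebra
computation (a minimal sketch, assuming the {\tt sympy} Python library; it is not part
of the derivation):
\begin{verbatim}
import sympy as sp

z, q = sp.symbols('z q')
N = 4  # truncation order
fact = sp.factorial

# fundamental period and the series entering the single-logarithm period
w0 = sum(fact(5*m)/fact(m)**5 * z**m for m in range(N))
S = 5*sum(fact(5*m)/fact(m)**5 * (sp.harmonic(5*m) - sp.harmonic(m)) * z**m
          for m in range(1, N))

# mirror map q = z*exp(S/w0); invert order by order by fixed-point iteration
zq = q
for _ in range(N):
    zq = sp.expand(sp.series(q*sp.exp(-(S/w0).subs(z, zq)), q, 0, N).removeO())

# tau(z) = sqrt(z)*sum_m c_m z^m, with c_m from the recursion noted above
c = [sp.Integer(1)]
for m in range(1, N):
    c.append(c[-1]*5*(5*m - sp.Rational(3, 2))*(5*m - sp.Rational(1, 2))
             *(5*m + sp.Rational(1, 2))*(5*m + sp.Rational(3, 2))
             /(m + sp.Rational(1, 2))**4)
T = sum(c[m]*z**m for m in range(N))

# hat tau(q) = 30*tau/varpi_0 in the flat coordinate; sqrt(q) factored out by hand
G = sp.expand(sp.series(sp.sqrt(sp.expand(zq/q))*(T/w0).subs(z, zq),
                        q, 0, N).removeO())
ntilde = {2*m + 1: 30*G.coeff(q, m) for m in range(3)}   # d = 1, 3, 5

# Ooguri-Vafa multi-cover subtraction (only odd k contribute for odd d)
n = {}
for d in sorted(ntilde):
    covers = sum(n[d//k]/k**2 for k in range(3, d + 1, 2) if d % k == 0)
    n[d] = ntilde[d] - covers

print(ntilde)  # expected: {1: 30, 3: 4600/3, 5: 5441256/5}
print(n)       # expected: {1: 30, 3: 1530, 5: 1088250}
\end{verbatim}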
It should be stressed that we have strictly speaking not shown that
the constant normalization factor in \eqref{expected} is equal to
$\frac{1}{2\pi^2}$ as claimed. It is, however, the most natural choice
and consistent with everything else we know. It would be interesting
to derive this value more directly.
\section{Analytic continuation of the superpotential}
\label{analytic}
The purpose of this section is to analytically continue our result for
the superpotential/domainwall tension over the entire quantum K\"ahler
moduli space of the quintic, much as was done for the closed string
periods in \cite{cdgp}. This will not only help us to fix the normalization
factor anticipated in \eqref{quantumten}, but is interesting in its own
right as it can shed light on intrinsically stringy aspects of D-brane
physics that have hitherto been inaccessible. We will indeed find that
the analytic properties of the $\calt_\pm$ are rather interesting.
Recall that the K\"ahler moduli space of the quintic has three special
points: large volume point $z\to 0$ that we have already discussed in
depth, the conifold singularity $z=5^{-5}$ at which the period $\calf_2$
vanishes, and the so-called Gepner or Landau-Ginzburg point, $z\to\infty$,
which is not a singularity of the CFT, but exhibits a $\zet_5$ orbifold
monodromy. We wish to understand the analytic behavior of $\calw$, or
equivalently $\calt$, around each of these points. We shall work with the
ansatz
\begin{equation}
\eqlabel{ansatz}
\calt_\pm(z) = \frac{w^1(z)}2\pm\frac{w^2(z)}4\pm a \tau(z)
\end{equation}
and determine the coefficient $a$ from consistency requirements.
The standard tool to do the analytic continuation of solutions of a
hypergeometric differential equation of the type \eqref{opf} is the
Barnes integral representation. For $\tau$, this representation takes
the form
\begin{equation}
\tau(z) = \frac{\pi^2}{60} \frac{1}{2\pi\ii}
\int_C \frac{\Gamma(-s+1/2)\Gamma(5s+1)\Gamma(s+1/2)}{\Gamma(s+1)^5}
\ee^{\ii\pi(s-1/2)} z^s
\end{equation}
where the integration contour is straight up the imaginary axis. For
$|z|<5^{-5}$, we close the contour on the positive real axis and recover
\eqref{tau}. For $|z|>5^{-5}$, we instead close the contour on the negative
real axis, and obtain the expansion
\begin{multline}
\eqlabel{small}
\tau(z) =\tau_1(z) + \tau_2(z) =
\frac{\pi^2}{60}
\Biggl[\sum_{m=0}^\infty \frac{-\Gamma(-5m-3/2)}{\Gamma(-m+1/2)^5} z^{-m-1/2}
\\ + \sum_{m=1}^\infty
\frac{-\Gamma(m/5) \ee^{4\pi\ii m/5}}{5\Gamma(m)\Gamma(1-m/5)^4} z^{-m/5} \;
\ee^{-\ii\pi/2} \frac{\sin \pi m/5}{\cos \pi m/5}
\Biggr]
\end{multline}
The first term, $\tau_1(z)$, is simply the unique solution of \eqref{opf} with
a squareroot behavior around $z=\infty$, and changes sign as we circle around
$z^{1/5}\to\ee^{-2\pi\ii/5} z^{1/5}$. The second sum in \eqref{small} is easily
verified to be a solution of the ordinary Picard-Fuchs equation, and hence a
closed string period. To determine which one, we can compare it with the canonical
$\zet_5$ symmetric basis of solutions of \eqref{picard} around the Gepner point
\cite{cdgp}, ($j=0,\ldots,4$)
\begin{equation}
\varpi_j(z) = \sum_{m=1}^\infty
\frac{-\Gamma(m/5)\ee^{4\pi\ii m/5}}{5\Gamma(m)\Gamma(1-m/5)^4} z^{-m/5}
\; \ee^{2\pi\ii j m/5}
\end{equation}
Indeed, the identity
\begin{equation}
\frac{\sin\pi m/5}{\cos\pi m/5}
= 2 \sin 2\pi m/5-2\sin 4\pi m/5
\end{equation}
shows that
\begin{equation}
\tau_2(z) = \frac{\pi^2}{60}\bigl[\varpi_0+2\varpi_4+2\varpi_2\bigr]
\end{equation}
According to the results of \cite{cdgp}, the small volume period vector
$\varpi = (\varpi_2,\varpi_1,\varpi_0,\varpi_4)^T$ is related to the large
volume basis $\mathord{\mathchar "0271}=(\calf_1,\calf_2,w^1,w^2)^T$ via $\mathord{\mathchar "0271} = M\varpi$ with
\begin{equation}
M=\begin{pmatrix}
\frac{3}{5}& \frac{1}{5} & -\frac{21}{5}& -\frac{8}{5}\\
0& -1& 1& 0 \\
-\frac{1}{5}& -\frac{2}{5}& \frac{2}{5} & \frac{1}{5} \\
0& 0& -1& 0
\end{pmatrix}
\end{equation}
This allows us to express $\tau_2(z)$ in the integral basis,
\begin{equation}
\eqlabel{integral}
\tau_2(z) = \frac{\pi^2}{60}\bigl[-4 \calf_1+8\calf_2-11w^1+15 w^2\bigr]
\end{equation}
Moreover, by using the known monodromy matrices around the Gepner point,
we find that as $z^{-1/5} \to \ee^{2\pi \ii /5} z^{-1/5}$,
\begin{equation}
w^1\to -\calf_2+w^1-w^2\,, \qquad
w^2\to \calf_2+w^2\,,\qquad
\tau\to -\tau +\frac{\pi^2}{60} \calf_2
\end{equation}
Thus we see that were it not for the quantum corrections of the domainwall
tension in \eqref{ansatz}, the Gepner monodromy would take $\frac{w^1}2+\frac{w^2}4$
to $\frac{w^1}2-\frac{w^2}4-\frac{\calf_2}4$, and would not induce a symmetry
of the domainwall spectrum as it should. Moreover, we see that the
lucky number that makes the Gepner monodromy integral is indeed
$a=\frac{15}{\pi^2}$. (Strictly speaking, this is only the minimal
possibility, a natural choice.) With this value, the Gepner monodromy acts as
\begin{equation}
\eqlabel{gepner}
A: \qquad \calt_+ \to \calt_-\,,\qquad\qquad
\calt_- \to \calt_+ - w^2 -\calf_2
\end{equation}
on the open string periods. Since as discussed in section \ref{supo},
the large volume monodromy acts by $T_\infty: \calt_-\to\calt_+$,
$\calt_+\to\calt_-+w^2$, we find by combining the two that the conifold
monodromy about $z=5^{-5}$, $T=T_\infty^{-1}\circ A^{-1}$ acts trivially
on both $\calt_+$ and $\calt_-$.
Let us verify this last assertion explicitly, in order to check that
everything is consistent. A straightforward way to compute this monodromy
is to compare the divergence of the large volume expansions
\eqref{fundamental} and \eqref{tau} as $z$ approaches the singularity
$z\to z_*= 5^{-5}$. We know from \cite{cdgp} that at the conifold,
$\calf_2$ vanishes as $\calf_2\sim\alpha_1(z-z_*)+\alpha_2(z-z_*)^2+
\cdots$ and $\varpi_0$ behaves as $\varpi_0\sim \frac{1}{2\pi\ii} \calf_2
\log(z-z_*) + {\it regular}$. To determine the coefficient $b$
in $\tau\sim \frac{b}{2\pi\ii}\calf_2 \log(z-z_*) +{\it regular}$, we
compare the second derivatives of $\varpi_0$ and $\tau$ as $z\to z_*$.
Using Stirling's formula, we find
\begin{equation}
\varpi_0'' \sim \sum_m (5^5z)^m \Bigl[\frac{5^{10}\sqrt{5}}{4\pi^2}
-\frac{7 \cdot 5^9 \sqrt{5}}{4\pi^2} \frac{1}{m} +\cdots \Bigr]
\end{equation}
which determines $\alpha_1$, $\alpha_2$. Doing the same for $\tau$
delivers
\begin{equation}
\tau'' \sim \sum_m (5^5z)^{m+1/2}\Bigl[\frac{5^{10}\sqrt{5}}{4\pi^2}
-\frac{7\cdot 5^9\sqrt{5}}{4\pi^2} \frac 1m +\cdots\Bigr]
\end{equation}
This implies $b=1$.
Thus, we find that the conifold monodromy takes $\tau\to\tau+
\frac{\pi^2}{60} \calf_2$, and since $w^2\to w^2-\calf_2$,
$\calt_\pm$ are invariant when we set $a=\frac{15}{\pi^2}$.
It is also worth pointing out that for $a=\frac{15}{\pi^2}$, the
leading behavior of $\calt_\pm$ as $z\to \infty$ is the same as
that of an integral closed string period. This follows from \eqref{small}
in conjunction with \eqref{integral}. As was mentioned in the
introduction, this is a further consistency check on our results.
It was shown in \cite{howa,strings} that the two open string vacua
associated with the choice of discrete Wilson line (see subsection
\ref{vacua}) could be identified with certain matrix factorizations
in the Landau-Ginzburg B-model. At the Gepner point, $z\to\infty$,
the open string spectrum on the brane develops an extra massless
state with a cubic superpotential. (This coalescence of open string
vacua was first proposed in \cite{bdlr}.) There should therefore
be a domainwall between the two vacua that becomes tensionless as
$z\to\infty$. Our result is then that while such a domainwall can
indeed exist, it is not the most naive one obtained by wrapping a
D4-brane on the primitive disk, but has to be combined with the
appropriate integral period from \eqref{integral}.
To conclude this section, we summarize the results for the action of
the monodromies around Gepner point, conifold point, and large volume
point on the extended period vector (we now use $\calt_-=-\calt_++w^1$)
\begin{equation}
\mathord{\mathchar "0271 \kern-4.5pt \mathchar"0271} = \bigl( \calt_+, \calf_1, \calf_2, w^1, w^2 \bigr)^T
\end{equation}
We have:
\begin{equation}
\eqlabel{mondromies}
\begin{array}{ccc}
A & T & T_\infty\\\hline
\begin{pmatrix}
-1& 0& 0& 1& 0\\
0& 1& 3& 5& 3\\
0& 1& -4& 8& -5\\
0& 0& -1& 1& -1\\
0& 0& 1& 0& 1
\end{pmatrix}
&
\begin{pmatrix}
1& 0& 0& 0& 0\\
0& 1& 0& 0& 0\\
0& 0& 1& 0& 0\\
0& 0& 0& 1& 0\\
0& 0& -1& 0& 1
\end{pmatrix}
&
\begin{pmatrix}
-1& 0& 0& 1& 1\\
0& 1& 0& -5& -8\\
0& -1& 1& -3& 5\\
0& 0& 0& 1& 1\\
0& 0& 0& 0& 1
\end{pmatrix}
\end{array}
\end{equation}
These matrices satisfy $A\cdot T\cdot T_\infty = 1$ and $A^{10}=1$,
but $A^5\neq 1$. Thus we find that the combined open-closed moduli
space is a double cover of the quantum K\"ahler moduli space of the
quintic, branched at $z=0$ and $z=\infty$.
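
These statements are easy to check explicitly by computer algebra. A
minimal Python/sympy sketch, with the matrices copied from
\eqref{mondromies}, is the following (it is meant purely as a numerical
cross-check, not as part of the derivation):
\begin{verbatim}
from sympy import Matrix, eye

# monodromies of eq. (mondromies), acting on (T_+, F_1, F_2, w^1, w^2)
A = Matrix([[-1, 0,  0, 1,  0],
            [ 0, 1,  3, 5,  3],
            [ 0, 1, -4, 8, -5],
            [ 0, 0, -1, 1, -1],
            [ 0, 0,  1, 0,  1]])
T = Matrix([[1, 0,  0, 0, 0],
            [0, 1,  0, 0, 0],
            [0, 0,  1, 0, 0],
            [0, 0,  0, 1, 0],
            [0, 0, -1, 0, 1]])
Tinf = Matrix([[-1,  0, 0,  1,  1],
               [ 0,  1, 0, -5, -8],
               [ 0, -1, 1, -3,  5],
               [ 0,  0, 0,  1,  1],
               [ 0,  0, 0,  0,  1]])

print(A * T * Tinf == eye(5))            # True
print(A**10 == eye(5), A**5 == eye(5))   # True False
\end{verbatim}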
\section{Localization in the A-model}
\label{localization}
In this section we shall show how to check the enumerative predictions
that we have obtained using mirror symmetry. We have outlined the main
strategy in the introduction, so we will attempt to be brief. Details
can be filled in from \cite{kontsevich} and \cite{horietal}, Chapter 27.
Consider the moduli space $\calm_d\equiv\overline{\calm}_{0,0}(\complex\projective^4,d)$
of genus zero stable maps to $\complex\projective^4$ in degree $d$. For each point
$f:\Sigma\to\complex\projective^4$ in $\calm_d$, we can pull back from $\complex\projective^4$ the bundle
$\calo(5)$ of quintic polynomials. The global sections of that bundle,
$\calo(5d)$ over $\Sigma$, then fit together into a vector bundle $\cale_d$
as we vary $f$ over $\calm_d$. Any particular quintic polynomial
$P(z_1,\ldots ,z_5)$ in the homogeneous coordinates of $\complex\projective^4$ gives
a section of $\calo(5)$. The resulting section of $\cale_d$ vanishes
at precisely those genus zero maps into $\complex\projective^4$ which happen to be
contained in the quintic given by $P$. This identifies the number of
genus zero, degree $d$ maps to the quintic as the Euler class of $\cale_d$:
\begin{equation}
\eqlabel{euler}
\tilde N_d =\int_{\calm_d} c_{5d+1}(\cale_d)
\end{equation}
It was shown in \cite{kontsevich} that this Euler class can be very
efficiently computed using Atiyah-Bott localization. The entire
structure described above carries an $(S^1)^5$ action inherited
from the standard $U(5)$ action on $\complex\projective^4$. On the homogeneous
coordinates, this torus acts as
\begin{equation}
\eqlabel{action}
{\mathbb T}^5 = (S^1)^5 \ni (\rho_1,\ldots,\rho_5) :[z_1:\cdots :z_5]
\mapsto [\rho_1z_1:\cdots:\rho_5 z_5]
\end{equation}
(This action can be complexified, of course, but we really only need the
real torus.) On $\complex\projective^4$, there are exactly five fixed points, $p_i$, of
this torus action, defined by $z_j=0$, $j\neq i$. The fixed point loci on
$\calm_d$ can be associated combinatorially with certain decorated tree
graphs, $\Gamma$. The vertices of these graphs (which can have arbitrary
valence, ${\rm val}(v)$) correspond to (genus $0$) contracted components
of the source $\Sigma$. They are labeled by one of the fixed points $p_i$
which tells where the component maps. The edges of the graph correspond
to non-contracted rational components of $\Sigma$ mapping onto the
coordinate line joining $p_i$ to $p_j$. They are labeled by a positive
integer $d$ describing the degree of that map. The constraints on this
decoration are that $p_v\neq p_{v'}$ for adjacent vertices $v$, $v'$
and that the sum of degrees on the edges be equal to the total degree
under consideration.
In general, the fixed loci are not isolated points, but consist of
certain moduli spaces $\overline{\calm}_\Gamma$ arising from the contracted
components at the vertices (of valence $\ge 3$). One can then compute
the (${\mathbb T}^5$-equivariant) Euler class of the normal bundle of
$\overline{\calm}_\Gamma$ inside of $\calm_d$, as well as the Euler
class of $\cale_d$ at the fixed points. The integrals over the
$\overline{\calm}_\Gamma$ can be done, and what results is a very
explicit formula for $\tilde N_d$ given by a sum over graphs and
labellings, divided by the appropriate symmetry factor.
We wish to accomplish something similar for the holomorphic maps of disks
to the quintic with boundary on the real locus.
As we have indicated before, any disk with boundary on the real locus
can be completed to a sphere, and the two halves of that sphere
contribute in the same relative homology class. Conversely, any
sphere of {\it odd} degree is cut in two by the real locus in a
non-trivial one-cycle.\footnote{This is not true for even degrees: There
can be real spheres of even degree without real points. In the real
problem, they give rise to maps from the crosscap to the quintic. In other
words, they will play a role in orientifolds. I am grateful to
Jake Solomon for extensive discussions on these issues.} Therefore,
the number of disks of odd degree $d$ is equal to twice the number of
spheres of degree $d$ which are invariant under complex conjugation of
source and target. On the real locus $\calm_d^\reals\subset\calm_d$,
complex conjugation defines a real structure on the bundle of quintics
$\cale_d$ (and, of course, on the tangent bundle). Since we are interested
in maps into a real quintic, we can identify the open Gromov-Witten
invariant as \cite{jake}
\begin{equation}
\eqlabel{jake}
\tilde n_d = 2 \int_{\calm_d^\reals} {\bf e}(\cale_d^\reals)
\end{equation}
In trying to apply localization to this problem, one is naively troubled
by the fact that the torus action \eqref{action} does not commute with the
standard complex conjugation \eqref{conjugate}. However, it is easy to
realize that there is another real subtorus of $U(5)$ which does. This
torus is two-dimensional and is the Cartan torus of $O(5)\subset U(5)$.
It is the natural four-dimensional analogue of the $S^1$ action used in
\cite{katzliu}. An equivalent way to describe this is to choose the
alternative complex conjugation
\begin{equation}
\eqlabel{sigma}
\sigma:\quad[z_1:z_2:z_3:z_4:z_5] \mapsto
[\bar z_2:\bar z_1:\bar z_4:\bar z_3:\bar z_5]
\end{equation}
which commutes with the subtorus ${\mathbb T}^2$ of \eqref{action} defined by
$\rho_2=\rho_1^{-1}$, $\rho_4=\rho_3^{-1}$, $\rho_5=1$. The nifty thing
about this torus is that its fixed points on $\complex\projective^4$ are identical to
those of \eqref{action}. Moreover, it is not hard to see that the
fixed points of ${\mathbb T}^2$ acting on $\calm_d^\reals$ are simply those fixed
points of ${\mathbb T}^5$ acting on $\calm_d $ which are invariant under $\sigma$.
From this discussion, we see that our task is to take a real section of
Kontsevich's calculation \cite{kontsevich} with respect to the complex
conjugation $\sigma$. A moment's thought shows why this is feasible: Any
$\sigma$-invariant decorated graph of odd total degree contains the real
locus of $\Sigma$ at the middle of an edge. In other words, the contracted
components of $\Sigma$ are away from the real locus. The upshot is that
the integrals over the fixed loci are identical to those before.
To understand the Euler class of the normal bundle and of the bundle of
real quintics, we are helped by the following elementary fact: If $V$ is
any real vector bundle, then the square of its Euler class is the Euler
class of its complexification,
\begin{equation}
\eqlabel{root}
{\bf e}(V) = \sqrt{{\bf e}(V\otimes\complex)}
\end{equation}
For bundles of high enough rank, this formula of course only makes sense
for the universal bundle, or in equivariant cohomology. The sign of the
square root in \eqref{root} is determined by the choice of orientation on
$V$ (which does not affect the canonical orientation of $V\otimes\complex$).
In our situation, $\cale_d^\reals\otimes\complex=\cale_d|_{\calm_d^\reals}$
and since we already know ${\bf e}(\cale_d)$, we are done.
Our graphical calculus is then very much as in \cite{kontsevich}. A
moduli space of ${\mathbb T}^2$-invariant disks corresponds to a tree graph
$\Gamma$ with vertices mapping to fixed points $p_{\mu(v)}$
(with $\mu(v)\in\{1,\ldots, 5\}$) and edges mapping to coordinate lines
joining $p_i$ to $p_j$. There is one special vertex, call it the first
one, on which ends an extra half-edge with odd degree, call it $d_0$. This
restriction is to ensure that the total degree
\begin{equation}
d = d_0 + 2 \sum_{\rm edges} d(e)
\end{equation}
can be odd. Another condition is that the special vertex cannot
map to $p_5$. This arises from the fact that when we reconstruct
a $\sigma$-invariant sphere by reflecting our graph on the half-edge,
the first vertex will be adjacent to its image, and $\sigma(p_5)=p_5$.
In taking a square root of the formulas in \cite{kontsevich}, we have to
fix the signs. In principle, this could be done by a careful analysis
such as advertised in \cite{jakethesis,jakecolumbia}. In practice, the
condition that the answer be independent of the torus weights is enough
to determine the sign. Explicitly, we have
\begin{multline}
\eqlabel{formula}
\int_{\overline{\calm}_\Gamma}\frac{{\bf e}(\cale_d^\reals)}{{\bf e}(N_\Gamma^\reals)}=
\prod_{\rm edges}
\frac{\displaystyle\prod_{a=0}^{5d} \frac{a\lambda_i + (5d-a)\lambda_j}{d}}
{\displaystyle (-1)^d\frac{(d!)^2}{d^{2d}} (\lambda_i-\lambda_j)^{2d}
\prod_{\topa{k\neq i,j}{a=0}}^d\Bigl(\frac{a}d\lambda_i +
\frac{d-a}d\lambda_j-\lambda_k\Bigr)} \\\cdot
\prod_{\rm vertices} \displaystyle\frac{1}{(5\lambda_v)^{{\rm val}(v)-1}}
\prod_{j\neq v}(\lambda_v-\lambda_j)^{{\rm val}(v)-1}
\cdot\biggl(\prod_{\rm flags}\frac{d}{\lambda_v-\lambda_j}\biggr)
\biggl(\sum_{\rm flags} \frac{d}{\lambda_v-\lambda_j}\biggr)^{{\rm val}(v)-3}
\\[.2cm] \cdot
\frac{\displaystyle\prod_{a=0}^{(5d_0-1)/2}
\frac{a\lambda_{\mu(1)}+(5d_0-a)\lambda_{\sigma(\mu(1))}}{d_0}}
{\displaystyle (-1)^{(d_0-1)/2}\frac{d_0!}{d_0^{d_0}}
(\lambda_{\mu(1)}-\lambda_{\sigma(\mu(1))})^{d_0}
\prod_{\topa{k\neq \mu(1),\sigma(\mu(1))}{a=0}}^{(d_0-1)/2}
\Bigl(\frac{a}{d_0}\lambda_{\mu(1)} + \frac{d_0-a}{d_0}\lambda_{\sigma(\mu(1))}
-\lambda_k\Bigr)}
\end{multline}
Here, it is understood that the torus weights satisfy $\lambda_2=
-\lambda_1$, $\lambda_4=-\lambda_3$, $\lambda_5=0$. Note that setting
$\lambda_5$ to zero introduces zero weight components in the above formula,
which however always exactly cancel between numerator and denominator.
In formula \eqref{formula}, it is also understood that in counting the
valence of the vertex called $1$, the half-edge counts as a full edge.
The final formula is
\begin{equation}
\eqlabel{final}
\tilde n_d = 2 \sum_{\Gamma,\; {\rm labellings}}
\frac{1}{|\mathop{\rm Aut}\Gamma|}\; \int_{\overline{\calm}_\Gamma}
\frac{{\bf e}(\cale_d^\reals)}{{\bf e}(N_\Gamma^\reals)}
\end{equation}
As in \cite{kontsevich}, $|\mathop{\rm Aut}\Gamma|$ is the product of the order of the
automorphism group of $\Gamma$ as a decorated graph times the product of
the degrees on the edges (including $d_0$). For the first few degrees,
one reproduces the results from eq.\ \eqref{opinst} in section \ref{supo}.
\begin{acknowledgments}
My interest in this problem was revived when Jake Solomon told
me that the number of degree 1 holomorphic disks was $30$. I would
like to thank him for several helpful discussions and for sharing parts
of his thesis. I am indebted to Katrin Wehrheim for patiently
explaining what could (and could not) be learned from FO$^3$. I would
also like to thank Simeon Hellerman, Calin Lazaroiu, Wolfgang Lerche,
Andy Neitzke, Rahul Pandharipande, and Edward Witten for valuable
discussions and Dan Freed and Frank Sottile for helpful correspondence.
This work was supported in part by the DOE under grant number
DE-FG02-90ER40542.
\end{acknowledgments}
\section{Introduction}
\label{sec_intro}
In these lectures we wish to provide an introduction to the
phase structure of QCD. The phase of QCD that we live in is
characterized by the permanent confinement of quarks, and the
existence of a large mass gap. There are several reasons for
trying to understand whether other phases of QCD exist, and
what conditions are required in order to observe these phases:
1) Other phases of QCD exist in the universe: The universe
started out in a hot and dense phase. It expanded and
cooled and about $10^{-5}$ sec after the big bang it
passed through a transition from a quark gluon plasma
to a hadronic phase. Even today, extreme conditions
exist in the universe. In supernova explosions matter is
heated to several tens of MeV, sufficient to evaporate
ordinary nuclei. The compact remnants have central densities
several times larger than the saturation density of
nuclear matter.
2) Exploring the entire phase diagram helps us to understand
the phase that we live in: The structure of hadrons and their
interactions are determined by the symmetries of the QCD vacuum.
Studying the phase diagram of QCD allows us to understand
the possible ways in which the symmetries of QCD can be
realized.
3) QCD simplifies in extreme environments: At scales
relevant to hadrons QCD is strongly coupled and we have
to rely on numerical simulations in order to test predictions
of QCD. In the case of large temperature or large baryon
density there is a large external scale in the problem.
Asymptotic freedom implies that the bulk of the system is
governed by weak coupling. As a result, we can study
QCD matter in a regime where quarks and gluons are indeed
the correct degrees of freedom.
In these lectures we will give a general introduction into
the physics of the QCD phase diagram. There are several excellent
textbooks and review articles that provide a much more detailed
discussion of QCD and hadronic matter at finite temperature and density
\cite{Shuryak:1988,Kogut:2004su,Rischke:2003mt}. We also recommend
more specialized texts on field theory at finite temperature
\cite{Kapusta:1989,LeBellac:1996,Kraemmer:2003gd} and density
\cite{Fetter:1971,Abrikosov:1963}, as well as reviews on the
phase structure of dense matter
\cite{Rajagopal:2000wf,Alford:2001dt,Schafer:2003vz}
and on color superconductivity
\cite{Buballa:2003qv,Ren:2004nn,Huang:2004ik,Shovkovy:2004me}.
In this write-up we will not try to give a summary of the
experimental program at RHIC and the SPS. A useful reference
is the series of white papers that was recently published by the
RHIC collaborations \cite{rhic:2005}. We will also not review
implications of the phase structure of QCD for the structure
of compact stars or observational constraints on the behavior
of dense matter \cite{Alford:2001dt,Nardulli:2002ma,Reddy:2002ri}.
\section{QCD and symmetries}
\label{sec_qcd}
\subsection{Introduction}
\label{sec_qcd_intro}
We begin with a brief review of QCD and the symmetries
of QCD. The elementary degrees of freedom are quark fields
$\psi^a_{\alpha,f}$ and gluons $A_\mu^a$. Here, $a$ is a
color index that transforms in the fundamental representation
for fermions and in the adjoint representation for gluons.
Also, $f$ labels the quark flavors $u,d,s,c,b,t$. In practice,
we will focus on the three light flavors up, down and strange.
The QCD lagrangian is
\begin{equation}
\label{l_qcd}
{\mathcal L } = \sum_f^{N_f} \bar{\psi}_f ( iD\hspace*{-0.23cm}/\, - m_f) \psi_f
- \frac{1}{4} G_{\mu\nu}^a G_{\mu\nu}^a,
\end{equation}
where the field strength tensor is defined by
\begin{equation}
G_{\mu\nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a
+ gf^{abc} A_\mu^b A_\nu^c,
\end{equation}
and the covariant derivative acting on quark fields is
\begin{equation}
iD\hspace*{-0.23cm}/\, \psi = \gamma^\mu \left(
i\partial_\mu + g A_\mu^a \frac{\lambda^a}{2}\right) \psi.
\end{equation}
QCD has a number of interesting properties. Most remarkably,
even though QCD accounts for the rich phenomenology of hadronic
and nuclear physics, it is an essentially parameter free
theory. As a first approximation, the masses of the light
quarks $u,d,s$ are too small to be important, while the
heavy quarks $c,b,t$ are too heavy to play a dynamical role. If we set
the masses of the light quarks to zero and take the masses
of the heavy quarks to be infinite then the only parameter
in the QCD lagrangian is the coupling constant, $g$. Once
quantum corrections are taken into account $g$ becomes a
function of the scale at which it is measured. Gross, Wilczek
and Politzer showed that \cite{Gross:1973id,Politzer:1973fx}
\begin{equation}
\label{as_fr}
g^2(q^2) = \frac{16\pi^2}{b\log(q^2/\Lambda^2_{QCD})},
\hspace{0.3cm}
b =\frac{11N_c}{3}-\frac{2N_f}{3}.
\end{equation}
If the scale $q^2$ is large then the coupling is small, but
in the infrared the coupling becomes large. This is the famous
phenomenon of asymptotic freedom. Since the coupling
depends on the scale the dimensionless parameter $g$ is
traded for a dimensionful scale parameter $\Lambda_{QCD}$.
In essence, $\Lambda_{QCD}$ is the scale at which the
theory becomes non-perturbative.
Since $\Lambda_{QCD}$ is the only dimensionful quantity
in QCD ($m_q=0$) it is not really a parameter of QCD, but
reflects our choice of units. In standard units, $\Lambda_{QCD}
\simeq 200\,{\rm MeV} \simeq 1\,{\rm fm}^{-1}$. Note that
hadrons indeed have sizes $r\sim\Lambda_{QCD}^{-1}$.
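
To get a feeling for the numbers it is a useful exercise to evaluate
equ.~(\ref{as_fr}) at a few scales. A minimal Python sketch is given
below; the scales $q=2$ GeV and $q=m_Z$ are chosen purely for
illustration, and we use the rough value $\Lambda_{QCD}=200$ MeV
quoted above with no matching at the quark thresholds, so the numbers
should not be read as precision determinations of the coupling.
\begin{verbatim}
from math import log, pi

Lam = 0.2    # GeV, rough value of Lambda_QCD

def alpha_s(q, nf, nc=3):
    # leading order running coupling alpha_s = g^2/(4 pi), cf. eq. (as_fr)
    b = 11*nc/3 - 2*nf/3
    return 4*pi / (b * log(q**2 / Lam**2))

print(alpha_s(2.0, nf=3))    # ~0.30 at q = 2 GeV
print(alpha_s(91.2, nf=5))   # ~0.13 at q = m_Z
\end{verbatim}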
Another important feature of the QCD lagrangian is its
symmetry structure. First of all, the lagrangian is invariant
under local gauge transformations $U(x)\in SU(3)_c$
\begin{equation}
\psi(x) \to U(x)\psi(x),\hspace{1cm}
A_\mu(x) \to U(x)A_\mu U^\dagger (x)
+ iU(x)\partial_\mu U^\dagger(x),
\end{equation}
where $A_\mu= A_\mu^a(\lambda^a/2)$. While the gauge symmetry
is intimately connected with the dynamics of QCD we observe that
the interactions are completely independent of flavor. If the
masses of the quarks are equal, $m_u=m_d=m_s$, then the theory
is invariant under arbitrary flavor rotations of the quark fields
\begin{equation}
\psi_f\to V_{fg}\psi_g,
\end{equation}
where
$V\in SU(3)$. This is the well known flavor (isospin)
symmetry of the strong interactions. If the quark masses
are not just equal, but equal to zero, then the flavor
symmetry is enlarged. This can be seen by defining left
and right-handed fields
\begin{equation}
\psi_{L,R} = \frac{1}{2} (1\pm \gamma_5) \psi .
\end{equation}
In terms of $L/R$ fields the fermionic lagrangian is
\begin{equation}
{\mathcal L} = \bar{\psi}_L (iD\hspace*{-0.23cm}/\,) \psi_L
+\bar{\psi}_R (iD\hspace*{-0.23cm}/\,) \psi_R +
\bar{\psi}_L M \psi_R + \bar{\psi}_R M\psi_L ,
\end{equation}
where $M = {\rm diag}(m_u,m_d,m_s)$. We observe that if
quarks are massless, $m_u=m_d=m_s=0$, then there is no
coupling between left and right handed fields. As a
consequence, the lagrangian is invariant under independent
flavor transformations of the left and right handed fields.
\begin{equation}
\psi_{L,f}\to L_{fg}\psi_{L,g}, \hspace{1cm}
\psi_{R,f}\to R_{fg}\psi_{R,g},
\end{equation}
where $(L,R)\in SU(3)_L\times SU(3)_R$. In the real world,
of course, the masses of the up, down and strange quarks
are not zero. Nevertheless, since $m_u,m_d\ll m_s < \Lambda_{QCD}$
QCD has an approximate chiral symmetry.
Finally, we observe that the QCD lagrangian has two
$U(1)$ symmetries,
\begin{eqnarray}
U(1)_B: \hspace{1cm}& \psi_L\to e^{i\phi}\psi_L, \hspace{1cm}&
\psi_R\to e^{i\phi}\psi_R , \\
U(1)_A: \hspace{1cm}& \psi_L\to e^{i\alpha}\psi_L,\hspace{1cm} &
\psi_R\to e^{-i\alpha}\psi_R .
\end{eqnarray}
The $U(1)_B$ symmetry is exact even if the quarks are not
massless. The axial $U(1)_A$ symmetry is exact at the classical
level but it is broken in the quantum theory. This phenomenon is
referred to as an anomaly. The divergence of the $U(1)_A$ current
is given by
\begin{equation}
\partial^\mu j_\mu^5 = \frac{N_f g^2}{16\pi^2}
G^a_{\mu\nu}\tilde{G}^a_{\mu\nu},
\end{equation}
where $\tilde{G}^a_{\mu\nu}=\epsilon_{\mu\nu\alpha\beta}
G^a_{\alpha\beta}/2$ is the dual field strength tensor.
\subsection{Phases of QCD}
\label{sec_qcd_phases}
The phases of QCD are related to the different ways in which
the symmetries of QCD can be realized in nature. We first
consider the local gauge symmetry. There are three possible
realizations of a local symmetry:
1) Coulomb Phase: In a Coulomb phase the gauge symmetry is
unbroken, the gauge bosons are massless and mediate long
range forces. In particular, the potential between two heavy
charges is a Coulomb potential, $V(r)\sim 1/r$.
2) Higgs Phase: In a Higgs phase the gauge symmetry is
spontaneously broken and the gauge bosons acquire a mass.
As a consequence, the potential between two heavy charges
is a Yukawa potential, $V(r)\sim \exp(-mr)/r$. We should
note that local gauge symmetry is related to the fact
that we are using redundant variables (the four-component
vector potential $A_\mu$ describes the two polarization states
of a massless vector boson), and that therefore a local symmetry
cannot really be broken (Elitzur's theorem \cite{Elitzur:1975im}).
We will discuss the exact meaning of ``spontaneous gauge
symmetry breaking'' in Sect.~\ref{sec_lg} below.
3) Confinement: In a confined phase all the physical
excitations are singlets under the gauge group. Confinement
can be strictly defined only in theories that do not have
light fields in the fundamental representation. In that
case, confinement implies that the potential between
two heavy charges rises linearly, $V(r)\sim kr$. This is
called a string potential. If there are light fields in
the fundamental representation, as in QCD with light quarks,
then the string can break and the potential levels off.
It is interesting to note that all three realizations
of gauge symmetry play a role in the standard model. The
$U(1)$ of electromagnetism is in a Coulomb phase, the
$SU(2)$ is realized in a Higgs phase, and the $SU(3)$ of
color is confined. Also, as we shall see in these lectures,
there are phases of QCD in which the color symmetry is
not confined but realized in a Higgs or Coulomb phase.
Different phases of matter, like liquid vs solid, superfluid
vs normal, are related to the realization of global symmetries.
In QCD at zero baryon density spacetime symmetries as well as
$U(1)$ symmetries cannot be broken \cite{Vafa:tf,Vafa:1984xg}.
This means that phases of QCD matter are governed by the
realization of the chiral $SU(3)_L\times SU(3)_R$ symmetry.
If the baryon density is not zero both space-time and $U(1)$
symmetries can break and the phase diagram is much richer.
\subsection{The QCD vacuum}
\label{sec_qcd_vac}
In the QCD ground state at zero temperature and density
chiral symmetry is spontaneously broken by a quark-anti-quark
condensate $\langle \bar\psi\psi\rangle$. We can view the chiral
condensate as a matrix in flavor space. In the QCD vacuum
\begin{equation}
\langle\bar\psi_L^f\psi^g_R\rangle =
\langle\bar\psi_R^f\psi^g_L\rangle \simeq
-\delta^{fg} (230\,{\rm MeV})^3 ,
\end{equation}
which implies that chiral symmetry is spontaneously broken according
to $SU(3)_L\times SU(3)_R\to SU(3)_V$. The $SU(3)_V$ flavor symmetry
is broken explicitly by the difference between the masses of the
up, down and strange quark. Since $m_s\gg m_u,m_d$ the $SU(2)$
isospin symmetry is a much better symmetry than $SU(3)$ flavor
symmetry.
Chiral symmetry breaking has important consequences for the
dynamics of QCD at low energy. Goldstone's theorem implies that
the breaking of $SU(3)_L\times SU(3)_R\to SU(3)_V$ is associated
with the appearance of an octet of (approximately) massless
pseudoscalar Goldstone bosons. Chiral symmetry places important
restrictions on the interaction of the Goldstone bosons. These
constraints are most easily obtained from the low energy effective
chiral lagrangian. The transformation properties of the chiral
field $\Sigma$ follow from the structure of the chiral order
parameter,
\begin{equation}
\Sigma \to L\Sigma R^\dagger, \hspace{1cm}
\Sigma^\dagger \to R\Sigma^\dagger L^\dagger,
\end{equation}
for $(L,R)\in SU(3)_L\times SU(3)_R$. In the vacuum we can take
$\langle\Sigma\rangle = 1$. Goldstone modes are fluctuations of
the order parameter in the coset space $SU(3)_L\times SU(3)_R/
SU(3)_V$. They are parameterized by unitary matrices $\Sigma =
\exp(i\lambda^a\phi^a/f_\pi)$ where $\lambda^a$ are the Gell-Mann
matrices and $f_\pi=93$ MeV is the pion decay constant. At low
energy the effective lagrangian for $\Sigma$ can be organized as
an expansion in the number of derivatives. At leading order in
$(\partial/f_\pi)$ there is only one structure which is consistent
with chiral symmetry, Lorentz invariance and C,P,T. This is the
lagrangian of the non-linear sigma model
\begin{equation}
\label{l_chpt}
{\mathcal L} = \frac{f_\pi^2}{4} {\rm Tr}\left[
\partial_\mu\Sigma\partial^\mu\Sigma^\dagger\right]
+ \ldots.
\end{equation}
In order to show that the parameter $f_\pi$ is related to the
pion decay amplitude we have to gauge the non-linear sigma model.
This is achieved by introducing the gauge covariant derivative
$\nabla_\mu\Sigma = \partial_\mu\Sigma+ig_w W_\mu\Sigma$ where
$W_\mu$ is the charged weak gauge boson and $g_w$ is the weak
coupling constant. The gauged non-linear sigma model gives a
pion-$W$ boson interaction ${\mathcal L}=g_w f_\pi W^\pm_\mu
\partial^\mu \pi^\mp$ which agrees with the standard definition
of $f_\pi$ in terms of the pion-weak axial current matrix
element.
Expanding $\Sigma$ in powers of the pion, kaon and eta fields
$\phi^a$ we can derive low energy predictions for Goldstone boson
scattering. In the pion sector we have
\begin{equation}
{\mathcal L} =\frac{1}{2}(\partial_\mu\phi^a)^2
+\frac{1}{6f_\pi^2}\left[ (\phi^a\partial_\mu \phi^a)^2
-(\phi^a)^2(\partial_\mu\phi^b)^2 \right] +
O\left(\frac{\partial^4}{f_\pi^4}\right),
\end{equation}
which shows that the low energy $\pi\pi$-scattering amplitude
is completely determined by $f_\pi$. Higher order corrections
originate from loops and higher order terms in the effective
lagrangian.
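
The expansion quoted above is straightforward to verify by computer
algebra. The following sympy sketch does this for the two-flavor
(pion) sector, with the Gell-Mann matrices replaced by the Pauli
matrices; a single spacetime dimension is sufficient to check the
index structure, and $\phi_{1,2,3}$ denote the pion fields:
\begin{verbatim}
from sympy import (symbols, Function, I, eye, Matrix, Rational,
                   expand, simplify)

x, f = symbols('x f', positive=True)
phi = [Function('phi%d' % a)(x) for a in range(1, 4)]
tau = [Matrix([[0, 1], [1, 0]]),
       Matrix([[0, -I], [I, 0]]),
       Matrix([[1, 0], [0, -1]])]
Phi = sum((phi[a]*tau[a] for a in range(3)), Matrix.zeros(2, 2))

# Sigma = exp(i Phi/f) and its hermitean conjugate, expanded far enough
# to capture all terms with four pion fields
S  = eye(2) + I*Phi/f - Phi**2/(2*f**2) - I*Phi**3/(6*f**3) + Phi**4/(24*f**4)
Sd = eye(2) - I*Phi/f - Phi**2/(2*f**2) + I*Phi**3/(6*f**3) + Phi**4/(24*f**4)

L = expand(f**2/4 * (S.diff(x) * Sd.diff(x)).trace() * f**6)

dphi  = [p.diff(x) for p in phi]
kin   = Rational(1, 2)*sum(dp**2 for dp in dphi)
quart = Rational(1, 6)*(sum(p*dp for p, dp in zip(phi, dphi))**2
                        - sum(p**2 for p in phi)*sum(dp**2 for dp in dphi))

print(simplify(L.coeff(f, 6) - kin))     # 0
print(simplify(L.coeff(f, 4) - quart))   # 0
\end{verbatim}
The overall factor $f^6$ is pure bookkeeping: after multiplying by it,
the coefficient of $f^6$ is the two-pion term and the coefficient of
$f^4$ is the four-pion term of the effective lagrangian.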
In QCD chiral symmetry is explicitly broken by the
quark mass term $\bar\psi M\psi$, where $M={\rm diag}
(m_u,m_d,m_s)$ is the quark mass matrix. In order to determine
how the quark masses appear in the effective lagrangian it
is useful to promote the mass matrix to a field which
transforms as $M\to LMR^\dagger$ under chiral transformations.
This means that the mass term is chirally invariant and
explicit breaking only appears when $M$ is replaced by its
vacuum value. There is a unique term in the chiral lagrangian
which is $SU(3)_L\times SU(3)_R$ invariant and linear in
$M$. To order $O(\partial^2,M)$ the effective lagrangian is
\begin{equation}
\label{l_chpt_m}
{\mathcal L} = \frac{f_\pi^2}{4} {\rm Tr}\left[
\partial_\mu\Sigma\partial^\mu\Sigma^\dagger\right]
+\left[ B {\rm Tr}(M\Sigma^\dagger) + h.c. \right]
+ \ldots.
\end{equation}
The mass term acts as a potential for the chiral field. We
observe that if the quark masses are real and positive then
the minimum of the potential is at $\langle\Sigma\rangle = 1$,
as expected. If some of the quark masses are negative unusual
phases of QCD can appear, see \cite{Dashen:1970et}.
The vacuum energy is $E_{vac}=-2B{\rm Tr}[M]$. Using
$\langle\bar\psi\psi\rangle = \partial E_{vac}/(\partial
m)$ we find $\langle\bar\psi\psi\rangle=-2B$. Fluctuations
around the vacuum value $\Sigma=1$ determine the Goldstone
boson masses. The pion mass satisfies the Gell-Mann-Oaks-Renner
relation
\begin{equation}
\label{GMOR}
m_\pi^2 f_\pi^2 = (m_u+m_d)\langle\bar\psi\psi\rangle
\end{equation}
and analogous relations exist for the kaon and eta masses.
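
As a rough numerical consistency check one can invert the
Gell-Mann-Oakes-Renner relation. Using $m_\pi\simeq 138$ MeV,
$f_\pi=93$ MeV and the magnitude $(230\,{\rm MeV})^3$ of the
condensate quoted above (sign conventions aside), a two-line Python
estimate gives
\begin{verbatim}
m_pi, f_pi = 0.138, 0.093     # GeV
cond = 0.230**3               # GeV^3, |<psi-bar psi>| per flavor

print(1000 * m_pi**2 * f_pi**2 / cond, 'MeV')   # ~13.5 MeV for m_u + m_d
\end{verbatim}
which is indeed the right ballpark for the sum of the up and down
quark masses at a hadronic scale.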
\subsection{QCD vacuum for different $N_c$ and $N_f$}
\label{sec_bigpic}
\begin{figure}[t]
\begin{center}\includegraphics[width=11.0cm]{big_pic_new.eps}\end{center}
\caption{\label{fig_bigpic}
Ground state of QCD and SUSY QCD as a function of $N_c$ and
$N_f$. The symmetry breaking pattern in SUSY QCD was clarified
in a series of papers by Seiberg and collaborators. The phase
structure of QCD is an educated guess, discussed in more detail
in the review (Sch\"afer and Shuryak, 1998).}
\end{figure}
QCD is a strongly interacting gauge theory with almost massless
quarks. It seems natural that in such a theory bound states
of quarks and anti-quarks are formed, that bound states in
the scalar channel condense, and that chiral symmetry is
broken. But even if chiral symmetry breaking is not surprising,
it is not a priori clear whether the observed pattern of
chiral symmetry breaking and confinement is required on general
grounds, or whether it is a particular dynamical feature of QCD.
Some obvious questions are: Are all asymptotically free gauge
theories confining? Does confinement imply chiral symmetry breaking
(or vice versa)? Is the symmetry breaking pattern $SU(3)_L
\times SU(3)_R\to SU(3)_V$ unique?
An interesting context in which these questions can be studied
is the phase diagram of QCD and supersymmetric generalizations
of QCD as a function of $N_c$ and $N_f$, see Fig.~\ref{fig_bigpic}.
For our purposes supersymmetric QCD is simply a QCD-like theory
with extra fermions in the adjoint representation and extra colored
scalar fields. Including supersymmetric theories is useful because
supersymmetry provides additional constraints that determine the
symmetries of the ground state. The following interesting results
have been obtained:
1) In supersymmetric QCD there is a window $N_c+1<N_f<3N_c$ in
which the theory is asymptotically free but not confining
\cite{Intriligator:1995au}. There are several reasons to believe
that such a window exists in QCD, too. One is the fact that as a
function of the number of flavors the second coefficient of the
beta function changes sign before the first one does \cite{Banks:1981nn}.
In this regime the coupling constant flows to a finite value at large
distance and the theory is scale invariant.
2) Supersymmetric QCD also provides examples for theories that have
confinement but no chiral symmetry breaking. This happens for $N_f=
N_c+1$. This theory contains both massless mesons and massless baryons.
An important constraint is provided by the 't Hooft anomaly matching
conditions \cite{tHooft:1980xb,Peskin:1982}. In QCD these relations
show that confinement without chiral symmetry breaking is a possibility
for $N_f=2$, but is ruled out for $N_f>2$.
3) The 't Hooft consistency conditions also provide constraints
on the symmetry breaking pattern. In QCD these conditions are
not sufficiently strong to fix the ground state completely, but
one can show that $SU(3)_L\times SU(3)_R \to SU(3)_V$ is favored
in the limit $N_c\to\infty$ \cite{Coleman:1980mx}.
4) One can show that in QCD chiral symmetry breaking implies
a non-zero quark condensate \cite{Kogan:1998zc}. In particular, one
can rule out the possibility that $\langle \bar{\psi}\psi\rangle
=0$, but $\langle (\bar{\psi}\psi)^2\rangle\neq 0$.
\section{QCD at finite Temperature}
\label{sec_T}
\subsection{General Arguments}
\label{sec_arg}
In this section we shall discuss the phase structure of QCD
at non-zero temperature. We begin by reviewing some general
arguments in favor of the existence of a critical temperature
$T_c$ above which quarks and gluons are deconfined and chiral
symmetry is restored.
Asymptotic freedom clearly suggests that the high temperature phase
is a weakly interacting plasma \cite{Collins:1974ky,Shuryak:1977ut}.
Consider a non-interacting gas of quarks and gluons at high temperature.
The typical momenta are on the order of the temperature, $p\sim 3T$,
and the density is $n\sim T^3$. Now imagine that we turn on the
coupling. Does this lead to a qualitative change in the system?
Quarks and gluon can scatter but since the typical momenta are large
a significant change in the momentum of the scattered particles
requires a large momentum transfer. Asymptotic freedom implies
that the effective coupling at this scale is small, and that
large angle scattering events are rare. If the change of momentum
is small then the scattering involves large distances and the interaction
is modified by the dense medium. We will see below that the quark-gluon
medium screens the interaction and that the effective interaction
is again weak. There is a small subtlety here, as static magnetic
interactions are not screened. This implies that high temperature
QCD has a genuinely non-perturbative sector, but this sector is
not important as far as bulk properties of the high temperature
phase are concerned. We conclude that the assumption of a weakly
interacting quark-gluon system at high temperature leads to a
self consistent picture. Since this system will exhibit medium
effects, such as damping and screening, collective modes,
etc.~that are typical of plasmas it was termed the quark gluon
plasma (QGP) \cite{Shuryak:1977ut,Shuryak:1978ij}.
It is instructive to consider a simple model of the equation
of state. The pressure and energy density of non-interacting
massless particles is
\begin{equation}
P = \epsilon/3, \hspace{0.5cm}
\epsilon = g\frac{\pi^2}{30} T^4
\left\{ \begin{array}{cl}
1 & {\rm bosons} \\
7/8 & {\rm fermions } \end{array}\right. ,
\end{equation}
where $g$ is the number of degrees of freedom. In a quark
gluon plasma we have $g_q=4N_fN_c$ quarks and $g_g=2(N_c^2-1)$
gluon degrees of freedom. For $N_f=2$ we get $g_{eff}=g_g+
7g_q/8=37$ and
\begin{equation}
P =\frac{37\pi^2}{90}T^4.
\end{equation}
At low temperature the relevant degrees of freedom are Goldstone bosons. Near
$T_c$ we can assume that Goldstone bosons are approximately massless
and the number of degrees of freedom is $g=(N_f^2-1)$. For $N_f=2$
we get $g=3$ and
\begin{equation}
P =\frac{3\pi^2}{90}T^4.
\end{equation}
This result seems to show that the pressure in the low temperature
phase is always smaller than the pressure in the high temperature
phase. This cannot be right as it would imply that the phase with
chiral symmetry breaking is never favored. The problem is that
there are non-perturbative effects in the low temperature phase.
These effects give rise to a negative vacuum energy and a positive
vacuum pressure. Lorentz invariance implies that the vacuum
energy momentum tensor is of the form $T_{\mu\nu}=Bg_{\mu\nu}$
and
\begin{equation}
\label{bag_const}
\epsilon_{vac} = -P_{vac} = +B .
\end{equation}
In QCD the vacuum energy satisfies the trace anomaly relation
\begin{equation}
\label{trace_an}
\epsilon_{vac} = -\frac{b}{32}\langle \frac{\alpha}{\pi} G^2\rangle
\simeq -0.5 \ {\rm GeV}/{\rm fm}^3.
\end{equation}
The numerical value comes from QCD sum rule determinations of
the gluon condensate \cite{Shifman:1978bx} $\langle \alpha G^2\rangle$
and has a considerable uncertainty. We can now obtain an estimate of
the transition temperature by requiring that the pressure in the
low temperature phase equals the pressure in the quark gluon
phase. We find
\begin{equation}
\label{tc_bag}
T_c = \left(\frac{45B}{17\pi^2}\right)^{1/4} \simeq 180\ {\rm MeV}
\end{equation}
We can also determine the critical energy density. The energy
densities just below and just above the critical temperature are
given by
\begin{eqnarray}
\epsilon(T_c^-)&=&\frac{3\pi^2}{30}T_c^4\simeq 130 \ {\rm MeV}/{\rm fm}^3 , \\
\epsilon(T_c^+)&=&\frac{37\pi^2}{30}T_c^4+B\simeq 2000 \ {\rm MeV}/{\rm fm}^3.
\end{eqnarray}
We observe that the energy density in the QGP exceeds 1 ${\rm GeV}/{\rm fm}^3$.
This should be compared to the energy density in cold nuclear matter which
is about 150 ${\rm MeV}/{\rm fm}^3$.
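
The numbers quoted in this estimate are easy to reproduce. A minimal
Python sketch, with everything expressed in units $\hbar=c=1$ and
converted using $\hbar c\simeq 0.197$ GeV\,fm, is
\begin{verbatim}
from math import pi

hbarc = 0.1973                       # GeV fm
B     = 0.5 * hbarc**3               # 0.5 GeV/fm^3 expressed in GeV^4

g_eff = 2*(3**2 - 1) + 7/8 * 4*2*3   # gluons + (7/8) quarks, N_f = 2: 37
T_c   = (45*B / (17*pi**2))**0.25
print(g_eff, T_c)                    # 37, ~0.18 GeV

eps_low  = 3*pi**2/30 * T_c**4       # pion gas at T_c
eps_high = 37*pi**2/30 * T_c**4 + B  # quark gluon plasma at T_c
print(eps_low/hbarc**3, eps_high/hbarc**3)   # ~0.13 and ~2.1 GeV/fm^3
\end{verbatim}
Varying $B$ within the uncertainty of the gluon condensate gives a
feeling for how rough this estimate really is.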
An independent estimate of the transition temperature can be obtained
using the chiral effective theory. We saw that the chiral condensate
is related to the mass dependence of the vacuum energy. At tree level
and to leading order in the quark masses the condensate is given by
the coefficient $B$ in the chiral lagrangian. We can also calculate
corrections to this result due to thermal Goldstone boson fluctuations.
At leading order it is sufficient to consider the free energy of a
non-interacting pion gas
\begin{equation}
\label{z_pi}
F = (N_f^2-1)T\int \frac{d^3p}{(2\pi)^3} \log
\left( 1- e^{-E_\pi/T} \right),
\end{equation}
where $E_\pi=\sqrt{p^2+m_\pi^2}$. The quark condensate is
$\langle\bar\psi\psi\rangle = (N_f)^{-1}\partial F/\partial m$.
Equation (\ref{z_pi}) depends on the quark mass only through the
pion mass. Using the Gell-Mann-Oakes-Renner relation (\ref{GMOR})
we find \cite{Gasser:1986vb}
\begin{equation}
\langle\bar\psi\psi\rangle_T =
\langle\bar\psi\psi\rangle_0 \left\{ 1-\frac{N_f^2-1}{3N_f}
\left(\frac{T^2}{4f_\pi^2}\right)+\ldots\right\}.
\end{equation}
This result indicates that the chiral condensate vanishes at
a critical temperature
\begin{equation}
\label{tc_chi}
T_c\simeq 2f_\pi\sqrt{\frac{3N_f}{N_f^2-1}}\simeq 200 \,
{\rm MeV} \;\; (N_f=3),
\end{equation}
which is roughly consistent with the estimate obtained in
equ.~(\ref{tc_bag}).
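
The same estimate takes two lines of Python; it should of course only
be read as an extrapolation of the leading term of the dilute pion gas
well beyond its range of validity:
\begin{verbatim}
f_pi = 0.093   # GeV

def T_c(nf):
    # temperature at which the leading T^2 suppression extrapolates to zero
    return 2*f_pi*(3*nf/(nf**2 - 1))**0.5

print(T_c(2), T_c(3))   # ~0.26 GeV and ~0.20 GeV
\end{verbatim}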
\subsection{Chiral Symmetry Restoration}
\label{sec_csr}
In the vicinity of the phase transition QCD is genuinely
non-perturbative and it is hard to improve on the rough
estimates provided in the previous section. One possibility
for determining the transition temperature and elucidating
the nature of the phase transition is the use of large scale
numerical simulations. We will discuss this approach in
Sect.~\ref{sec_lqcd}. Before we do so we would like to review
certain general statements about the phase transition that
follow from the symmetries of the low and high temperature
phase.
\begin{table}[t]
\tbl{Correspondence between the chiral phase transition in QCD
and the ferromagnetic transition in a four-component magnet.}
{\begin{tabular}{clccl}
$SU(2)_L\times SU(2)_R$ & QCD & \hspace{0.5cm} &
$O(4)$ & magnet \\
$\langle\bar\psi\psi\rangle$ & $\chi$ condensate & &
$\vec{M} $ & magnetization \\
$m_q$ & quark mass & &
$H_3$ & magnetic field \\
$\vec\pi$ & pions & &
$\vec\phi$ & spin waves
\end{tabular}
\label{tab_uni}}
\end{table}
We begin with the chiral phase transition. We shall assume
that the chiral transition is a second order phase transition,
i.e.~the order parameter goes to zero continuously. We will
explore the consequences of this assumption and check whether
it leads to a consistent picture. In the vicinity of a second
order phase transition the order parameter fluctuates on all
length scales and the correlation length diverges. This means
that the details of the interaction are not important, and
only the symmetry of the order parameter matters.
In QCD with two flavors the order parameter is a $2\times 2$
matrix $U^{fg}=\langle \bar{\psi}^f_L\psi^g_R\rangle$. We can
define a four component vector $\phi^a$ by $U^{fg}=\phi^a
(\tau^a)^{fg}$ with $\tau^a=(\vec{\tau},1)$. Chiral transformations
$(L,R)\in SU(2)_L\times SU(2)_R$ correspond to rotations $\phi^a
\to R^{ab}\phi^b$ with $R\in SO(4)$. In the low temperature phase
chiral symmetry is broken and $\langle\phi^a\rangle = \sigma
\delta^{a0}$. Near $T_c$ the order parameter is small and we
can expand the free energy in powers of the order parameter
and its derivatives. Chiral symmetry implies that only the length
of $\phi^a$ can enter. To order $\phi^4$ and to leading order
in gradients of the fields we have
\begin{equation}
\label{f_lg}
F = \int d^3x\, \left\{ \frac{1}{2}(\vec\nabla\phi^a)^2
+\frac{\mu^2}{2}(\phi^a\phi^a) +\frac{\lambda}{4}
(\phi^a\phi^a)^2 + \ldots \right\},
\end{equation}
where $\mu^2,\lambda$ are parameters that depend on the temperature.
Equ.~(\ref{f_lg}) is the Landau-Ginzburg effective action. Note
that fluctuations of the order parameter are dominated by static
fields $\phi^a(\vec{x},t)=\phi^a(\vec{x})$. This will be explained
in more detail in Sect.~\ref{sec_pqcd}. The main observation is
that fluctuations with energy much smaller than $\pi T$ are
described by a three dimensional theory.
Stability requires that $\lambda>0$. Below the critical temperature
$\mu^2<0$ and chiral symmetry is broken. At $T_c$ the parameter $\mu^2$
changes sign and we can write $\mu^2=\mu_0^2 t$ where $t=(T-T_c)/T_c$
is the reduced temperature. As a first approximation we can
ignore fluctuations and study the Landau-Ginzburg action in the
mean field approximation. In that case the chiral order parameter
goes to zero as $\langle\phi^0\rangle \sim t^{1/2}$. This
result is modified if fluctuations are included. This can
be done using renormalization group methods or numerical
simulations. These methods also demonstrate that near $T_c$
higher order operators not included in equ.~(\ref{f_lg})
are indeed irrelevant. The results are
\begin{equation}
\begin{array}{rclrcl}
C &\sim & t^{-\alpha} & \hspace{0.1\hsize}\alpha &=& -0.19, \\
\langle\bar\psi\psi\rangle
&\sim & t^\beta & \beta &=& 0.38, \\
m_\pi &\sim & t^\nu & \nu &=& 0.73,
\end{array}
\end{equation}
where $C$ is the specific heat and the Goldstone boson mass
$m_\pi$ is defined as the inverse correlation length of spatial
fluctuations of the pion field.
The coefficients $\alpha,\beta,\nu$ are called the critical
indices of the phase transition. These coefficients are
universal, i.e.~independent of the microscopic details.
For example, the critical indices of the chiral
phase transition in QCD agree with the critical indices
of a four-component magnet in $d=3$ space dimensions, see
Table \ref{tab_uni}.
In QCD with $N_f=3$ flavors the order parameter is a $3\times 3$
matrix $U^{fg}$. The main new ingredient in the Landau-Ginzburg
theory is a chirally invariant cubic term $\det(U)+{\rm h.c}$.
It is easy to check that the cubic term will lead to an effective
potential that has two degenerate minima at the transition
temperature. This implies that the transition is first order
and the order parameter goes to zero discontinuously. Even
if the coefficient of the cubic term is zero initially,
fluctuations tend to generate a cubic interaction as $T\to T_c$.
A transition of this type is called fluctuation induced
first order.
\subsection{Deconfinement}
\label{sec_dec}
It is not immediately obvious how to identify the symmetry
associated with the deconfinement transition. In order to make
contact with the lattice formulation discussed in Sect.~\ref{sec_lqcd}
we will study this question in euclidean space, i.~e.~with the
time variable analytically continued to imaginary time, $t\to it$.
We first derive an expression for the potential between two
heavy quarks. Consider the Dirac equation for a massive quark
in a gluon background field
\begin{equation}
\left(\partial_0-igA_0 -\vec\alpha(i\vec\nabla+g\vec{A})
+\gamma_0 M\right)\psi = 0 .
\end{equation}
In the limit $m_Q\to\infty$ we can ignore the spatial components
of the covariant derivative. The quark propagator is
\begin{equation}
S(x,x')\simeq \exp\left(ig\int A_0dt\right) \left(\frac{1+\gamma_0}{2}\right)
e^{-m(t-t')}\delta(\vec{x}-\vec{x}').
\end{equation}
The heavy quark potential is related to the amplitude for creating
a heavy quark-anti-quark pair at time $t=0$, separating the pair
by a distance $R$, and finally annihilating the two quarks at time
$t=T$. The amplitude is related to the Wilson loop
\begin{equation}
\label{w_loop}
W(R,T)=\exp\left(ig\oint A_\mu dz_\mu\right),
\end{equation}
where the integration contour is a $R\times T$ rectangle and
we have dropped the Dirac projection operators $(1+\gamma_0)/2$.
If $T\gg R$ the amplitude is dominated by the ground state and we
expect $W(R,T)\sim \exp(-V(R)T)$ where $V(R)$ is the heavy quark
potential. Confinement implies that $V(R)\sim kR$ and the Wilson
loop satisfies an area law
\begin{equation}
\label{area_law}
W(R,T)\sim \exp(-k A),
\end{equation}
where $A=RT$. In order to construct a local order parameter
we consider the Polyakov line
\begin{equation}
\label{p_line}
P(\vec{x})=\frac{1}{N_c}{\rm Tr}[L(\vec{x})] =
\frac{1}{N_c}P{\rm Tr}
\left[\exp\left(ig\int_0^\beta A_0 dt\right)\right].
\end{equation}
As we will explain in more detail in the next section
gauge fields are periodic in imaginary time with period
$\beta=1/T$. The Polyakov line measures the
free energy $F_Q$ of a single static quark, $\langle P\rangle \sim
\exp(-F_Q \beta)$. In the confined phase we expect that the
energy of an isolated quark is infinite, while in the
deconfined phase it is finite. This implies that
\begin{equation}
\langle P\rangle = 0 \hspace{0.5cm}{\rm confined},\hspace{1.5cm}
\langle P\rangle \neq 0 \hspace{0.5cm}{\rm deconfined}.
\end{equation}
The global symmetry of the order parameter is \cite{Gross:1980br}
$P\to zP$ with $z=\exp(2\pi ki/N_c)\in Z_{N_c}$. Since
$Z_{N_c}$ is the center of the gauge group $SU(N_c)$ this
is sometimes called the center symmetry. Color singlet
correlation functions always involve combinations of
Polyakov lines that are invariant under center transformations.
A heavy baryon correlation function, for example, is of the
form ${\rm tr}[(L(\vec{x}))^{N_c}]$ and is invariant
because $z^{N_c}=1$. A non-zero expectation value for the
Polyakov line, on the other hand, breaks center symmetry.
\begin{figure}[t]
\begin{center}\includegraphics[width=8.0cm]{phase_mu_ms_2.eps}\end{center}
\caption{\label{fig_uni}
Phase diagram of QCD in the $m-m_s$ mass plane. The plot
shows the universality class of the chiral/deconfinement
transition for different values of $m,m_s$.}
\end{figure}
Note that the symmetry is broken in the high temperature
phase and restored in the low temperature phase. This might
seem somewhat unusual, but there are examples of spin
systems that have an equivalent ``dual'' description in
terms of a gauge theory \cite{Creutz:1984mg}. In the dual theory
the high and low temperature phases are interchanged.
The $Z_{N_c}$ Landau-Ginzburg action is given by
\cite{Svetitsky:1982gs}
\begin{equation}
\label{f_pol}
F = \int d^3x\, \left\{ \frac{1}{2}|\vec\nabla P|^2
+\mu^2|P|^2 + g{\rm Re}(P^3) +\lambda |P|^4 + \ldots \right\}.
\end{equation}
The cubic term is allowed only if $N_c=3$. As in the case
of the chiral phase transition we expect the cubic term
to drive the transition first order. The two color theory
is in the equivalence class of the $Z_2$ Ising model, which
is known to have a second order transition. The three color
theory is in the equivalence class of a three state Potts
model, which does indeed show a first order transition.
The phase structure as a function of the light quark mass
$m=m_u=m_d$ and the strange quark mass $m_s$ is summarized
in Fig.~\ref{fig_uni}. The lower left hand corner of the
diagram is $m=m_s=0$ and corresponds to three massless
quarks. In that case we expect a first order chiral phase
transition. Along the diagonal we have $m=m_s$ and the
$SU(3)_V$ flavor symmetry is exact. As the quark masses
increase the strength of the first order transition becomes
weaker and the transition eventually ends at a second
order critical point. If the light quarks are kept massless
as the strange quark mass is increased, the endpoint of
the first order transition is a tricritical point, at
which the endpoint of the first order transition of
the three flavor theory meets the second order transition
of the two flavor theory. This transition turns into a
smooth crossover as soon as the light quarks are given
a small mass.
The upper right hand corner of the plot is $m=m_s\to\infty$
and corresponds to the pure glue theory which has a first
order transition. Dynamical quarks break the $Z_{N_c}$
symmetry and the strength of the first order transition
decreases as the quark masses are lowered. The first order
transition in the pure glue theory ends on a line of
second order transitions.
We do not know with certainty where the physical point is
located on this phase diagram. Lattice calculations currently
favor the possibility that the phase transition is in the
crossover region, closer to the first order chiral transition
than to the first order deconfinement transition.
We should emphasize that Fig.~\ref{fig_uni} focuses on regions
of the phase diagram in which there is a sharp phase transition
and the order parameter is a non-analytic function at $T_c$. The
figure should not be taken to imply that the chiral and
deconfinement transitions are completely separate phenomena,
or that the transition cannot be observed in the crossover
region. Fig.~\ref{fig_sus} shows the order parameter as
well as the order parameter susceptibilities for the chiral
and deconfinement transitions. The results were obtained
from lattice calculations with semi-realistic values of the
quark masses. We observe that even though both transitions
are crossovers clear peaks in the susceptibilities are still
visible. We also note that the chiral and deconfinement
transitions occur at the same temperature. This can be
understood in models that include a non-zero coupling between
the effective actions for the chiral and deconfinement order
parameters \cite{Digal:2000ar,Mocsy:2003qw,Ratti:2005jh}.
\begin{figure}[t]
\begin{center}\includegraphics[width=5.0cm]{sus_chir.eps}
\includegraphics[width=5.0cm]{sus_pol.eps}\end{center}
\caption{\label{fig_sus}
Chiral and deconfinement transitions for $N_f=2$ dynamical
quark flavors, from (Karsch 2002). The two figures show the chiral
condensate and the Polyakov line as a function of the bare coupling
$\beta=6/g^2$ (In this figure, $\beta$ is not $1/T$). Asymptotic
freedom implies that on a fixed lattice $N_\tau\times N_\sigma^3$
increasing $\beta$ corresponds to increasing the temperature. The
susceptibilities $\chi_m,\chi_L$ are related to order parameter
fluctuations, e.g. $\chi_L=N_\sigma^3(\langle L^2\rangle -
\langle L\rangle^2)$. }
\end{figure}
\subsection{Lattice QCD}
\label{sec_lqcd}
Symmetry arguments cannot determine the critical temperature,
the critical energy density, and many other properties of
matter near the phase transition. In order to compute these
properties we have to rely on numerical simulations of the
QCD partition function
\begin{equation}
Z = {\rm Tr}[e^{-\beta H}], \;\; \beta=1/T,
\hspace{0.5cm}F=-TV^{-1}\log(Z) ,
\end{equation}
where $H$ is the QCD Hamiltonian, $\beta$ is the inverse temperature
and $F$ is the free energy. We can write the partition function
as a quantum mechanical evolution operator $Z = {\rm Tr}[e^{-i
(-i\beta) H}]$ with imaginary time $\tau=-i\beta$. The evolution
operator can be written as a path integral
\begin{equation}
\label{z_path}
Z =\int dA_\mu d\psi d\bar{\psi}
\exp\left(-\int_0^\beta d\tau \int d^3x\ {\mathcal L}_E\right),
\end{equation}
where ${\mathcal L_E}$ is the imaginary time (euclidean) lagrangian
and we have to impose (anti)periodic boundary conditions on the
quark and gluon fields
\begin{equation}
A_\mu(\vec{x},\beta)=A_\mu(\vec{x},0), \hspace{0.3cm}
\psi(\vec{x},\beta)=-\psi(\vec{x},0).
\end{equation}
The path integral is an infinite dimensional integral. In order
to perform numerical simulations we have to discretize space
and time and introduce a $N_\tau\times N_\sigma^3$ lattice with
lattice spacing $a$. In order to maintain exact gauge invariance
on a discrete lattice the gauge fields are discretized in terms
of the link variables
\begin{equation}
U_\mu(n)=\exp(igaA_\mu(n)),
\end{equation}
where $n=(n_\tau,n_x,n_y,n_z)$ labels lattice sites and $\mu=1,
\ldots,4$. In terms of the link variables it is easy to define a
gauge covariant discrete derivative
\begin{equation}
D_\mu\phi \to \frac{1}{a}[U_{-\mu}(n+\mu)\phi(n+\mu)-\phi(n)],
\end{equation}
where $n+\mu$ is the lattice site reached by starting at $n$
and doing one hop in the $\mu$ direction. $\phi(n)$ is a scalar
field in the fundamental representation. The action of the pure
gauge theory is given by
\begin{equation}
\label{wilson}
S = \frac{2}{g^2}\sum_{n,\mu<\nu} {\rm Re\ Tr}
\left[1- U_\mu(n)U_\nu(n+\mu) U_{-\mu}(n+\mu+\nu)U_{-\nu}(n+\nu)
\right],
\end{equation}
which involves a loop of link variables called a plaquette. It is
easy to check that equ.~(\ref{wilson}) reduces to the continuum
action as $a\to 0$. Fermion fields $\psi(n)$ are discretized
on lattice sites. There are some subtleties in discretizing chiral
fermions which we do not discuss here \cite{Chandrasekharan:2004cn}.
The action is bilinear in the fermion fields. This means that the
integral over the fermion fields can be done exactly and we are left
with the determinant of a matrix that depends only on the link
variables. The lattice representation of the partition function
is
\begin{equation}
Z = \int\prod_{n,\mu}dU_\mu(n) \det (M(U)) e^{-S},
\end{equation}
where $M(U)$ is the fermion matrix. The partition function depends
on the number of lattice sites $N_\tau,N_\sigma$, the bare
coupling constant $g$, and the dimensionless quark masses $ma$.
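
To see the link-variable formalism at work, the following numpy sketch
builds random SU(2) link variables on a tiny lattice (SU(2) rather
than SU(3) only to keep the random matrices simple) and checks that
the trace of the plaquette entering equ.~(\ref{wilson}) is unchanged
under a random gauge transformation $U_\mu(n)\to\Omega(n)U_\mu(n)
\Omega^\dagger(n+\mu)$ with $\Omega(n)\in SU(2)$:
\begin{verbatim}
import numpy as np

def random_su2():
    # U = a0 + i a.sigma with (a0,a) a random unit 4-vector
    a = np.random.randn(4); a /= np.linalg.norm(a)
    return np.array([[ a[0] + 1j*a[3], a[2] + 1j*a[1]],
                     [-a[2] + 1j*a[1], a[0] - 1j*a[3]]])

L = 2
sites = list(np.ndindex((L,)*4))
U = {(n, mu): random_su2() for n in sites for mu in range(4)}

def hop(n, mu):
    m = list(n); m[mu] = (m[mu] + 1) % L; return tuple(m)

def tr_plaq(U, n, mu, nu):
    # Tr[ U_mu(n) U_nu(n+mu) U_mu(n+nu)^dag U_nu(n)^dag ]
    P = (U[n, mu] @ U[hop(n, mu), nu]
         @ U[hop(n, nu), mu].conj().T @ U[n, nu].conj().T)
    return np.trace(P)

Omega = {n: random_su2() for n in sites}
Ug = {(n, mu): Omega[n] @ U[n, mu] @ Omega[hop(n, mu)].conj().T
      for n in sites for mu in range(4)}

n0 = (0, 0, 0, 0)
print(abs(tr_plaq(U, n0, 0, 1) - tr_plaq(Ug, n0, 0, 1)))   # ~1e-15
\end{verbatim}
A real simulation of course also requires an update algorithm and the
fermion determinant; the sketch only illustrates how gauge invariance
is maintained exactly on a discrete lattice.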
\begin{figure}[t]
\begin{center}\includegraphics[width=10.0cm]{lat_eos.eps}\end{center}
\caption{\label{fig_eos}
Equation of state obtained in lattice calculations with
$N_f=0,2,2+1,3$ flavors, from Karsch (2002). The 2+1 curve
refers to two light flavors and one intermediate mass flavor.
The pressure is given in units of $T^4$. The arrows indicate
the Stefan-Boltzmann limits. }
\end{figure}
Note that the partition function has no explicit dependence
on the lattice spacing. Asymptotic freedom implies that the
bare coupling should go to zero as $a\to 0$ and the continuum
limit corresponds to $g\to 0$. Asymptotically
\begin{equation}
\label{lam_lat}
a\Lambda_{lat}=\exp(-8\pi^2/(bg^2)) ,
\end{equation}
where $b$ is the first coefficient of the beta function, see
equ.~(\ref{as_fr}), and $\Lambda_{lat}$ is the QCD scale
parameter on the lattice. $\Lambda_{lat}$ can be related
to the continuum scale parameter by a perturbative calculation
\cite{Creutz:1984mg}. In practice the lattice spacing is not small
enough to use the perturbative result equ.~(\ref{lam_lat})
and $a$ is determined from a physical quantity like
the string tension or the rho meson mass. Once the lattice
spacing is known the temperature is determined by $T=1/
(N_\tau a)$.
Lattice results for the order parameter and the equation
of state are shown in Figs.~(\ref{fig_sus},\ref{fig_eos}).
Current results for the transition temperature are
\cite{Laermann:2003cv}
\begin{equation}
T_c(N_f\!=\!2)= (173\pm 8)\ {\rm MeV},\hspace{0.5cm}
T_c(N_f\!=\!0)= (271\pm 2)\ {\rm MeV},
\end{equation}
where the errors are purely statistical. The equation of
state shows a rapid rise in the pressure near $T_c$, but
the pressure remains significantly below the free gas
limit even at $T\sim (2-3)T_c$.
\subsection{Perturbative QCD}
\label{sec_pqcd}
\begin{figure}[t]
\begin{center}\begin{minipage}{5.5cm}
\includegraphics[width=4.7cm]{pi_munu.eps}
\vspace*{0.7cm}
\end{minipage}\begin{minipage}{5.5cm}
\includegraphics[width=5.0cm]{pi_munu_3.eps}
\end{minipage}\end{center}
\caption{\label{fig_pol}
One loop contribution to the photon polarization tensor
(left panel) and plasmon dispersion relation in a hot QED
plasma. }
\end{figure}
At temperatures significantly above $T_c$ quarks and gluons
are weakly coupled and perturbative methods are expected
to be useful. The starting point of the perturbative
expansion at non-zero temperature is the path integral
representation of the QCD partition function given in
equ.~(\ref{z_path}). The only difference as compared to
the zero temperature result is the fact that the fields
satisfy periodic boundary conditions in imaginary time.
Consider the Fourier representation of the gluon field
\begin{equation}
A_\mu(\vec{x},\tau) = \sum_n\int d^3k\; A_\mu^n(\vec{k})
e^{-i(\vec{k}\vec{x}+\omega_n\tau)}.
\end{equation}
We can also write down an analogous expansion for fermions.
The boundary conditions imply that the allowed frequencies,
called the Matsubara frequencies, are discrete. We have
\begin{equation}
\begin{array}{rcll}
\omega_n &=& 2\pi n T & {\rm bosons} \\
\omega_n &=& (2 n+1)\pi T\hspace{0.2cm} & {\rm fermions}
\end{array}.
\end{equation}
The only change in the Feynman rules is that continuous
energy variables are replaced by discrete Matsubara frequencies,
$p_0\to i\omega_n$, and that integrals over energy are replaced
by discrete sums
\begin{equation}
\int\frac{d^4p}{(2\pi)^4}\to
T\sum_n \int \frac{d^3p}{(2\pi)^3}.
\end{equation}
Typical sums that appear in one-loop calculations in the
Matsubara formalism are
\begin{eqnarray}
\label{mats_sum1}
\sum _k \frac{1}{x^2+k^2} &=&
\frac{2\pi}{x}\left( \frac{1}{2} +\frac{1}{e^{2\pi x}-1}\right), \\
\label{mats_sum2}
\sum _k \frac{1}{x^2+(2k+1)^2} &=&
\frac{\pi}{x}\left( \frac{1}{2} -\frac{1}{e^{\pi x}+1}\right) .
\end{eqnarray}
We observe that performing sums over Matsubara frequencies leads
to Bose-Einstein and Fermi-Dirac distribution functions.
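The sums in equs.~(\ref{mats_sum1},\ref{mats_sum2}) are easily verified
numerically. The following schematic Python fragment (added here purely as
an illustration) compares a truncated direct sum over Matsubara indices
with the closed expressions:
\begin{verbatim}
import numpy as np

def bose_sum(x, nmax=200000):
    k = np.arange(-nmax, nmax + 1)        # integer (bosonic) indices
    return np.sum(1.0/(x**2 + k**2))

def fermi_sum(x, nmax=200000):
    k = np.arange(-nmax, nmax + 1)        # odd (fermionic) frequencies 2k+1
    return np.sum(1.0/(x**2 + (2*k + 1)**2))

x = 0.7
print(bose_sum(x),  2*np.pi/x*(0.5 + 1.0/np.expm1(2*np.pi*x)))
print(fermi_sum(x), np.pi/x*(0.5 - 1.0/(np.exp(np.pi*x) + 1.0)))
\end{verbatim}
The truncated sums approach the right hand sides as the cutoff on $|k|$
is increased.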
As an application of the finite temperature formalism we wish
to study the one-loop correction to the gluon propagator in a hot
QCD medium. For simplicity we begin with the analogous problem
in a hot QED plasma. The photon polarization function is (see
Fig.~\ref{fig_pol})
\begin{equation}
\label{ph_pol}
\Pi_{\mu\nu}(q) = e^2T\sum_n\int \frac{d^3k}{(2\pi)^3}
{\rm tr}[\gamma_\mu k\slash\gamma_\nu(k\!\!\!/\,-q\!\!\!/\, ) ]
\Delta(k)\Delta(k-q),
\end{equation}
where $\Delta(k)=(\omega_n^2+\vec{k}^2)^{-1}$. Using identities like
equ.~(\ref{mats_sum2}) we can decompose the integral into a
$T=0$ and a finite temperature part. In the following we will
only consider the $T\neq 0$ terms. Thermal corrections to the
photon propagator become important when the photon momentum is
much smaller than the temperature. In this case we can assume
that the loop momenta are on the order of $T$. This is called
the hard thermal loop (HTL) approximation \cite{Braaten:1989mz}.
The photon polarization function in the HTL approximation is
\begin{equation}
\label{pi_htl}
\Pi_{\mu\nu} = 2m^2 \int\frac{d\Omega}{4\pi}
\Big(\frac{i\omega\hat{K}_\mu\hat{K}_\nu}{q\cdot\hat{K}}
+\delta_{\mu 0}\delta_{\nu 0}\Big) ,
\end{equation}
where $m^2=e^2T^2/6$, $\hat{K}=(-i,\hat{k})$ and $d\Omega$ is an
integral over the direction of $\hat{k}$. In the case of the
gluon propagator in QCD there are extra graphs generated by the
three and four-gluon vertices as well as ghost contributions but,
remarkably, the structure of the HTL polarization function is
unchanged. In QCD the parameter $m^2$ is given by $m^2=g^2T^2
(1+N_f/6)/2$, so that $m_D^2=2m^2$ reproduces the gluon Debye mass.
\begin{figure}[t]
\begin{center}\includegraphics[width=11.0cm]{htl.eps}\end{center}
\caption{\label{fig_htl}
One loop contribution to the gluon polarization tensor
in the quark gluon plasma. Solid lines are quark propagators,
wavy lines are gluons, and dashed lines are ghosts.}
\end{figure}
Insertions of the polarization function into the photon (gluon)
propagator form a simple geometric series. The resummed photon
propagator is
\begin{equation}
\label{d_res}
D_{\mu\nu}= \frac{1}{(D_{\mu\nu}^0)^{-1}+\Pi_{\mu\nu}}.
\end{equation}
This result can be used to study the interaction between two
charges in the plasma. The Coulomb interaction is determined
by the Fourier transform of the static propagator
\begin{equation}
\label{v_scr}
V(r)=e\int \frac{d^3q}{(2\pi)^3}\frac{e^{i\vec{q}\cdot\vec{r}}}{\vec{q}^{\,2}+\Pi_{00}}
\simeq -\frac{e}{r}\exp(-m_D r) ,
\end{equation}
where $m_D^2=2m^2$ is the Debye mass and we have used $\Pi_{00}(0,
\vec{q}\to 0)=m_D^2$. Equ.~(\ref{v_scr}) shows that the Coulomb
interaction is screened at distances $r\sim m_D^{-1} \sim (eT)^{-1}$.
The mechanism for charge screening is easy to understand. A test
charge polarizes the electron-positron plasma, and the polarization
cloud screens the charge. Note that the typical distance between
charges is $r\sim T^{-1}\ll r_D$ and Debye screening is a collective
effect that involves many charge carriers.
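The statement that many particles participate can be made quantitative by
estimating the number of charge carriers inside a Debye sphere. With a
density $n\sim T^3$ and $r_D\sim (eT)^{-1}$ we find, parametrically,
\begin{equation}
N_D \sim n\, r_D^3 \sim \frac{1}{e^3} \gg 1 ,
\end{equation}
so in the weak coupling limit the screening cloud indeed contains a large
number of particles.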
The magnetic interaction is not screened, $\Pi_{ii}(0,\vec{q}\to 0)=0$.
However, if the photon energy is finite the polarization develops
an imaginary part
\begin{equation}
{\rm Im}\Pi_{ii}(\omega,q) \sim \frac{\omega}{q}m_D^2\Theta(q-\omega),
\end{equation}
and non-static exchanges are damped. This phenomenon is called
Landau damping. The mechanism of Landau damping is a transfer
of energy from the electromagnetic field to electrons and positrons
in the plasma. The absence of magnetic screening implies that the
static magnetic sector of the QCD plasma remains non-perturbative
even if the temperature is very high.
In order to study the propagation of collective modes in the
plasma in more detail it is useful to split the polarization tensor
into transverse and longitudinal components
\begin{eqnarray}
\label{proj}
\Pi_{\mu\nu}(q)&=& \Pi^T(q)P^T_{\mu\nu} +\Pi^L(q)P^L_{\mu\nu}\\
P_{ij}^T &=& \delta_{ij}-\hat{q}_i\hat{q}_j , \hspace{1cm}
P_{00}^T = P_{0i}^T = 0, \\
P_{\mu\nu}^L &=& -g_{\mu\nu}+\frac{q_\mu q_\nu}{q^2}
-P_{\mu\nu}^T .
\end{eqnarray}
We can study the propagation of photons by identifying the poles
of the transverse and longitudinal components of the photon
propagator, see Fig.~\ref{fig_pol}. We observe that for large
momenta $|\vec{q}|\gg m$ the dispersion relation is not strongly
affected by the medium. In this limit we also find that the
longitudinal mode has an exponentially small residue. As $\vec{q}
\to 0$ the energy of both longitudinal and transverse modes approach
the plasma frequency $\omega_{pl}=\sqrt{2/3}\ m$.
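The plasmon branches shown in Fig.~\ref{fig_pol} can be reproduced by
finding the zeros of the standard HTL longitudinal and transverse
dielectric functions. The following schematic Python fragment (added as an
illustration; it assumes the conventional form of these functions and uses
units in which $\omega_{pl}=1$) does this with a simple root finder:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def L(w, k):                       # the HTL logarithm w/(2k) log[(w+k)/(w-k)]
    return w/(2*k)*np.log((w + k)/(w - k))

def eps_L(w, k):                   # longitudinal mode: eps_L = 0
    return 1.0 + 3.0/k**2*(1.0 - L(w, k))

def disp_T(w, k):                  # transverse mode: w^2 eps_T = k^2
    eps_T = 1.0 - 1.5/k**2*(1.0 - (1.0 - k**2/w**2)*L(w, k))
    return w**2*eps_T - k**2

for k in [0.1, 0.5, 1.0, 3.0]:
    wL = brentq(eps_L, k*(1 + 1e-10), 10.0, args=(k,))
    wT = brentq(disp_T, k*(1 + 1e-10), 10.0, args=(k,))
    print(k, wL, wT)               # both tend to omega_pl = 1 as k -> 0
\end{verbatim}
For small momenta one recovers $\omega_L^2\simeq\omega_{pl}^2+3k^2/5$ and
$\omega_T^2\simeq\omega_{pl}^2+6k^2/5$, while for $k\gg\omega_{pl}$ the
transverse branch approaches the light cone.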
\section{QCD at Small Density: Nuclear Matter}
\label{sec_ldense}
\subsection{Introduction}
\label{sec_dense_intro}
In this section we study hadronic matter at non-zero
baryon density. In QCD the numbers of all quark flavors
are conserved individually. Once the weak interaction is
taken into account only baryon number and electric charge
are conserved. Bulk matter has to be electrically neutral
because the Coulomb energy of a charged system diverges
in the infinite volume limit. In hadronic matter neutrality
can be achieved by balancing the charge density in hadrons,
which is usually positive, by a finite density of electrons.
The partition function of QCD at non-zero baryon chemical
potential is given by
\begin{equation}
Z = \sum_i \exp\left(-\frac{E_i-\mu N_i}{T}\right),
\end{equation}
where $i$ labels all quantum states of the system, $E_i$ and $N_i$
are the energy and baryon number of the state $i$. If the temperature
and chemical potential are both zero then only the ground state
contributes to the partition function. All other states give
exponentially small contributions. QCD has a mass gap for states
with non-zero baryon number. This means that there is an onset
chemical potential
\begin{equation}
\mu_{\it onset}=\min_i (E_i/N_i),
\end{equation}
such that the partition function is independent of $\mu$ for
$\mu<\mu_{\it onset}$. For $\mu>\mu_{\it onset}$ the baryon
density is non-zero. If the chemical potential is just above
the onset chemical potential we can describe QCD, to first
approximation, as a dilute gas of non-interacting nucleons.
In this approximation $\mu_{\it onset}=m_N$. Of course, the
interaction between nucleons is essential. Without it, we
would not have stable nuclei. As a consequence, nuclear matter
is self-bound and the energy per baryon in the ground state
is given by
\begin{equation}
\frac{E_N}{N}-m_N \simeq -15\,{\rm MeV}.
\end{equation}
The onset transition is a first order transition at which
the baryon density jumps from zero to nuclear matter saturation
density, $\rho_0\simeq 0.14\,{\rm fm}^{-3}$. The first order
transition continues into the finite temperature plane and
ends at a critical endpoint at $T=T_c\simeq 10$ MeV, see
Fig.~\ref{fig_phase_1}.
\begin{figure}[t]
\begin{center}\includegraphics[width=7.5cm]{phase_first.eps}\end{center}
\caption{\label{fig_phase_1}
Naive phase diagram of hadronic matter as a function of the
baryon chemical potential and temperature.}
\end{figure}
Nuclear matter is a complicated many-body system and, unlike the
situation at zero density and finite temperature, there is little
information from numerical simulations on the lattice. This is
related to the so-called 'sign problem'. At non-zero chemical
potential the path integral representation of the partition
function is
\begin{equation}
Z=\int dA_\mu \det(iD\slash +i\mu\gamma_4)e^{-S} =
\int dA_\mu e^{i\phi}|\det(iD\slash +i\mu\gamma_4)|e^{-S},
\end{equation}
where $\phi$ is the complex phase of the fermion determinant.
Since the determinant is complex standard Monte-Carlo techniques
based on importance sampling fail. Recently, some progress has been
made in simulating QCD for small $\mu$ and $T\simeq T_c$
\cite{Fodor:2001pe,deForcrand:2002ci,Allton:2002zi}, but the
regime of small temperature remains inaccessible.
However, if the density is very much larger than nuclear matter
saturation density, $\rho\gg\rho_0$, we expect the problem to simplify.
In this regime it is natural to use a system of non-interacting quarks
as a starting point \cite{Collins:1974ky}. The low energy
degrees of freedom are quark excitations and holes in the
vicinity of the Fermi surface. Since the Fermi momentum is
large, asymptotic freedom implies that the interaction between
quasi-particles is weak. As a consequence, the naive expectation
is that chiral symmetry is restored and quarks and gluons are
deconfined. It seems natural to assume that the quark liquid
at high baryon density is continuously connected to the
quark-gluon plasma at high temperature. These naive expectations
are summarized in the phase diagram shown in Fig.~\ref{fig_phase_1}.
\subsection{Fermi liquids}
\label{sec_fl}
Before we study the high density phase in more detail we
would like to discuss systems of nucleons at low density. For
simplicity we will begin with pure neutron matter at densities
below nuclear matter saturation density. This problem is relevant
to the behavior of matter near the surface of a neutron star,
which is at subnuclear densities and has a large neutron-to-proton
ratio. We will also see that pure neutron matter exhibits some
very interesting universal features which can be studied
experimentally using trapped atomic gases.
If the density is low then the typical momenta are small and
neither the structure of the neutron nor the details of the
neutron-neutron interaction are important. This means that the
system can be described by an effective lagrangian of pointlike
nucleons interacting via a short-range interaction
\cite{Abrikosov:1963,Hammer:2000xg}. The lagrangian is
\begin{equation}
\label{l_4f}
{\mathcal L}_0 = \psi^\dagger \left( i\partial_0 +
\frac{\nabla^2}{2m} \right) \psi
- \frac{C_0}{2} \left(\psi^\dagger \psi\right)^2 .
\end{equation}
The coupling constant $C_0$ is related to the scattering
length, $C_0=4\pi a/m$. Note that $C_0>0$ corresponds to
a repulsive interaction, and $C_0<0$ is an attractive interaction.
The lagrangian equ.~(\ref{l_4f}) is invariant under the $U(1)$
transformation $\psi\to e^{i\phi}\psi$. The $U(1)$ symmetry
implies that the fermion number
\begin{equation}
N= \int d^3x\,\psi^\dagger \psi
\end{equation}
is conserved. We introduce a chemical potential $\mu$ conjugate
to the fermion number $N$ and study the partition function
\begin{equation}
\label{Z}
Z(\mu,\beta) = {\rm Tr}\left[e^{-\beta(H-\mu N)}\right].
\end{equation}
Here, $H$ is the Hamiltonian associated with ${\mathcal L}$ and
$\beta=1/T$ is the inverse temperature. The average number of
particles for a given chemical potential $\mu$ and temperature
$T$ is given by $\langle N\rangle =T(\partial \log Z)/(\partial
\mu)$. At zero temperature the chemical potential is the energy
required to add one particle to the system.
\begin{figure}[t]
\includegraphics[width=11.0cm]{fermi.eps}
\caption{\label{fig_fl}
Leading order Feynman diagrams for the ground state
energy of a dilute gas of fermions interacting via
a short range potential.}
\end{figure}
We observe that the chemical potential simply shifts the energy
in the lagrangian. This implies that we have to carefully analyze
the boundary conditions in the path integral in order to fix the
pole prescription. The correct Minkowski space propagator is
\begin{equation}
\label{s_ph}
S^0_{\alpha\beta}(p) =
\frac{\delta_{\alpha\beta}}
{p_0-\epsilon_p+i\delta{\rm sgn}(\epsilon_p)}
= \delta_{\alpha\beta}\left\{
\frac{\Theta(p-p_F)}{p_0-\epsilon_p+i\delta}+
\frac{\Theta(p_F-p)}{p_0-\epsilon_p-i\delta}
\right\},
\end{equation}
where $\epsilon_p=E_p-\mu$, $E_p=\vec{p}^{\, 2}/(2m)$ and
$\delta\to 0^+$. The quantity $p_F=\sqrt{2m\mu}$ is called
the Fermi momentum. The two terms in equ.~(\ref{s_ph}) have a simple
physical interpretation. At finite density and zero temperature all
states with momenta below the Fermi momentum are occupied, while all
states above the Fermi momentum are empty. The possible excitation of
the system are particles above the Fermi surface or holes below the
Fermi surface, corresponding to the first and second term in
equ.~(\ref{s_ph}). The particle density is given by
\begin{equation}
\frac{N}{V} = \int \frac{d^4p}{(2\pi)^4} S^0_{\alpha\alpha}(p)
\left. e^{ip_0\eta}\right|_{\eta\to 0^+}
= 2\int \frac{d^3p}{(2\pi)^3}\Theta(p_F-p)
= \frac{p_F^3}{3\pi^2}.
\end{equation}
As a first simple application we can compute the energy
density as a function of the fermion density. For free
fermions, we find
\begin{equation}
\label{e0}
{\mathcal E} = 2\int \frac{d^3p}{(2\pi)^3}E_p\Theta(p_F-p)
= \frac{3}{5}\frac{p_F^2}{2m}\frac{N}{V}.
\end{equation}
We can also compute the corrections to the ground state
energy due to the interaction $\frac{1}{2}C_0(\psi^\dagger
\psi)^2$. The first term is a two-loop diagram with one
insertion of $C_0$, see Fig.~\ref{fig_fl}. We have
\begin{equation}
\label{e1}
{\mathcal E}_1 = C_0\left(\frac{p_F^3}{6\pi^2}\right)^2.
\end{equation}
We should note that equ.~(\ref{e1}) contains two possible
contractions, called the direct and the exchange term. If the
fermions have spin $s$ and degeneracy $g=(2s+1)$ then equ.~(\ref{e1})
has to be multiplied by a factor $g(g-1)/2$. We also note that
the sum of the first two terms in the energy density
can be written
as
\begin{equation}
\label{e_pfa}
\frac{E}{N} = \frac{p_F^2}{2m}\left(
\frac{3}{5} + \frac{2}{3\pi}(p_Fa)+\ldots \right),
\end{equation}
which shows that the $C_0$ term is the first term in an expansion
in $p_Fa$, suitable for a dilute, weakly interacting, Fermi gas. The
expansion in $(p_Fa)$ was carried out to order $(p_Fa)^2$ by Huang,
Lee and Yang \cite{Lee:1957,Huang:1957}. Since then, the accuracy
was pushed to \cite{Fetter:1971,Hammer:2000xg} $O((p_Fa)^4\log(p_Fa))$.
\begin{figure}[t]
\begin{center}\includegraphics[width=9cm]{pp_bubbles.eps}\end{center}
\caption{\label{fig_lad}
Particle-particle ladder diagrams for the ground state
energy of a dilute gas of fermions.}
\end{figure}
\subsection{Unitary Limit}
\label{sec_uni}
The neutron-neutron scattering length is very large, $a_{nn}=-18$ fm,
and the $(p_Fa)$ expansion is not particularly useful. Indeed, since
the scattering length is so much larger than all other hadronic
length scales it makes sense to consider the opposite limit and
take the scattering length to infinity. This means that there
is a bound state right at threshold and that the low energy cross
section saturates the unitarity bound. If neutron matter is
dilute then we can also assume that $(p_F r)\ll 1$, where $r$
is the range of the potential. In this limit the only energy
scale in the problem is $p_F^2/(2m)$ and the energy per particle
is
\begin{equation}
\label{xi}
\frac{E}{N}=\xi \frac{3}{5}\frac{p_F^2}{2m},
\end{equation}
where $\xi$ is an unknown parameter. Comparison with equ.~(\ref{e0})
shows that for free fermions $\xi=1$.
Neutron matter in the unitary limit is very strongly correlated
and the determination of $\xi$ is a complicated, non-perturbative
problem. However, since $\xi$ is insensitive to the details of
the interaction the result is the same for any dilute Fermi gas
with a two-body bound state near threshold. It is now possible
to create such a system in the laboratory by trapping cold
fermionic atoms. In these systems the scattering length can
be controlled using Feshbach resonances induced by an external
magnetic field. A recent experimental analysis yields the
value \cite{Kinast:2005} $\xi\simeq 0.45$.
\begin{figure}[t]
\begin{center}\includegraphics[width=7.0cm]{pp_ring_inv_sm_txt.eps}\end{center}
\caption{\label{fig_bcs_bec}
Total energy of an interacting fermion gas in units
of the energy of a free fermion gas as a function of $(k_Fa)^{-1}$.
The open circles show the result of a numerical calculation of
particle-particle ladder diagrams. The dashed curve shows the
approximations given in equ.~(\ref{pp_lad}). The star is the
result of the $d\to\infty$ calculation in the unitary limit.}
\end{figure}
There have been a number of theoretical attempts to determine
$\xi$. Since the two-body interaction is large it is natural
to begin with the sum of all two-body diagrams, see Fig.~\ref{fig_lad}.
This sum gives \cite{Schafer:2005kg}
\begin{equation}
\label{pp_lad}
\frac{E}{N} =\frac{p_F^2}{2m}\left\{ \frac{3}{5} +
\frac{2(p_Fa)/(3\pi)}{1-\frac{6}{35\pi}(11-2\log(2))(p_Fa)}\right\} ,
\end{equation}
from which we deduce $\xi\simeq 0.32$. This is reasonably close to
the experimental result, but since the system is strongly correlated
there is no obvious reason to restrict ourselves to two-body ladders.
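The value quoted for $\xi$ follows directly from the unitary limit of
equ.~(\ref{pp_lad}): letting $p_Fa\to-\infty$ the second term in the curly
brackets approaches a constant, and
\begin{equation}
\xi = 1 - \frac{5}{3}\,
\frac{2/(3\pi)}{\frac{6}{35\pi}\,(11-2\log 2)}
 = 1 - \frac{175}{27\,(11-2\log 2)} \simeq 0.33 ,
\end{equation}
which agrees, within rounding, with the number given above.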
We have recently studied the possibility that equ.~(\ref{pp_lad})
can be justified as the leading term in an expansion in $1/d$, where
$d$ is the number of space dimensions \cite{Steele:2000qt,Schafer:2005kg}.
This approach appears promising, but $1/d$ corrections have not been
worked out yet. Another possibility is to pursue numerical approaches.
Green function Monte Carlo calculations give $\xi=0.44$, in very good
agreement with the experimental result \cite{Carlson:2003wm}. Several
groups have performed euclidean lattice
calculations\cite{Chen:2003vy,Wingate:2004wm,Lee:2004qd,Bulgac:2005pj},
similar to the lattice QCD calculations discussed in Sect.~\ref{sec_lqcd}.
These calculations do not suffer from a sign problem and can be
extended to finite temperature.
\subsection{Nuclear Matter and Chiral Restoration}
\label{sec_nuc}
Ordinary nuclei consist of roughly equal numbers of neutrons
and protons. In order to study heavy nuclei it is useful to
consider nuclear matter in pure QCD, i.e. ignoring the
contribution from electrons as well as the Coulomb repulsion
between protons. As discussed in Sect.~\ref{sec_dense_intro}
nuclear matter saturates at a density $\rho_0\simeq 0.15\,
{\rm fm}^{-3}$. The binding energy of nuclear matter is
$B/A\simeq 15$ MeV. Numerical calculations based on realistic
nucleon-nucleon potentials are successful in reproducing
these numbers, but we do not understand very well why nuclear
matter saturates and how the saturation density and the binding
energy are related to more fundamental properties of QCD.
We also do not know very well how to extrapolate the
equation of state beyond the saturation density. An important
question is whether we expect chiral symmetry to be
restored as the density increases. If the density is small
this question can be studied using the method we employed
in Sect.~\ref{sec_arg}. The quark condensate is given by
\begin{equation}
\langle\bar{q}q\rangle_\rho = \frac{T}{V} \frac{\partial}{\partial m_q}
\log Z.
\end{equation}
The partition function of a dilute gas of protons and neutrons is
\begin{equation}
\log Z=4V\int\frac{d^3p}{(2\pi)^3}\log\Big( 1+e^{-(E_N-\mu)/T}\Big).
\end{equation}
The quark mass dependence of the nucleon mass is related to
the $\pi N$ Sigma term $\Sigma_{\pi N}=m_q\partial m_N/\partial m_q$.
We get
\begin{equation}
\langle\bar{q}q\rangle_\rho = \langle\bar{q}q\rangle_0
+ 4\int\frac{d^3p}{(2\pi)^3}
\frac{M_N}{E_N}\Big(\frac{\partial{M_N}}{\partial m_q}\Big)
\Theta(p_F-|\vec{p}|)
=\langle\bar{q}q\rangle_0
\Big\{ 1-\frac{\Sigma_{\pi N}\rho_0}{m_\pi^2 f_\pi^2}
\Big(\frac{\rho}{\rho_0}\Big)\Big\}.
\end{equation}
The Sigma term can be extracted in pion-nucleon scattering.
Using $\Sigma_{\pi N}\simeq 45$ MeV we find
\begin{equation}
\langle\bar{q}q\rangle_\rho\simeq\langle\bar{q}q\rangle_0
\Big\{ 1-\frac{1}{3} \Big(\frac{\rho}{\rho_0}\Big)\Big\},
\end{equation}
which indicates that the chiral condensate is significantly
modified already at nuclear matter saturation density.
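The factor $1/3$ is just the combination $\Sigma_{\pi N}\rho_0/(m_\pi^2
f_\pi^2)$ evaluated with standard numbers: using $\Sigma_{\pi N}\simeq 45$
MeV, $\rho_0\simeq 0.15\ {\rm fm}^{-3}\simeq 1.2\cdot 10^{6}\ {\rm MeV}^3$,
$m_\pi\simeq 140$ MeV and $f_\pi\simeq 93$ MeV,
\begin{equation}
\frac{\Sigma_{\pi N}\,\rho_0}{m_\pi^2 f_\pi^2} \simeq
\frac{45\cdot 1.2\cdot 10^{6}}{(140)^2\,(93)^2} \simeq 0.3 .
\end{equation}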
\subsection{Superfluidity}
\label{sec_bcs}
One of the most remarkable phenomena that take place in many body
systems is superfluidity. Superfluidity is related to an instability
of the Fermi surface in the presence of attractive interactions between
fermions. Let us consider fermion-fermion scattering in the simple
model introduced in Sect.~\ref{sec_fl}. At leading order the scattering
amplitude is given by
\begin{equation}
\label{pp_0}
\Gamma_{\alpha\beta\gamma\delta}(p_1,p_2,p_3,p_4) =
C_0 \left( \delta_{\alpha\gamma}\delta_{\beta\delta}
- \delta_{\alpha\delta}\delta_{\beta\gamma} \right).
\end{equation}
At next-to-leading order we find the corrections shown in Fig.~\ref{fig_bcs}.
A detailed discussion of the role of these corrections can be found in
\cite{Abrikosov:1963,Shankar:1993pf,Polchinski:1992ed}. The BCS diagram
is special, because in the case of a spherical Fermi surface it can lead
to an instability in weak coupling. The main point is that if the
incoming momenta satisfy $\vec{p}_1\simeq -\vec{p}_2$ then there are
no kinematic restrictions on the loop momenta. As a consequence, all
back-to-back pairs can mix and there is an instability even in weak
coupling.
\begin{figure}[t]
\includegraphics[width=11.0cm]{bcs.eps}
\caption{\label{fig_bcs}
Second order diagrams that contribute to particle-particle
scattering. The three diagrams are known as ZS (zero sound),
ZS' and BCS (Bardeen-Cooper-Schrieffer) contribution.}
\end{figure}
For $\vec{p}_1= -\vec{p}_2$ and $E_1=E_2=E$ the BCS diagram is given by
\begin{eqnarray}
\label{diag_bcs}
\Gamma_{\alpha\beta\gamma\delta} &=&
C_0^2 \left( \delta_{\alpha\gamma}\delta_{\beta\delta}
- \delta_{\alpha\delta}\delta_{\beta\gamma} \right)
\int \frac{d^4q}{(2\pi)^4}
\frac{1}{E+q_0-\epsilon_q+i\delta{\rm sgn}(\epsilon_q)}
\nonumber \\
& & \hspace{4cm}
\frac{1}{E-q_0-\epsilon_q+i\delta{\rm sgn}(\epsilon_q)}.
\end{eqnarray}
The loop integral has an infrared divergence near the Fermi surface
as $E\to 0$. The scattering amplitude is proportional to
\begin{equation}
\label{cor_bcs}
\Gamma_{\alpha\beta\gamma\delta} =
\left( \delta_{\alpha\gamma}\delta_{\beta\delta}
- \delta_{\alpha\delta}\delta_{\beta\gamma} \right)
\left\{
C_0 - C_0^2\left(\frac{p_Fm}{2\pi^2}\right)
\log\left(\frac{E_0}{E}\right) \right\},
\end{equation}
where $E_0$ is an ultraviolet cutoff. Equ.~(\ref{cor_bcs}) can be
interpreted as an effective energy dependent coupling that satisfies
the renormalization group equation \cite{Shankar:1993pf,Polchinski:1992ed}
\begin{equation}
\label{rge_bcs}
E\frac{dC_0}{dE} = C_0^2 \left(\frac{p_Fm}{2\pi^2}\right),
\end{equation}
with the solution
\begin{equation}
\label{rge_sol}
C_0(E) =\frac{C_0(E_0)}{1+NC_0(E_0)\log(E_0/E)},
\end{equation}
where $N=(p_Fm)/(2\pi^2)$ is the density of states. Equ.~(\ref{rge_sol})
shows that there are two possible scenarios. If the initial coupling is
repulsive, $C_0(E_0)>0$, then the renormalization group evolution will
drive the effective coupling to zero and the Fermi liquid is stable. If,
on the other hand, the initial coupling is attractive, $C_0(E_0)<0$, then
the effective coupling grows and reaches a Landau pole at
\begin{equation}
\label{E_lp}
E_{\it crit} \sim E_0
\exp\left(-\frac{1}{N|C_0(E_0)|}\right).
\end{equation}
At the Landau pole the Fermi liquid description has to break down. The
renormalization group equation does not determine what happens at this
point, but it seems natural to assume that the strong attractive interaction
will lead to the formation of a fermion pair condensate. The fermion
condensate $\langle\epsilon^{\alpha\beta}\psi_\alpha\psi_\beta\rangle$
signals the breakdown of the $U(1)$ symmetry and leads to a gap $\Delta$
in the single particle spectrum.
The scale of the gap is determined by the position of the Landau pole,
$\Delta\sim E_{\it crit}$. A more quantitative estimate of the gap can
be obtained in the mean field approximation. In the path integral formulation
the mean field approximation is most easily introduced using the
Hubbard-Stratonovich trick. For this purpose we first rewrite the
four-fermion interaction as
\begin{equation}
\label{4f_fierz}
\frac{C_0}{2}(\psi^\dagger\psi)^2 =
\frac{C_0}{4} \left\{
(\psi^\dagger\sigma_2\psi^\dagger)
(\psi\sigma_2\psi)
+(\psi^\dagger\sigma_2\vec{\sigma}\psi^\dagger)
(\psi\vec{\sigma}\sigma_2\psi)\right\},
\end{equation}
where we have used the Fierz identity $2\delta^{\alpha\beta}
\delta^{\gamma\rho} = \delta^{\alpha\rho}\delta^{\gamma\beta}+
(\vec{\sigma})^{\alpha\rho}(\vec{\sigma})^{\gamma\beta}$. Note that
the second term in equ.~(\ref{4f_fierz}) vanishes because $(\sigma_2
\vec{\sigma})$ is a symmetric matrix. We now introduce a factor of
unity into the path integral
\begin{equation}
1 = \frac{1}{Z_\Delta}\int D\Delta
\exp\left(\frac{\Delta^*\Delta}{C_0}\right),
\end{equation}
where we assume that $C_0<0$. We can eliminate the four-fermion
term in the lagrangian by a shift in the integration variable $\Delta$.
The action is now quadratic in the fermion fields, but it involves
a Majorana mass term $\psi\sigma_2\Delta \psi+h.c$. The
Majorana mass terms can be handled using the Nambu-Gorkov
method. We introduce the bispinor $\Psi=(\psi,\psi^\dagger
\sigma_2)$ and write the fermionic action as
\begin{equation}
\label{s_ng}
{S} = \frac{1}{2}\int\frac{d^4p}{(2\pi)^4}
\Psi^\dagger
\left(\begin{array}{cc}
p_0-\epsilon_p & \Delta \\
\Delta^* & p_0+\epsilon_p
\end{array}\right) \Psi.
\end{equation}
Since the fermion action is quadratic we can integrate
the fermion out and obtain the effective lagrangian
\begin{equation}
\label{s_ng_eff}
L= \frac{1}{2}{\rm Tr}\left[\log\left(
G_0^{-1}G\right)\right]+\frac{1}{C_0}|\Delta|^2,
\end{equation}
where $G$ is the fermion propagator
\begin{equation}
\label{ng_prop}
G(p) = \frac{1}{p_0^2-\epsilon_p^2-|\Delta|^2}
\left(\begin{array}{cc}
p_0+\epsilon_p & \Delta^* \\
\Delta & p_0-\epsilon_p
\end{array}\right).
\end{equation}
The diagonal and off-diagonal components of $G(p)$ are
sometimes referred to as normal and anomalous propagators.
Note that we have not yet made any approximation. We have
converted the fermionic path integral to a bosonic one, albeit
with a very non-local action. The mean field approximation
corresponds to evaluating the bosonic path integral using
the saddle point method. Physically, this approximation
means that the order parameter does not fluctuate.
Formally, the mean field approximation can be
justified in the large $N$ limit, where $N$ is the
number of fermion fields. The saddle point equation
for $\Delta$ gives the gap equation
\begin{equation}
\Delta = |C_0|\int\frac{d^4p}{(2\pi)^4}
\frac{\Delta}{p_0^2-\epsilon^2_p-\Delta^2}.
\end{equation}
Performing the $p_0$ integration we find
\begin{equation}
\label{4f_gap}
1 = \frac{|C_0|}{2}\int\frac{d^3p}{(2\pi)^3}
\frac{1}{\sqrt{\epsilon^2_p+\Delta^2}}.
\end{equation}
Since $\epsilon_p=E_p-\mu$ the integral in equ.~(\ref{4f_gap})
has an infrared divergence on the Fermi surface $|\vec{p}| \sim
p_F$. As a result, the gap equation has a non-trivial solution even
if the coupling is arbitrarily small. The magnitude of the gap is
$\Delta\sim \Lambda \exp(-1/(|C_0|N))$ where $\Lambda$ is a cutoff
that regularizes the integral in equ.~(\ref{4f_gap}) in the ultraviolet.
If we treat equ.~(\ref{l_4f}) as a low energy effective field theory
we should be able to eliminate the unphysical dependence of the gap
on the ultraviolet cutoff, and express the gap in terms of a physical
observable. At low density this can be achieved by observing that
the gap equation has the same UV behavior as the Lippmann-Schwinger
equation that determines the scattering length at zero density
\begin{equation}
\label{bubble}
\frac{mC_0}{4\pi a} - 1 = \frac{C_0}{2}
\int\frac{d^3p}{(2\pi)^3}\frac{1}{E_p}.
\end{equation}
Combining equs.~(\ref{4f_gap}) and (\ref{bubble}) we can derive an
UV finite gap equation that depends only on the scattering length,
\begin{equation}
-\frac{m}{4\pi a} =
\frac{1}{2}\int\frac{d^3p}{(2\pi)^3} \Big\{
\frac{1}{\sqrt{\epsilon^2_p+\Delta^2}}
-\frac{1}{E_p}\Big\}.
\end{equation}
Solving for $\Delta$ we find \cite{Papenbrock:1998wb,Khodel:1996}
\begin{equation}
\label{gap_lowd}
\Delta = \frac{8E_f}{e^2}\exp\left(-\frac{\pi}{2p_F|a|}\right).
\end{equation}
Higher order corrections reduce the pre-exponent in this
result by a factor $(4e)^{1/3}\simeq 2.2$ \cite{Gorkov:1961}.
Like the perturbative calculation of the energy per particle
this result is not very useful for neutron matter, since the
scattering length is very large. Taking higher order corrections
into account, Equ.~(\ref{gap_lowd}) suggests that $\Delta \sim
0.49 E_f$ as $p_F|a|\to\infty$. Surprisingly, this estimate
agrees very well with numerical calculations \cite{Chang:2004sj}.
The gap is also quite sensitive to the effective range of the
interaction. Calculations based on potential models give gaps
on the order of 2 MeV at nuclear matter density.
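Equ.~(\ref{gap_lowd}) can be checked against a direct numerical solution
of the regularized gap equation. A schematic Python fragment (added as an
illustration; it uses units $m=p_F=1$, so $E_f=1/2$, and a finite momentum
cutoff in place of the infinite integration range) is
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def gap_condition(Delta, kFa):
    # regularized gap equation in units m = p_F = 1 (E_p = p^2/2, mu = 1/2);
    # the upper limit 200 approximates the convergent ultraviolet tail
    def integrand(p):
        eps = 0.5*p**2 - 0.5
        return p**2/(4*np.pi**2)*(1.0/np.sqrt(eps**2 + Delta**2) - 2.0/p**2)
    integral, _ = quad(integrand, 0.0, 200.0, points=[1.0], limit=500)
    return integral + 1.0/(4*np.pi*kFa)   # vanishes at the solution (kFa < 0)

for kFa in [-0.5, -1.0]:
    Delta = brentq(gap_condition, 1e-6, 1.0, args=(kFa,))
    weak  = (8.0*0.5/np.e**2)*np.exp(-np.pi/(2.0*abs(kFa)))
    print(kFa, Delta, weak)   # the two agree in the weak coupling limit
\end{verbatim}
For small $p_F|a|$ the numerical solution reproduces the analytic result,
while at stronger coupling the pre-exponential factor starts to deviate.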
\subsection{Landau-Ginzburg theory}
\label{sec_lg}
In neutron stars there is not only pairing between neutrons
but also pairing between protons. Since protons carry charge
this implies that the material is not only a superfluid but
also a superconductor. Superconductors have many interesting
properties which can be understood from the symmetries
involved. We will consider a system of protons coupled to a
$U(1)$ gauge field $A_\mu$. The order parameter $\Phi=\langle
\epsilon^{\alpha\beta}\psi_\alpha\psi_\beta\rangle$ breaks
$U(1)$ invariance. Consider a gauge transformation
\begin{equation}
A_\mu\to A_\mu +\partial_\mu\Lambda .
\end{equation}
The order parameter transforms as
\begin{equation}
\Phi \to \exp(2ie\Lambda)\Phi.
\end{equation}
The breaking of gauge invariance is responsible for most of the
unusual properties of superconductors \cite{Anderson:1984,Weinberg:1995}.
This can be seen by constructing the low energy effective action
of a superconductor. For this purpose we write the order parameter
in terms of its modulus and phase
\begin{equation}
\Phi(x) = \exp(2ie\phi(x)) \tilde\Phi(x).
\end{equation}
The field $\phi$ corresponds to the Goldstone mode. Under a gauge
transformation $\phi(x)\to\phi(x)+\Lambda(x)$.
Gauge invariance restricts the form of the effective Lagrange
function
\begin{equation}
\label{L_sc}
L = -\frac{1}{4}\int d^3x\, F_{\mu\nu}F_{\mu\nu}
+ L_s (A_\mu-\partial_\mu\phi).
\end{equation}
There is a large amount of information we can extract even
without knowing the explicit form of $L_s$. Stability implies
that $A_\mu=\partial_\mu\phi$ corresponds to a minimum of the
energy. This means that up to boundary effects the gauge
potential is a pure gauge and that the magnetic field
has to vanish. This phenomenon is known as the Meissner
effect.
Equ.~(\ref{L_sc}) also implies that a superconductor
has zero resistance. The equations of motion relate
the time dependence of the Goldstone boson field to
the potential,
\begin{equation}
\label{phidot}
\dot\phi(x)=-V(x).
\end{equation}
The electric current is related to the gradient of the
Goldstone boson field. Equ.~(\ref{phidot}) shows that the
time dependence of the current is proportional to the
gradient of the potential. In order to have a static
current the potential has to be constant
throughout the sample, and the resistance is zero.
In order to study the properties of a superconductor in more
detail we have to specify $L_s$. For this purpose we assume
that the system is time-independent, that the spatial
gradients are small, and that the order parameter is small.
In this case we can write
\begin{equation}
\label{l_lg}
L_s = \int d^3x\, \left\{
-\frac{1}{2}\left|\left(\nabla-2ie\vec{A}\right)\Phi\right|^2
+\frac{1}{2}m^2_H\left(\Phi^*\Phi\right)
-\frac{1}{4}g\left(\Phi^*\Phi\right)^2 + \ldots \right\},
\end{equation}
where $m_H$ and $g$ are unknown parameters that depend
on the temperature. Equ.~(\ref{l_lg}) is known as the
Landau-Ginzburg effective action. Strictly speaking, the
assumption that the order parameter is small can only be
justified in the vicinity of a second order phase transition.
Nevertheless, the Landau-Ginzburg description is instructive
even in the regime where $t=(T-T_c)/T_c$ is not small. It is
useful to decompose $\Phi=\rho\exp(2ie\phi)$. For constant
fields the effective potential,
\begin{equation}
\label{v_lg}
V(\rho)=-\frac{1}{2}m_H^2\rho^2 +\frac{1}{4}g\rho^4 ,
\end{equation}
is independent of $\phi$. The minimum is at $\rho_0^2=m_H^2/g$
and the energy density at the minimum is given by ${E}=
-m_H^4/(4g)$. This shows that the two parameters $m_H$ and
$g$ can be related to the expectation value of $\Phi$ and
the condensation energy. We also observe that the phase
transition is characterized by $m_H(T_c)=0$.
In terms of $\phi$ and $\rho$ the Landau-Ginzburg action
is given by
\begin{equation}
L_s = \int d^3x\, \left\{
-2e^2\rho^2 \left(\vec\nabla\phi-\vec{A}\right)^2
+\frac{1}{2}m_H^2\rho^2 -\frac{1}{4}g\rho^4
-\frac{1}{2}\left(\nabla\rho\right)^2
\right\}.
\end{equation}
The equations of motion for $\vec{A}$ and $\rho$
are given by
\begin{eqnarray}
\label{b_lg}
\vec\nabla\times \vec{B} &=&
4e^2\rho^2 \left(\nabla\phi -\vec{A}\right), \\
\label{rho_lg}
\nabla^2 \rho &=&
-m_H^2\rho + g\rho^3 + 4e^2 \rho
\left( \vec\nabla\phi-\vec{A}\right)^2 .
\end{eqnarray}
Equ.~(\ref{b_lg}) implies that $\nabla^2\vec{B} = 4e^2\rho^2\vec{B}$.
This means that an external magnetic field $\vec{B}$ decays over a
characteristic distance $\lambda=1/(2e\rho)$. Equ.~(\ref{rho_lg})
gives $\nabla^2\rho = -m_H^2\rho+\ldots$. As a consequence, variations
in the order parameter relax over a length scale given by $\xi=1/m_H$.
The two parameters $\lambda$ and $\xi$ are known as the penetration
depth and the coherence length.
The relative size of $\lambda$ and $\xi$ has important consequences
for the properties of superconductors. In a type II superconductor
$\xi<\lambda$ (more precisely, $\lambda/\xi>1/\sqrt{2}$ in the
Landau-Ginzburg theory). In this case magnetic flux can penetrate the system
in the form of vortex lines. At the core of a vortex the order parameter
vanishes, $\rho=0$. In a type II material the core is much smaller
than the region over which the magnetic field goes to zero. The
magnetic flux is given by
\begin{equation}
\int_A\vec{B}\cdot d\vec{S} =
\oint_{\partial A} \vec{A}\cdot d\vec{l} =
\oint_{\partial A} \vec{\nabla}\phi \cdot d\vec{l} =
\frac{n\pi\hbar}{e} ,
\end{equation}
and quantized in units of $\pi\hbar/e$. In a type II superconductor
magnetic vortices repel each other and form a regular lattice known
as the Abrikosov lattice. In a type I material, on the other hand,
vortices are not stable and magnetic fields can only penetrate
the sample if superconductivity is destroyed.
\section{QCD at high density}
\label{sec_dqcd}
\subsection{Color superconductivity}
\label{sec_csc}
In Sect.~\ref{sec_dense_intro} we introduced a few simple arguments
concerning the phase diagram of QCD in the $\mu-T$ plane. These arguments
are summarized in Fig.~\ref{fig_phase_1}. The basic idea is that large
baryon density is just like high temperature: there is a large scale in
the problem, the effective coupling is weak, and the system is described,
to a good approximation, as a weakly interacting quark liquid. We expect,
in particular, that quarks are deconfined and that chiral symmetry is
restored.
We also showed, however, that systems at finite density, as exemplified
by nuclear matter, have a very rich phase diagram. We saw, in particular,
that the BCS instability will lead to pair condensation whenever there
is an attractive fermion-fermion interaction, even if the interaction
is weak. At very large density, the attraction is provided by
one-gluon exchange between quarks in a color anti-symmetric $\bar 3$
state. High density quark matter is therefore expected to behave as a
color superconductor \cite{Frau_78,Barrois:1977xd,Bar_79,Bailin:1984bm}.
Color superconductivity is described by a pair condensate of the form
\begin{equation}
\label{csc}
\Phi = \langle \psi^TC\Gamma_D\lambda_C\tau_F\psi\rangle.
\end{equation}
Here, $C$ is the charge conjugation matrix, and $\Gamma_D,
\lambda_C,\tau_F$ are Dirac, color, and flavor matrices.
Except in the case of only two colors, the order parameter
cannot be a color singlet. Color superconductivity is
therefore characterized by the breakdown of color gauge
invariance. This statement has to be interpreted in the
sense of Sect.~\ref{sec_lg}. Gluons acquire a mass due
to the (Meissner-Anderson) Higgs mechanism.
\begin{figure}[t]
\begin{center}\includegraphics[width=12.0cm]{phase_second.eps}\end{center}
\caption{\label{fig_phase_2}
First revision of the phase diagram of hadronic matter.
This figure shows the phase diagram of strongly interacting
matter obtained from a mean field treatment of chiral symmetry
breaking and color superconductivity in QCD with two flavors.}
\end{figure}
A rough estimate of the critical density for the transition
from chiral symmetry breaking to color superconductivity, the
superconducting gap and the transition temperature is provided
by schematic four-fermion models \cite{Alford:1998zt,Rapp:1998zu}.
Typical models are based on the instanton interaction
\begin{equation}
\label{l_I}
{\mathcal L} = G_{I}\left\{
(\bar\psi\tau^-_\alpha\psi)^2 +
(\bar\psi\gamma_5\tau^-_\alpha\psi)^2
\right\},
\end{equation}
or a schematic one-gluon exchange interaction
\begin{equation}
\label{l_OGE}
{\mathcal L} = G_{OGE}\left(\bar{\psi}\gamma_\mu
\frac{\lambda^a}{2}\psi\right)^2 .
\end{equation}
Here $\tau^-_\alpha=(\vec{\tau},i)$ is an isospin matrix and
$\lambda^a$ are the color Gell-Mann matrices. The strength
of the four-fermion interaction is typically tuned to reproduce
the magnitude of the chiral condensate and the pion decay
constant at zero temperature and density. In the mean field
approximation the effective quark mass associated with chiral
symmetry breaking is determined by a gap equation of the type
\begin{equation}
\label{m_gap}
M_Q = G_M \int^\Lambda\frac{d^3p}{(2\pi)^3}
\frac{M_Q}{\sqrt{{\vec{p}}^{\,2}+M_Q^2}}
\left(1-n_F(E_p)\right),
\end{equation}
where $G_M$ is the effective coupling in the quark-anti-quark
channel, $\Lambda$ is a cutoff, and $n_F(E)$ is the Fermi
distribution. Both the instanton interaction
and the one-gluon exchange interaction are attractive in the color
anti-triplet scalar diquark channel $\epsilon^{abc}(\psi^b C\gamma_5
\psi^c)$. A pure one-gluon exchange interaction leads to a
degeneracy between scalar and pseudoscalar diquark condensation,
but instantons are repulsive in the pseudoscalar diquark
channel. The gap equation in the scalar diquark channel is
\begin{equation}
\label{d_gap}
\Delta = \frac{G_D}{2} \int^\Lambda\frac{d^3p}{(2\pi)^3}
\frac{\Delta}{\sqrt{(|\vec{p}|-p_F)^2+\Delta^2}},
\end{equation}
where we have neglected terms that do not have a singularity
on the Fermi surface $|\vec{p}|=p_F$. In the case of a
four-fermion interaction with the quantum numbers of one-gluon
exchange $G_D=G_M/(N_c-1)$. The same result holds for instanton
effects. In order to determine the correct ground state we have
to compare the condensation energy in the chiral symmetry broken
and diquark condensed phases. We have ${E} \sim f_\pi^2M_Q^2$ in
the $(\bar{q}q)$ condensed phase and ${E}\sim p_F^2\Delta^2/(2\pi^2)$
in the $(qq)$ condensed phase.
\begin{figure}[t]
\begin{center}\includegraphics[width=9.0cm]{compare.eps}\end{center}
\caption{\label{fig_tri}
Location of the (pseudo) critical line and tri-critical point (box)
as measured in different simulations, de Forcrand and Philipsen
(2002) [FP], Fodor and Katz (2002) [FK], Allton et al (2002)
[All], D'Elia and Lombardo (2002) [EL]. Figure from de Forcrand
and Philipsen (2003).}
\end{figure}
At zero temperature and density both equs.~(\ref{m_gap}) and
(\ref{d_gap}) only have non-trivial solutions if the coupling
exceeds a critical value. Since $G_M>G_D$ we have $M_Q>\Delta$ and
the energetically preferred solution corresponds to chiral symmetry
breaking. If the density increases Pauli-Blocking in equ.~(\ref{m_gap})
becomes important and the effective quark mass decreases. The diquark
gap equation behaves very differently. Equ.~(\ref{d_gap}) has an infrared
singularity on the Fermi surface, $p=p_F$, and this singularity is
multiplied by a finite density of states, $N=p_F^2/(2\pi)^2$. As a
consequence, there is a non-trivial solution even if the coupling is
weak. The gap grows with density until the Fermi momentum becomes on
the order of the cutoff. For realistic values of the parameters we
find a first order transition for remarkably small values of the quark
chemical potential, $\mu_Q\simeq 300$ MeV. The gap in the diquark
condensed phase is $\Delta\sim 100$ MeV and the critical temperature
is $T_c\sim 50$ MeV.
In the same model the finite temperature phase transition at
zero baryon density is found to be of second order. This result
is in agreement with universality arguments \cite{Pisarski:ms}
and lattice results. If the transition at finite density and zero
temperature is indeed of first order then the first order transition
at zero baryon density has to end in a tri-critical point
\cite{Barducci:1989wi,Berges:1998,Halasz:1998}.
The tri-critical point is quite remarkable, because it remains a true
critical point even if the quark masses are not zero. A non-zero quark
mass turns the second order $T\neq 0$ transition into a smooth crossover,
but the first order $\mu\neq 0$ transition persists. It is hard to
predict where exactly the tri-critical point is located in the phase
diagram. Recent lattice calculations suggest that the tri-critical
point is sufficiently close to the finite temperature axis so that
its location can be determined on the lattice, see Fig.~\ref{fig_tri}.
It may also be possible to locate the critical point experimentally.
Heavy ion collisions at relativistic energies produce matter under the
right conditions and experimental signatures of the tri-critical point
have been suggested in \cite{Stephanov:1998}.
A schematic phase diagram is shown in Fig.~\ref{fig_phase_2}. We
should emphasize that this phase diagram is based on simplified models
and that there is no proof that the transition from nuclear matter to
quark matter along the $T=0$ line occurs via a single first order
transition. Chiral symmetry breaking and color superconductivity
represent two competing forms of order, and it seems unlikely that
the two phases are separated by a second order transition. However,
since color superconductivity modifies the spectrum near the Fermi
surface, whereas chiral symmetry breaking operates near the surface
of the Dirac sea, it is not clear that the two phases cannot coexist.
Indeed, there are models in which a phase coexistence region appears
\cite{Kitazawa:2002bc}.
\subsection{Phase structure in weak coupling}
\label{sec_phases}
\subsubsection{QCD with two flavors}
\label{sec_nf2}
In this section we shall discuss how to use weak coupling methods
in order to explore the phases of dense quark matter. We begin with
what is usually considered to be the simplest case, quark matter with
two degenerate flavors, up and down. Renormalization group arguments
suggest \cite{Evans:1999ek,Schafer:1999na}, and explicit calculations
confirm \cite{Brown:1999yd,Schafer:2000tw}, that whenever possible quark
pairs condense in an $s$-wave state. This means that the spin wave function
of the pair is anti-symmetric. Since the color wave function is also
anti-symmetric, the Pauli principle requires the flavor wave function
to be anti-symmetric too. This essentially determines the structure of
the order parameter \cite{Alford:1998zt,Rapp:1998zu}
\begin{equation}
\label{2sc}
\Phi^a = \langle \epsilon^{abc}\psi^b C\gamma_5 \tau_2\psi^c
\rangle.
\end{equation}
This order parameter breaks the color $SU(3)\to SU(2)$ and
leads to a gap for up and down quarks with two out of the
three colors. Chiral and isospin symmetry remain unbroken.
\begin{figure}[t]
\begin{center}\includegraphics[width=11.0cm]{sc3.eps}\end{center}
\caption{\label{fig_gap}
Fig.~a) shows the leading contribution to the Dyson-Schwinger
(gap) equation in QCD at finite density. The open square denotes an
anomalous self energy (gap) insertion and the solid square is a gluon
self energy insertion. Figs.~b) and c) show quark self energy insertions
and vertex corrections.}
\end{figure}
We can calculate the magnitude of the gap and the condensation energy
using weak coupling methods. In weak coupling the gap is determined by
ladder diagrams with the one gluon exchange interaction. These diagrams
can be summed using the gap equation
\cite{Son:1999uk,Schafer:1999jg,Pisarski:2000tv,Hong:2000fh,Brown:1999aq}
\begin{eqnarray}
\label{eliash}
\Delta(p_4) &=& \frac{g^2}{12\pi^2} \int dq_4\int d\cos\theta\,
\left(\frac{\frac{3}{2}-\frac{1}{2}\cos\theta}
{1-\cos\theta+G/(2\mu^2)}\right. \\
& & \hspace{3cm}\left. +\frac{\frac{1}{2}+\frac{1}{2}\cos\theta}
{1-\cos\theta+F/(2\mu^2)} \right)
\frac{\Delta(q_4)}{\sqrt{q_4^2+\Delta(q_4)^2}}. \nonumber
\end{eqnarray}
Here, $\Delta(p_4)$ is the frequency dependent gap, $g$ is the QCD
coupling constant and $G$ and $F$ are the self energies of magnetic
and electric gluons. This gap equation is very similar to the BCS gap
equation equ.~(\ref{d_gap}) obtained in four-fermion models. The two terms
in the brackets arise from the magnetic and electric components
of the gluon propagator. The numerators are the on-shell matrix elements
${M}_{ii,00}=[\bar{u}_h(p_1)\gamma_{i,0}u_h(p_3)][\bar{u}_h(p_2)\gamma_{i,0}
u_h(p_4)]$ for the scattering of back-to-back fermions on the Fermi surface.
The scattering angle is $\cos\theta=\hat{p}_1\cdot\hat{p}_3$. In the case
of a spin zero order parameter, the helicity $h$ of all fermions is the
same, see \cite{Schafer:1999jg} for more detail.
The main difference between equ.~(\ref{eliash}) and the BCS gap equation
(\ref{d_gap}) is that because the gluon is massless, the gap equation
contains a collinear divergence for $\cos\theta\sim 1$. In a dense medium
the collinear divergence is regularized by the gluon self energy. For
$\vec{q}\to 0$ and to leading order in perturbation theory we have
\begin{equation}
\label{pi_qcd}
F = 2m^2, \hspace{1cm}
G = \frac{\pi}{2}m^2\frac{q_4}{|\vec{q}|},
\end{equation}
with $m^2=N_fg^2\mu^2/(4\pi^2)$. In the electric part, $m_D^2=2m^2$ is
the familiar Debye screening mass. In the magnetic part, there is no
screening of static modes, but non-static modes are dynamically
screened due to Landau damping. This is completely analogous to the
situation at finite temperature \cite{Manuel:1995td}, see
Sect.~\ref{sec_pqcd}.
For small energies dynamic screening of magnetic modes is much weaker
than Debye screening of electric modes. As a consequence, perturbative
color superconductivity is dominated by magnetic gluon exchanges. Using
equ.~(\ref{pi_qcd}) we can perform the angular integral in equ.~(\ref{eliash})
and find
\begin{equation}
\label{eliash_mel}
\Delta(p_4) = \frac{g^2}{18\pi^2} \int dq_4
\log\left(\frac{b\mu}{\sqrt{|p_4^2-q_4^2|}}\right)
\frac{\Delta(q_4)}{\sqrt{q_4^2+\Delta(q_4)^2}},
\end{equation}
with $b=256\pi^4(2/N_f)^{5/2}g^{-5}$. We can now see why it was important
to keep the frequency dependence of the gap. Because the collinear divergence
is regulated by dynamic screening, the gap equation depends on $p_4$
even if the frequency is small. We can also see that the gap scales
as $\exp(-c/g)$. The collinear divergence leads to a gap equation with
a double-log behavior. Qualitatively
\begin{equation}
\label{dlog}
\Delta \sim \frac{g^2}{18\pi^2}\Delta
\left[\log\left(\frac{\mu}{\Delta}\right)\right]^2,
\end{equation}
from which we conclude that $\Delta\sim\exp(-c/g)$. Equ.~(\ref{dlog})
is not sufficiently accurate to determine the correct value of the
constant $c$. A more detailed analysis shows that the gap on the
Fermi surface is given by
\begin{equation}
\label{gap_oge}
\Delta_0 \simeq 512\pi^4(2/N_f)^{5/2}b'\mu g^{-5}
\exp\left(-\frac{3\pi^2}{\sqrt{2}g}\right).
\end{equation}
The factor $b'$ is related to non-Fermi liquid effects, see
\cite{Brown:2000eh,Ipp:2003cj,Schafer:2004zf}. In perturbation
theory $b'=\exp(-(\pi^2+4)(N_c-1)/16)$
\cite{Brown:1999aq,Wang:2001aq,Schafer:2003jn}. The condensation
energy is given by
\begin{equation}
\epsilon = -N_d \Delta_0^2\left(\frac{\mu^2}{4\pi^2}\right),
\end{equation}
where $N_d=4$ is the number of condensed species. In the mean field
approximation the critical temperature is $T_c/\Delta_0 =e^\gamma/
\pi\simeq 0.56$, as in standard BCS theory \cite{Pisarski:2000tv}.
Fluctuations of the gauge field drive the transition first order
\cite{Bailin:1984bm,Matsuura:2003md}. Surprisingly, gauge field
fluctuations increase the critical temperature as compared to the
BCS result \cite{Giannakis:2004xt}.
For chemical potentials $\mu<1$ GeV, the coupling constant is not
small and the applicability of perturbation theory is in doubt. If
we ignore this problem and extrapolate the perturbative calculation
to densities $\rho\simeq 5\rho_0$ we find gaps on the order of 10's
of MeV. However, perturbative estimates also show that instanton
effects cannot be neglected for $\mu<1$ GeV, and that instantons
increase the gap \cite{Schafer:2004yx}.
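To get a feeling for the numbers, equ.~(\ref{gap_oge}) can be evaluated
directly. The schematic Python fragment below (added as an illustration;
the chosen values of $\mu$ and of the coupling $g$ at that scale are
assumptions, not results taken from the text) does this for two flavors:
\begin{verbatim}
import numpy as np

def gap(mu, g, Nf=2, Nc=3):
    # leading order weak coupling gap, equ. (gap_oge)
    bprime = np.exp(-(np.pi**2 + 4.0)*(Nc - 1)/16.0)
    pref = 512.0*np.pi**4*(2.0/Nf)**2.5*bprime*mu/g**5
    return pref*np.exp(-3.0*np.pi**2/(np.sqrt(2.0)*g))

# illustrative choice: mu = 500 MeV and g ~ 3.5 at that scale
print(gap(500.0, 3.5))   # of the order of tens of MeV
\end{verbatim}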
We note that the 2SC phase defined by equ.~(\ref{2sc}) has two gapless
fermions and an unbroken $SU(2)$ gauge group. The gapless fermions are
singlets under the unbroken $SU(2)$. As a consequence, we expect the
$SU(2)$ gauge group to become non-perturbative. An estimate of the
$SU(2)$ confinement scale was given in \cite{Rischke:2000cn}. We also
note that even though the Cooper pairs carry electric charge the $U(1)$
of electromagnetism is not broken. The generator of this symmetry is a
linear combination of the original electric charge operator and the
diagonal color charges. Under this symmetry the gapless fermions carry
the charges of the proton and neutron. Possible pairing between the
gapless fermions was discussed in \cite{Alford:1998zt,Alford:2002xx}.
\begin{figure}[t]
\includegraphics[width=9.0cm]{phase_third.eps}
\caption{\label{fig_phase_3}
Conjectured phase diagram of $N_f=3$ hadronic matter in the
limit of exact flavor symmetry.}
\end{figure}
\subsubsection{QCD with three flavors: Color-Flavor-Locking}
\label{sec_cfl}
If quark matter is formed at densities several times nuclear matter
density we expect the quark chemical potential to be larger than the
strange quark mass. We therefore have to determine the structure of the
superfluid order parameter for three quark flavors. We begin with the
idealized situation of three degenerate flavors. From the arguments
given in the last section we expect the order parameter to be
a color-flavor matrix of the form
\begin{equation}
\label{order}
\Phi^{ab}_{ij}=
\langle \psi^a_i C\gamma_5\psi^b_j\rangle.
\end{equation}
The structure of this matrix can be determined by extremizing
the grand canonical potential. We find \cite{Schafer:1999fe,Evans:1999at}
\begin{equation}
\label{cfl}
\Delta^{ab}_{ij} =
\Delta_A (\delta_i^a\delta_j^b-\delta_i^b\delta_j^a)
+\Delta_S (\delta_i^a\delta_j^b+\delta_i^b\delta_j^a),
\end{equation}
which describes the color-flavor locked (CFL) phase proposed by Alford,
Rajagopal, and Wilczek \cite{Alford:1999mk}. In the weak coupling limit
$\Delta_S \ll\Delta_A$ and $\Delta_A=2^{-1/3}\Delta_0$ where $\Delta_0$
is the gap in the 2SC phase, equ.~(\ref{gap_oge}) \cite{Schafer:1999fe}.
In the CFL phase both color and flavor symmetry are completely broken.
There are eight combinations of color and flavor symmetries that generate
unbroken global symmetries. The unbroken symmetries are
\begin{equation}
\psi^a_{L,i}\to (U^*)^{ab}U_{ij}\psi^b_{Lj},
\hspace{1cm}
\psi^a_{R,i}\to (U^*)^{ab}U_{ij}\psi^b_{Rj},
\end{equation}
for $U\in SU(3)_V$. The symmetry breaking pattern is
\begin{equation}
\label{sym_3}
SU(3)_L\times SU(3)_R\times U(1)_V\to SU(3)_V .
\end{equation}
We observe that color-flavor-locking implies that chiral symmetry is
broken. The mechanism for chiral symmetry breaking is quite unusual. The
primary order parameter $\langle \psi^a_{Li}C\Delta^{ab}_{ij}\psi^b_{Lj}
\rangle=-\langle \psi^a_{Ri}C\Delta^{ab}_{ij}\psi^b_{Rj}\rangle$ involves
no coupling between left and right handed fermions. In the CFL phase both
left and right handed flavor are locked to color, and because of the
vectorial coupling of the gluon left handed flavor is effectively locked
to right handed flavor. Chiral symmetry breaking also implies that
$\langle \bar{\psi}\psi\rangle$ has a non-zero expectation value
\cite{Schafer:2002ty}. In the CFL phase $\langle \bar{\psi}\psi
\rangle^2\ll \langle(\bar{\psi}\psi)^2 \rangle$. Another measure of
chiral symmetry breaking is provided by the pion decay constant.
We will see that in the weak coupling limit $f_\pi^2$ is proportional
to the density of states on the Fermi surface.
The symmetry breaking pattern $SU(3)_L\times SU(3)_R \to SU(3)_V$
in the CFL phase is identical to the symmetry breaking pattern in QCD at
low density. The spectrum of excitations in the color-flavor-locked (CFL)
phase also looks remarkably like the spectrum of QCD at low density
\cite{Schafer:1999ef}. The excitations can be classified according to
their quantum numbers under the unbroken $SU(3)$, and by their electric
charge. The modified charge operator that generates a true symmetry of
the CFL phase is given by a linear combination of the original charge
operator $Q_{em}$ and the color hypercharge operator $Q={\rm diag}(-2/3,
-2/3,1/3)$. Also, baryon number is only broken modulo 2/3, which means
that one can still distinguish baryons from mesons. We find that the
CFL phase contains an octet of Goldstone bosons associated with chiral
symmetry breaking, an octet of vector mesons, an octet and a singlet of
baryons, and a singlet Goldstone boson related to superfluidity. All of
these states have integer charges.
With the exception of the $U(1)$ Goldstone boson, these states exactly
match the quantum numbers of the lowest lying multiplets in QCD at low
density. In addition to that, the presence of the $U(1)$ Goldstone boson
can also be understood. The $U(1)$ order parameter is $\langle (uds)(uds)
\rangle$. This order parameter has the quantum numbers of a $0^+$ $(\Lambda
\Lambda)$ pair condensate. In $N_f=3$ QCD, this is the most symmetric two
nucleon channel, and a very likely candidate for superfluidity in nuclear
matter at low to moderate density. We conclude that in QCD with three
degenerate light flavors, there is no fundamental difference between the
high and low density phases. This implies that a low density hyper-nuclear
phase and the high density quark phase might be continuously connected,
without an intervening phase transition. A conjectured phase diagram is
shown in Fig.~\ref{fig_phase_3}.
\subsection{The role of the strange quark mass}
\label{sec_ms}
At baryon densities relevant to astrophysical objects dis\-tor\-tions
of the pure CFL state due to non-zero quark masses cannot be neglected
\cite{Alford:1999pa,Schafer:1999pb}. The most important effect of a
non-zero strange quark mass is that the light and strange quark
Fermi momenta will no longer be equal. When the mismatch is much
smaller than the gap one calculates assuming degenerate quarks, we
might expect that it has very little consequence, since at this
level the original particle and hole states near the Fermi surface
are mixed up anyway. On the other hand, when the mismatch is much
larger than the nominal gap, we might expect that the ordering one
would obtain for degenerate quarks is disrupted, and that to a first
approximation one can treat the light and heavy quark dynamics separately.
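A simple estimate shows why this competition is non-trivial: for a common
quark chemical potential $\mu$, and treating $m_s$ as a perturbation, the
mismatch between the light and strange Fermi momenta is
\begin{equation}
\delta p_F \simeq \frac{m_s^2}{2\mu}\, ,
\end{equation}
which for illustrative values $m_s\simeq 150$ MeV and $\mu\simeq 500$ MeV
is about $20$ MeV, i.e.~of the same order as the gaps estimated in the
previous sections.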
\begin{figure}[t]
\begin{center}\includegraphics[width=10cm]{hdet.eps}\end{center}
\caption{\label{fig_eft}
Hierarchy of effective field theories in the CFL phase.}
\end{figure}
This argument is qualitatively right, but the correct picture
turns out to be much more complicated, and much more interesting.
If the strange quark mass is taken into account microscopic
calculations based on the Dyson-Schwinger equation become much
more complicated, because there are many more gap parameters,
and maintaining electric neutrality and color gauge invariance
is difficult \cite{Steiner:2002gx,Alford:2002kj,Neumann:2002jm}.
However, since chiral symmetry is broken in the CFL phase we know
that the dependence on the quark masses is constrained by chiral
symmetry. It is therefore natural to study the problem using
effective field theories. In practice we will employ a two-step
procedure, see Fig.~\ref{fig_eft}. In the first step we match the
microscopic theory, QCD, to an effective field theory of quasi-particles
and holes in the vicinity of the Fermi surface. In the second step we
match this theory to an effective chiral theory for the CFL phase.
\subsubsection{High density effective theory}
\label{sec_hdet}
The QCD Lagrangian in the presence of a chemical potential is given by
\begin{equation}
\label{qcd}
{\mathcal L} = \bar\psi \left( iD\hspace*{-0.23cm}/\, +\mu\gamma_0 \right)\psi
-\bar\psi_L M\psi_R - \bar\psi_R M^\dagger \psi_L
-\frac{1}{4}G^a_{\mu\nu}G^a_{\mu\nu},
\end{equation}
where $D_\mu=\partial_\mu+igA_\mu$ is the covariant derivative, $M$ is
the mass matrix and $\mu$ is the baryon chemical potential. If the baryon
density is very large, perturbative QCD calculations can be further
simplified. The main observation is that the relevant degrees of
freedom are particle and hole excitations in the vicinity of the
Fermi surface. We shall describe these excitations in terms of the
field $\psi_+(\vec{v},x)$, where $\vec{v}$ is the Fermi velocity.
At tree level, the quark field $\psi$ can be decomposed as $\psi=
\psi_++\psi_-$ where $\psi_\pm=\frac{1}{2}(1\pm\vec{\alpha}\cdot\hat{v})
\psi$. Note that $(1\pm\vec{\alpha}\cdot\hat{v})/2$ is a projector
on states with positive/negative energy. To leading order in $1/p_F$
we can eliminate the field $\psi_-$ using its equation of motion. The
lagrangian for the $\psi_+$ field is given by
\cite{Hong:2000tn,Hong:2000ru,Beane:2000ms}
\begin{equation}
\label{hdet}
{\mathcal L} = \psi_{+}^\dagger (iv\cdot D) \psi_{+}
- \frac{ \Delta}{2}\left(\psi_{+}^{ai} C \psi_{+}^{bj}
\left(\delta_{ai}\delta_{bj}-
\delta_{aj}\delta_{bi} \right)
+ {\rm h.c.} \right) + \ldots ,
\end{equation}
with $v_\mu=(1,\vec{v})$, where $i,j,\ldots$ and $a,b,\ldots$ denote
flavor and color indices. The magnitude of the gap $\Delta$ is
determined order by order in perturbation theory from the requirement
that the thermodynamic potential is stationary with respect to $\Delta$.
With the gap term included the perturbative expansion is well defined.
\begin{figure}[t]
\begin{center}\includegraphics[width=7.5cm]{hdet_mass.eps}\end{center}
\caption{\label{fig_hdet_m}
Mass terms in the high density effective theory. The first
diagram shows a $O(MM^\dagger)$ term that arises from integrating
out the $\psi_-$ field in the QCD lagrangian. The second
diagram shows a $O(M^2)$ four-fermion operator which arises from
integrating out $\psi_-$ and hard gluon exchanges.}
\end{figure}
The effective theory contains an infinite set of operators that
have additional derivatives or more powers of $\psi_+$. These
operators are suppressed by inverse powers of the Fermi momentum.
Here, we will only consider operators that contain quark masses. To
leading order in $1/p_F$ there is only one operator in the high density
effective theory
\begin{equation}
\label{m_kin}
{\mathcal L} = -\frac{1}{2p_F} \left( \psi_{L+}^\dagger MM^\dagger \psi_{L+}
+ \psi_{R+}^\dagger M^\dagger M\psi_{R+} \right).
\end{equation}
This term arises from expanding the kinetic energy of a massive
fermion around $p=p_F$. At $O(1/p_F^2)$ we find four-fermion
operators that contain two powers of the quark mass. The
coefficients of these operators are obtained by computing
chirality violating quark-quark scattering amplitudes for
quasi-particles near the Fermi surface \cite{Schafer:2001za},
see Fig.~\ref{fig_hdet_m}. At leading order in $1/p_F$ these
amplitudes are independent of the scattering angle and can be
represented as local four-fermion operators
\begin{equation}
\label{hdet_m}
{\mathcal L} = \frac{g^2}{8p_F^4}
\left( ({\psi^A_L}^\dagger C{\psi^B_L}^\dagger)
(\psi^C_R C \psi^D_R) \Gamma^{ABCD} +
({\psi^A_L}^\dagger \psi^B_L)
({\psi^C_R}^\dagger \psi^D_R) \tilde{\Gamma}^{ACBD} \right).
\end{equation}
There are two additional terms with $(L\leftrightarrow R)$ and
$(M\leftrightarrow M^\dagger)$. We have introduced the CFL
eigenstates $\psi^A$ defined by $\psi^a_i=\psi^A (\lambda^A)_{ai}
/\sqrt{2}$, $A=0,\ldots,8$. The tensor $\Gamma$ is defined by
\begin{eqnarray}
\Gamma^{ABCD} &=& \frac{1}{8}\Big\{ {\rm Tr} \left[
\lambda^A M(\lambda^D)^T \lambda^B M (\lambda^C)^T\right]
\nonumber \\
& & \hspace{1cm}\mbox{}
-\frac{1}{3} {\rm Tr} \left[
\lambda^A M(\lambda^D)^T \right]
{\rm Tr} \left[
\lambda^B M (\lambda^C)^T\right] \Big\}.
\end{eqnarray}
The explicit expression for $\tilde\Gamma$ is given
in \cite{Schafer:2001za}, but we will not need it here.
\subsubsection{CFL chiral theory}
\label{sec_CFLchi}
For excitation energies smaller than the gap the only relevant
degrees of freedom are the Goldstone modes associated with the
breaking of chiral symmetry and baryon number, see Fig.~\ref{fig_eft}.
Since the pattern of chiral symmetry breaking is identical to
the one at $T=\mu=0$ the effective lagrangian has the same
structure as chiral perturbation theory. The main difference is
that Lorentz-invariance is broken and only rotational invariance
is a good symmetry. The effective lagrangian for the Goldstone
modes is \cite{Casalbuoni:1999wu}
\begin{eqnarray}
\label{l_cheft}
{\mathcal L}_{eff} &=& \frac{f_\pi^2}{4} {\rm Tr}\left[
\nabla_0\Sigma\nabla_0\Sigma^\dagger - v_\pi^2
\partial_i\Sigma\partial_i\Sigma^\dagger \right]
+\left[ B {\rm Tr}(M\Sigma^\dagger) + h.c. \right]
\nonumber \\
& & \hspace*{0cm}\mbox{}
+\left[ A_1{\rm Tr}(M\Sigma^\dagger)
{\rm Tr} (M\Sigma^\dagger)
+ A_2{\rm Tr}(M\Sigma^\dagger M\Sigma^\dagger) \right.
\nonumber \\[0.1cm]
& & \hspace*{0.5cm}\mbox{}\left.
+ A_3{\rm Tr}(M\Sigma^\dagger){\rm Tr} (M^\dagger\Sigma)
+ h.c. \right]+\ldots .
\end{eqnarray}
Here $\Sigma=\exp(i\phi^a\lambda^a/f_\pi)$ is the chiral field,
$f_\pi$ is the pion decay constant and $M$ is a complex mass
matrix. The chiral field and the mass matrix transform as
$\Sigma\to L\Sigma R^\dagger$ and $M\to LMR^\dagger$ under
chiral transformations $(L,R)\in SU(3)_L\times SU(3)_R$. We
have suppressed the singlet fields associated with the breaking
of the exact $U(1)_V$ and approximate $U(1)_A$ symmetries.
At low density the coefficients $f_\pi$, $B,A_i,\ldots$ are
non-perturbative quantities that have to be extracted from
experiment or measured on the lattice. At large density, on
the other hand, the chiral coefficients can be calculated in
perturbative QCD. The leading order terms are \cite{Son:1999cm}
\begin{equation}
\label{cfl_fpi}
f_\pi^2 = \frac{21-8\log(2)}{18}
\left(\frac{p_F^2}{2\pi^2} \right),
\hspace{0.5cm} v_\pi^2=\frac{1}{3}.
\end{equation}
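For orientation, one may evaluate this leading-order expression for an
illustrative Fermi momentum of $p_F\simeq 500$ MeV (this number is only
indicative and is not fixed by the discussion above):
\begin{equation}
f_\pi^2 \simeq 0.86\,\frac{p_F^2}{2\pi^2}
 \simeq 0.86\cdot\frac{(500\,{\rm MeV})^2}{2\pi^2}
 \simeq (104\,{\rm MeV})^2 ,
\end{equation}
i.e.~$f_\pi\simeq 100$ MeV for this choice of $p_F$.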
Mass terms are determined by the operators studied in the
previous section. We observe that both equ.~(\ref{m_kin})
and (\ref{hdet_m}) are quadratic in $M$. This implies that $B=0$
in perturbative QCD. $B$ receives non-perturbative contributions
from instantons, but these effects are small if the density is
large \cite{Schafer:2002ty}.
We observe that $X_L=MM^\dagger/(2p_F)$ and $X_R=M^\dagger M/
(2p_F)$ in equ.~(\ref{m_kin}) act as effective chemical potentials
for left and right-handed fermions, respectively. Formally, the
effective lagrangian has an $SU(3)_L\times SU(3)_R$ gauge
symmetry under which $X_{L,R}$ transform as the temporal components
of non-abelian gauge fields. We can implement this approximate gauge
symmetry in the CFL chiral theory by promoting time derivatives
to covariant derivatives \cite{Bedaque:2001je},
\begin{equation}
\label{mueff}
\nabla_0\Sigma = \partial_0 \Sigma
+ i \left(\frac{M M^\dagger}{2p_F}\right)\Sigma
- i \Sigma\left(\frac{ M^\dagger M}{2p_F}\right) .
\end{equation}
The four-fermion operator in equ.~(\ref{hdet_m}) contributes to
the coefficients $A_i$. We find \cite{Son:1999cm,Schafer:2001za}
\begin{equation}
A_1= -A_2 = \frac{3\Delta^2}{4\pi^2},
\hspace{1cm} A_3 = 0.
\end{equation}
We can now summarize the structure of the chiral expansion in the
CFL phase. The effective lagrangian has the form
\begin{equation}
{\mathcal L}\sim f_\pi^2\Delta^2 \left(\frac{\partial_0}{\Delta}\right)^k
\left(\frac{\vec{\partial}}{\Delta}\right)^l
\left(\frac{MM^\dagger}{p_F\Delta}\right)^m
\left(\frac{MM}{p_F^2}\right)^n
\big(\Sigma\big)^o\big(\Sigma^\dagger\big)^p.
\end{equation}
Loop graphs in the effective theory are suppressed by powers of
$\partial/(4\pi f_\pi)$. Since the pion decay constant scales as $f_\pi
\sim p_F$ Goldstone boson loops are suppressed compared to higher
order contact terms. We also note that the quark mass expansion
is controlled by $m^2/(p_F\Delta)$, as expected from the arguments
presented in Sect.~\ref{sec_ms}.
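As a rough numerical illustration (the values below are only indicative
and are not fixed by the discussion above), taking $p_F\simeq 500$ MeV,
$\Delta\simeq 50$ MeV and $m_s\simeq 100$ MeV gives
\begin{equation}
 \frac{m_s^2}{p_F\Delta}\simeq
 \frac{(100\,{\rm MeV})^2}{(500\,{\rm MeV})\,(50\,{\rm MeV})}
 \simeq 0.4 ,
\end{equation}
so the quark mass expansion is meaningful in this regime, but not rapidly
convergent.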
\begin{figure}[t]
\begin{center}\includegraphics[width=9.5cm]{kaplan.ps}\end{center}
\caption{\label{fig_kcond}
This figure shows the phase structure of CFL matter as
a function of the strange quark mass $m_s$ and the lepton
chemical potential $\mu_Q$, from Kaplan and Reddy (2001).}
\end{figure}
\subsubsection{Kaon condensation}
\label{sec_kcond}
Using the chiral effective lagrangian we can now determine
the dependence of the order parameter on the quark masses. We will
focus on the physically relevant case $m_s>m_u=m_d$. Because
the main expansion parameter is $m_s^2/(p_F\Delta)$, increasing
the quark mass is roughly equivalent to lowering the density.
The effective potential for the order parameter is
\begin{equation}
\label{v_eff}
V_{eff} = \frac{f_\pi^2}{2} {\rm Tr}\left[
X_L\Sigma X_R\Sigma^\dagger \right]
+ A_1\left[ \left({\rm Tr}(M\Sigma^\dagger)\right)^2
- {\rm Tr}(M\Sigma^\dagger M\Sigma^\dagger) \right].
\end{equation}
If the strange quark mass is small then the minimum of the
effective potential is $\Sigma=1$. However, when the strange
quark mass exceeds a certain critical value it becomes favorable
to rotate the order parameter in the kaon direction. The physical
reason is that the system tries to reduce its strangeness content
by forming a kaon condensate. Consider the ansatz $\Sigma = \exp
(i\alpha\lambda_4)$. The vacuum energy is
\begin{equation}
\label{k0+_V}
V(\alpha) = -f_\pi^2 \left( \frac{1}{2}\left(\frac{m_s^2-m^2}{2p_F}
\right)^2\sin(\alpha)^2 + (m_{K}^0)^2(\cos(\alpha)-1)
\right),
\end{equation}
where $(m_K^0)^2= (4A_1/f_\pi^2)m(m+m_s)$. Minimizing the vacuum
energy we obtain $\alpha=0$ if $\mu_s<m_K^0$ and $\cos(\alpha)
=(m_K^0)^2/\mu_s^2$ if $\mu_s >m_K^0$. Here, we have defined
$\mu_s=m_s^2/(2p_F)$. Using the perturbative result for $A_1$
the critical strange quark mass is
\begin{equation}
\label{ms_crit}
\left. m_s \right|_{crit}= 3.03\cdot m_d^{1/3}\Delta^{2/3}.
\end{equation}
Using $\Delta\simeq 50$ MeV we get $\left. m_s\right|_{crit}\simeq 70$ MeV. This
result suggests that strange quark matter at densities $\rho \sim
(5-10)\rho_0$ is in a kaon condensed phase. The kaon condensate
breaks $SU(2)_I\times U(1)_Y$ to $U(1)_Q$. The phase structure
as a function of the strange quark mass and non-zero lepton chemical
potentials was studied by Kaplan and Reddy \cite{Kaplan:2001qk},
see Fig.~\ref{fig_kcond}. We observe that if the lepton chemical
potential is non-zero, charged kaon and pion condensates are also
possible. It was also shown that there is a range of light quark
masses in which simultaneous kaon and eta condensation takes
place \cite{Kryjevski:2004cw}.
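As a simple consistency check of equ.~(\ref{ms_crit}), taking $m_d\simeq 5$
MeV (an illustrative value for the light quark mass, which is not specified
above) together with $\Delta\simeq 50$ MeV gives
\begin{equation}
\left. m_s \right|_{crit}\simeq 3.03\cdot (5\,{\rm MeV})^{1/3}\,
 (50\,{\rm MeV})^{2/3}\simeq 70\,{\rm MeV},
\end{equation}
in agreement with the estimate quoted above.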
\begin{figure}[t]
\begin{center}\includegraphics[width=9.5cm]{cfl_baryon_k0_ax_rev.eps}\end{center}
\caption{\label{fig_cfl_spec}
This figure shows the fermion spectrum in the CFL phase. For
$m_s=0$ there are eight fermions with gap $\Delta$ and one
fermion with gap $2\Delta$ (not shown). Without kaon condensation
gapless fermion modes appear at $\mu_s=\Delta$ (dashed lines).
With kaon condensation gapless modes appear at $\mu_s=4\Delta/3$.}
\end{figure}
\section{Fermions in the CFL phase}
\label{sec_gCFL}
So far we have only studied Goldstone modes in the CFL phase.
However, as the strange quark mass is increased it is possible
that some of the fermion modes become light or even gapless
\cite{Alford:2003fq}. In order to study this question we
have to include fermions in the effective field theory.
The effective lagrangian for fermions in the CFL phase
is \cite{Kryjevski:2004jw,Kryjevski:2004kt}
\begin{eqnarray}
\label{l_bar}
{\mathcal L} &=&
{\rm Tr}\left(N^\dagger iv^\mu D_\mu N\right)
- D{\rm Tr} \left(N^\dagger v^\mu\gamma_5
\left\{ {\mathcal A}_\mu,N\right\}\right)
- F{\rm Tr} \left(N^\dagger v^\mu\gamma_5
\left[ {\mathcal A}_\mu,N\right]\right)
\nonumber \\
& & \mbox{} + \frac{\Delta}{2} \left\{
\left( {\rm Tr}\left(N_LN_L \right)
- \left[ {\rm Tr}\left(N_L\right)\right]^2 \right)
- (L\leftrightarrow R) + h.c. \right\}.
\end{eqnarray}
$N_{L,R}$ are left and right handed baryon fields in the
adjoint representation of flavor $SU(3)$. The baryon fields
originate from quark-hadron complementarity as explained
in Sect.~\ref{sec_cfl}. We can think of $N$ as being composed
of a quark and a diquark field, $N_L \sim q_L\langle q_L q_L
\rangle$. The covariant derivative of the nucleon field is given
by $D_\mu N=\partial_\mu N +i[{\mathcal V}_\mu,N]$. The vector
and axial-vector currents are
\begin{equation}
{\mathcal V}_\mu = -\frac{i}{2}\left\{
\xi \partial_\mu\xi^\dagger + \xi^\dagger \partial_\mu \xi
\right\}, \hspace{1cm}
{\mathcal A}_\mu = -\frac{i}{2} \xi\left(\nabla_\mu
\Sigma^\dagger\right) \xi ,
\end{equation}
where $\xi$ is defined by $\xi^2=\Sigma$. It follows that $\xi$
transforms as $\xi\to L\xi U(x)^\dagger=U(x)\xi R^\dagger$ with
$U(x)\in SU(3)_V$. For pure $SU(3)$ flavor transformations $L=R=V$
we have $U(x)=V$. $F$ and $D$ are low energy constants that
determine the baryon axial coupling. In perturbative QCD we
find $D=F=1/2$.
Mass terms can be introduced as in Sect.~\ref{sec_CFLchi}.
The $(X_L,X_R)$ covariant derivative of the nucleon field is
\begin{eqnarray}
\label{V_X}
D_0N &=& \partial_0 N+i[\Gamma_0,N], \\
\Gamma_0 &=& -\frac{i}{2}\left\{
\xi \left(\partial_0+ iX_R\right)\xi^\dagger +
\xi^\dagger \left(\partial_0+iX_L\right) \xi
\right\}, \nonumber
\end{eqnarray}
where $X_L=MM^\dagger/(2p_F)$ and $X_R=M^\dagger M/(2p_F)$ as before.
The fermion spectrum is shown in Fig.~\ref{fig_cfl_spec}. A gapless
fermion mode appears at $\mu_s=4\Delta/3$. In the vicinity of this
point the homogeneous CFL phase becomes unstable
\cite{Huang:2004bg,Casalbuoni:2004tb}. In the effective field theory
this manifests itself as an instability with respect to the generation
of a non-zero current \cite{Kryjevski:2005qq,Schafer:2005ym}. From
the effective lagrangian equ.~(\ref{l_cheft}) we see that a meson
current has energy ${\mathcal E}\sim f_\pi^2 j^2$. This is not the
end of the story, however, because a meson current also modifies
the fermion dispersion relation. The energy of the lowest mode
in the background of a hypercharge current $j_K$ is given by
\begin{equation}
\label{disp_ax}
\omega_l = \Delta +\frac{l^2}{2\Delta}-\frac{3}{4}
\mu_s -\frac{1}{4}\vec{v}\cdot\vec{j}_K,
\end{equation}
where $l$ is the momentum relative to the Fermi surface. We observe
that the current lowers the energy of the fermions on part of the Fermi
surface. When these states become gapless the total energy is lowered
and the system can become unstable. The new ground state is a
$p$-wave meson condensate in which the non-zero meson current is
balanced by a backflow of gapless fermions. At even larger values
of $\mu_s$ this state may evolve into an inhomogeneous superconductor
of the type considered by Larkin, Ovchinnikov, Fulde and Ferrell
\cite{Larkin:1964,Fulde:1964}, see \cite{Casalbuoni:2005zp}.
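As a small consistency check of the threshold quoted earlier in this section,
we can set $\vec{j}_K=0$ in equ.~(\ref{disp_ax}) and minimize over $l$
(the minimum is at $l=0$),
\begin{equation}
\left.\omega_{l=0}\right|_{\vec{j}_K=0}=\Delta-\frac{3}{4}\mu_s = 0
\quad\Longrightarrow\quad \mu_s=\frac{4\Delta}{3},
\end{equation}
which reproduces the value $\mu_s=4\Delta/3$ at which the gapless fermion
mode appears.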
\begin{figure}[t]
\includegraphics[width=11.0cm]{toe.eps}
\caption{\label{fig_toe}
The many facets of QCD.}
\end{figure}
\section{Outlook}
\label{sec_sum}
Figure \ref{fig_toe} indicates that there are many interesting
connections that we have not been able to explore in these
lectures. There has been a lot of progress in connecting the
large $N_c$ world to a theory of strings, and this connection
also sheds some light on the behavior of a strongly coupled
QCD plasma. The transport properties of the strongly coupled
plasma are probably quite similar to the transport behavior
of the strongly correlated neutron fluid, and this system
is related to cold trapped fermionic atoms near a Feshbach
resonance. A lot of progress has been made in understanding
hot and dense pre-equilibrium states, and these states share
some of the properties of equilibrium phases. Many more
surprising connections are likely to emerge in the future.
Acknowledgments: I would like to thank the organizers of the
HUGS summer school, especially Jose Goity, for their hospitality.
The original lectures contained a summary of the experimental
work at RHIC and CERN. This material is not included in the
write-up, but the slides are still available on my website.
The second half of the lectures is an abridged and updated version
of the NPSS lectures \cite{Schafer:2003vz}. Fig.~\ref{fig_toe}
was inspired by R.~Brower. This work was supported in part by
a US DOE grant DE-FG-88ER40388.
\section{Introduction}
Historically, thermodynamics started with the goal of trying to understand how to convert heat into useful mechanical motion. For that purpose, steam engines were developed, which revolutionized our lives. Other useful thermal machines, such as refrigerators and heat pumps, followed. Naturally, these thermal machines are all large macroscopic entities, so classical laws suffice to describe them and to study their performance.
Recently, as our ability to control small quantum systems progresses, it has now become not only interesting, but also crucial to study what happens when these machines become microscopic where quantum features become important \cite{Horodecki+Oppenheim:13, Skrzypczyk+2:14}. More specifically, there has been significant interest in determining the extent to which quantum effects may help surpass classical limits such as the Carnot efficiency \cite{Scully2003, Dillenschneider2009, Rossnagel+4:14, Correa+3:14,Strasberg2017,Niedenzu2017},
and in how to export classical notions such as work to the quantum domain \cite{Allahverdyan2004,Talkner2007,Campisi2011,Roncaglia+2:14, Talkner+Hanggi:16}. Various quantum thermal machines have been proposed and studied, with potential realizations in optomechanical setups, superconducting circuits, atom-cavity systems and trapped ions \cite{Geva1996,Venturelli+2:13, Kosloff+Levy:14, Rossnagel+4:14, Mari+2:15, Hofer+2:16, Joulain+3:16, Mitchison+4:16, Hofer+5:16, Karimi+Pekola:16, Roulet+3:16, Bissbort+5:16,Zhang2017,Reid2017,Hardal2017,Mu2017}.
In the context of refrigerators, two quantum approaches stand out: the smallest possible refrigerator consists of a qutrit \cite{Palao2001} or three interacting qubits \cite{Linden+2:10}, but a quantum absorption refrigerator can also be realised with three interacting harmonic modes \cite{Levy+Kosloff:12, Goold+4:16}. This latter interaction, which has been demonstrated with trapped ions \cite{Maslennikov+8:17}, is the object of this study. We focus on the unitary dynamics of the refrigerator itself and highlight phenomena like effective equilibration, as well as the challenge of identifying genuine quantum features. Details of the refrigeration process, i.e.~the transfer of energy from a cold to a hot bath mediated by the machine, are given in the Appendix.
Section~\ref{UnitaryDynamics} introduces the model Hamiltonian, the initial states that will be considered (independent thermal states of different temperatures) and the dynamical variables whose evolution is studied (occupation number of each mode). We discuss why we focus on the unitary dynamics. This dynamics is then solved and discussed in Section~\ref{secUnit}, first in general, then in the context of refrigeration. The following two sections are devoted to the two main observed features of the dynamics of the occupation numbers. In the short-term regime, there is a \textit{cooling enhancement} which would be absent if the interaction were incoherent (Section~\ref{SingleShot}). In the long-term regime, one observes \textit{effective equilibration} of the occupation numbers even if there is no dissipative dynamics (Section~\ref{QuantumEquilibration}). Finally, in Section~\ref{classicalModel} we study the classical model obtained by replacing the mode operators with conjugate variables in the Hamiltonian. We show that the dynamics of the occupation numbers exhibits the same features observed in the quantum formalism.
\section{Model}\label{UnitaryDynamics}
The model we study here is a system of three interacting harmonic oscillators, with the free Hamiltonian
\begin{equation}\label{freeHamiltonian}
\Op{H}_0=\sum_{i=h,w,c}\hbar\omega_i\left(\Op{a}_i^\dagger\Op{a}_i+\frac{1}{2}\right),
\end{equation}
and the interaction Hamiltonian
\begin{equation}\label{Hinteraction}
\Op{H}_1=\hbar g(\Op{a}_h^\dagger \Op{a}_w \Op{a}_c+\Op{a}_h\Op{a}_w^\dagger\Op{a}_c^\dagger).
\end{equation}
It is convenient to work in the interaction picture where
\begin{equation}\label{HinteractionPicture}
\Op{H}_\mathrm{int}=\hbar g(\Op{a}_h^\dagger \Op{a}_w \Op{a}_c \,e^{i\Delta t}+ \Op{a}_h \Op{a}_w^\dagger \Op{a}_c^\dagger \,e^{-i\Delta t}),
\end{equation}
with $\Delta=\omega_h-\omega_w-\omega_c$ and $g$ the coupling constant. We will focus on the resonant case where $\Delta=0$ in what follows.
This Hamiltonian describes a wide range of physical processes: parametric amplification, frequency conversion, and second harmonic generation \cite{Walls+Barakat:70, Agrawal+Mehta:74, Gambini:77}. This work is based on considering the three interacting modes as an absorption refrigerator \cite{Levy+Kosloff:12, Goold+4:16, Maslennikov+8:17}.
The refrigeration process works as follows. Each of the three modes is in contact with a thermal bath: the cold bath at $T_c$, the hot bath at $T_h>T_c$, and the work bath at $T_w$. We shall correspondingly refer to $\Op{a}_h$, $\Op{a}_w$ and $\Op{a}_c$ as the hot mode, work mode, and cold mode, respectively. The dynamical variables of interest are the occupation numbers $\bar{n}_i(t)=\mathrm{tr}\{\rho(t)\Op{a}_i^\dagger \Op{a}_i\}$ with $i=h,w$ or $c$. Absorption refrigeration occurs if the interaction can induce a stationary heat flow from the cold to the hot mode ($\bar{n}_c(t)<\bar{n}_c(0)$, $\bar{n}_h(t)>\bar{n}_h(0)$), the work mode providing a sufficient amount of free energy.
A full description of the refrigeration process takes into account both the unitary interaction generated by the trilinear Hamiltonian and the dissipative dynamics due to the interaction of each mode with its bath. This is a numerically involved task, further complicated by the subtle issue of formulating a master equation for composite systems coupled to multiple baths that is consistent with the second law of thermodynamics \cite{Levy+Kosloff:14,Gonzalez2017,Hofer2017}. Yet, we are interested in the regime of weak coupling to the baths such that all the interesting aspects of the dynamics of refrigeration are captured by restricting the analysis to the unitary dynamics. This is confirmed in the Appendix, where we provide a comparison of the dynamics with and without the dissipative coupling. In the rest of this paper, we shall thus focus on the unitary dynamics, which has been experimentally implemented with trapped ions~\cite{Maslennikov+8:17}.
The initial state of the system consists in the three modes being prepared in uncorrelated thermal states, $\rho (t=0) = \rho_h^\mathrm{th}\otimes \rho_w^\mathrm{th}\otimes \rho_c^\mathrm{th}$ with
\begin{equation}\label{thermalState}
\rho_i^{\mathrm{th}}= \left[ 1 - \exp \left( - \frac{\hbar \omega_i}{k_B T_i} \right) \right] \exp \left( - \frac{\hbar \omega_i}{k_B T_i} \Op{a}_i^\dagger \Op{a}_i \right).
\end{equation}
This corresponds to the situation in which each mode has been kept in contact with its respective bath for a long time, before turning on the interaction. From here, focusing on the unitary dynamics implies that the modes are effectively decoupled from their baths during the evolution; this will capture the full refrigeration dynamics in the limit of slow thermalization rates.
Before proceeding, let us notice that some states of this family are stationary with respect to the unitary dynamics, and therefore also with respect to the complete dynamics. Indeed $[\rho_\text{st},\Op{H}_{\mathrm{int}}]=0$ if
\begin{equation}\label{stationaryCond}
\frac{1}{\bar{n}_h(0)}+1=\left(\frac{1}{\bar{n}_w(0)}+1\right)\left(\frac{1}{\bar{n}_c(0)}+1\right).
\end{equation}
\section{Unitary dynamics}
\label{secUnit}
\subsection{Methods and general features}
We will mostly focus on temperatures $T_i$ that correspond to comparably small initial average occupation numbers, $\bar{n}_i(0)=\mathrm{tr}\{\rho(t=0)\Op{a}_i^\dagger \Op{a}_i\}$ with $i=h,w$ or $c$.
For later comparison with the classical framework, we will plot the initial average energy of each mode in the diagrams, $\epsilon_i(0)=\hbar\omega_i(\bar{n}_i(0) + 1/2)$, instead of the occupation number.
At time $t=0$, the interaction Hamiltonian $\Op{H}_\mathrm{int}$ is switched on and the system evolves unitarily according to $\rho(t)=\Op{U}\rho_0 \Op{U}^\dagger$ with $\Op{U}=\mathrm{exp}(-i\Op{H}_\mathrm{int} t/\hbar)$. This coherently transfers populations between the three modes and changes the average energies $\epsilon_i(t)$. However, we note that even the closed system is not amenable to an exact analytical solution; previous studies either focused on short-time behavior, resorted to products of coherent or Fock states as initial states, or considered limiting cases of average occupation numbers much larger than one \cite{Walls+Barakat:70, Bonifacio+Preparata:70, Agrawal+Mehta:74, Gambini:77}.
For an efficient simulation of the system dynamics, we take advantage of the fact that the interaction Hamiltonian couples only Fock states of the form $|n_h,n_w,n_c \rangle = |n, N-n, M-n\rangle$ with fixed integers $N$ and $M$ and $0\leq n\leq \mathrm{min}\{N,M \}$. That is, $\Op{H}_{\rm int}$ is block-diagonal with respect to finite-dimensional subspaces characterized by the two conserved quantities $N$ and $M$ and the dimension $d = \min\{N,M\} + 1$. The unitary evolution can then be efficiently computed by diagonalizing the Hamiltonian in each of the contributing subspaces, up to a cutoff for both $N$ and $M$ that truncates the originally infinite dimensional Hilbert space. For the simulations presented in this paper, the cutoff is chosen to ignore populations in the initial density matrix smaller than $10^{-4}$.
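The following minimal sketch illustrates this block-diagonal strategy for a single invariant subspace (the implementation below, including the function names and the use of NumPy, is ours and not part of the original analysis; the full thermal-state simulation additionally sums such blocks over $(N,M)$, weighting each Fock state $|n,N-n,M-n\rangle$ with its initial thermal population):
\begin{verbatim}
import numpy as np

def block_hamiltonian(N, M, g=1.0):
    # H_int restricted to the subspace spanned by |n, N-n, M-n>,
    # n = 0..min(N, M); tridiagonal with off-diagonal elements
    # g*sqrt((n+1)(N-n)(M-n)) (units with hbar = 1, Delta = 0).
    n = np.arange(min(N, M))
    off = g * np.sqrt((n + 1.0) * (N - n) * (M - n))
    return np.diag(off, 1) + np.diag(off, -1)

def hot_occupation(N, M, p0, times, g=1.0):
    # <a_h^dag a_h>(t) within one block, for an initial state that is
    # diagonal in this basis with populations p0[n] (true for thermal
    # initial states).
    H = block_hamiltonian(N, M, g)
    w, V = np.linalg.eigh(H)
    n_h = np.arange(len(p0))
    out = []
    for t in times:
        U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
        pops_t = np.abs(U) ** 2 @ p0   # diagonal of rho(t) in this block
        out.append(np.dot(n_h, pops_t))
    return np.array(out)
\end{verbatim}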
Fig.~\ref{fig:systemEvolution} shows the typical time evolution of the modes' energy under the unitary dynamics for a thermal state of the form~\eqref{thermalState}. In particular, we have chosen initial average occupations of $\bar{n}_h(0)=0.5$, $\bar{n}_w(0)=2.5$, and $\bar{n}_c(0)=2.0$, so that the order of magnitude of the contributing $N$ and $M$ is $10^0$ or $10^1$. It is readily seen that $\bar{N}=\bar{n}_h(t)+\bar{n}_w(t)$ and $\bar{M}=\bar{n}_h(t)+\bar{n}_c(t)$ are conserved.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{Time evolution of the average energy (in units of $\hbar\omega_i$) of the hot, work and cold mode. The initial state is a product thermal state with average occupations
$\bar{n}_h(0)=0.5$, $\bar{n}_w(0)=2.5$, and $\bar{n}_c(0)=2.0$.
Time is measured in units of the inverse coupling strength $1/g$. The dashed lines show the values associated to the infinite time-averaged state \eqref{infTimeAve}. The initial transient oscillations are magnified in the inset.}
\label{fig:systemEvolution}
\end{figure}
The curves in Fig.~\ref{fig:systemEvolution} exhibit two noteworthy features. Firstly, the evolution starts with a \textit{transient oscillatory stage}, where the largest fluctuation away from the initial value is found. We shall come back to this feature in Section~\ref{SingleShot}. Secondly, the system energies seem to approach some apparent equilibrium values (dashed lines), around which only small residual oscillations persist. One can obtain those values from the infinite time-averaged state \begin{equation}\label{infTimeAve}
\sigma\equiv\langle\rho(t)\rangle_\infty=\lim_{\tau\rightarrow\infty}\frac{1}{\tau}\int_0^\tau\text{d}t\,\rho(t).
\end{equation}
The observation that the expectation values of certain dynamical variables approach their long-time averages, in spite of the fact that the system is not converging to a steady state because the dynamics is unitary, is a phenomenon known as effective equilibration \cite{Gogolin+Eisert:16}. We will get back to it in Section~\ref{QuantumEquilibration}.
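Within the block-diagonal scheme sketched above, these long-time averages can be obtained without explicit time propagation. Assuming a nondegenerate spectrum in each block (degeneracies would require keeping the corresponding off-diagonal elements), dephasing in the eigenbasis maps the initial populations $p_0$ onto $W W^{T} p_0$ with $W_{mk}=|V_{mk}|^2$; a short sketch in the same (assumed) notation reads:
\begin{verbatim}
def hot_occupation_dephased(N, M, p0, g=1.0):
    # Long-time average of <a_h^dag a_h> in one (N, M) block, i.e. its
    # expectation value in the fully dephased state sigma.
    H = block_hamiltonian(N, M, g)
    w, V = np.linalg.eigh(H)
    W = np.abs(V) ** 2
    pops_inf = W @ (W.T @ p0)
    return np.dot(np.arange(len(p0)), pops_inf)
\end{verbatim}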
\begin{figure}[ht]
\centering
\includegraphics[width=\columnwidth]{fig2.pdf}
\caption{Change in the cold mode energy from the initial thermal average $\epsilon_c (0) = 2.5\hbar\omega_c$ to the infinite time-averaged value for varied initial energies of the hot and the work mode. All energies are given in units of the respective excitation quanta. The thick line indicates the stationary configurations of the interaction Hamiltonian, where no change occurs.}
\label{fig:changeInCmode}
\end{figure}
Conservation of $N$ and $M$ implies that any increase in the average energy of the hot mode corresponds to an equal decrease in the work mode and cold mode energies. Hence it is sufficient to focus on just one of the modes. In Fig.~\ref{fig:changeInCmode}, we plot the change of the average cold mode energy (or occupation number) between the long-time equilibrium state $\sigma$ and the initial state for different initial energies of the work and the hot mode.
The initial occupation number in mode $c$ is fixed at $\bar{n}_c(0)=2.0$. One sees that, depending on the values of $\bar{n}_h(0)$ and $\bar{n}_w(0)$, the equilibrium energy of mode $c$ can either increase (red), decrease (blue), or stay unchanged (thick line).
Let us remark again that, by preparing the initial system in a product of thermal states and then bringing them into interaction, we mimic the thermodynamic situation where these three subsystems each have thermalized with their own baths and are then allowed to exchange heat among each other.
In Appendix~\ref{OpenSystem}, we explicitly compare this two-step treatment with a simultaneous interaction-dissipation model based on a heuristic master equation. The latter contains the interaction Hamiltonian and independent thermal dissipators for each mode.
When the dissipation rates are small compared to the coupling rate $g$, we find that the time evolution of the system energies is similar to that of the purely unitary dynamics. Specifically, the initial transient features are approximately the same, and the only relevant difference is a slight deviation of the steady-state energies from the long-time averages predicted by the unitary model. Strong dissipation rates, on the other hand, would only thwart the three-mode interaction dynamics and freeze the system state close to the initial thermal state.
\subsection{Analogies and differences with refrigeration processes}\label{AbsFridge}
After presenting the general features of the closed system dynamics governed by the trilinear Hamiltonian \eqref{HinteractionPicture}, we now focus on the thermodynamic aspects.
As already mentioned, the interaction is capable of driving a quantum absorption refrigerator, in close analogy to the earlier proposed three-qubit fridge \cite{Linden+2:10} with its interaction Hamiltonian $\Op{H}_\mathrm{int}=g(|010\rangle\langle 101|+|101\rangle\langle 010|)$. The cooling performance of the latter has been extensively studied, and we will point out similarities and crucial differences for the present three-oscillator case.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{Time evolution of the average cold mode energy (in units of $\hbar\omega_c$) for five different initial conditions ranging from a net heating to a net cooling scenario (top to bottom). The initial hot and cold mode energies are fixed at $\epsilon_h=1.5 \hbar\omega_h$ and $\epsilon_c=2.5 \hbar\omega_c$, while the work mode energy starts from $\epsilon_w=1.5 \hbar\omega_w$ in the top panel and is increased in steps of $\hbar\omega_w$ to the bottom. Note that this still admits a lower temperature in the cold than in the hot mode, as $\omega_c < \omega_h$. The middle panel corresponds to a stationary configuration.}
\label{fig:heating2cooling}
\end{figure}
As proposed in theory \cite{Levy+Kosloff:12} and backed by observations with normal modes of trapped ytterbium ions \cite{Maslennikov+8:17}, the average energy of the $c$-mode can decrease or increase, depending on the initial work mode value $\bar{n}_w(0)$. In this sense, we can speak of cooling or heating, as demonstrated in Fig.~\ref{fig:heating2cooling} for an exemplary choice of initial values, $\bar{n}_h(0)=1.0$, $\bar{n}_c(0)=2.0$, and $\bar{n}_w(0)$ varying from $1.0$ to $5.0$ in unit steps.
In fact, the contour plot in Fig.~\ref{fig:changeInCmode} can be viewed as a phase diagram, where the thick solid line separates the heating (red) from the cooling (blue) regime. For the latter, the initial occupation numbers must satisfy
\begin{equation}
\bar{n}_w (0)>\bar{n}_h (0) \frac{1+\bar{n}_c (0)}{\bar{n}_c (0) - \bar{n}_h (0)}. \label{eq:coolingRegime}
\end{equation}
Since the initial states are thermal, the inequality translates into a relation between the temperatures, $T_i=\hbar\omega_i / k_B\text{ln}(1+1/\bar{n}_i (0))$. Notice here that the refrigerator setting $T_c < T_h$ does not necessarily imply $\bar{n}_c(0)<\bar{n}_h(0)$ due to the different eigenfrequencies.
The cooling regime \eqref{eq:coolingRegime} is consistent with the general \emph{virtual qubit} framework developed recently \cite{Brunner+3:12}. In this picture, one thinks of the hot mode and work mode forming a set of virtual qubits by the levels $|n_h,n_w\rangle$ and $|n_h-1,n_w+1\rangle$ at an \emph{effective virtual temperature}
\begin{equation}\label{VitualTemDef}
T_v=\frac{\hbar\omega_h-\hbar\omega_w}{\hbar\omega_h/T_h-\hbar\omega_w/T_w}.
\end{equation}
The cold mode at initial temperature $T_c$ then interacts with the virtual mode at $T_v$. In the regime $0\leq T_v< T_c$, cooling occurs since net energy is transferred from the cold mode to the virtual mode at a lower virtual temperature. At $T_v = T_c$, which corresponds to the equilibrium condition \eqref{stationaryCond}, no energy exchange takes place between the two modes.
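For concreteness, the criterion \eqref{eq:coolingRegime} and its virtual-temperature form \eqref{VitualTemDef} are easily checked numerically. The short sketch below (our notation, units with $\hbar=k_B=1$) reproduces, for the parameters of Fig.~\ref{fig:heating2cooling} with $\bar{n}_h(0)=1.0$ and $\bar{n}_c(0)=2.0$, that the configuration is stationary at $\bar{n}_w(0)=3.0$ and cooling only above this value:
\begin{verbatim}
import numpy as np

def nbar(omega, T):
    # Thermal occupation of a mode (hbar = k_B = 1).
    return 1.0 / np.expm1(omega / T)

def virtual_temperature(w_h, w_w, T_h, T_w):
    # Eq. (VitualTemDef) with hbar = 1.
    return (w_h - w_w) / (w_h / T_h - w_w / T_w)

def cooling(nh0, nw0, nc0):
    # Cooling criterion of Eq. (eq:coolingRegime); the rearranged
    # inequality assumes nc0 > nh0.
    return nw0 > nh0 * (1.0 + nc0) / (nc0 - nh0)

for nw0 in [1.0, 2.0, 3.0, 4.0, 5.0]:
    print(nw0, cooling(1.0, nw0, 2.0))
# False for nw0 <= 3.0 (nw0 = 3.0 is the stationary configuration),
# True for nw0 > 3.0, in agreement with Fig. 3.
\end{verbatim}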
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{fig4.pdf}
\caption{Thermal Fano factor \eqref{eq:Fano} of the infinite time-averaged state reduced to the cold mode for varied initial conditions as in Fig.~\ref{fig:changeInCmode}. A value of zero indicates a thermal energy distribution, i.e.~that the energy variance is that of a Gibbs state. We find that the variance is consistently above (below) thermal in the cooling (heating) regime.}
\label{fig:varainceAthermal}
\end{figure}
Two important remarks are now in order: Firstly and unsurprisingly, the asymptotic three-mode state \eqref{infTimeAve}, which determines the long-time average energies after interaction, is in general not a tensor product of thermal states $\sigma\neq\rho_h^{\mathrm{th}} \otimes \rho_w^{\mathrm{th}} \otimes \rho_c^{\mathrm{th}}$. Secondly, the reduced single-mode states $\mathrm{tr}_{j\neq i}\{\sigma\}\neq\rho_i^{\mathrm{th}}$ do not exhibit a thermal energy distribution either. This can be easily illustrated by looking at the second moments of the local distributions. The deviation of, say, the cold mode energy variance in the time-averaged state $\sigma$ from that of a Gibbs distribution can be captured in the Fano factor
\begin{equation}\label{eq:Fano}
q = \frac{\braket{(\Op{a}_c^\dagger \Op{a}_c)^2} - \left( \braket{\Op{a}_c^\dagger \Op{a}_c} \right)^2}{\braket{\Op{a}_c^\dagger \Op{a}_c} \left( \braket{\Op{a}_c^\dagger \Op{a}_c} + 1 \right)} - 1.
\end{equation}
Fig.~\ref{fig:varainceAthermal} shows a contour plot of this quantity as a function of initial average energies as in Fig.~\ref{fig:changeInCmode}.
One sees that the variance is consistently higher ($q>0$) than that of a thermal state of the same energy in the cooling regime, and lower in the heating regime. At the same time, we found that the single-mode entropy is always lower than that of the thermal state, suggesting that the time-averaged energy distribution is more biased.
Hence, and even though the reduced single-mode states remain diagonal at all times, we cannot assign temperatures to the asymptotic state $\sigma$ that the trilinear system approaches in the long-time average. This is in contrast to the three-qubit scenario, where one can formally associate a temperature to any diagonal single-qubit state \cite{Brask+Brunner:15, Mitchison+3:15}. Here, the three modes are correlated and explicitly driven out of thermal equilibrium by the trilinear interaction (which remains the case in the presence of simultaneous weak bath couplings). This highlights the difference between genuine thermalization in open systems undergoing thermodynamic cycles and effective equilibration at an athermal state in quantum systems with non-trivial Hamiltonians \cite{Kulkarni+2:12, Gogolin+Eisert:16, Farrelly+2:17}.
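As a quick sanity check of the normalization in \eqref{eq:Fano} (the sketch below is ours; the distributions used are generic examples and not the actual time-averaged distributions entering Fig.~\ref{fig:varainceAthermal}), a geometric (thermal) occupation distribution yields $q=0$ while a sharp Fock state yields $q=-1$:
\begin{verbatim}
import numpy as np

def fano(p):
    # Thermal Fano factor of Eq. (eq:Fano) for a distribution p[n]
    # over Fock states n = 0, 1, 2, ...
    n = np.arange(len(p))
    mean = np.dot(n, p)
    var = np.dot(n ** 2, p) - mean ** 2
    return var / (mean * (mean + 1.0)) - 1.0

nb = 2.0
n = np.arange(200)
thermal = (nb / (1.0 + nb)) ** n / (1.0 + nb)   # geometric distribution
print(fano(thermal))     # ~ 0: thermal (Gibbs) variance
fock = np.zeros(200); fock[5] = 1.0
print(fano(fock))        # -1: zero variance, maximally sub-thermal
\end{verbatim}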
\section{Enhanced cooling in the single-shot regime}\label{SingleShot}
One important signature of the cooling dynamics that can be seen from Fig.~\ref{fig:heating2cooling} is that the cold mode always overshoots to a lower-energy state at transient time scales, before it approaches the long-time average value. Hence, if one can control the interaction and halt the dynamics at the appropriate time, the cooling performance can be enhanced.
A similar, measurement-induced transient cooling of a single qubit was found in Ref.~\cite{Erez2008}.
In the three-qubit scenario, this feature, termed single-shot cooling, was linked to the presence of quantum coherence \cite{Brask+Brunner:15, Mitchison+3:15}. Here, a similar effect can be observed as well \cite{Maslennikov+8:17}. The difference is that, in the three-qubit case, the system passes through many transient oscillations before reaching an equilibrium with the help of the simultaneous thermalization with three independent baths. Here, the closed system alone approaches an effective equilibrium rather rapidly after the first overshooting oscillation. Irregular oscillations around the asymptotic state $\sigma$ then prevail, but at comparably small amplitudes. The higher the initial temperatures, the more negligible these oscillations become.
Let us now examine the role of coherence in this transient effect. Following the qubit analogy \cite{Mitchison+3:15}, we compare the coherent energy exchange driven by the trilinear Hamiltonian to incoherent implementations of the same exchange process. The simplest incoherent model is to assume the excitation of the hot mode $\Op{L} = \Op{a}_h^\dagger \Op{a}_w \Op{a}_c$ and the de-excitation $\Op{L}^\dagger$ occur spontaneously at the same rate $\gamma$. The corresponding master equation in the interaction frame reads as
\begin{equation}\label{incoherent1}
\dot{\rho} = \gamma \left( \Op{L} \rho \Op{L}^\dagger + \Op{L}^\dagger \rho \Op{L} -\frac{1}{2} \left\{ \Op{L}^\dagger \Op{L} + \Op{L}\oL^\dagger, \rho \right\} \right),
\end{equation}
which replaces the von Neumann equation with the trilinear Hamiltonian \eqref{Hinteraction} of the coherent model. One can easily check that the master equation still conserves both $N$ and $M$, and that it does not build up coherence between the combined Fock basis vectors $|n,N-n,M-n \rangle$; initially diagonal states remain diagonal at all times. The latter facilitates an efficient numerical implementation.
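Concretely, within each invariant $(N,M)$ block and for states that are diagonal in the basis $|n,N-n,M-n\rangle$, equation \eqref{incoherent1} reduces to a classical birth-death rate equation for the populations $p_n$, with rates $\gamma c_n^2$ and $c_n=\sqrt{(n+1)(N-n)(M-n)}$. A self-contained sketch of this implementation (our notation; not taken from the original numerics) reads:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def incoherent_populations(N, M, p0, t, gamma=1.0):
    # Populations p_n(t) of |n, N-n, M-n> under Eq. (incoherent1), which
    # for diagonal states reduces to the rate equation
    #   dp_n/dt = gamma*( c_{n-1}^2 p_{n-1} + c_n^2 p_{n+1}
    #                     - (c_n^2 + c_{n-1}^2) p_n ),
    # with c_n = sqrt((n+1)(N-n)(M-n)).
    d = min(N, M) + 1
    n = np.arange(d - 1)
    c2 = (n + 1.0) * (N - n) * (M - n)            # c_n^2, n = 0..d-2
    R = np.diag(c2, -1) + np.diag(c2, +1)         # gain terms
    R -= np.diag(np.concatenate((c2, [0.0])) + np.concatenate(([0.0], c2)))
    return expm(gamma * t * R) @ p0
\end{verbatim}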
Alternatively, one can conceive a random unitary model where the coherent energy exchange driven by $\Op{H}_{\rm int} \propto \Op{L} + \Op{L}^\dagger$ is switched on and off at random times with an average exchange rate $\gamma$,
\begin{equation}\label{incoherent2}
\dot{\rho} = \frac{\gamma}{2} \left[ \Op{L} + \Op{L}^\dagger, \left[ \rho, \Op{L} + \Op{L}^\dagger \right] \right].
\end{equation}
The master equation describes dephasing in the eigenbasis of the interaction Hamiltonian. The final equilibrium state is given by the fully dephased, or infinite time-averaged, initial state \eqref{infTimeAve}. Note however that the final state $\sigma$ does contain nondiagonal coherence terms when represented in terms of the $|n,N-n,M-n \rangle$ basis. At the same time, neither the coherent nor the two incoherent models generate coherence locally, i.e.~nondiagonal elements in the Fock state representation of the reduced single-mode states.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{fig5.pdf}
\caption{Comparison of coherent and incoherent time evolution for the average cold mode energy, starting from initial thermal energies $\epsilon_h(0)=1.5 \hbar\omega_h$, $\epsilon_w(0)=5.5 \hbar\omega_w$, and $\epsilon_c(0)=2.5 \hbar\omega_c$ (bottom panel in Fig.~\ref{fig:heating2cooling}). The solid line represents the coherent evolution, which exhibits a characteristic transient overshoot to below the long-time average for $gt < 1$. This behavior is not predicted by the incoherent models \eqref{incoherent1} and \eqref{incoherent2} for the same energy exchange between the modes, based on spontaneous excitation processes (dashed) or on phase-incoherent exchange (dotted), respectively. The latter yields the correct long-time average. We assume exchange rates equal to the coherent coupling constant $g$.}
\label{fig:VSIncoherent}
\end{figure}
An exemplary comparison of the models in terms of the time evolution of the average cold mode energy is shown in Fig.~\ref{fig:VSIncoherent}, assuming $\gamma=g$ for simplicity. As expected, neither the fully incoherent model (green dashed line) nor the dephasing model (orange dotted) reproduce the enhanced cooling feature of the coherent model (blue solid). Instead, both incoherent models lead to an exponential decay towards their equilibrium values; the one of the dephasing model \eqref{incoherent2} is the value that the coherent evolution oscillates around and approaches in the long time average.
The time evolution and the equilibrium states differ slightly here, whereas in the previously studied three-qubit scenario there is no difference between the two incoherent models. This can be understood if we restrict to initial states $\rho(0)$ from the two-dimensional subspace $N=M=1$. The trilinear interaction is then equivalent to that of three qubits, and the two master equations \eqref{incoherent1} and \eqref{incoherent2} are identical.
Finally, we note that the comparison between the coherent and incoherent exchange process might suggest a quantum advantage in the transient cooling performance over ``classical'' implementations. This argument is based on the notion of ``classical'' states as a subset of quantum states without coherence in the relevant basis representation. In Section~\ref{classicalModel}, we provide an alternative view by comparing the coherent quantum model to its counterpart in the fully classical framework. It will turn out that the cooling enhancement is not a genuine quantum effect.
Indeed, the fact that the reduced single-mode quantum states always remain diagonal in the single-mode Fock basis suggests that the trilinear Hamiltonian only establishes coherence \emph{between} the modes. Such a notion of inter-mode coherence, in the sense of a finite interference capability and well-defined phase relation over a given coherence time, can also be understood in classical optics \cite{Glauber2006}.
\section{Effective Equilibration}\label{QuantumEquilibration}
A striking feature of the studied three-mode interaction, which we have observed and exploited in the previous sections, is the fast \textit{effective equilibration} of the average single-mode energies: even though the time evolution is strictly reversible, they appear to converge towards the stationary values of the time-averaged, fully dephased state, with only minor oscillations around those values for times $t \gg 1/g$. The phenomenon of effective equilibration has been explored in finite systems of high dimension \cite{Jensen+Shankar:85, Tasaki1998, Reimann:08, Rigol+2:08, Short:11,Ponomarev+2:11, Short+Farrelly:12, Reimann:12, Malabarba+4:14, Malabarba+2:15, Goldstein+2:15, Kaufman+6:16, Gogolin+Eisert:16, Pintos+4:15}, and is observed here for the case of initially thermal states. The effect is most pronounced at high initial temperatures, when the thermal occupation of the modes extends to high-dimensional subspaces associated to large values of the conserved quantities $N$ and $M$. A proper equilibration in the presence of a bath would lead to the same values, as long as the three-body coupling $g$ remains the dominant rate (see Appendix~\ref{OpenSystem}). This underpins the thermodynamical assessment based on the time-averaged steady state of the three modes as a model absorption refrigerator system in Sect.~\ref{AbsFridge}.
\begin{figure*}[ht!]
\centering
\includegraphics[width=\textwidth]{fig6.pdf}
\caption{\label{fig:quantumEquilibration}Features of effective equilibration:\quad (a) Time $gt_{\rm ex}$ of the transient overshoot extremum in the coherent evolution of the average mode energies for varied initial conditions as in Fig.~\ref{fig:changeInCmode}. The values were extracted from a numerical simulation in steps of $0.01/g$. The black pixels correspond to stationary states satisfying \eqref{stationaryCond}, where no such extremum is found. (b) Purity of the reduced single-mode state as a function of time when the unitary evolution starts from a pure triple Fock state $|100,N-100,M-100\rangle$ with varying values for $N=M$. Notice the double-logarithmic scale. The dashed lines represent the fully dephased states. Both the characteristic half-life time and the long-time averages of the purity scale in proportion to the inverse effective Hilbert subspace dimension $N+1$. (c) Exemplary energy gap spectra associated to the trilinear Hamiltonian in the subspaces of $N=10^4$ and varying $M$. The histograms are binned in units of $2\hbar g$.}
\end{figure*}
Quantitatively, one may speak of effective equilibration with respect to a set of observables if the cumulative time averages of their expectation values over the coherent time evolution quickly converge close to the values associated to a dephased stationary state \cite{Reimann:08,Linden2009,Short:11,Reimann2012a}. The characteristic convergence time scale and the residual difference should typically decrease with the (finite or effective) system dimension \cite{Short+Farrelly:12, Malabarba+4:14}. This implies that recurrences of the initial values, if present, must be short-lived and rare under unitary evolution.
We observe such behaviour in the present, formally infinite-dimensional, scenario where the observables of interest are the single-mode energies. Our numerical studies show that both the characteristic equilibration time and the residual difference to the expectation values associated to the dephased state \eqref{infTimeAve} decrease with increasing initial occupation numbers $\bar{n}_i$.
We could not observe recurrences in our simulations, some of which extended over time intervals a few orders of magnitude greater than $1/g$, see e.g.~Appendix~\ref{OpenSystem}.
In Fig.~\ref{fig:quantumEquilibration}(a), we plot the time at which the cold mode energy assumes its extremal overshoot value, which can serve as an estimate for the characteristic equilibration time. The times are extracted from unitary time evolutions with time increments of $0.01/g$ for varying initial conditions as in Fig.~\ref{fig:changeInCmode}. We observe that the values decrease with growing mode temperatures and effective Hilbert space dimension covered by the initial state, except at thermal equilibrium (black line).
We note that for pure initial Fock product states $|n,N-n,M-n \rangle$, one rather observes persistent strong and irregular oscillations of the average energies instead of the described equilibration. The latter is a consequence of averaging over the incommensurate sub-spectra of the trilinear Hamiltonian associated to its different orthogonal subspaces. Nevertheless, signatures of effective equilibration also exist at the level of quantum states in high-dimensional subspaces, e.g.~by looking at entangling dynamics \cite{Shaffer+3:14}. As an example, we show in Fig.~\ref{fig:quantumEquilibration}(b) how the reduced single-mode purity decays as a function of time for an initial Fock product state with $n=100$ at varying subspace dimension. The blue, red, and green lines (top to bottom) in the double-logarithmic plot correspond to $N=M=300$, $3000$, and $30000$, respectively. The dashed lines represent the reduced single-mode purities of the respective dephased states.
As expected, the trilinear interaction rapidly entangles the three modes, and we observe a fast decay of single-mode purity close to the dephased values. Both these values and the characteristic half-life time decrease roughly in proportion to the inverse subspace dimension $d=\min \{N,M \} + 1$.
On the other hand, rapid equilibration in finite high-dimensional (e.g.~interacting many-body) systems has been linked to the non-Poissonian nature and spread of the underlying energy gap spectrum \cite{Sengupta2004,Manmana2007,Luitz2015,Gogolin+Eisert:16}, and it is assumed to appear in nonintegrable systems whose classical counterpart may exhibit signatures of chaos \cite{Kollath2007,Rigol2009,Banuls2011}.
The energy gap spectrum of the quantum model varies strongly with the choice of the conserved quantities $(N,M)$, if restricted to a single invariant subspace. At large subspace dimension, non-Poissonian energy gap distributions are observed, in agreement with the observation of rapid equilibration. Three exemplary histograms of energy gaps (i.e.~differences between subsequent energy levels) are shown in Fig.~\ref{fig:quantumEquilibration}(c). They correspond to increasing values of $M$ at fixed $N=10000$ and subspace dimension $d=N+1$, using a bin width of $2\hbar g$. In all cases, the probability does not drop with the gap size in a Poisson-like fashion, as would be the case for non-equilibrating Hamiltonians.
Taken together, these intricate equilibration features distinguish the resonant three-mode Hamiltonian \eqref{HinteractionPicture} from other simple exchange coupling models. The resonant two-mode exchange coupling, for instance, can be diagonalized explicitly and leads merely to a normal mode splitting with a single fixed energy gap given by the coupling frequency. The same reversible dynamics can be seen in the two-dimensional subspaces (for $N=1$ or $M=1$) of the present system. At high initial excitations, the three-oscillator interaction model presented here could provide a minimal, accessible, and practically relevant testbed for studies of effective equilibration.
\section{Classical analysis}\label{classicalModel}
The present quantum refrigerator model of three trilinearly coupled harmonic oscillators admits direct benchmarking against the predictions of classical physics. We arrive at the classical version of the system straightforwardly by replacing the mode operators $\Op{a}_i$ and $\Op{a}^\dagger_i$ with canonical complex variables $\alpha_i$ and $\alpha^{*}_i$. The Hamilton function becomes
\begin{equation}\label{classicalHint}
H = \sum_{i=h,w,c}\hbar \omega_i |\alpha_i|^2 + \hbar g(\alpha_h\alpha^*_w\alpha^*_c+\alpha_h^*\alpha_w\alpha_c).
\end{equation}
The corresponding Hamilton equations of motion can be solved analytically after switching to the action-angle representation \cite{Bajer+Miranowicz:01}. For this, the complex amplitudes are expressed in terms of magnitude and phase, $\alpha_i=\sqrt{\iota_i}\mathrm{exp}(i\phi_i)$, where $\iota_i$ represents the energy contained in mode $i$ in units of $\hbar \omega_i$.
As in the quantum case, one finds that the sums $I_1=\iota_h+\iota_w$ and $I_2=\iota_h+\iota_c$ are conserved, as well as the total phase difference $\Phi = \phi_w + \phi_c - \phi_h $.
Given the initial $\iota_i (0)$ and $\phi_i (0)$ and the additional constant of motion $L=\iota_h\iota_w\iota_c\mathrm{cos}^2(\Phi)$, this leads to the solution
\begin{align}\label{classicalSol}
\iota_h(t) &= c+(b-c)\mathrm{sn}^2 \left( \pm \sqrt{a-c}gt+\theta_0 \,|\, m \right), \\
\theta_0 &= \text{sn}^{-1} \left( \sqrt{\frac{\iota_h (0)-c}{b-c}} \,\Big|\, m \right), \quad m = \frac{b-c}{a-c}. \nonumber
\end{align}
Here, $\mathrm{sn}( u \,|\, m )$ denotes the Jacobi elliptic function\footnote{We adopt the usual convention for the elliptic integrals \cite{Abramowitz1965}, which differs from the one in \cite{Bajer+Miranowicz:01}.}
and $a>b>c$ are the three ordered solutions of the equations: $a+b+c=I_1+I_2$, $ab+bc+ca=I_1I_2$, and $abc=L$.
The correct sign in the argument is given by that of $ \dot{\iota}_h (0) \propto - \sin \Phi(0)$.
The solution \eqref{classicalSol} is periodic with period $2K(m)$, where $K$ denotes a complete elliptic integral of the first kind. The time average of the energy contained in each mode is obtained by integrating \eqref{classicalSol} over one period, yielding a compact expression in terms of elliptic integrals of first and second kind
\begin{equation}\label{classicalSolTimeAvg}
\langle \iota_h \rangle_t = c + \frac{b-c}{m}\left [1-\frac{E(m)}{K(m)}\right].
\end{equation}
This is the classical counterpart of the quantum long time average, which is the mean energy associated to a completely dephased state.
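A compact numerical evaluation of \eqref{classicalSol} and \eqref{classicalSolTimeAvg} can be based on the elliptic functions provided by SciPy, which use the parameter $m$ as in \cite{Abramowitz1965}. The sketch below is ours; it assumes strictly positive initial actions and distinct roots $a>b>c$, and makes no attempt to handle the degenerate edge cases:
\begin{verbatim}
import numpy as np
from scipy.special import ellipj, ellipk, ellipe, ellipkinc

def roots_abc(iota0, phi0):
    # Ordered roots c < b < a of x^3 - (I1+I2)x^2 + I1*I2*x - L = 0.
    ih, iw, ic = iota0
    Phi = phi0[1] + phi0[2] - phi0[0]
    I1, I2, L = ih + iw, ih + ic, ih * iw * ic * np.cos(Phi) ** 2
    return np.sort(np.roots([1.0, -(I1 + I2), I1 * I2, -L]).real), Phi

def classical_iota_h(t, iota0, phi0, g=1.0):
    # iota_h(t) of Eq. (classicalSol) for a single classical trajectory.
    (c_, b, a), Phi = roots_abc(iota0, phi0)
    m = (b - c_) / (a - c_)
    theta0 = ellipkinc(np.arcsin(np.sqrt((iota0[0] - c_) / (b - c_))), m)
    sign = 1.0 if np.sin(Phi) <= 0.0 else -1.0    # sign of -sin(Phi(0))
    sn = ellipj(sign * np.sqrt(a - c_) * g * t + theta0, m)[0]
    return c_ + (b - c_) * sn ** 2

def classical_iota_h_average(iota0, phi0):
    # Period average, Eq. (classicalSolTimeAvg).
    (c_, b, a), _ = roots_abc(iota0, phi0)
    m = (b - c_) / (a - c_)
    return c_ + (b - c_) / m * (1.0 - ellipe(m) / ellipk(m))
\end{verbatim}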
For a comparison to the quantum simulation with thermal initial states, we evaluate the classical time evolution in a Monte-Carlo simulation by drawing initial angles $\phi_i (0)$ according to a uniform distribution and $\iota_i (0)$ according to an exponential distribution with mean values $\bar{\iota}_i = k_B T_i/\hbar \omega_i$.
However, the different energy statistics leaves room for a certain ambiguity in the matching of quantum and classical initial conditions. One can either match the initial temperatures $T_i$, arguing that the quantum and classical predictions shall be based on the same physical boundary conditions. Then the initial mean energies $\hbar \omega_i \bar{\iota}_i$ and $\epsilon_i = \hbar\omega_i (\bar{n}_i + 1/2)$ will differ slightly. Or one matches $\bar{\iota}_i = \bar{n}_i + 1/2$, which implies a better matching of the predicted energy trajectories at the cost of slightly different reservoir temperatures.
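The thermal averaging itself is then a straightforward Monte-Carlo loop over random initial conditions. The fragment below (our sketch, reusing the classical_iota_h function defined above and matching the mean energies, i.e.~option I) illustrates the sampling:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

def classical_thermal_average(nbars, times, samples=10**5, g=1.0):
    # Monte-Carlo estimate of <iota_h(t)>: phases uniform in [0, 2*pi),
    # actions exponentially distributed with means nbar_i + 1/2
    # (option I, matching the quantum mean energies).
    times = np.asarray(times, dtype=float)
    means = np.asarray(nbars, dtype=float) + 0.5
    acc = np.zeros_like(times)
    for _ in range(samples):
        iota0 = rng.exponential(means)             # (iota_h, iota_w, iota_c)
        phi0 = rng.uniform(0.0, 2.0 * np.pi, 3)    # (phi_h, phi_w, phi_c)
        acc += classical_iota_h(times, iota0, phi0, g)
    return acc / samples
\end{verbatim}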
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{fig7.pdf}
\caption{Comparison between the quantum and classical predictions for the time evolution of the average cold mode energy, starting from the initial conditions of Fig.~\ref{fig:VSIncoherent}. The classical results are obtained by a Monte-Carlo simulation of the solution \eqref{classicalSol} associated to the Hamilton function \eqref{classicalHint}, averaging over random samples of initial conditions. The two curves each correspond to $10^7$ trajectories drawn from a thermal distribution using either the same average mode energies (I, green) or the same initial temperatures (II, orange) as in the quantum case. The dotted lines indicate the infinite time averages.}
\label{fig:classicalAnalysis}
\end{figure}
In Fig.~\ref{fig:classicalAnalysis}, we compare both classical options to the quantum prediction for an exemplary initial configuration in the cooling regime. The classical simulations were averaged over $10^7$ trajectories each.
The relative difference to the quantum result decreases at higher initial temperatures where the continuous classical distribution of energies approximates well the discrete quantum statistics.
As expected, the classical result with matching initial energies (I) starts at the same point and remains closer to the quantum case than the result with matching temperatures (II).
Most importantly, the classical model predicts the same dynamical features as the quantum model, namely the short-time cooling enhancement and the effective equilibration at long times.
The latter can again be attributed to thermal averaging: Even though single classical trajectories \eqref{classicalSol} describe periodic orbits, their periods depend on the initial energies and relative phases between the modes, and so they vary broadly over the thermal distribution of initial conditions. The averaged time evolution is not periodic and does not exhibit recurrences even for very long interaction times $gt \gg 1$.
In Section~\ref{SingleShot}, we have seen that the transient enhancement known as single-shot cooling is linked to the presence of coherence between eigenstates of the interaction Hamiltonian in the quantum framework.
Now the quantum-classical comparison shows that this feature does not rely on genuinely quantum coherence, but is rather a generic feature of the particular three-mode exchange interaction. Indeed, it appears as well in the classical framework where coherence between harmonic modes exists in the form of a fixed phase relation between their oscillations.
The transient cooling enhancement can be suppressed by subjecting the classical phase coordinate $\Phi(t)$ to external noise, just as the feature will disappear in the quantum case if dephasing is present.
\section{Conclusion}\label{conclusion}
In this article, we have provided an in-depth study of a quantum absorption refrigerator model based on three resonantly coupled harmonic oscillator modes, as recently realized in an ion-trap experiment \cite{Maslennikov+8:17}. A closed-system analysis already exhibits the key features of the earlier studied analogous three-qubit refrigerator model, in particular, the description of heating and cooling regimes in terms of virtual temperatures, and the enhanced cooling performance at transient time scales.
On the other hand, we have found important differences and new insights in the more complex three-mode system. Even in the absence of simultaneous bath coupling, the initially thermal energy distributions of the cold, the hot, and the work modes equilibrate effectively around athermal distributions corresponding to a state that is fully dephased in the eigenbasis of the interaction Hamiltonian. This state exhibits residual coherences in the combined three-mode Fock representation, and it differs from the steady state of a fully incoherent implementation based on spontaneous excitation and de-excitation.
Most strikingly, we found that all essential features of the model can be explained by entirely classical means, with only minor quantitative differences that become irrelevant with growing temperatures. This includes, in particular, the enhanced transient cooling feature that had previously been attributed to quantum coherence in the three-qubit scenario. With the possibility of a one-to-one comparison between quantum and classical predictions at hand, the three-mode system may in fact serve as an ideal testbed to elucidate the role of quantum resources in thermodynamics.
\begin{acknowledgments}
We acknowledge clarifying discussions with Nicolas Brunner, Christian Gogolin, Alioscia Hamma, Sandu Popescu, Anthony Short, and Paul Skrzypczyk.
This research is supported by the Ministry of Education, Singapore, and by the National Research Foundation, Prime Ministers Office, Singapore, under the Research Centres of Excellence Programme and through the Competitive Research Programme (Award No. NRF-CRP12-2013-03).
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
Laboratory plasma-astrophysics has become one of the most significant emerging branches of astrophysics over the last decades. As an alternative to classical observation and numerical simulation \cite{Overview1, Overview2}, it provides a novel methodology for exploring the mechanisms behind astrophysical phenomena in a realistic laboratory environment, based on the advanced technologies and modern instrumentation of high energy physics. A research program was launched at Deutsches Elektronen-Synchrotron to establish such a plasma-astrophysical laboratory for experimental studies of beam-plasma instabilities \cite{NIMA:Ye, Prestudy, Prestudy2}.
With feasible experimental setups in such a laboratory environment, particle beams need to be produced in a controlled manner to fulfill a set of compromise requirements between several crucial electron beam parameters for developing the instabilities. These include not only the duration (preferably continuous), the average current ($\geq$mA), the average energy (several MeV at least) and the number density, but also further quality properties of the beam that depend on the instability mechanism under study. In principle, a long (continuous) beam of high average current (mA) is required as the energy source to excite the plasma wave. A higher beam energy is expected to induce a stronger perturbation of the instability, since the plasma wave excitation is driven by the drift energy of the streaming particles. A higher beam energy ($\geq$MeV) also results in a higher growth rate of the instability. From this point of view, the use of RF injectors is much more beneficial for our purpose than DC guns, owing to the higher beam energy and, in principle, better beam quality \cite{Book:FS, Mikhail:PRAB2012}. Matching the number density of the particle beam to the plasma density is of further crucial importance for growing the instability. This requires a specially designed beam transport system that properly focuses the beam, despite its large energy spread, at the plasma cell location while maintaining the other beam quality parameters as required.
A preliminary design was reported in \cite{NIMA:Ye}. It demonstrated electron bunch extraction based on field emission from a specially designed metallic needle cathode and quasi-cw beam formation using velocity bunching over subsequent radio-frequency (RF) cycles of a cut disk structure (CDS) downstream of an L-band RF electron gun. In this note, an improved beam dynamics design is proposed and discussed on the basis of 3D particle tracking simulations (Sec.\ \ref{BeamDynamics}). Based on the improved design, quasi-cw beams fulfilling the requirements and produced in a realistic laboratory environment are then applied in nonlinear plasma-astrophysical simulations of beam-plasma instabilities. The instabilities of interest are the electrostatic streaming instability \cite{Estatic1, Estatic2, Estatic3}, the filamentation instability \cite{Fila1, Fila2} and a non-resonant streaming instability (i.e.\ Bell's instability \cite{Bells}). Among these, the electrostatic instability is commonly known as the unstable mode with the shortest wavelength and the shortest development time \cite{Beamins}. Being a competing mode, its presence is critical to the laboratory observation of the other instabilities; its parametric dependence on the realistic particle beams produced in the laboratory, and thereby plausible schemes for its suppression, are therefore of primary interest. The obtained results, consisting of case studies for the so-called cold beam, warm beam and quasi-cw beam, are presented in Sec.\ \ref{BeamPlasma}. A summary and an outlook are given in Sec.\ \ref{Summary}.
\section{Design of electron beam dynamics}\label{BeamDynamics}
Figure \ref{Beamline} shows the proposed experimental setup. It consists of a specially designed needle cathode, an L-band RF injector \cite{Book:FS}, a pair of focusing magnets, a cut disk structure (CDS), a circular collimator at the exit of the CDS, a periodic solenoidal focusing system, a plasma cell and a beam dump. The field-emitted (FE) electron bunches are generated from the needle cathode, sitting on the backplane of the 1.3 GHz copper resonator, through a highly enhanced electric field gradient of up to 8 GV/m. The electron bunches produced over subsequent RF cycles are accelerated by the RF gun and velocity debunched in the 1.3 GHz CDS, forming a quasi-cw beam by bridging the gaps in the time-domain profile of the FE beam between neighbouring RF periods. A set of beam parameters fulfilling the requirements was obtained in \cite{NIMA:Ye}; however, the beam focusing scheme used there is not efficient for transporting the produced particle beam to the plasma cell location because of the large beam energy spread. Consequently, the electron number density reached at the entrance of the plasma cell was limited to approx.\ $10^7-10^8$/cm$^3$ for an enhanced electric field gradient of 5.7 GV/m at the cathode.
\begin{figure}[!htb]
\centering
\includegraphics[width=140mm,height=30mm]{newbeamline.jpg}
\caption{A proposed laboratory experimental setup for the enhancement of electron number density in the plasma cell.}
\label{Beamline}
\end{figure}
As shown in Fig.\ \ref{Beamline}, a periodic magnetic focusing system is applied. Such a system provides a longitudinal mixed magnetic field formed by the overlapping fields of an array of solenoidal lenses (e.g.\ the four solenoids shown in Fig.\ \ref{Beamline}). A mathematical description of the periodic field on the longitudinal axis can be given by a Fourier series \cite{EPAC:Militsyn}
\begin{equation}
B_z(z) = B_0\left[1+\sum_{n} a_n \cos(n k_0 z)\right],
\end{equation}where $B_z(z)$ represents the mixed guiding magnetic field in the longitudinal ($z$) direction and $B_0$ stands for the magnetic field strength of the solenoid. The term $k_0=2\pi/T$ with $T$ denoting the period of the solenoidal array. The symbol $a_n$ denotes the Fourier coefficient with $n=1,\ldots,\infty$. The mixed guiding field $B_z(z)$ is numerically reconstructed using four solenoidal lenses periodically arranged in the beam propagation direction and is used in the following beam dynamics simulations.
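A rough numerical sketch of this reconstruction is given below; the solenoid length, bore radius, peak field and spacing are illustrative placeholders rather than the actual magnet design.
\begin{verbatim}
# Sketch: on-axis field of a periodic array of identical solenoid lenses and its
# Fourier decomposition in the form B_z(z) = B_0 [1 + sum_n a_n cos(n k_0 z)].
import numpy as np

def solenoid_on_axis(z, L=0.10, R=0.03, B_c=50e-3):
    """On-axis field (T) of one solenoid of length L (m) and bore radius R (m),
    centred at z = 0, with asymptotic interior field B_c (T)."""
    zp, zm = z + L / 2, z - L / 2
    return 0.5 * B_c * (zp / np.hypot(zp, R) - zm / np.hypot(zm, R))

T_period = 0.20                                     # lens spacing of the array (m)
centers = (np.arange(4) - 1.5) * T_period           # four periodically placed lenses
z = np.linspace(-T_period / 2, T_period / 2, 4000, endpoint=False)
Bz = sum(solenoid_on_axis(z - zc) for zc in centers)

B0 = Bz.mean()                                      # mean guiding field over a period
k0 = 2 * np.pi / T_period
a = [2 * np.mean((Bz / B0 - 1.0) * np.cos(n * k0 * z)) for n in range(1, 6)]
print("B0 [mT] =", 1e3 * B0, "  a_1..a_5 =", np.round(a, 3))
\end{verbatim}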
Figure \ref{Bseed} shows the evolution of the rms beam size along the longitudinal direction, with the gray diagram indicating the location of the plasma cell. The evolution of the beam size is compared among multiple cases with the seeding magnetic field $B_{seed}$ (the maximum field amplitude of $B_z$) as a variable. The gray dashed curve shows the pattern of the longitudinal magnetic field used for the simulations. The black curve shows, in the seeding-field-free case, a significant beam size growth within the plasma cell, while the curves in cyan, blue and red show a suppressed beam divergence and a much reduced beam size within the plasma cell for applied seeding fields of 10, 25 and 50 mT, respectively. The applied seeding magnetic field thus plays a significant role in maintaining the beam size and therefore in increasing the electron number density.
\begin{figure}[!htb]
\centering
\includegraphics[width=95mm,height=70mm]{Bseedfocusing.jpg}
\caption{Significant improvement of beam focusing within the plasma cell using a periodic magnetic focusing system. $\sigma_{rms}$: rms beam size.}
\label{Bseed}
\end{figure}
In Fig.\ \ref{ND}, the right inset shows the transverse beam profile right after the CDS, where the majority of the electrons is concentrated in the central bunching area while the large beam-halo area is much less populated. Therefore, properly collimating the beam may increase the electron number density while maintaining a sufficiently high beam current. For this purpose, a circular collimator is applied at the exit of the CDS. Figure \ref{ND} shows the electron number density as a function of the seeding magnetic field (1-60 mT) with the radius of the collimator as a variable (0.5, 1.0 and 2.0 mm). The left inset of Fig.\ \ref{ND} shows the average beam current plotted versus the collimator radius. As shown, with a collimator radius of 0.5 mm and a seeding magnetic field of 50 mT, the average beam current can be maintained at about 10 mA for a number density on the order of $10^{10}$/cm$^3$.
\begin{figure}[!htb]
\centering
\includegraphics[width=90mm,height=70mm]{numberdensity.jpg}
\caption{Simulated electron number density within the plasma cell as a function of the seeding magnetic field and the collimator radius.}
\label{ND}
\end{figure}
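The order of magnitude of this density can be checked with a simple estimate; the 0.1 mm rms radius assumed below stands for the focused beam size inside the plasma cell and is not a quoted design value.
\begin{verbatim}
# Sketch: order-of-magnitude estimate of the electron number density in the plasma
# cell from the transported average current and an assumed focused rms beam size.
import numpy as np

e_charge, c = 1.602176634e-19, 2.998e8

def beam_density(I_avg, r_rms, beta=1.0):
    """Electron number density (m^-3) of a round quasi-cw beam of average current
    I_avg (A), rms radius r_rms (m) and normalised velocity beta."""
    return I_avg / (e_charge * np.pi * r_rms**2 * beta * c)

n = beam_density(10e-3, 0.1e-3)       # 10 mA focused to a ~0.1 mm rms spot
print(f"n_b ~ {n*1e-6:.1e} cm^-3")    # of order 1e10 cm^-3
\end{verbatim}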
Furthermore, Fig.\ \ref{CWBeam} shows an exemplary temporal profile of the produced quasi-cw beam based on the improved beam dynamics design. The base current is increased to 1 mA and the electron number density within the plasma cell is on the order of $10^{10}$/cm$^3$. In the following, such electron beams produced in the realistic laboratory environment are used for the nonlinear plasma-astrophysical simulations.
\begin{figure}[!htb]
\centering
\includegraphics[width=95mm,height=65mm]{Beamprofile.jpg}
\caption{Temporal profile of a produced quasi-cw electron beam with a base current of about 1 mA over RF periods at $\lambda=231$ mm.}
\label{CWBeam}
\end{figure}
\section{Plasma-astrophysical studies using realistic laboratory electron beam distributions\label{BeamPlasma}}
To understand the physical conditions for beam-plasma instabilities to occur, the extracted parameters and/or distributions of the realistic laboratory electron beams (Sec.\ \ref{BeamDynamics}) are used in plasma-astrophysical simulations. A fully relativistic particle-in-cell (PIC) code (OSIRIS) \cite{OSIRIS1, OSIRIS2, OSIRIS3} is employed. In the basic simulation setup \cite{Prestudy}, a uniform stationary plasma fills a two-dimensional open-boundary system ($43.14$~cm~$\times~5.9$~cm), and an electron beam with an initial width of 0.17 cm is injected into it. The number density of the stationary plasma is set to $n_0=10^{13}$~cm$^{-3}$, so that the width of the electron beam is roughly equal to the skin length of the background plasma ($c/\omega_{pe}$, with $\omega_{pe}$ the electron plasma frequency). In the exemplary simulations, the injection rate is set to 1.78 GHz instead of the RF frequency of 1.3 GHz, based on the assumption that the growth rate of the involved instability varies only slightly with the injection rate. These settings correspond to electron bunches being injected into the simulation system every 100 $\omega_{pe}^{-1}$.
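The quoted numbers are mutually consistent, as the following short check of the background plasma parameters illustrates:
\begin{verbatim}
# Sketch: consistency check of the PIC setup -- background density, electron skin
# depth, and the 1.78 GHz rate corresponding to one bunch every 100 / omega_pe.
import numpy as np

e, m_e, eps0, c = 1.602176634e-19, 9.1093837e-31, 8.8541878e-12, 2.998e8

n0 = 1e13 * 1e6                                  # background plasma density (m^-3)
omega_pe = np.sqrt(n0 * e**2 / (eps0 * m_e))     # electron plasma frequency (rad/s)
skin_depth = c / omega_pe                        # electron skin depth (m)
injection_rate = omega_pe / 100.0                # one bunch every 100/omega_pe (Hz)

print(f"omega_pe        = {omega_pe:.2e} rad/s")
print(f"c/omega_pe      = {skin_depth*100:.2f} cm")      # ~0.17 cm, the beam width
print(f"injection rate  = {injection_rate/1e9:.2f} GHz") # ~1.78 GHz as quoted
\end{verbatim}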
According to the results of the 3D particle tracking simulations, a typical momentum distribution of the produced quasi-cw electron beam for a single RF cycle is shown in Fig.\ \ref{BeamProfile}. Due to the time-of-flight effect, a beam momentum shift with a positive slope ($p_z$ = 10 to 4 $m_ec$, with $m_e$ and $c$ denoting the electron mass and the speed of light, respectively) is clearly observed in the longitudinal direction at the entrance of the plasma cell. A fundamental question arises when such a realistic particle distribution is used in the simulations: how does the excitation of the streaming instability differ from the conventional assumption of a cold or a warm beam? These assumptions essentially refer to the beam temperature, which strongly influences the growth rate of the electrostatic streaming instability, the mode with the shortest development time among the instabilities of our interest (Sec.\ \ref{sec:introduction}). In particular, the linear growth rate of the electrostatic instability decreases with increasing beam temperature \cite{Estatic1, Estatic2, Estatic3, Prestudy, Prestudy2}. The impact of different beam distributions, including the produced quasi-cw beam, on the electrostatic instability is therefore investigated first. Nonlinear PIC simulations are carried out for three cases, namely the cold beam, the warm beam and the quasi-cw beam. For the cold and warm beam cases, a homogeneous Maxwell-Boltzmann momentum distribution centered at $7 m_ec$ is used, with a thermal velocity ($v_{th,e}$) of 0.01$c$ and 0.2$c$, respectively. For the quasi-cw beam case, the realistic beam momentum distribution shown in Fig.\ \ref{BeamProfile} is used.
\begin{figure}[!htb]
\centering
\includegraphics[width=85mm,height=40mm]{Beamdata.png}
\caption{The longitudinal momentum distribution of a quasi-cw electron beam imported in PIC simulations.}
\label{BeamProfile}
\end{figure}
\begin{figure*}[!htb]
\centering
\includegraphics[width=\textwidth]{EFdata.png}
\caption{Spatial profiles of the longitudinal electric field components (left panels, in units $m_ec\omega_p/e$) and the corresponding Fourier plots (right panels).}
\label{EFProfile}
\end{figure*}
Figure \ref{EFProfile} shows the spatial profiles of the longitudinal electric field component (panel a) and the corresponding Fourier plots (panel b) for the three cases at T = 5.783 ns (1024 $\omega_{pe}^{-1}$). As shown, an electric field perturbation is present as the electron beam propagates through the plasma environment. The major perturbations are along X = 0, the central path along which the electron beam with an initial width of 0.17 cm travels. The wavelength (wave number) of the major perturbations is about 1.05 cm (5.88 cm$^{-1}$), i.e.\ the wave number is essentially the reciprocal of the skin length of the background plasma, and it is well resolved in the PIC simulations. The Fourier plots in Fig.\ \ref{EFProfile} (b) therefore confirm that the electrostatic instability occurs in all three cases. Furthermore, the cold beam case shows the strongest electric field perturbation, while in the other two cases the instability strengths are of the same order of magnitude and both are much reduced. It should be noted that the warm beam case turns out to be a good approximation of the quasi-cw beam. Note, in addition, that the use of a quasi-cw beam produced in a realistic laboratory environment may even be more beneficial for studying the beam-plasma instabilities than the ideal cold beam typically used in conventional simulations, in terms of suppressing the electrostatic instability and thereby relieving its competition with the target instability.
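The extraction of the dominant wavenumber from such field data amounts to a one-dimensional Fourier analysis of an on-axis lineout; the sketch below illustrates the procedure on a synthetic lineout, since the actual OSIRIS output is not reproduced here.
\begin{verbatim}
# Sketch: locate the dominant wavenumber of the E_z perturbation along the beam
# axis via an FFT.  A synthetic lineout at k = omega_pe/c stands in for PIC data.
import numpy as np

L = 43.14                                       # box length along the beam (cm)
z = np.linspace(0.0, L, 4096, endpoint=False)
k_true = 1.0 / 0.168                            # omega_pe/c in cm^-1 (angular k)
Ez = 1e-3 * np.sin(k_true * z) * np.exp(-0.5 * ((z - L/2) / 8.0)**2)  # stand-in

spec = np.abs(np.fft.rfft(Ez))
k = 2 * np.pi * np.fft.rfftfreq(z.size, d=z[1] - z[0])   # wavenumber axis (cm^-1)
k_peak = k[np.argmax(spec[1:]) + 1]                       # skip the DC bin
print(f"dominant k ~ {k_peak:.2f} cm^-1, wavelength ~ {2*np.pi/k_peak:.2f} cm")
\end{verbatim}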
\section{Conclusion\label{Summary}}
An improved experimental setup is proposed for studies of beam-plasma instabilities in a laboratory environment. The associated beam dynamics design is carried out to provide quasi-cw electron beams with the required parameters. With a circular collimator placed at the exit of the debunching structure, the use of a periodic longitudinal magnetic focusing system yields significant improvements in the production of mA, MeV quasi-cw electron beams, with a number density within the plasma cell that is three orders of magnitude higher for a seeding magnetic field of approx.\ 50 mT, in comparison to the previous design at a conservative working point of the field emitter.
The produced electron beams are employed in plasma-astrophysical simulations to investigate the parametric dependencies of the fast-growing electrostatic streaming instability, which competes with the other instabilities of our interest, and thereby possible schemes for its suppression. Numerical results for multiple cases (cold beam, warm beam and realistic beam) show a well-resolved growing electrostatic instability in the beam-plasma system. The obtained wave numbers agree well with theoretical predictions. A further comparison of the electric field perturbations between the cases demonstrates the strongest instability growth for the cold beam, while the warm beam and the quasi-cw beam show comparable growth. The results show that the warm beam model can be a good approximation of the realistic quasi-cw beam, and that the use of the quasi-cw beam may even be more beneficial for studying the beam-plasma instabilities than the ideal cold beam model conventionally used in theoretical and numerical analyses.
\vspace{10pt}
\section{Introduction}
Exactly four centuries ago, spots on the surface of the Sun were first detected by Galileo. Sunspots are cool regions of strong magnetic field concentration on the photosphere of the Sun. Presently, thanks to observations with the unprecedented photometric precision of the CoRoT satellite \citep{baglin06}, spots can be detected on the surface of another star, CoRoT-2, a much more active star. Such activity is probably a consequence of its young age, estimated at 0.5 Gyr \citep{bouchy08}. CoRoT-2 is one of the 7 stars with transiting planets detected so far with CoRoT.
Spot activity has been inferred in the past from the modulation observed in the light of stars. As the star rotates, different spots on its surface are brought into view. Because spots are cooler, and therefore darker, than the surrounding photosphere, the detected total light of the star diminishes as different spots of varying temperature and size face the observer. This periodic modulation enables the determination of the rotation period of the star.
The magnetic activity of the CoRoT-2 star has been studied in detail by \cite{lanza09}, who analyzed its out-of-transit light curve with modulations of $\sim$6\% of its total flux. Using maximum entropy regularized models, Lanza and collaborators (2009) modeled the light curve considering the presence of both spots and faculae (dark and bright regions, respectively). This study detected the existence of two active longitudes located in opposite hemispheres, and also that these longitudes varied with time. The active longitudes are regions where the spots preferably form. The total area covered by the spots was seen to vary periodically with a period of $29 \pm 4$ days. The authors were also able to estimate an amplitude for the relative differential rotation of $\leq 0.7$\%.
Here we propose to study these same spots, however, with a different approach. A transiting planet can be used as a probe of the contrasting features in the photosphere of the star. When the planet occults a dark spot on the surface of its host star, small variations in the light curve may be detected \citep{silva03, pont07, silva-valio08}. From modeling of these variations, the properties of the starspots can be determined, such as size, position, and temperature. Moreover, from the continuous and long duration of observation provided by the CoRoT satellite, the temporal evolution of individual spots can be obtained.
The next section describes the observations performed by CoRoT, and the model used here is introduced in the following section. The main results are presented in Section 4. Finally, a discussion of the main results and the conclusions are given in Section 5.
\section{Observation of CoRoT-2}
A planet around the star CoRoT-2 was detected during one of the long run observations of a field toward the Galactic center performed by the CoRoT satellite, and was the second transiting planet discovered by the mission. The planet, a hot Jupiter with a mass of $3.3 M_{Jup}$ and radius of $1.47 R_{Jup}$, orbits its host star in just 1.73 days \citep{alonso08}. CoRoT-2 is a solar-like star of type G7, with 0.97 $M_\odot$ and 0.902 $R_\odot$, that rotates with a period of 4.54 days \citep{alonso08}. The parameters of the star, the planet, and the orbit were determined from a combination of the CoRoT photometric light curve \citep{alonso08} and ground based spectroscopy \citep{bouchy08}.
The data analyzed here were reduced following the same procedure as \cite{alonso08}.
The light curve was filtered from cosmic ray impacts and orbital residuals. This cleaned light curve was then folded with an orbital period of 1.743 days, from which the parameters of the planetary system were derived. Besides the planetary orbital period of $P_o = 1.743$ days and the stellar rotation period of $P_s = 4.54$ days, the orbital parameters were: a semi-major axis of $a = 6.7$ stellar radii ($R_s$) and an inclination angle of $87.84^\circ$.
A total of 77 transits were detected in the light curve with a high temporal resolution of 32 s, in a total of 134 days. The rms of this signal was estimated from the out of transit data points to be $\sigma = 6\times 10^{-4}$ in relative flux units. Small light variations were detected during the transits, usually with fluxes between 3 and 10 sigma above the transit model without spots, the largest variation reaching 18 sigma. These intensity variations were interpreted as the signatures of star spots and were thus modeled. Figure~\ref{all} shows the light curves from all transits, where the vertical extent of each data point represents the rms of the signal. Also plotted on this figure is the model transit considering that no spots are present on the stellar surface (gray curves).
\begin{figure}
\centering
\includegraphics[width=8cm]{fig1.eps}
\caption{All the 77 light curves during the transit of CoRoT-2b in front of its spotted host star. The gray solid line represents the model of a spotless star. The error bars of the data points indicate
the rms of 0.0006.}
\label{all}
\end{figure}
\section{The model}
The physical characteristics of star spots are obtained by fitting the model described in \cite{silva03}. In the model adopted here, the star is a 2-D image with intensity decreasing following a quadratic limb darkening law according to \cite{alonso08}, whereas the planet is taken as a dark disc. The modeled light curve of the transit is obtained as follows. The planet position in its orbit is calculated every two minutes and the total flux is simply the sum of all the pixels in the image (star plus dark planet). This yields the light curve, that is, the relative intensity as a function of time during the transit.
The model assumes that the orbit is circular, that is, null eccentricity (consistent with the measured eccentricity of $0.003 \pm 0.003$), and that the orbital plane is aligned with the star equator. In the case of CoRoT-2 the latter is a good assumption since, by measuring the Rossiter-McLaughlin effect, \cite{bouchy08} obtained that the angle between the stellar rotation axis and the normal of the orbital plane is only $7.2 \pm 4.5^o$.
The orbital parameters were taken from \cite{alonso08}, such as period of 1.743 day and orbital radius of 6.7 $R_{star}$.
The model also allows for the star to have features on its surface such as spots. The round spots are modeled by three parameters:
(i) intensity, as a function of stellar intensity at disk center, $I_c$ (maximum value); (ii) size, or radius, as a function of planet radius, $R_p$; and (iii) position: latitude (restricted to the transit path) and longitude.
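A minimal pixelised sketch of such a model is given below; the limb-darkening coefficients, grid resolution and the example spot are illustrative choices, not the fitted CoRoT-2 values.
\begin{verbatim}
# Sketch: a quadratically limb-darkened star, one dark circular spot on the transit
# chord, and an opaque planet disc stepped across the stellar image.
import numpy as np

N = 601                                            # pixels across the star diameter
x, y = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
rho2 = x**2 + y**2
mu = np.sqrt(np.clip(1.0 - rho2, 0.0, None))       # cosine of the emergent angle
u1, u2 = 0.4, 0.3                                  # illustrative limb-darkening coeffs
star = np.where(rho2 <= 1.0, 1.0 - u1*(1 - mu) - u2*(1 - mu)**2, 0.0)

def add_spot(img, lon_deg, lat_deg, r_spot, f_i):
    """Paint a circular spot of radius r_spot (stellar radii) and intensity
    f_i * I_c at the given longitude/latitude (spot foreshortening ignored here)."""
    xs = np.cos(np.radians(lat_deg)) * np.sin(np.radians(lon_deg))
    ys = np.sin(np.radians(lat_deg))
    out = img.copy()
    out[((x - xs)**2 + (y - ys)**2 <= r_spot**2) & (rho2 <= 1.0)] = f_i
    return out

def transit_lightcurve(img, r_p=0.172, y_p=-np.sin(np.radians(14.6)), n_steps=121):
    """Relative flux as an opaque disc of radius r_p (stellar radii) crosses the
    image at projected height y_p (the -14.6 deg transit latitude)."""
    base = img.sum()
    return np.array([(base - img[(x - xp)**2 + (y - y_p)**2 <= r_p**2].sum()) / base
                     for xp in np.linspace(-1.3, 1.3, n_steps)])

spotted = add_spot(star, lon_deg=5.6, lat_deg=-14.6, r_spot=0.5*0.172, f_i=0.5)
lc_clean, lc_spot = transit_lightcurve(star), transit_lightcurve(spotted)
# occultation of the spot produces a small positive 'bump' in lc_spot - lc_clean
\end{verbatim}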
In all the fitting runs, the latitude of the spots was kept fixed and equal to that of the planetary transit line, which for an inclination angle of $87.84^\circ$ is $-14.6^\circ$. This latitude was arbitrarily chosen to be South, hence the minus sign. The longitude of the spot is measured with respect to the central meridian of the star, which coincides with the line-of-sight and the planet projection at transit center.
When the spot is near the limb, the effect of foreshortening is also featured in the model. However, this model does not account for faculae. Solar-like faculae have a negligible contrast close to the disc center, having significant contrast only close to the limb. Since the facular-to-spotted area ratio for CoRoT-2 is only Q=1.5 \citep{lanza09} instead of 9 as in the Sun, and we limit our analysis to spots between -70 and +70$^o$ from the central meridian, the photometric effect of faculae is very small and can be safely neglected.
The blocking of a spot by the planet during a transit implies an increase in the detected light, because a region darker than the stellar photosphere is being occulted. Thus the effect of many spots on the stellar surface is to decrease the depth of the transit, as can be seen in Figure~\ref{all}. This in turn will influence the determination of the planet radius, since a shallower transit depth results in a smaller estimate of the planet diameter when spots are ignored \citep{silva-valio10}.
In this work, to estimate the best model parameters for the planet and its orbit, the deepest transit was sought.
Of all the 77 transits, the 32nd transit displays the smallest light variations (see Figure~\ref{all}). This was interpreted as the star having the minimum number of spots on its surface within the transit latitudes during the whole period of observation (134 days). The 32nd transit is shown in Figure~\ref{lc} as black crosses; for comparison, the fifth transit (gray crosses) is also shown in the same figure.
\begin{figure}
\centering
\includegraphics[width=8cm]{fig2.eps}
\caption{Top: Examples of transit light curves of CoRoT-2. The black crosses represent the 32nd transit assumed to occur when the star has minimum spot activity within the transit latitudes, whereas the gray data points represent a typical transit (the fifth one). The model light curve without any spots is shown as the thick solid curve. Bottom: Residual from the subtraction of the data minus the model for a star with no spots for the 5th (gray) and 32nd (black) transits. The vertical bars in both panels represent the estimated uncertainty of 0.0006 from out of transit data. }
\label{lc}
\end{figure}
A light curve obtained from a model without any spot is shown as a thick solid curve on the figure. However, to obtain this model light curve it was necessary to use a planet radius of 0.172 stellar radius, instead of the 0.1667 stellar radius quoted on Table 1 of \cite{alonso08}, an increase of about 3\%. We note that this may not be a difference in the actual radius but rather an artifact due to the uneven spot coverage on the total surface of the star during that specific transit. Nevertheless, this implies that star spots can hinder the exact size estimate of a planet by making the transit light curve shallower than it would otherwise be \citep{silva-valio10}.
The actual radius of the planet may very well be 0.1667 $R_{star}$, since this was calculated from phase folded and averaged light curve. Supposing that the average area of the star covered by spots does not change during the whole period of observation (134 days), then when there are few spots along the transit line band ({\it e.g.} 32nd transit), there should be more spots on the remainder of the star.
\subsection{Spot modeling}
As mentioned in the previous section, the spots can be modeled by three basic parameters: intensity, radius, and longitude (the latitude is fixed at $-14.6^o$). All the fits were performed using the AMOEBA routine \citep{press92}. The longitude of the spot is defined by the timing of the light variation within the transit. For example, the small ``bump'' seen in the fifth transit (gray crosses in Figure~\ref{lc}) slightly to the left of the transit center, at approximately 0.1 h, is interpreted as being due to the presence of a spot at a longitude of $5.6^\circ$, where zero longitude corresponds to the line-of-sight direction, taken as the central meridian of the star. According to the diagram shown in Figure~\ref{diag1}, the longitude of a spot may be estimated as:
\begin{equation}
\theta = \sin^{-1} \left( \sin\beta {a \over R_s} \cos\alpha \right) \\
{\rm where} \quad \beta = 2 \pi {{(t/24)} \over P_o}
\end{equation}
\noindent where $t$ is the time, measured with respect to the transit center and given in hours, $P_o$ is the orbital period in days, and $\alpha$ is the latitude of the transit. The above equation is used to estimate the initial guess of the spot longitude, which was one of the parameters to be determined from the model fit to the data.
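A direct evaluation of the longitude relation above, as used to seed the fits, reads:
\begin{verbatim}
# Sketch: topocentric spot longitude implied by the time (hours from mid-transit)
# at which a brightening is seen, using the orbital parameters quoted above.
import numpy as np

def spot_longitude(t_hours, a_over_Rs=6.7, P_orb=1.743, lat_deg=-14.6):
    """Initial-guess spot longitude (degrees) from the timing of a light-curve bump."""
    beta = 2 * np.pi * (t_hours / 24.0) / P_orb
    return np.degrees(np.arcsin(np.sin(beta) * a_over_Rs * np.cos(np.radians(lat_deg))))

print(spot_longitude(0.1))   # ~5.6 deg, the bump seen in the fifth transit
\end{verbatim}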
\begin{figure}
\centering
\includegraphics[width=5cm]{fig3.eps}
\caption{Top: Top view of the star and the planet in its orbit. A spot at 45$^o$ longitude on the stellar surface is depicted. Bottom: Light curve obtained from a star with a spot at 45$^o$ longitude. The dotted line represents a model for a transit in front of a spotless star. }
\label{diag1}
\end{figure}
The next step was to decide the maximum number of spots that were needed to fit each transit. Models with a maximum of 7, 8, and 9 spots on the stellar surface during each transit were tried. The results obtained from each run were qualitatively similar in all cases. It was found that in the case of 9 spots, the residuals were smaller than the uncertainty of the data (0.0006). Therefore, the results reported here are those from the fits with up to a total of nine different spots per transit.
\section{Spot parameters resulting from the fits}
CoRoT-2 is a very active star, and many intensity variations were identified in each transit, implying that there are many spots present on the surface of the star at any given time. Besides its longitude, the spot signature also depends on its intensity and size. In fact, the flux perturbation from the spot is the product of the spot intensity and its area. Thus there is a degeneracy between the values of the radius and the intensity of a spot.
Thus, to minimize the number of free parameters in the modeling, there are two alternatives. The first approach is to fix the radius of all the spots, for example, as half the planet radius, and fit each spot intensity and longitude for all transits. Another way is to fix the intensity of all the spots with respect to the maximum central intensity, $I_c$, and allow the spot radius and longitude to vary in each fit of the transit light curve.
Examples of the fitting by the two methods are shown in Figure~\ref{ex}. The two top panels represent the synthesized star with spots of: varying intensity and fixed radius of 0.5 $R_p$ (left) and varying radius and fixed intensity at 0.5 $I_c$ (right). The residuals of the data minus the two fits are shown in the bottom panel and show the little difference between the methods.
\begin{figure}
\centering
\includegraphics[width=8cm]{fig4.eps}
\caption{Top: Two examples of the model of the synthesized star with spots for transit 13. The left panel shows spots with fixed radius of 0.5 $R_p$ and varying intensity, whereas the right panel presents spots with constant intensity of 0.5 $I_c$ and changing radius. Bottom: Residuals of the two models of constant radius (black) and intensity (red). }
\label{ex}
\end{figure}
The star was considered to have a varying number of spots (from 2 to a maximum of 9), with spot position defined at a certain longitude but a constant latitude of $-14.6^\circ$ (the transit latitude). The longitude considered is the topocentric longitude, that is, zero angle is defined as that of the line-of-sight, or transit center, when star, planet, and Earth are aligned.
Figure~\ref{flux} shows the total relative flux, that is, the sum over all spots of the contrast times the squared radius (i.e.\ the area) in each transit, for both models. The spot contrast is taken as $1-f_i$, where $f_i$ is the relative intensity of the spot with respect to the disc center intensity. As mentioned above, the flux, $F$, of a single spot is: $F \propto (1-f_i) R_{spot}^2$. For each transit the total relative flux was calculated by summing the flux of the individual spots. As expected, the results from both models agree.
\begin{figure}
\centering
\includegraphics[width=8cm]{flux.eps}
\caption{ Total relative flux of all spots per transit for both the model with constant spot intensity, 0.5 $I_c$ (black), and constant radius, 0.5 $R_p$ (gray). }
\label{flux}
\end{figure}
\subsection{Spot longitudes}
A histogram of the spots longitudes is shown in Figure~\ref{long} for the two models considered. Basically, seven main longitudes may be identified in the figure. Fortunately, they are approximately the same seven longitudes on both models, which is reassuring. The figure shows a slight predominance of the spots location at zero longitude, that is, the angle in the direction of the planet.
\begin{figure}
\centering
\includegraphics[width=8cm]{fig8.eps}
\caption{Histogram of spot longitudes for both models with constant intensity (top) and radius (bottom). The vertical dotted lines represent longitudes of -67, -43, -18, -1, 25, 42, and 67$^o$.}
\label{long}
\end{figure}
These are topocentric longitudes, that is, they are not the ones located on the rotating frame of the star, but rather are measured with respect to an external reference frame. In order to obtain the longitudes in the stellar rotating frame, one needs an accurate period for the star. \cite{alonso08} report a $4.54$ days period, whereas \cite{lanza09} obtained a period of $4.52 \pm 0.14$ days in order to fit the rotational modulation of the out of transit data. A more precise estimate of the period is needed in order to analyze the spot results. A detailed investigation of the rotational longitudes and the spot lifetimes is underway and will be reported in an accompanying paper \citep{valio-lanza10}.
\subsection{Spot intensity and temperature}
Spots with smaller intensity values, i.e.\ high contrast spots, are cooler than those with intensity values close to $I_c$. The spot intensities obtained from the model with spots of fixed radius of $0.5 R_p$ are shown in the top panel of Figure~\ref{temp}. The figure shows that the spot intensities range from 0.4 to 0.9 $I_c$, with an average value of $0.60 \pm 0.19$ of the stellar central intensity. This value is close to the value of $0.665 I_c$ used by \cite{lanza09}, which is the mean value for spots on the Sun.
\begin{figure}
\centering
\includegraphics[width=8cm]{temperature.eps}
\caption{Results from the fit of the model with spots of fixed radius for all transits. Distributions of the spot intensity as a fraction of the central intensity (top), and spot temperature (bottom). }
\label{temp}
\end{figure}
These intensities can be converted to spot temperature by assuming blackbody emission for both the photosphere and the spots. The temperature is estimated as:
\begin{equation}
T_{spot} = \frac{h \nu}{K_B} \left[ \ln \left( 1 + \frac{e^{h\nu/(K_B T_{eff})} - 1}{f_i} \right) \right]^{-1}
\end{equation}
\noindent where $K_B$ and $h$ are the Boltzmann and Planck constants, respectively, $\nu$ is the frequency associated with a wavelength of 600 nm, $f_i$ is the fraction of the spot intensity with respect to the central stellar intensity, $I_c$, and $T_{eff}$ is the effective temperature of the star. Considering $T_{eff}=5625$ K \citep{alonso08}, the spot temperatures range from 3600 to 5500 K, that is, 100-2000 K cooler than the rest of the disk. The mean temperature of the constant size spots on CoRoT-2 is $4600 \pm 400$ K.
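A direct evaluation of the blackbody relation above, with the same reference wavelength and effective temperature, can be sketched as:
\begin{verbatim}
# Sketch: invert the ratio of Planck intensities to get T_spot from the fitted
# relative intensity f_i = I_spot/I_c, at 600 nm and T_eff = 5625 K.
import numpy as np

h, kB, c = 6.62607015e-34, 1.380649e-23, 2.998e8

def spot_temperature(f_i, T_eff=5625.0, wavelength=600e-9):
    """Spot temperature (K) for a spot of relative intensity f_i."""
    hnu_over_k = h * (c / wavelength) / kB
    return hnu_over_k / np.log(1.0 + np.expm1(hnu_over_k / T_eff) / f_i)

print(spot_temperature(np.array([0.4, 0.6, 0.9])))
# ~4640, 5030, 5490 K -- all cooler than T_eff and within the range quoted above
\end{verbatim}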
\subsection{Spot radius}
The distributions of spot radius from all the transits, resulting from three simulations with different constant spot intensities ($0.3$, $0.5$, and $0.665$ of the central intensity $I_c$), are shown in Figure~\ref{rad}. As can be seen from the histograms, as the constant spot intensity increases (or conversely its contrast decreases), the resulting radius also increases. This occurs in order to keep the spot flux the same.
The models with fixed spot intensity at $0.3$, $0.5$, and $0.665 I_c$ resulted in spots with average radius of $0.34 \pm 0.10\ R_p$, $0.41 \pm 0.13\ R_p$, and $0.50 \pm 0.17\ R_p$, respectively. The 0.5 $R_p$ value assumed in the spot model with fixed radius agrees with the average radius of spots with $0.665 I_c$, which interestingly is the mean intensity of sunspots \cite{lanza09}.
The results show that the radius of the modeled spots varies from 0.2 to 0.7 $R_p$. Assuming a planet of $1.465 R_{Jup}$, this implies spots with diameters of 40 to 150 Mm, with a mean value of about 100 Mm.
\begin{figure}
\centering
\includegraphics[width=8cm]{radius.eps}
\caption{Results from the model fit for all transits: distributions of the spot radius in units of planet radius for three different fixed intensities as a fraction of the central intensity: 0.3 $I_c$ (top), 0.5 $I_c$ (middle), and 0.665 $I_c$(bottom). }
\label{rad}
\end{figure}
\subsection{Stellar surface area covered by spots}
To estimate the area covered by spots on the surface of the star, only the significant spots should be taken into account. By significant we mean spots with a contrast larger than 10\% of the stellar central intensity or a radius larger than $0.1 R_p$, depending on the model.
According to this criterion, the number of spots on the surface of the star during each transit varied from 2 to 9, with an average of 7 (for the constant radius model) or 8 (for the model with constant spot contrast) spots per transit. The transit with the smallest number of spots corresponds to the 32nd transit discussed before. Histograms of the number of spots per transit are shown in the two top panels of Figure~\ref{area} for models with spots of constant radius (left) and intensity (right).
\begin{figure}
\centering
\includegraphics[width=8cm]{area.eps}
\caption{Top: Distribution of the number of spots detected on each planetary transit for both the constant radius, 0.5 $R_p$ (left), and intensity 0.5 $I_c$ (right) spot models. Bottom: Histograms of the stellar surface area covered by spots within the transit latitudes for each model (constant radius and intensity, left and right panels, respectively). }
\label{area}
\end{figure}
The mean surface area covered by the spots during each transit is the sum of the area of all spots detected in that transit divided by the total area occulted by the passage of the planet (see the diagram on Figure~\ref{diag2}).
The total area, $A_{tot}$, of the transit band occulted by the planet is calculated as:
\begin{eqnarray}
A_{tot} = 2 R_p {7 \over 9} \pi \ {(x_1 + x_2) \over 2} R_s\\
{\rm where} \quad x_1 = \sqrt{1-\left[ \sin\alpha - {R_p \over R_s}\right]^2} \\
x_2 = \sqrt{ 1-\left[ \sin\alpha + {R_p \over R_s}\right]^2}
\end{eqnarray}
\noindent The $7/9$ factor arises because in the fit, only the spots between longitudes $-70^\circ$ and $+70^\circ$ are considered due to the difficulties in fitting spots too close to the limb where the light curve is very steep.
\begin{figure}
\centering
\includegraphics[width=6cm]{fig6.eps}
\caption{Diagram of the total area of the star occulted by the planet transit, represented by the hatched region.}
\label{diag2}
\end{figure}
The stellar surface area covered by spots on each transit, taking into account only the area of the transit band computed from the above equation, is shown in the bottom panels of Figure~\ref{area} for both models.
For the model with constant radius (bottom left), the spot surface area for each transit is the product of the number of spots on that transit multiplied by $\pi (0.5 R_p)^2$. On the other hand, for the model with constant spot intensity, the star surface area covered by spots is the sum of the area of all spots, that is, $ \Sigma \pi R_{spot}^2$, where $R_{spot}$ is the radius of the spot obtained from the fits.
The average values of the stellar surface area covered by spots are $16 \pm 3$\% and $13 \pm 5$\% for the models with constant radius ($0.5 R_p$) and intensity ($0.5 I_c$), respectively.
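A direct evaluation of the area formula above, for the constant-radius model, can be sketched as follows; the spot counts used in the example are illustrative.
\begin{verbatim}
# Sketch: area of the transit band probed by the planet (between longitudes of
# +/- 70 deg) and the fraction of it covered by spots in the constant-radius model.
import numpy as np

def transit_band_area(r_p=0.172, lat_deg=14.6):
    """Occulted band area in units of R_s^2."""
    s = np.sin(np.radians(lat_deg))
    x1 = np.sqrt(1.0 - (s - r_p)**2)
    x2 = np.sqrt(1.0 - (s + r_p)**2)
    return 2.0 * r_p * (7.0/9.0) * np.pi * (x1 + x2) / 2.0

def coverage(n_spots, r_spot, r_p=0.172, lat_deg=14.6):
    """Fraction of the transit band covered by n_spots circular spots of radius
    r_spot (in units of the planet radius)."""
    return n_spots * np.pi * (r_spot * r_p)**2 / transit_band_area(r_p, lat_deg)

print(coverage(1, 0.5))   # each 0.5 R_p spot covers ~3% of the band
print(coverage(6, 0.5))   # 5-7 such spots give ~15-20%, of the order quoted above
\end{verbatim}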
Because the data are only sensitive to the flux decrease caused by the presence of spots, different values of the spot intensity will produce fits with spots of different radii. The stellar area covered by spots was also calculated for the models with different intensities ($0.3$, $0.5$, and $0.665 I_c$) discussed in the previous subsection.
A plot of the mean stellar area coverage as a function of the fixed spot intensity for each run is shown in Figure~\ref{marea}.
Also plotted on the figure as an asterisk is the area coverage obtained from the model with fixed radius at $0.5 R_p$, where the mean intensity of $0.60 I_c$ was considered.
As can be seen from the figure, the area covered by spots within the transit latitudes increases as the spot intensity increases, varying, on average, between 10 and 20\%. This occurs because to fit the same variation in flux, a hotter spot (less contrast) needs to be larger, since the occulted area of the stellar surface is not so dark in this case. In summary, to account for the total decrease in light of the star due to the presence of hotter spots, one needs larger spots. This same trend was seen in the results of \cite{wolter09} from modelling of a single spot.
\begin{figure}
\centering
\includegraphics[width=6cm]{area2.eps}
\caption{Mean stellar surface area covered by spots within the transit latitudes as a function of fixed spot intensity. }
\label{marea}
\end{figure}
\section{Discussion and conclusions}
CoRoT-2 star is a young and reasonably active star. The presence of spots on the surface of the star can influence the determination of the orbital parameters of a planet in orbit by distorting the transit light curve in two ways \citep{silva-valio10}. One is the presence of spots on the limb of the star which will cause the transit duration to be shorter than it really is.
The other distortion is to make the transit shallower if there are many spots on the surface of the star. This would cause the planet radius estimate to be smaller than its real value. The latter effect was observed in the dataset analyzed here, where the spot model yields a radius of 0.172 $R_{star}$ instead of the 0.1667 $R_{star}$ listed in \cite{alonso08}. In this case, we do not believe that this represents a real difference in the planet radius, but rather an artifact of the spot distribution on the surface of the star at a given time.
Here the star was modeled as having up to 9 round spots at any given time on its surface at fixed positions (latitude and longitude) during the uninterrupted 134 days of observation by the CoRoT satellite. For each transit, the longitudes of the spots were obtained from a fit of the model to the data. The other free parameter obtained from the fit was either the spot radius or its intensity.
Two fitting approaches were performed on the transit light curves of CoRoT-2: the first one considered the radius of the spots to be fixed at 0.5 $R_p$, the second one kept the spot intensity at a constant value of the stellar central intensity, $I_c$. In the second approach, three different values of the fixed spot intensity were considered, one of them being the value of 0.665 $I_c$ used in \cite{lanza09}.
On every transit there were, on average, 7-8 spots on the visible stellar hemisphere within longitudes of $\pm 70^\circ$.
Despite the two methods fixing different spot parameters, either the radius or the intensity, the results obtained for the spot characteristics were very similar. For example, the longitudes are approximately the same and the spot surface area coverages are compatible. Also, the model with spots of fixed intensity at $0.665 I_c$ yields a mean radius of $0.51 \pm 0.18\ R_p$, which is consistent with the mean intensity of $0.60 \pm 0.19\ I_c$ obtained from the model with spots of constant radius, $0.5 R_p$.
This mean intensity is close to the value of $0.665\ I_c$ used by \cite{lanza09} which is the same as the sunspot bolometric contrast \citep{chapman94}.
This agreement between the various approaches shows the robustness of the model.
From the method of spots with fixed radius but varying intensities, the mean temperature of the spots was estimated as $4600 \pm 400$ K by considering blackbody emission for both the stellar photosphere and the spots. These spots are about 1000 K cooler than the surrounding photosphere. On the other hand, the runs of spots with varying radius but fixed intensity of 0.3, 0.5, and 0.665 $I_c$ yield mean radii of 0.35, 0.41, and 0.50 $R_p$, respectively. The increase in radius means that darker spots (smaller intensity) are smaller than brighter ones (intensities close to $I_c$).
\cite{wolter09} modeled a single spot on one transit (here transit 54 of Figure~\ref{all}). The authors modeled the spot with different intensities, analogously to our procedure, and obtained a spot radius of $4.8^o$ (in degrees of stellar surface) for $0.3 I_c$. This specific feature, a ``bump'' in the transit light curve, was fit in our model by two spots at longitudes of 3 and 18$^o$, with radii of $0.43$ and $0.39 R_p$ for the model of constant spot intensity of $0.3 I_c$. These radii are equivalent to $4.2$ and $3.9^o$, very similar to the results obtained by \cite{wolter09}. Moreover, \cite{wolter09} also confirm that spots with smaller contrast (higher intensity relative to the photosphere) need to be larger.
The spots on CoRoT-2, of the order of $\sim$100,000 km, are much larger than sunspots, about 10 times the size of a large sunspot (10 Mm).
The mean surface area of the star covered by spots within the transit latitudes is about 10-20\%. This is larger than the 7 to 9\% of the total spotted area found by \cite{lanza09}. However, those values were estimated considering the whole star, that is, including the polar areas, where there are no spots in the case of the Sun. The values obtained here refer only to the transit latitudes, which span approximately $20^\circ$ and are close to the equator. In this case, these latitudes coincide with the so-called royal latitudes of the Sun, where most sunspots occur.
It was reassuring to see that the results from both methods agreed very well with each other and especially with those of the out-of-transit data analysis \cite{lanza09}.
Long term observations such as the one provided by CoRoT are paramount to understand the physics of starspots. The model applied here to the CoRoT-2 data is capable of obtaining the physical properties of spots, with the advantage of following their temporal evolution, which is done in an accompanying paper \citep{valio-lanza10}. It will be very interesting to perform similar analyses on data from other stars with planetary transits observed by the CoRoT satellite, especially for stars which are not solar-like.
\begin{acknowledgements}
We would like to thank everyone involved in the planning and operation of the CoRoT satellite which made these observations possible. A.S.V. acknowledges partial financial support from the Brazilian agency FAPESP (grant number 2006/50654-3).
\end{acknowledgements}
\section{Introduction}
The purpose of this paper is to give a reasonably self-contained account of some key geometric features of a class of (nonlinear) scalar second order hyperbolic partial dif\/ferential equations (PDE) in the plane (i.e.\ in two independent variables) that has received surprisingly very little attention in the literature, namely hyperbolic PDE of {\em generic} (also known as {\em class 7-7}) type. Even this terminology is not well-known, and so it deserves some clarif\/ication.
In the geometric study of dif\/ferential equations, there is a natural notion of equivalence associated with the pseudo-group of local point transformations, i.e.\ local dif\/feomorphisms which mix the independent and dependent variables. Another natural (but coarser) notion is to def\/ine equivalence up to the larger pseudo-group of local contact transformations and one of the principal goals of the geometric theory is to f\/ind invariants to distinguish dif\/ferent contact-equivalence classes. Restricting now (and for the remainder of this paper) to scalar second order PDE in the plane, we have that given certain nondegeneracy conditions (i.e.\ one can locally solve the equation for one of the highest derivatives), there is a contact-invariant trichotomy into equations of elliptic, parabolic and hyperbolic type. In the quasi-linear case, invariance of these classes under point transformations appears in \cite{CourantHilbert}. (Inequivalent normal forms are derived in each case.) An elegant geometric proof of invariance under contact transformations in the general case was given by R.B. Gardner \cite{Gardner1969-char}. In the hyperbolic case, there exist two characteristic subsystems which give rise to a f\/iner contact-invariant trichotomy into equations of \MA/ (class 6-6), Goursat (class 6-7), and generic (class 7-7) type. While this was known to Vranceanu and almost certainly to E.~Cartan and Lie, a modern exposition of these ideas f\/irst appeared in \cite{GK1993}. To keep our exposition as self-contained as possible, we include the details of these classif\/ications in this paper. For hyperbolic equations given in the form $z_{xx} = f(x,y,z,z_x,z_y,z_{xy},z_{yy})$,
the (relative) invariants characterizing the three types of hyperbolic equations were calculated parametrically by Vranceanu (c.f.\ the $B$, $C$ invariants in \cite{Vranceanu1940}). For a general equation $F(x,y,z,z_x,z_y,z_{xx},z_{xy},z_{yy}) = 0$, these invariants appeared in Chapter~9 of Jur\'a\v{s}' thesis in his characterization of the \MA/ class (c.f.\ $M_\sigma$, $M_\tau$ in \cite{Juras1997}). Our derivation of these invariants (labelled $I_1$, $I_2$ in this article) is quite dif\/ferent and the novelty in our exposition (see Theorem~\ref{thm:hyp-contact-inv}) is obtaining simpler expressions expressed in terms of certain determinants. Moreover, we use these invariants to give general examples of hyperbolic equations of Goursat and generic type (see Table~\ref{table:hyp-examples}).
Hyperbolic equations of \MA/ type have been well-studied in the literature from a geometric point of view (e.g.\ see \cite{Lepage1929, Lychagin1979, Morimoto1979, BGH1-1995, BGH2-1995, Kruglikov1999, Biesecker2003, KLR2007, MVY2007} and references therein). This class of equations includes the \MA/, wave, Liouville, Klein--Gordon and general $f$-Gordon equations. At the present time and to the best of our knowledge, there exists only one paper in the literature that has been devoted to the study of hyperbolic equations of generic type. This paper, {\em ``La g\'eom\'etrisation des \'equations aux d\'eriv\'ees partielles du second ordre''} \cite{Vranceanu1937}, was published by Vranceanu in 1937. Despite its appearance over 70 years ago, and much attention having been focused on applications of Cartan's equivalence method in the geometric theory of PDE, very few references to \cite{Vranceanu1937} exist. Currently, the paper does not appear on MathSciNet; the only reference to it by internet search engines appears on Zentralblatt Math.
In \cite{Vranceanu1937}, Vranceanu uses the exterior calculus and Cartan's method of equivalence to study generic hyperbolic equations. One of the most intriguing results of the paper is that all equations of generic type admit at most a {\em nine}-dimensional local Lie group of (contact) symmetries. This is in stark contrast to the \MA/ class, where the wave equation is well-known to admit an inf\/inite-dimensional symmetry group. Vranceanu is able to isolate the correspon\-ding maximally symmetric structure equations as well as some submaximally symmetric structures. Furthermore, he is able to integrate these abstract structure equations to obtain an explicit parametrization of the corresponding coframe, leading to normal forms for the contact-equivalence classes. As any practitioner of the Cartan equivalence method can probably attest, this is an impressive computational feat. Nevertheless, as in the style of Cartan's writings, Vranceanu's arguments are at times dif\/f\/icult to decipher, hypotheses are not clearly stated or are dif\/f\/icult to discern amidst the quite lengthy calculations, and some of his results are not quite correct. In this paper, we reexamine, clarify, and sharpen some of Vranceanu's results with the perspective of our modern understanding of the geometric theory of dif\/ferential equations through exterior dif\/ferential systems and Cartan's equivalence method. The hope is that this exposition will provide a clearer understanding of the geometry of this class of equations for a~contemporary audience.
In Section \ref{background} we recall the contact-invariant classif\/ication of second order scalar PDE into elliptic, parabolic, and hyperbolic classes based on invariants of a (conformal class of a) symmetric bilinear form, and def\/ine the $M_1$ and $M_2$ characteristics in the hyperbolic case. This leads to a preliminary set of structure equations for hyperbolic equations. In Section \ref{sec:hyperbolic-subclassify}, the structure equations are further tightened, and using them we show how the class of $M_1$ and $M_2$ leads to the f\/iner classif\/ication into equations of \MA/, Goursat, and generic types. In Theorem \ref{thm:hyp-contact-inv}, these subclasses of hyperbolic equations are characterized by means of the relative invariants $I_1$, $I_2$. We then restrict to the generic case and derive the generic hyperbolic structure equations. We note that in Vranceanu's derivation of the generic hyperbolic structure equations (c.f.\ \eqref{StrEqns123} in this paper), the $\epsilon={\rm sgn}(I_1I_2) = \pm 1$ contact invariant was overlooked. This carries through to the normal forms for the contact-equivalence classes. Section \ref{str-grp-eq-prb} formulates the equivalence problem for generic hyperbolic equations and recalls some facts from Cartan's theory of $G$-structures applied to our situation. The structure group that we consider here is strictly larger than Vranceanu's, dif\/fering by certain discrete components. These naturally arise when considering automorphisms which interchange the $M_1$ and $M_2$ characteristics. Both Vranceanu and Gardner--Kamran consider only automorphisms which preserve each of $M_1$ and $M_2$. The nine-dimensional bound on the symmetry group of any generic hyperbolic equation is established in Section \ref{9d-sym}.
In Section \ref{complete-str-eqs}, we give a clear enumeration of several generic hyperbolic structures which result from Vranceanu's reduction of the structure equations. These include the aforementioned maximally symmetric (nine-dimensional) structure equations as well as some new submaximally symmetric (eight and seven-dimensional) structures including some with nonconstant torsion. (Vranceanu derived two eight-dimensional structures with constant torsion in addition to the maximally symmetric structures.) Finally, Section \ref{maxsym-case} gives a detailed account of the maximally symmetric case. Integration of the abstract maximally symmetric structure equations leads to the contact-equivalence classes of maximally symmetric generic hyperbolic PDE being parametrized by $(\epsilon, a) \in \{ \pm 1 \} \times (0,1]$, with normal forms given by
\begin{gather*}
(\epsilon, a) = (1,1) : \quad 3z_{xx}(z_{yy})^3 + 1 = 0,\\
(\epsilon, a) \neq (1,1) : \quad (\epsilon + a)^2 \left(2 z_{xy} - (z_{yy})^2 \right)^3 + \epsilon a \left( 3z_{xx} - 6z_{xy}z_{yy} + 2(z_{yy})^3 \right)^2 = 0.
\end{gather*}
The isomorphism type of the symmetry algebra for the second equation is {\em independent of $(\epsilon,a)$} and is non-isomorphic to the symmetry algebra of the f\/irst equation. Thus, there are precisely {\em two} non-isomorphic (abstract) symmetry algebras that arise for maximally symmetric generic hyperbolic equations. These equations are further distinguished in a contact-invariant way using a contact invariant $\Delta_1$ and a relative contact invariant $\Delta_2$. Both equations satisfy $\Delta_1=0$, but the former satisf\/ies $\Delta_2=0$ while the latter satisf\/ies $\Delta_2 \neq 0$.
Let us point out two additional points of discrepancy with Vranceanu's calculations: (1) the restriction of the range of the parameter $a$ to $(0,1]$, and (2) a missing factor of 2 for the $z_{xy}z_{yy}$ term in the second equation above. The f\/irst point is a consequence of the aforementioned larger structure group used in our formulation of the equivalence problem. The additional discrete factors lead to identif\/ications of dif\/ferent parameter values. The second point is minor; the error was introduced by Vranceanu only in the last step of his derivation. To give added justif\/ication to the calculation of the normal forms, we give explicitly the nine-dimensional symmetry algebras for the two equations listed above. Both equations admit the symmetries
\begin{gather*}
X_1 = \parder{x}, \qquad X_2 =\parder{y}, \qquad X_3 =\parder{z}, \qquad X_4 =x\parder{z}, \qquad X_5 =y\parder{z},\\
X_6=x\parder{x} + y\parder{y} + 2z\parder{z}.
\end{gather*}
The former admits the additional symmetries
\begin{gather*}
X_7 = xy\parder{z}, \qquad
X_8 = 2y\parder{y} + 3z\parder{z}, \qquad
X_9 = x^2\parder{x} + xz\parder{z},
\end{gather*}
while the latter admits the additional symmetries
\begin{gather*}
X_7 = y\parder{y} + 3z\parder{z},
\qquad X_8 = x \parder{y} - \frac{1}{2} y^2 \parder{z},
\qquad X_9= x^2 \parder{x} + xy \parder{y} + \left(xz-\frac{1}{6} y^3\right) \parder{z}.
\end{gather*}
The calculation of these symmetries (especially in the latter case) is in general a nontrivial task considering the complexity of the equation.
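As an independent check on these generators (the check is ours and not part of Vranceanu's derivation), such symmetries can be verif\/ied mechanically with a computer algebra system. The following minimal SymPy sketch applies the standard second prolongation formula, written in the Monge coordinates of Section~\ref{background}, to verify that $X_9 = x^2\parder{x} + xz\parder{z}$ is a symmetry of the f\/irst normal form $3z_{xx}(z_{yy})^3+1=0$; all variable and function names in the sketch are ours.
\begin{verbatim}
import sympy as sp

# Monge coordinates (x,y,z,p,q,r,s,t) on J^2, plus the third-order
# coordinates needed for the second prolongation.
x, y, z, p, q, r, s, t = sp.symbols('x y z p q r s t')
zxxx, zxxy, zxyy, zyyy = sp.symbols('z_xxx z_xxy z_xyy z_yyy')

def Dx(f):  # total x-derivative, truncated at third order
    return (sp.diff(f, x) + p*sp.diff(f, z) + r*sp.diff(f, p) + s*sp.diff(f, q)
            + zxxx*sp.diff(f, r) + zxxy*sp.diff(f, s) + zxyy*sp.diff(f, t))

def Dy(f):  # total y-derivative, truncated at third order
    return (sp.diff(f, y) + q*sp.diff(f, z) + s*sp.diff(f, p) + t*sp.diff(f, q)
            + zxxy*sp.diff(f, r) + zxyy*sp.diff(f, s) + zyyy*sp.diff(f, t))

xi, eta, phi = x**2, sp.Integer(0), x*z   # the candidate symmetry X_9
Q = phi - xi*p - eta*q                    # its characteristic

# Coefficients of the second prolongation on d/dp, d/dq, d/dr, d/ds, d/dt
phi_p = Dx(Q) + xi*r + eta*s
phi_q = Dy(Q) + xi*s + eta*t
phi_r = Dx(Dx(Q)) + xi*zxxx + eta*zxxy
phi_s = Dx(Dy(Q)) + xi*zxxy + eta*zxyy
phi_t = Dy(Dy(Q)) + xi*zxyy + eta*zyyy

F = 3*r*t**3 + 1   # the first normal form
prXF = (xi*sp.diff(F, x) + eta*sp.diff(F, y) + phi*sp.diff(F, z)
        + phi_p*sp.diff(F, p) + phi_q*sp.diff(F, q)
        + phi_r*sp.diff(F, r) + phi_s*sp.diff(F, s) + phi_t*sp.diff(F, t))

print(sp.simplify(prXF))   # prints 0, so pr^(2)X_9(F) vanishes (on F=0 in particular)
\end{verbatim}
The remaining generators can be checked in the same way, with $(\xi,\eta,\varphi)$ replaced accordingly.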
Numerous appendices provide the details of the proofs of the main statements in the body of this article.
All considerations in this paper are local, and we will work in the smooth category. We use the Einstein summation convention throughout. We will make the convention of using braces enclosing 1-forms to denote the corresponding submodule generated by those 1-forms. In general, we will abuse notation and not distinguish between a submodule of 1-forms and its corresponding algebraic ideal (i.e.\ with respect to the wedge product) in the exterior algebra. This is useful when stating structure equations, e.g.\ $d\omega^1 \equiv 0\; \mod I_F$.
\section{Contact-equivalence of PDE}
\label{background}
Consider a scalar second order PDE
\begin{equation}
F\left(x,y,z, \frac{\partial z}{\partial x}, \frac{\partial z}{\partial y}, \frac{\partial^2 z}{\partial x^2}, \frac{\partial^2 z}{\partial x \partial y}, \frac{\partial^2 z}{\partial y^2}\right) = 0
\label{pde}
\end{equation}
in two independent variables $x$, $y$ and one dependent variable $z$. A natural geometrical setting for~\eqref{pde} is the space of 2-jets
$J^2(\mathbb{R}^2,\mathbb{R})$ with standard local coordinates $(x,y,z,p,q,r,s,t)$ ({\em Monge coordinates}), and the equation above yields a locus
\begin{gather*}
L_F = \left\{ (x,y,z,p,q,r,s,t) \in J^2(\mathbb{R}^2,\mathbb{R}): F(x,y,z,p,q,r,s,t) = 0 \right\}.
\end{gather*}
We assume that $L_F$ is the image of an open subset $\Sigma_7 \subset \mathbb{R}^7$ under a smooth map $i_F : \Sigma_7 \ra J^2(\mathbb{R}^2,\mathbb{R})$.
\begin{definition} We will say that $i_F$ is a {\em nondegenerate parametrization} of the equation $F=0$ if~$i_F$ has maximal rank and $L_F$ is everywhere transverse to the f\/ibers of the natural projection
\begin{gather*}
\pi^2_1 : J^2(\mathbb{R}^2,\mathbb{R}) \ra J^1(\mathbb{R}^2,\mathbb{R}),
\end{gather*}
i.e.\ ${\rm im}(i_{F*}) + \ker(\pi^2_{1\,*}) = TJ^2(\mathbb{R}^2,\mathbb{R})$.
\end{definition}
We will always work with nondegenerate parametrizations in this paper. By the transversality assumption $(F_r,F_s,F_t) \neq 0$, and so by the implicit function theorem, one can locally solve \eqref{pde} for one of the highest-order derivatives.
Since ${\rm im}((\pi^2_1 \circ i_F)_*) = TJ^1(\mathbb{R}^2,\mathbb{R})$, then $(\pi^2_1 \circ i_F)^* (dx \wedge dy \wedge dz \wedge dp \wedge dq)\neq 0$ and so the standard coordinates $(x,y,z,p,q)$ on $J^1(\mathbb{R}^2,\mathbb{R})$ along with two additional coordinates $u$, $v$ may be taken as coordinates on $\Sigma_7$. Thus, without loss of generality, we may assume the parametrization $i_F$ has the form $i_F(x,y,z,p,q,u,v) = (x,y,z,p,q,r,s,t)$, expressing $r$, $s$, $t$ as functions of $(x,y,z,p,q,u,v)$.
The contact system $\contact{2}$ on $J^2(\mathbb{R}^2,\mathbb{R})$ is generated by the standard contact forms
\begin{gather*}
\theta^1 = dz - pdx - qdy, \qquad \theta^2 = dp - rdx - sdy, \qquad \theta^3 = dq - sdx - tdy
\end{gather*}
and pulling back by $i_F$, we obtain a Pfaf\/f\/ian system (i.e.\ generated by 1-forms) $I_F$ on $\Sigma_7$,
\begin{gather*}
I_F = i_F^*(\contact{2}) = \{ \omega^1, \omega^2, \omega^3 \},
\end{gather*}
where $\omega^\alpha = i_F^* \theta^\alpha$. There is a correspondence between local solutions of \eqref{pde} and local integral manifolds of $I_F$.
\begin{definition}
The equations \eqref{pde} and
\begin{gather}
\bar{F}\left(\bar{x},\bar{y},\bar{z}, \frac{\partial \bar{z}}{\partial \bar{x}}, \frac{\partial \bar{z}}{\partial \bar{y}}, \frac{\partial^2 \bar{z}}{\partial \bar{x}^2}, \frac{\partial^2 \bar{z}}{\partial \bar{x} \partial \bar{y}}, \frac{\partial^2 \bar{z}}{\partial \bar{y}^2}\right) = 0,
\qquad \mbox{(with} \ \ i_{\bar{F}} : \bar\Sigma_7 \ra J^2(\mathbb{R}^2,\mathbb{R}) )\label{pde-Fbar}
\end{gather}
are {\em contact-equivalent} if there exists a local dif\/feomorphism $\phi : \Sigma_7 \ra \bar\Sigma_7$ such that $\phi^*I_{\bar{F}} = I_F$. The collection of all such maps will be denoted $\contactmaps{}$. A {\em contact symmetry} is a~self-equivalence.
\end{definition}
\begin{remark} More precisely, the above def\/inition refers to {\em internal} contact-equivalence. There is another natural notion of equivalence: namely, \eqref{pde} and \eqref{pde-Fbar} are {\em externally} contact-equivalent if there exists a local dif\/feomorphism $\rho : J^2(\mathbb{R}^2,\mathbb{R}) \ra J^2(\mathbb{R}^2,\mathbb{R})$ that restricts to a local dif\/feomorphism $\tilde\rho : i_F(\Sigma_7) \ra i_{\bar{F}}(\bar\Sigma_7)$ and preserves the contact system, i.e.\ $\rho^*(\contact{2}) = \contact{2}$. It is clear that any external equivalence induces a corresponding internal equivalence, but in general the converse need not hold. The dif\/ference between these two natural notions of equivalence is in general quite subtle and has been investigated in detail in \cite{AKO1993}. A corollary of their results (c.f.\ Theorem 18 therein) is that for \eqref{pde}, under the maximal rank and transversality conditions, any internal equivalence extends to an external equivalence, and thus the correspondence is one-to-one.
\end{remark}
As shown by Gardner \cite{Gardner1969-char}, the (pointwise) classif\/ication of \eqref{pde} into mutually exclusive elliptic, parabolic and hyperbolic classes is in fact a contact-invariant classif\/ication which arises through invariants (i.e.\ rank and index) of a (conformal class of a) symmetric $C^\infty(\Sigma_7)$-bilinear form $\langle \cdot , \cdot \rangle_7$ on $I_F$, namely
\begin{gather}
\langle \varphi, \psi \rangle_7 {\rm Vol}_{\Sigma_7} := d\varphi \wedge d\psi \wedge \omega^1 \wedge \omega^2 \wedge \omega^3,
\label{bilinear-form-defn}
\end{gather}
where ${\rm Vol}_{\Sigma_7}$ denotes any volume form on $\Sigma_7$.
Since $i_F^*$ is surjective, there exists a 7-form $\nu$ on $J^2(\mathbb{R}^2,\mathbb{R})$ such that $i_F^* \nu = {\rm Vol}_{\Sigma_7}$, and so
\begin{gather*}
\langle \varphi, \psi \rangle_7 i_F^* \nu = i_F^*(d\tilde\varphi \wedge d\tilde\psi \wedge \theta^1 \wedge \theta^2 \wedge \theta^3 ),
\end{gather*}
where $\tilde\varphi$ and $\tilde\psi$ are any forms on $J^2(\mathbb{R}^2,\mathbb{R})$ such that $\varphi = i_F^* \tilde\varphi$ and $\psi = i_F^* \tilde\psi$.
Since $i_{F*} : T\Sigma_7 \ra TJ^2(\mathbb{R}^2,\mathbb{R})$ is rank 7 (as is $i_F^* : T^*J^2(\mathbb{R}^2,\mathbb{R}) \ra T^*\Sigma_7$) and $i_F^*dF = 0$, then $\ker(i_F^*) = \{ dF \}$, and
\begin{gather}
i_F^* \eta = 0 \qquad \mbox{if\/f} \qquad \eta \wedge dF = 0, \qquad \forall \; \eta \in \Omega^*(J^2(\mathbb{R}^2,\mathbb{R})).
\label{dF-lemma}
\end{gather}
Consequently, letting ${\rm Vol}_{J^2(\mathbb{R}^2,\mathbb{R})} = \nu \wedge dF$, we see that \eqref{bilinear-form-defn} is equivalent to
\begin{gather*}
(\langle \varphi, \psi \rangle_7)_p ({\rm Vol}_{J^2(\mathbb{R}^2,\mathbb{R})} )_{i_F(p)} = (d\tilde\varphi \wedge d\tilde\psi \wedge \theta^1 \wedge \theta^2 \wedge \theta^3 \wedge dF)_{i_F(p)},
\end{gather*}
where $p \in \Sigma_7$.
This def\/inition is well-def\/ined: it is independent of the choice of $\tilde\varphi$ and $\tilde\psi$ so long as $\varphi = i_F^* \tilde\varphi$ and $\psi = i_F^* \tilde\psi$.
A computation in the basis $\omega^1$, $\omega^2$, $\omega^3$ reveals that a volume form may be chosen so that
\begin{gather}
(\langle \omega^\alpha, \omega^\beta \rangle_7)_p = \left( \begin{array}{ccc} 0 & 0 & 0\\ 0 & F_t & -\frac{1}{2} F_s\\ 0 & -\frac{1}{2} F_s & F_r \end{array} \right)_{i_F(p)}. \label{bilinear-form-matrix}
\end{gather}
Our assumption that $i_F$ be a nondegenerate parametrization (in particular, that $(F_r,F_s,F_t) \neq 0$ on the locus) implies that $\langle \cdot , \cdot \rangle_7$ cannot have rank zero.
Def\/ining
\begin{gather*}
\Delta = i_F^*\left(F_r F_t - \frac{1}{4} F_s{}^2\right),
\end{gather*}
we have the following mutually exclusive cases at each point $p \in \Sigma_7$:
\begin{table}[h] \centering
\caption{Contact-invariant classif\/ication of scalar second order PDE in the plane.}
\vspace{1mm}
$\begin{array}{|c|c|c|} \hline
\mbox{elliptic} & \mbox{parabolic} & \mbox{hyperbolic}\\ \hline \hline
\Delta(p) > 0 & \Delta(p) = 0 & \Delta(p) < 0 \\ \hline
\end{array}$
\end{table}
By the commutativity of pullbacks with $d$, it is clear that this classif\/ication is a priori contact-invariant. We remark that in the classical literature on the geometry of hyperbolic equations, the terminology {\em Monge cha\-rac\-teristics} appears. These are determined by the roots of the {\em cha\-rac\-teristic equation}
\begin{gather}
\lambda^2 - F_s \lambda + F_t F_r = 0. \label{char-eqn}
\end{gather}
The discriminant of this equation (with the coef\/f\/icients evaluated on $F=0$) is precisely $-4\Delta$, and so the elliptic, parabolic, and hyperbolic cases correspond to the existence of no real roots, a~double real root, and two distinct real roots respectively.
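For example, for the \MA/ equation $z_{xx}z_{yy} - (z_{xy})^2 = f(x,y)$ (c.f.\ Example~\ref{hyp-examples}), i.e.\ $F = rt - s^2 - f(x,y)$, we have $F_r = t$, $F_s = -2s$, $F_t = r$, and hence on the locus
\begin{gather*}
\Delta = i_F^*\left(F_r F_t - \tfrac{1}{4} F_s{}^2\right) = i_F^*\big(rt - s^2\big) = f,
\end{gather*}
so that this equation is elliptic where $f > 0$, parabolic where $f = 0$, and hyperbolic where $f < 0$.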
In the analysis to follow, all constructions for a PDE $F=0$ will implicitly be repeated for a~second PDE $\bar{F}=0$ (if present). We will concern ourselves exclusively with the hyperbolic case, that is, an open subset of $\Sigma_7$ on which $F=0$ is hyperbolic.
By the hyperbolicity condition, the two nonzero eigenvalues of $\langle \cdot, \cdot \rangle_7$ have opposite sign, and hence there exists a pair of rank two maximally isotropic subsystems $M_1$ and $M_2$ of $I_F$ at every point of consideration.
\begin{definition} Given hyperbolic PDE $F=0$ and $\bar{F}=0$, def\/ine
\begin{gather*}
\contactmaps{+} = \{ \phi \in \contactmaps{} : ~\phi^* \bar{M}_1 = M_1, ~\phi^*\bar{M}_2 = M_2 \},\\
\contactmaps{-} = \{ \phi \in \contactmaps{} : ~\phi^* \bar{M}_1 = M_2, ~\phi^*\bar{M}_2 = M_1 \}.
\end{gather*}
If $\bar{F} = F$, we take $\bar\Sigma_7=\Sigma_7$ and use the notation ${\rm Aut}(I_F) := \selfcontactmaps{}$, etc.
\end{definition}
\begin{remark}
Implicitly, given the Pfaf\/f\/ian system $I_F$ corresponding to a hyperbolic PDE $F=0$, we assume that a choice of labelling for the $M_1$ and $M_2$ characteristics has been made. This is of course not intrinsic. All of our f\/inal results will not depend on this choice.
\end{remark}
Both Vranceanu \cite{Vranceanu1937} and Gardner--Kamran \cite{GK1993} consider only local dif\/feomorphisms which preserve each of $M_1$ and $M_2$.
\begin{example}
For the wave equation written as $z_{xy}=0$, the pullbacks of the contact forms on $J^2(\mathbb{R}^2,\mathbb{R})$ to the parameter space $\Sigma_7$ with coordinates $(x,y,z,p,q,r,t)$ are
\begin{gather*}
\omega^1 = dz - pdx - qdy, \qquad
\omega^2 = dp - r dx, \qquad
\omega^3 = dq - t dy
\end{gather*}
and
\begin{gather*}
(\langle \omega^\alpha, \omega^\beta \rangle_7)_p = \left( \begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & -\frac{1}{2} \\ 0 & -\frac{1}{2} & 0 \end{array} \right).
\end{gather*}
Thus, $M_1 = \{ \omega^1, \omega^2 \}$ and $M_2 = \{ \omega^1, \omega^3 \}$. Interchanging the independent variables induces $\phi_0 : \Sigma_7 \ra \Sigma_7$, $(x,y,z,p,q,r,t) \mapsto (y,x,z,q,p,t,r)$, which satisf\/ies
\begin{gather*}
\phi_0^*\omega^1 = \omega^1, \qquad
\phi_0^*\omega^2 = \omega^3, \qquad
\phi_0^*\omega^3 = \omega^2,
\end{gather*}
and hence $\phi_0 \in {\rm Aut}^-(I_F)$.
\end{example}
The hyperbolicity condition implies that there exists a local basis of $I_F$ which by abuse of notation we also denote $\omega^1$, $\omega^2$, $\omega^3$ such that
\begin{gather*}
M_1 = \{ \omega^1, \omega^2 \}, \qquad
M_2 = \{ \omega^1, \omega^3 \}
\end{gather*}
and the matrix representing $\langle \cdot , \cdot \rangle_7$ is in Witt normal form
\begin{gather*}
(\langle \omega^\alpha, \omega^\beta \rangle_7)_p = \left( \begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{array} \right).
\end{gather*}
\begin{lemma}[{\bf Preliminary hyperbolic structure equations}]
There exists a (local) coframe $\bm\omega = \{ \omega^i \}_{i=1}^7$ on $\Sigma_7$ such that $I_F = \{ \omega^1, \omega^2, \omega^3 \}$ and
\begin{gather}
d\omega^1 \equiv 0, \nonumber\\
d\omega^2 \equiv \omega^4 \wedge \omega^5, \qquad\mod I_F, \label{hyp-str-eqns}\\
d\omega^3 \equiv \omega^6 \wedge \omega^7,
\nonumber
\end{gather}
with
\begin{gather*}
\omega^1 \wedge \omega^2 \wedge \omega^3 \wedge \omega^4 \wedge \omega^5 \wedge \omega^6 \wedge \omega^7 \neq 0.
\end{gather*}
\end{lemma}
\begin{proof}
In Theorem 1.7 in \cite{BCGGG1991}, an algebraic normal form for a 2-form $\Omega$ is given. In particular, if $\Omega \wedge \Omega =0$, then $\Omega = \sigma^1 \wedge \sigma^2$ is decomposable. This statement is also true in a relative sense: if $\Omega \wedge \Omega \equiv 0 \mod I$, then $\Omega \equiv \sigma^1 \wedge \sigma^2 \mod I$, where $I$ is an ideal in the exterior algebra.
Using this fact, let us deduce consequences of the Witt normal form. By def\/inition of $\langle \cdot, \cdot \rangle_7$, we have (taking congruences below modulo $I_F$)
\begin{align*}
&\langle \omega^2, \omega^2 \rangle_7 = 0 \quad\Leftrightarrow\quad d\omega^2 \wedge d\omega^2 \equiv 0 \quad\Leftrightarrow\quad d\omega^2 \equiv \omega^4 \wedge \omega^5, \\
&\langle \omega^3, \omega^3 \rangle_7 = 0 \quad\Leftrightarrow\quad d\omega^3 \wedge d\omega^3 \equiv 0 \quad\Leftrightarrow\quad d\omega^3 \equiv \omega^6 \wedge \omega^7, \\
& \langle \omega^2, \omega^3 \rangle_7 = 1 \quad\Leftrightarrow\quad
d\omega^2 \wedge d\omega^3 \wedge \omega^1 \wedge \omega^2 \wedge \omega^3 = \omega^4 \wedge \omega^5 \wedge \omega^6 \wedge \omega^7 \wedge \omega^1 \wedge \omega^2 \wedge \omega^3 \neq 0.
\end{align*}
Using $\langle \omega^1, \omega^2 \rangle_7 = \langle \omega^1, \omega^3 \rangle_7 = 0$, we have
\begin{align*}
0 &= d\omega^1 \wedge d\omega^2 \wedge \omega^1 \wedge \omega^2 \wedge \omega^3 = d\omega^1 \wedge \omega^4 \wedge \omega^5 \wedge \omega^1 \wedge \omega^2 \wedge \omega^3, \\
0 &= d\omega^1 \wedge d\omega^3 \wedge \omega^1 \wedge \omega^2 \wedge \omega^3 = d\omega^1 \wedge \omega^6 \wedge \omega^7 \wedge \omega^1 \wedge \omega^2 \wedge \omega^3,
\end{align*}
and thus $d\omega^1 \equiv 0$.
\end{proof}
Consequently, $\{ \omega^i \}_{i=1}^7$ is a (local) coframe on $\Sigma_7$, and the structure equations can be written
\begin{gather}
d\omega^i = \frac{1}{2} \gamma^i{}_{jk} \omega^j \wedge \omega^k.
\label{gamma-defn}
\end{gather}
\section{\MA/, Goursat and generic hyperbolic equations}
\label{sec:hyperbolic-subclassify}
The congruences appearing in the preliminary hyperbolic structure equations can be tightened with a more careful study of integrability conditions and further adaptations of the coframe. The details are provided in Appendix \ref{app-A}.
\begin{theorem}[{\bf Hyperbolic structure equations}] \label{general-hyp-str-eqns}
Given any hyperbolic equation $F=0$ with nondegenerate parametrization $i_F : \Sigma_7 \ra J^2(\mathbb{R}^2,\mathbb{R})$, there is an associated coframe $\bm\omega = \{ \omega^i \}_{i=1}^7$ on $\Sigma_7$ such that
\begin{enumerate}\itemsep=0pt
\item $I_F = \{ \omega^1, \omega^2, \omega^3 \}$, \ $
M_1 = \{ \omega^1, \omega^2 \}$, \ $
M_2 = \{ \omega^1, \omega^3 \}$.
\item We have the structure equations
\begin{alignat}{3}
& d\omega^1 \equiv \omega^3 \wedge \omega^6 + \omega^2 \wedge \omega^4 \quad && \mod \{ \omega^1 \},&\nonumber \\
& d\omega^2 \equiv \omega^4 \wedge \omega^5 + U_1 \omega^3 \wedge \omega^7 \quad && \mod \{ \omega^1,\omega^2 \}, & \label{U1U2-streq}\\
& d\omega^3 \equiv \omega^6 \wedge \omega^7 + U_2 \omega^2 \wedge \omega^5 \quad && \mod \{ \omega^1,\omega^3 \}&\nonumber
\end{alignat}
for some functions $U_1$, $U_2$ on $\Sigma_7$.
\end{enumerate}
\end{theorem}
A f\/iner contact-invariant classif\/ication of hyperbolic equations arises from the study of the {\em class} of $M_1$ and $M_2$. Let us recall some basic def\/initions.
\begin{definition}
Let $I$ be a Pfaf\/f\/ian system on a manifold $\Sigma$. Def\/ine the
\begin{enumerate}\itemsep=0pt
\item {\em Cauchy characteristic space} $\Char{I} = \{ X \in\mathfrak{X}(\Sigma) : X \in I^\perp,~ X \intprod dI \subset I \}$.
\item {\em Cartan system} ${\cal C}(I) = \Char{I}^\perp$. The {\em class} of $I$ is the rank of ${\cal C}(I)$ (as a $C^\infty(\Sigma)$-module).
\end{enumerate}
Here, $\perp$ refers to the annihilator submodule.
\end{definition}
The hyperbolic structure equations indicate that there are only two possibilities for the class of $M_1$ and $M_2$.
\begin{corollary} \label{class-cor} For $i=1,2$, $\class(M_i) = 6 \mbox{ or } 7$. Moreover, $\class(M_i)=6$ iff $U_i =0$.
\end{corollary}
\begin{proof} Let $\{ \Pder{}{i} \}_{i=1}^7$ denote the dual basis to $\{ \omega^i \}_{i=1}^7$. From \eqref{U1U2-streq}, we have
\begin{gather*}
\Char{M_1} \subset \left\{ \Pder{}{7} \right\}, \qquad \Char{M_2} \subset \left\{ \Pder{}{5} \right\}.
\end{gather*}
Moreover, $\class(M_1) = 6$ if\/f $\Pder{}{7} \in \Char{M_1}$ if\/f $U_1=0$. Similarly for $M_2$.
\end{proof}
Consequently, we obtain the subclassif\/ication of hyperbolic equations given in Table~\ref{table:hyp-eqns}.
\begin{table}[h]\centering
\caption{Contact-invariant classif\/ication of hyperbolic PDE.}
\label{table:hyp-eqns}
\vspace{1mm}
\begin{tabular}{|c|c|} \hline
Type & Contact-invariant classif\/ication\\ \hline\hline
\MA/ (6-6)& $\class(M_1)=\class(M_2)=6$\\
Goursat (6-7) & $\{ \class(M_1), \class(M_2) \} = \{ 6, 7\}$ \\
generic (7-7) & $\class(M_1)=\class(M_2)=7$ \\ \hline
\end{tabular}
\end{table}
\begin{example} \label{hyp-examples} We give some known examples of each type of hyperbolic equation:
\begin{itemize}\itemsep=0pt
\item \MA/: wave equation $z_{xy}=0$, Liouville equation $z_{xy} = e^z$, Klein--Gordon equation $z_{xy} = z$, or more generally the $f$-Gordon equation $z_{xy} = f(x,y,z,z_x,z_y)$, and \MA/ equation $z_{xx} z_{yy} - (z_{xy})^2 = f(x,y)$.
\item Goursat: $z_{xx} = f(z_{xy})$ where $f'' \neq 0$.
\item generic: $z_{xy} = \frac{1}{2} \sin(z_{xx})\cos(z_{yy})$, or $3z_{xx} (z_{yy})^3 + 1=0$.
\end{itemize}
\end{example}
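To illustrate, consider the last of these, $3z_{xx}(z_{yy})^3 + 1 = 0$, i.e.\ $F = 3rt^3 + 1$. Here $F_r = 3t^3$, $F_s = 0$, $F_t = 9rt^2$, and on the locus $rt^3 = -\frac{1}{3}$ (so in particular $t \neq 0$),
\begin{gather*}
\Delta = i_F^*\left(F_r F_t - \tfrac{1}{4} F_s{}^2\right) = i_F^*\big(27rt^5\big) = -9t^2 < 0,
\end{gather*}
so the equation is indeed hyperbolic; that it is moreover of generic type follows from Corollary~\ref{cor:hyp-examples} below, since $F$ is not an af\/f\/ine function of $r$, $t$.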
The terminology for class 6-6 equations is justif\/ied by the following result, known to Vran\-cea\-nu~\cite{Vranceanu1940}. We refer the reader to Gardner--Kamran \cite{GK1993} for a modern proof.
\begin{theorem} \label{thm:MA}
A second-order hyperbolic equation has $\class(M_i)=6$, $i=1,2$ if and only if its locus can be given by an equation of the form
\begin{gather*}
a ( z_{xx} z_{yy} - (z_{xy})^2 ) + b z_{xx} + 2c z_{xy} + d z_{yy} + e = 0,
\end{gather*}
where $a$, $b$, $c$, $d$, $e$ are functions of $x$, $y$, $z$, $z_x$, $z_y$.
\end{theorem}
The examples given above were obtained by constructing explicit coframes which realize the abstract structure equations given in Theorem~\ref{general-hyp-str-eqns}, in general a very tedious and equation-specif\/ic task. We state here two relative invariants $I_1$, $I_2$ (which are related to the two relative invariants $U_1$, $U_2$) whose vanishing/nonvanishing determines the type of any hyperbolic equation. Given any hyperbolic equation $F=0$, def\/ine
\begin{gather*}
\lambda_\pm = \frac{F_s}{2} \pm \sqrt{|\Delta|},
\end{gather*}
which are the roots of the characteristic equation~\eqref{char-eqn}.
Without loss of generality, we may assume that $F_s \geq 0$. (If not, take $\hat{F} = -F$ instead.) By the hyperbolicity assumption $\lambda_+ > 0$. The proof of the following theorem is given in Appendix~\ref{app:hyp-contact-inv}.
\begin{theorem}[{\bf Relative contact invariants for hyperbolic equations}] \label{thm:hyp-contact-inv}
Suppose that $F=0$ is a hyperbolic equation with $F_s \geq 0$. Let
\begin{gather*}
\tilde{I}_1 = \det\left( \begin{array}{ccc} F_r & F_s & F_t\\ \lambda_+ & F_t & 0\\
\left( \frac{F_t}{\lambda_+} \right)_r &
\left( \frac{F_t}{\lambda_+} \right)_s &
\left( \frac{F_t}{\lambda_+} \right)_t
\end{array} \right), \qquad
\tilde{I}_2 = \det\left( \begin{array}{ccc} 0 & F_r & \lambda_+ \\ F_r & F_s & F_t\\
\left( \frac{F_r}{\lambda_+} \right)_r &
\left( \frac{F_r}{\lambda_+} \right)_s &
\left( \frac{F_r}{\lambda_+} \right)_t
\end{array} \right),
\end{gather*}
and $I_i = i_F^* \tilde{I}_i$. Then we have the following classification of $F=0$:
\begin{center}
\begin{tabular}{|c|c|} \hline
Type & Contact-invariant classification\\ \hline\hline
\MA/ & $I_1=I_2=0$\\
Goursat & exactly one of $I_1$ or $I_2$ is zero\\
generic & $I_1I_2 \neq 0$ \\ \hline
\end{tabular}
\end{center}
Moreover, we have the scaling property: If $\phi$ is a function on $J^2(\mathbb{R}^2,\mathbb{R})$ such that $i_F^* \phi > 0$, then
\begin{gather*}
\hat{F} = \phi F \quad\Rightarrow\quad \hat{I}_i = (i_F^*\phi)^2 I_i, \quad i=1,2.
\end{gather*}
\end{theorem}
We note that the scaling property is a fundamental property of these relative invariants: their vanishing/nonvanishing depends only on the equation locus.
\begin{remark} For a general hyperbolic equation $F(x,y,z,p,q,r,s,t) = 0$, Jur\'a\v{s}~\cite{Juras1997} calculated two (relative) invariants $M_\sigma$, $M_\tau$ whose vanishing characterizes the \MA/ class. His invariants were given explicitly in terms of two non-proportional real roots $(\mu,\lambda) = (m_x,m_y)$ and $(\mu,\lambda) = (n_x,n_y)$ of the characteristic equation
\begin{gather*}
F_r \lambda^2 - F_s \lambda\mu + F_t \mu^2 = 0,
\end{gather*}
which he associates to the given PDE. We note here that the characteristic equation \eqref{char-eqn} that we have used dif\/fers from that of Jur\'a\v{s} (but has the same discriminant). Our invariants $I_1$, $I_2$ appear to be simpler written in this determinantal form.
\end{remark}
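For a concrete equation, the invariants $\tilde{I}_1$, $\tilde{I}_2$ are mechanical to evaluate with a computer algebra system. The following minimal SymPy sketch (ours, not part of the original development; it assumes $F_s \geq 0$ and hyperbolicity, and all names in it are ours) implements the determinantal formulas of Theorem~\ref{thm:hyp-contact-inv} and applies them to the generic example $3z_{xx}(z_{yy})^3 + 1 = 0$ of Example~\ref{hyp-examples}.
\begin{verbatim}
import sympy as sp

x, y, z, p, q, r, s, t = sp.symbols('x y z p q r s t')

def relative_invariants(F):
    """Ambient representatives I1~, I2~ of the determinantal formulas above
    (assumes F_s >= 0 and hyperbolicity; restrict to F = 0 to read off I1, I2)."""
    Fr, Fs, Ft = sp.diff(F, r), sp.diff(F, s), sp.diff(F, t)
    lam = Fs/2 + sp.sqrt(sp.Rational(1, 4)*Fs**2 - Fr*Ft)   # the root lambda_+
    g1, g2 = Ft/lam, Fr/lam
    I1 = sp.Matrix([[Fr, Fs, Ft],
                    [lam, Ft, 0],
                    [sp.diff(g1, r), sp.diff(g1, s), sp.diff(g1, t)]]).det()
    I2 = sp.Matrix([[0, Fr, lam],
                    [Fr, Fs, Ft],
                    [sp.diff(g2, r), sp.diff(g2, s), sp.diff(g2, t)]]).det()
    return sp.simplify(I1), sp.simplify(I2)

# Example: 3 z_xx (z_yy)^3 + 1 = 0, i.e. F = 3 r t^3 + 1
I1, I2 = relative_invariants(3*r*t**3 + 1)
print(I1, I2)
\end{verbatim}
Restricting the printed expressions to the locus $rt^3 = -\frac{1}{3}$, neither vanishes, so $I_1 I_2 \neq 0$ and the equation is of generic $(7$-$7)$ type, in agreement with Table~\ref{table:hyp-examples} below.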
Using the relative contact invariants $I_1$, $I_2$ we can identify some more general examples of hyperbolic equations of Goursat and generic type.
\begin{corollary} \label{cor:hyp-examples} The classification of hyperbolic equations of the form $F(x,y,z,p,q,r,t)=0$, $G(x,y,z,p,q,r,s)=0$, and $rt = f(x,y,z,p,q,s)$ is given in Table~{\rm \ref{table:hyp-examples}} below.
\end{corollary}
\begin{proof} The hyperbolicity condition in each case is clear. Def\/ine the function
\begin{gather*}
\Delta_{r,t}^F = F_r{}^2 F_{tt} - 2F_r F_t F_{rt} + F_t{}^2 F_{rr},
\end{gather*}
and similarly for $\Delta_{r,s}^G$.
Without loss of generality $G_s$, $f_s \geq 0$. The calculation of $\tilde{I}_1$, $\tilde{I}_2$ leads to
\begin{alignat*}{4}
&F(x,y,z,p,q,r,t)=0: \qquad && \tilde{I}_1 = \frac{-F_t{}^2\Delta_{r,t}^F}{2(-F_t F_r)^{3/2}},\qquad && \tilde{I}_2 = \frac{-F_r{}^2\Delta_{r,t}^F}{2(-F_t F_r)^{3/2}}, &\\
&G(x,y,z,p,q,r,s)=0: \qquad && \tilde{I}_1 = 0,\qquad && \tilde{I}_2 = \frac{-\Delta_{r,s}^G}{G_s}, &\\
&rt = f(x,y,z,p,q,s): \qquad && \tilde{I}_1 = \frac{(f_{ss} - 2) r^2}{\sqrt{f_s{}^2 - 4rt}},\qquad && \tilde{I}_2 = \frac{(f_{ss} - 2) t^2}{\sqrt{f_s{}^2 - 4rt}}, &\\
&rt = -f(x,y,z,p,q,s):\qquad && \tilde{I}_1 = \frac{-(f_{ss} + 2) r^2}{\sqrt{f_s{}^2 - 4rt}},\qquad && \tilde{I}_2 = \frac{-(f_{ss} + 2) t^2}{\sqrt{f_s{}^2 - 4rt}}.&
\end{alignat*}
For $F(x,y,z,p,q,r,t)=0$: Since $i_F^*(F_t F_r) < 0$, then either $I_1$, $I_2$ both vanish or both do not vanish, i.e.\ either class 6-6 or class 7-7. The vanishing of $I_1$, $I_2$ is completely characterized by the vanishing of $i_F^*(\Delta_{r,t}^F)$. By Theorem \ref{thm:MA}, we know what all class 6-6 equations of the form $F(x,y,z,p,q,r,t)=0$ look like. Hence,
\begin{gather*}
i_F^*(\Delta_{r,t}^F)=0 \qquad \mbox{if\/f its locus can be given by} \quad F(x,y,z,p,q,r,t) = a r + bt + c = 0,
\end{gather*}
where $a$, $b$, $c$ are functions of $x$, $y$, $z$, $p$, $q$ only. The proof for $G(x,y,z,p,q,r,s)=0$ is similar and the result for the last equation is immediate.
\end{proof}
\begin{table}[h]\centering
\caption{General examples of hyperbolic equations and their types.}
\label{table:hyp-examples}
\vspace{1mm}
$\begin{array}{|c|c|c|} \hline
\mbox{Equation} & \begin{tabular}{c} Hyperbolicity\\ condition \end{tabular} & \mbox{Type}\\ \hline \hline
F(x,y,z,p,q,r,t)=0 & i_F^*(F_r F_t) < 0 &\begin{array}{c} \mbox{6-6 if\/f } F \mbox{ is an af\/f\/ine function of } r,t \mbox{ (*)} \\ \mbox{7-7 otherwise} \end{array} \\ \hline
G(x,y,z,p,q,r,s)=0 & i_G^*(G_s) \neq 0 & \begin{array}{c} \mbox{6-6 if\/f } G \mbox{ is an af\/f\/ine function of } r,s \mbox{ (*)} \\ \mbox{6-7 otherwise } \end{array} \\ \hline
rt = f(x,y,z,p,q,s) & 4f < f_s{}^2 & \begin{array}{c} \mbox{Assuming $rt \neq 0$:}\\ \mbox{6-6 if\/f } f_{ss}=2 \\ \mbox{7-7 if\/f } f_{ss} \neq 2 \end{array} \\ \hline
\end{array}$
\vspace{1mm}
(*) More precisely, it is the zero-locus of such a function.
\end{table}
\begin{remark}
Hyperbolic equations of Goursat and generic type are necessarily {\em non}-variational. This is because a variational formulation for a second order PDE requires a f\/irst order Lagrangian (density) $L(x,y,z,p,q)$ and the corresponding Euler--Lagrange equation is
\begin{gather*}
\Parder{L}{z} - D_x\left( \Parder{L}{p} \right)- D_y\left( \Parder{L}{q} \right) = 0,
\end{gather*}
where $D_x$ and $D_y$ are total derivative operators
\begin{gather*}
D_x = \parder{x} + p \parder{z} + r\parder{p} + s\parder{q}, \qquad
D_y = \parder{y} + q \parder{z} + s\parder{p} + t\parder{q}.
\end{gather*}
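Indeed, expanding the total derivatives gives
\begin{gather*}
\Parder{L}{z} - D_x\left( \Parder{L}{p} \right) - D_y\left( \Parder{L}{q} \right) = - L_{pp}\, r - 2 L_{pq}\, s - L_{qq}\, t + \big( L_z - L_{px} - p L_{pz} - L_{qy} - q L_{qz} \big) = 0,
\end{gather*}
which is af\/f\/ine in $r$, $s$, $t$, i.e.\ of the form in Theorem~\ref{thm:MA} with $a = 0$.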
Thus, the Euler--Lagrange equation is quasi-linear and, if hyperbolic, is of \MA/ type.
\end{remark}
For the remainder of the paper we will deal exclusively with the generic case. In this case~$U_1$,~$U_2$ in \eqref{U1U2-streq} are nonzero and can be further normalized through a coframe adaptation. Before carrying out this normalization, we recall some more basic def\/initions.
\begin{definition}
Given a Pfaf\/f\/ian system $I$ on a manifold $\Sigma$, recall that the {\em first derived system} $I^{(1)} \subset I$ is the Pfaf\/f\/ian system def\/ined by the short exact sequence
\begin{equation*}
0 \lra I^{(1)} \lhook\joinrel\longrightarrow I \stackrel{\pi \circ d}{\lra} dI \mbox{ mod } I \lra 0,
\end{equation*}
where $\pi : \Omega^*(\Sigma) \ra \Omega^*(\Sigma) / I$ is the canonical surjection. (Here we abuse notation and identify~$I$ with the {\em algebraic} ideal in $\Omega^*(\Sigma)$ that it generates.) Iteratively, we def\/ine the {\em derived flag}
$\cdots \subset I^{(k)} \subset \cdots \subset I^{(1)} \subset I$.
\end{definition}
\begin{remark} $I$ is completely integrable (in the Frobenius sense) if\/f $I^{(1)} = I$.
\end{remark}
Since $d$ commutes with pullbacks, each derived system $I^{(k)}$ is invariant under any automorphism of $I$, i.e.\ if $\phi \in {\rm Aut}(I)$, then $\phi^* I^{(k)} = I^{(k)}$.
\begin{definition}
For hyperbolic equations, def\/ine
\begin{gather*}
{\rm Char}(I_F,dM_i) = \{ X \in\mathfrak{X}(\Sigma_7) : X \in I_F^\perp,~ X \intprod dM_i \subset I_F \}, \\
C(I_F,dM_i) = {\rm Char}(I_F,dM_i)^\perp.
\end{gather*}
\end{definition}
We now normalize the coef\/f\/icients $U_1$, $U_2$ in the generic case. Moreover, explicit generators for the f\/irst few systems in the derived f\/lag of $C(I_F,dM_1)$ and $C(I_F,dM_2)$ are obtained. The proofs of the following theorem and subsequent corollaries are provided in Appendix~\ref{app:gen-hyp}.
\begin{theorem}[{\bf Generic hyperbolic structure equations}] \label{generic-hyp-str-eqns}
Given any generic hyperbolic equation $F=0$ with nondegenerate parametrization $i_F : \Sigma_7 \ra J^2(\mathbb{R}^2,\mathbb{R})$, there is an associated coframe $\bm\omega = \{ \omega^i \}_{i=1}^7$ on $\Sigma_7$ such that
\begin{enumerate}\itemsep=0pt
\item[1)] $I_F = \{ \omega^1, \omega^2, \omega^3 \}$, \ $I_F^{(1)} = \{ \omega^1 \}$, \ $
M_1 = \{ \omega^1, \omega^2 \}$, \ $ M_2 = \{ \omega^1, \omega^3 \}$,
\item[2)] we have the structure equations
\begin{alignat}{3}
& d\omega^1 \equiv \omega^3 \wedge \omega^6 + \omega^2 \wedge \omega^4 \quad && \mod I_F^{(1)}, & \nonumber \\
&d\omega^2 \equiv \omega^4 \wedge \omega^5 + \omega^3 \wedge \omega^7 \quad && \mod M_1, &\label{StrEqns123}\\
& d\omega^3 \equiv \omega^6 \wedge \omega^7 + \epsilon \omega^2 \wedge \omega^5 \quad&& \mod M_2, & \nonumber
\end{alignat}
where $\epsilon = \pm 1$,
\item[3)] for {\em some} choice of coframe satisfying the above structure equations, we have
\begin{alignat}{3}
& C(I_F,dM_1) = \{ \omega^1, \omega^2, \omega^3, \omega^4, \omega^5 \}, \qquad & &
C(I_F,dM_2) = \{ \omega^1, \omega^2, \omega^3, \omega^6, \omega^7 \}, &\nonumber \\
& C(I_F,dM_1)^{(1)} = \{ \omega^1, \omega^2, \omega^4, \omega^5 \},\qquad &&
C(I_F,dM_2)^{(1)} = \{ \omega^1, \omega^3, \omega^6, \omega^7 \}, & \label{charsys}\\
& C(I_F,dM_1)^{(2)} = \{ \omega^4, \omega^5 \},\qquad &&
C(I_F,dM_2)^{(2)} = \{ \omega^6, \omega^7 \}. &\nonumber
\end{alignat}
\end{enumerate}
\end{theorem}
\begin{corollary} \label{gamma-cor}
For the choice of coframe as in Theorem {\rm \ref{generic-hyp-str-eqns}}, we have the additional structure equations
\begin{alignat}{3}
& d\omega^4 \equiv \epsilon \omega^5 \wedge \omega^6 \quad &&\mod \{ \omega^1, \omega^2, \omega^4 \}, & \nonumber\\
& d\omega^5 \equiv 0\quad &&\mod \{\omega^1, \omega^2, \omega^4, \omega^5\}, & \nonumber\\
& d\omega^6 \equiv - \omega^4 \wedge \omega^7 \quad &&\mod \{\omega^1, \omega^3, \omega^6\}, & \label{StrEqns4567}\\
& d\omega^7 \equiv 0 \quad &&\mod \{\omega^1, \omega^3, \omega^6, \omega^7\}. &\nonumber
\end{alignat}
\end{corollary}
We will refer to \eqref{StrEqns123} and \eqref{StrEqns4567} collectively as the generic hyperbolic structure equations.
\begin{corollary} \label{epsilon-cor}
$\epsilon$ is a contact invariant, and moreover $\epsilon = {\rm sgn}(I_1 I_2)$.
\end{corollary}
\begin{example} From Table~\ref{table:hyp-examples} and the proof of Corollary \ref{cor:hyp-examples}, we see $\epsilon=1$ for:{\samepage
\begin{itemize}\itemsep=0pt
\item $F(x,y,z,p,q,r,t)=0$ whenever $F$ is {\em not} an af\/f\/ine function of $r$, $t$.
\item $rt = f(x,y,z,p,q,s)$ whenever $f_{ss} \neq 2$.
\end{itemize}}
\end{example}
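This can also be seen directly from the formulas in the proof of Corollary~\ref{cor:hyp-examples}: for the f\/irst family,
\begin{gather*}
\tilde{I}_1 \tilde{I}_2 = \frac{F_r{}^2 F_t{}^2 \big(\Delta_{r,t}^F\big)^2}{4 \big({-F_r F_t}\big)^3},
\end{gather*}
which is positive wherever $\Delta_{r,t}^F \neq 0$ since $i_F^*(F_r F_t) < 0$, and hence $\epsilon = {\rm sgn}(I_1 I_2) = 1$ there; the computation for $rt = f(x,y,z,p,q,s)$ with $f_{ss} \neq 2$ is entirely analogous.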
\begin{remark} We have the following dictionary of notations for the adapted coframe labelling:
$$
\begin{array}{c|c|c|c}
& \mbox{Gardner--Kamran \cite{GK1993}} & \mbox{Vranceanu \cite{Vranceanu1937}} & \mbox{The}\\ \hline
I_F & \omega^1, \ \pi^2, \ \pi^3 & ds^1, \ ds^2, \ ds^3 & \omega^1, \ \omega^2, \ \omega^3 \\
M_1 & \omega^1, \ \pi^2 & ds^1, \ ds^2 & \omega^1, \ \omega^2 \\
M_2 & \omega^1, \ \pi^3 & ds^1, \ ds^3 & \omega^1, \ \omega^3 \\
C(I_F,dM_1) & \omega^1, \ \pi^2, \ \pi^3, \ \omega^4, \ \omega^5 & ds^1, \ ds^2, \ ds^3, \ ds^5, \ ds^6 & \omega^1, \ \omega^2, \ \omega^3, \ \omega^4, \ \omega^5 \\
C(I_F,dM_2) & \omega^1, \ \pi^2, \ \pi^3, \ \omega^6, \ \omega^7 & ds^1, \ ds^2, \ ds^3, \ ds^4, \ ds^7 & \omega^1, \ \omega^2, \ \omega^3, \ \omega^6, \ \omega^7
\end{array}
$$
\end{remark}
\section{The structure group and the Cartan equivalence problem}
\label{str-grp-eq-prb}
In this section, we reformulate the problem of contact-equivalence of PDE as a Cartan equivalence problem. The reader will notice the similarities in the calculation of the structure group in this section and in the calculations in the proof of Corollary~\ref{epsilon-cor} provided in Appendix~\ref{app:gen-hyp}.
For any $\phi \in \contactmaps{+}$,
\begin{gather*}
\phi^* I_{\bar{F}}^{(1)} = I_F^{(1)}, \qquad \phi^*(C(I_{\bar{F}},d\bar{M}_i)^{(k)}) = C(I_F,dM_i)^{(k)}, \quad i=1,2, \quad \forall \; k \geq 0.
\end{gather*}
Consequently, with respect to the adapted coframe $\bm\omega$ on $\Sigma_7$ (as specif\/ied in Theorem \ref{generic-hyp-str-eqns}) and corresponding coframe $\bm{\bar\omega}$ on $\bar\Sigma_7$, we have
\begin{gather*}
\phi^*\mat{c}{\bar\omega^1 \\ \bar\omega^2\\ \bar\omega^3\\ \bar\omega^4\\ \bar\omega^5\\ \bar\omega^6\\ \bar\omega^7}
= \mat{ccccccc}{
\lambda_1 & 0 & 0 & 0 & 0 & 0 & 0\\
\mu_1 & \lambda_2 & 0 & 0 & 0 & 0 & 0\\
\mu_2 & 0 & \lambda_3 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & \lambda_4 & \nu_1 & 0 & 0\\
0 & 0 & 0 & \mu_3 & \lambda_5 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & \lambda_6 & \nu_2\\
0 & 0 & 0 & 0 & 0 & \mu_4 & \lambda_7\\
}
\mat{c}{\omega^1 \\ \omega^2\\ \omega^3\\ \omega^4\\ \omega^5\\ \omega^6\\ \omega^7}.
\end{gather*}
Applying $\phi^*$ to the $d\bar\omega^1$ structure equation in \eqref{StrEqns123} yields
\begin{gather*}
\phi^* d\bar\omega^1 = d\phi^*\bar\omega^1 = d(\lambda_1 \omega^1) \equiv
\lambda_1 (\omega^3 \wedge \omega^6 + \omega^2 \wedge \omega^4) \quad \mod I_F^{(1)},
\end{gather*}
and also
\begin{gather*}
\phi^* d\bar\omega^1 \equiv \lambda_3 \omega^3 \wedge (\lambda_6 \omega^6 + \nu_2 \omega^7) + \lambda_2 \omega^2 \wedge (\lambda_4 \omega^4 + \nu_1 \omega^5)\quad \mod I_F^{(1)},
\end{gather*}
which implies $\nu_1=\nu_2=0$, $\lambda_1 = \lambda_3 \lambda_6 = \lambda_2 \lambda_4$. Similarly, using the $d\bar\omega^2$, $d\bar\omega^3$ equations yields
\begin{alignat*}{3}
&\lambda_1 = \lambda_3 \lambda_6 = \lambda_2 \lambda_4, \qquad && \nu_1=\nu_2=0, &\\
&\lambda_2 = \lambda_4\lambda_5 = \lambda_3 \lambda_7, \qquad && \mu_1 = \lambda_3 \mu_4, &\\
&\lambda_3 = \lambda_6\lambda_7 = \lambda_2 \lambda_5, \qquad && \mu_2 = \epsilon \lambda_2 \mu_3.&
\end{alignat*}
Then
\begin{gather*}
\beta :=\frac{\lambda_6}{\lambda_4} = \frac{\lambda_2}{\lambda_3} = \frac{1}{\lambda_5} = \frac{\lambda_4}{\lambda_2}, \qquad \beta= \frac{\lambda_2}{\lambda_3} = \lambda_7 = \frac{\lambda_3}{\lambda_6} \quad\Rightarrow\quad \beta^4 = 1 \quad\Rightarrow\quad \beta = \pm 1,
\end{gather*}
and so
\begin{gather}
(\lambda_1, \lambda_2, \lambda_3, \lambda_4, \lambda_5, \lambda_6,\lambda_7) = (\beta a_1{}^2, a_1,\beta a_1,\beta a_1,\beta,a_1,\beta), \nonumber\\
(\mu_1,\mu_2,\mu_3,\mu_4) = (\beta a_1a_2,\epsilon a_1a_3,a_3,a_2),
\qquad \forall \; a_1 \neq 0, a_2,a_3 \in \mathbb{R}.
\label{str-grp-calc}
\end{gather}
This leads us to def\/ine
\begin{gather*}
S = {\rm diag}(-1,1,-1,-1,-1,1,-1),
\end{gather*}
and the {\em connected} matrix Lie group
\begin{gather}
G^0 = \left\{ M({\bf a}) : {\bf a} \in \mathbb{R}^+ \times \mathbb{R}^2 \right\}, \qquad
M({\bf a}) = \mat{ccccccc}{
a_1{}^2 & 0 & 0 & 0 & 0 & 0 & 0\\
a_1 a_2 & a_1 & 0 & 0 & 0 & 0 & 0\\
\epsilon a_1 a_3 & 0 & a_1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & a_1 & 0 & 0 & 0\\
0 & 0 & 0 & a_3 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & a_1 & 0\\
0 & 0 & 0 & 0 & 0 & a_2 & 1
}.
\label{G0-group}
\end{gather}
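For later use, we record the multiplication law in $G^0$, obtained by direct matrix multiplication:
\begin{gather*}
M(a_1,a_2,a_3)\, M(b_1,b_2,b_3) = M\big(a_1 b_1,\; a_2 b_1 + b_2,\; a_3 b_1 + b_3\big);
\end{gather*}
in particular, $G^0$ is indeed closed under multiplication, with $M(a_1,a_2,a_3)^{-1} = M\big(\frac{1}{a_1}, -\frac{a_2}{a_1}, -\frac{a_3}{a_1}\big)$.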
Let us also def\/ine
\begin{gather*}
R = \mat{ccccccc}{
-\epsilon & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 0 & \epsilon & 0 & 0 & 0 & 0\\
0 & -\epsilon & 0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & -1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -\epsilon\\
0 & 0 & 0 & 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & -\epsilon & 0 & 0\\
} \quad\Rightarrow\quad R^2 = {\rm diag}(1,-1,-1,-1,1,-1,1).
\end{gather*}
We note that $R^4=S^2=1$ and we have the relation $RS = SR^{-1}$, and consequently $R$, $S$ generate the dihedral group of order 8
\begin{gather*}
D_8 = \langle R,S : R^4=S^2=SRSR=1 \rangle.
\end{gather*}
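These relations follow from a direct matrix computation; as a quick independent check (ours), the following SymPy snippet verif\/ies them for both values of $\epsilon$.
\begin{verbatim}
import sympy as sp

S = sp.diag(-1, 1, -1, -1, -1, 1, -1)
for eps in (1, -1):
    R = sp.Matrix([
        [-eps,  0,   0,   0,  0,    0,  0],
        [ 0,    0,   eps, 0,  0,    0,  0],
        [ 0,   -eps, 0,   0,  0,    0,  0],
        [ 0,    0,   0,   0,  0,   -1,  0],
        [ 0,    0,   0,   0,  0,    0, -eps],
        [ 0,    0,   0,   1,  0,    0,  0],
        [ 0,    0,   0,   0, -eps,  0,  0]])
    I7 = sp.eye(7)
    print(eps, R**4 == I7, S**2 == I7, (S*R)**2 == I7)  # expect: True True True
\end{verbatim}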
The results \eqref{str-grp-calc} of the previous calculations establish that
\begin{gather*}
\phi \in \contactmaps{+} \qquad \mbox{if\/f} \quad \phi^* \bm{\bar\omega} = g \bm\omega, \qquad \mbox{where} \quad g : \Sigma_7 \ra G^+,
\end{gather*}
where $G^+$ is the group generated by $G^0$, $S$, $R^2$, which we can realize as the semi-direct product
\begin{gather*}
G^+ = G^0 \rtimes \langle S, R^2 \rangle
\end{gather*}
induced by the adjoint action.
If $\phi_0^* \bm{\bar\omega} = R \bm\omega$, then $\phi_0 \in \contactmaps{-}$. Conversely, given any $\phi \in \contactmaps{-}$, we have $\phi = \phi_0 \circ \tilde\phi$, where $\tilde\phi \in {\rm Aut}^+(I_F)$. Thus,
\begin{gather*}
\phi^* \bm{\bar\omega} = \tilde\phi^* \phi_0^* \bm{\bar\omega} = \tilde\phi^* R\bm\omega = Rg\bm\omega = {\rm Ad}_R(g) R\bm\omega, \qquad \forall \;g \in G^+.
\end{gather*}
Since $G^0$ is ${\rm Ad}_R$-invariant, and ${\rm Ad}_R(S) = SR^2$, then $G^+$ is ${\rm Ad}_R$-invariant and so
\begin{gather*}
\phi \in \contactmaps{-} \qquad \mbox{if\/f} \quad \phi^* \bm{\bar\omega} = g \bm\omega, \qquad \mbox{where} \quad g : \Sigma_7 \ra G^-,
\end{gather*}
where $G^- = G^+ \cdot R$. (Note that $G^-$ is {\em not} a group.) Consequently, we def\/ine
\begin{gather*}
G = G^0 \rtimes D_8,
\end{gather*}
and we have established:
\begin{lemma}
$\phi \in \contactmaps{}$ if and only if
\begin{gather}
\phi^* \bm{\bar\omega} = g \bm\omega, \qquad \mbox{for some} \quad g : \Sigma_7 \ra G.
\label{base-equivalence}
\end{gather}
\end{lemma}
The group $G$ will play the role of our initial structure group in the application of the Cartan equivalence method \cite{Cartan1953, Gardner1989, Olver1995}. Specif\/ically, the Cartan equivalence problem for generic hyperbolic equations can be stated as follows: Given the coframes $\bm\omega$, $\bm{\bar\omega}$ on $\Sigma_7$, $\bar\Sigma_7$ respectively, f\/ind necessary and suf\/f\/icient conditions for the existence of a local dif\/feomorphism $\phi : \Sigma_7 \ra \bar\Sigma_7$ satisfying \eqref{base-equivalence}. This is also known as the isomorphism problem for $G$-structures $(\bm{\omega},G)$ and $(\bm{\bar\omega},G)$.
\begin{remark}
Vranceanu (c.f.\ page~366 in~\cite{Vranceanu1937}) considered the equivalence problem with respect to a smaller group which, in our notation, is $G^0 \rtimes \langle R^2 \rangle$. This has index~4 in~$G$.
\end{remark}
The solution of the general Cartan equivalence problem leads to either the structure equations of an $\{e\}$-structure or of an inf\/inite Lie pseudo-group. However, for the equivalence problem for generic hyperbolic equations only the former case occurs. In particular, we will show in the next section that we are led to $\{e\}$-structures on $\Sigma_7 \times G_\Gamma$, where $G_\Gamma \subset G$ is a subgroup of dimension at most {\em two}. (Dif\/ferent $\{e\}$-structures arise due to normalizations of {\em nonconstant} type, and will depend on choices of normal forms $\Gamma$ in dif\/ferent orbits.) For the moment, let us recall the general solution to the coframe ($\{e\}$-structure) equivalence problem. Our description below is abbreviated from the presentation given in \cite{Olver1995}.
Let $\bm\Theta$, $\bm{\bar\Theta}$ be local coframes on manifolds $M$, $\bar{M}$ respectively of dimension $m$, and let $\Phi$ satisfy $\Phi^* \bm{\bar\Theta} = \bm\Theta$. If the structure equations for the $\{e\}$-structures are correspondingly
\begin{gather*}
d\Theta^a = \frac{1}{2} T^a{}_{bc} \Theta^b \wedge \Theta^c, \qquad
d\bar\Theta^a = \frac{1}{2} \bar{T}^a{}_{bc} \bar\Theta^b \wedge \bar\Theta^c, \qquad 1 \leq a,b,c \leq m,
\end{gather*}
then by commutativity of $\Phi^*$ and $d$, the structure functions $T^a{}_{bc}$ are invariants, i.e.
\begin{gather*}
\bar{T}^a{}_{bc} \circ \Phi = T^a{}_{bc}.
\end{gather*}
For any local function $f$ on $M$, def\/ine the {\em coframe derivatives} $\Parder{f}{\Theta^a}$ by
\begin{gather*}
df = \Parder{f}{\Theta^k} \Theta^k.
\end{gather*}
Let us write the iterated derivatives of the structure functions as
\begin{gather*}
T_\sigma = \frac{\partial^s T^a{}_{bc}}{\partial \Theta^{k_s} \cdots \partial \Theta^{k_1}}, \qquad \mbox{where} \quad \sigma = (a,b,c,k_1,\dots,k_s)\qquad \mbox{and} \quad s = {\rm order}(\sigma)
\end{gather*}
and $1 \leq a,b,c,k_1,\dots, k_s \leq m$. We repeat this construction for the barred variables.
Necessarily, again as a consequence of commutativity of~$\Phi^*$ and~$d$, the derived structure functions~$T_\sigma$ and~$\bar{T}_\sigma$ satisfy the {\em invariance equations}
\begin{gather}
\bar{T}_\sigma(\bar{x}) = T_\sigma(x), \qquad \mbox{when} \quad \bar{x} = \Phi(x), \qquad \forall \; {\rm order}(\sigma) \geq 0. \label{invar}
\end{gather}
Note that these equations are not independent: there are generalized Jacobi identities (which we will not describe explicitly here) which allow the permutation of the coframe derivatives, so in general only {\em nondecreasing} coframe derivative indices are needed.
\begin{definition} Let $\bm\Theta$ be a coframe def\/ined on an open set $U \subset M$.
\begin{enumerate}\itemsep=0pt
\item Let $\mathbb{K}^{(s)}$ be the Euclidean space of dimension equal to the number of multi-indices
\begin{gather*}
\sigma = (a,b,c,k_1,\dots,k_r), \qquad b<c, \qquad k_1 \leq \cdots \leq k_r, \qquad 0 \leq r \leq s.
\end{gather*}
\item The $s^{\rm th}$ order {\em structure map} associated to $\bm\Theta$ is
\begin{gather*}
{\bf T}^{(s)} : U \ra \mathbb{K}^{(s)}, \qquad z_\sigma = T_\sigma(x), \qquad {\rm order}(\sigma) \leq s.
\end{gather*}
\item The coframe $\bm\Theta$ is {\em fully regular} if ${\bf T}^{(s)}$ is regular for all $s \geq 0$. In this case,
let $\rho_s = {\rm rank}({\bf T}^{(s)})$, and def\/ine the {\em rank} of $\bm\Theta$ as the minimal $s$ such that $\rho_s = \rho_{s+1}$.
\item The $s^{\rm th}$ order {\em classifying set} is
${\cal C}^{(s)}(\bm\Theta,U) = \{ {\bf T}^{(s)}(x) : x \in U \} \subset \mathbb{K}^{(s)}.$
\end{enumerate}
\end{definition}
As a consequence of the invariance equations \eqref{invar}, if $\bm\Theta$ and $\bm{\bar\Theta}$ are equivalent coframes via $\Phi : U \ra \bar{U}$, then
\begin{gather*}
{\cal C}^{(s)}(\bm\Theta,U) = {\cal C}^{(s)}(\bm{\bar\Theta},\Phi(U)), \qquad \forall \; s \geq 0.
\end{gather*}
This is suf\/f\/icient in the fully regular case. We refer the reader to \cite{Olver1995} for a proof of the following theorem.
\begin{theorem} \label{equiv-soln} Suppose $\bm\Theta$, $\bm{\bar\Theta}$ are fully regular coframes on $U$, $\bar{U}$ respectively.
There exists $\Phi$ satisfying $\Phi^*\bm{\bar\Theta} = \bm\Theta$ if and only if for each $s \geq 0$, ${\cal C}^{(s)}(\bm\Theta,U) \cap {\cal C}^{(s)}(\bm{\bar\Theta},\bar{U})$ is nonempty. The set of self-equivalences $\Phi$ (i.e.\ satisfying $\Phi^* \bm\Theta = \bm\Theta$) defines a $p$-dimensional local Lie group of transformations, where $p = m - {\rm rank}(\bm\Theta) \geq 0$.
\end{theorem}
\section{Nine-dimensional maximal symmetry}
\label{9d-sym}
The solution to the Cartan equivalence problem \eqref{base-equivalence} begins by lifting the problem to the {\em left} principal bundles $\Sigma_7 \times G \stackrel{\pi}{\ra} \Sigma_7$ and $\bar\Sigma_7 \times G \stackrel{\bar\pi}{\ra} \bar\Sigma_7$ by def\/ining
\begin{gather*}
\bm{\hat{\omega}}|_{(u,g)} = g \pi^* \bm{\omega}|_{u}, \qquad
\bm{\hat{\bar\omega}}|_{(\bar{u},g)} = g \bar\pi^* \bm{\bar\omega}|_{\bar{u}}, \qquad
\mbox{where} \quad u \in \Sigma_7, \quad \bar{u} \in \bar\Sigma_7, \quad g \in G,
\end{gather*}
and noting the following key lemma \cite{Gardner1989}.
\begin{lemma} \label{Gardner-lemma}
There exists an equivalence $\phi$ as in \eqref{base-equivalence} if and only if there exists a local diffeomorphism $\Phi : \Sigma_7 \times G \ra \bar\Sigma_7 \times G$ satisfying $\Phi^* \bm{\hat{\bar\omega}} = \bm{\hat\omega}$.
\end{lemma}
Identifying the coframe $\bm\omega$ on $\Sigma_7$ with its pullback by the canonical projection $\Sigma_7 \times G \ra \Sigma_7$, we can write
\begin{gather*}
\hat\omega^i = g^i{}_j \omega^j, \qquad g \in G.
\end{gather*}
Using \eqref{gamma-defn}, the structure equations for these lifted forms are then
\begin{gather*}
d\hat\omega^i = (dg \cdot g^{-1})^i{}_j \wedge \hat\omega^j + \frac{1}{2} \hat\gamma^i{}_{k\ell} \hat\omega^k \wedge \hat\omega^\ell,
\end{gather*}
where the coef\/f\/icients $\gam{i}{jk}$ transform tensorially under the $G$-action
\begin{gather}
\hat\gamma^i{}_{jk} := g^i{}_\ell \gamma^\ell{}_{mn} (g^{-1})^m{}_j (g^{-1})^n{}_k,
\label{gamma-transform}
\end{gather}
and $dg \cdot g^{-1}$ refers to the {\em right}-invariant Maurer--Cartan form on $G$. Since $D_8$ is discrete, for $(g,k) \in G^0 \times D_8$ we have
\begin{gather*}
d(gk) \cdot (gk)^{-1} = dg \cdot k \cdot k^{-1} g^{-1} = dg \cdot g^{-1},
\end{gather*}
and so we can identify the Maurer--Cartan form on $G$ with that on $G^0$. For $g = M(a_1,a_2,a_3) \in G^0$ as in \eqref{G0-group}, we have $g^{-1} = M\left(\frac{1}{a_1},-\frac{a_2}{a_1},-\frac{a_3}{a_1} \right)$ and
\begin{gather*}
dg \cdot g^{-1} = \mat{ccccccc}{
2\alpha^1 & 0 & 0 & 0 & 0 & 0 & 0\\
\alpha^2 & \alpha^1 & 0 & 0 & 0 & 0 & 0\\
\epsilon \alpha^3 & 0 & \alpha^1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & \alpha^1 & 0 & 0 & 0\\
0 & 0 & 0 & \alpha^3 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & \alpha^1 & 0\\
0 & 0 & 0 & 0 & 0 & \alpha^2 & 0
},
\end{gather*}
where
\begin{gather*}
\alpha^1 = \frac{da_1}{a_1}, \qquad \alpha^2 = \frac{da_2}{a_1}, \qquad \alpha^3 = \frac{da_3}{a_1}
\end{gather*}
are a basis for the right-invariant 1-forms on $G^0$ and hence $G$. Identifying $\alpha^i$ on $G$ with their pullback by the canonical projection $\Sigma_7 \times G \ra G$, we have the structure equations for the lifted coframe:
\begin{gather}
d\hat\omega^1 = 2\alpha^1 \wedge \hat\omega^1 + \hat\omega^3 \wedge \hat\omega^6 + \hat\omega^2 \wedge \hat\omega^4 + \eta_1 \wedge \hat\omega^1,\nonumber\\
d\hat\omega^2 = \alpha^2 \wedge \hat\omega^1 + \alpha^1 \wedge \hat\omega^2 + \hat\omega^4 \wedge \hat\omega^5 + \hat\omega^3 \wedge \hat\omega^7 + \eta_2 \wedge \hat\omega^1 + \eta_3 \wedge \hat\omega^2,\nonumber\\
d\hat\omega^3 = \epsilon \alpha^3 \wedge \hat\omega^1 + \alpha^1 \wedge \hat\omega^3 + \hat\omega^6 \wedge \hat\omega^7 + \epsilon\hat\omega^2 \wedge \hat\omega^5 + \eta_4 \wedge \hat\omega^1 + \eta_5 \wedge \hat\omega^3,\nonumber\\
d\hat\omega^4 = \alpha^1 \wedge \hat\omega^4 + \eta_6 \wedge \hat\omega^1 + \eta_7 \wedge \hat\omega^2 + \eta_8 \wedge \hat\omega^4 + \eta_9 \wedge \hat\omega^5,\nonumber\\
d\hat\omega^5 = \alpha^3 \wedge \hat\omega^4 + \eta_{10} \wedge \hat\omega^1 + \eta_{11} \wedge \hat\omega^2 + \eta_{12} \wedge \hat\omega^4 + \eta_{13} \wedge \hat\omega^5 ,\label{G-lifted-coframe}\\
d\hat\omega^6 = \alpha^1 \wedge \hat\omega^6 + \eta_{14} \wedge \hat\omega^1 + \eta_{15} \wedge \hat\omega^3 + \eta_{16} \wedge \hat\omega^6 + \eta_{17} \wedge \hat\omega^7,\nonumber\\
d\hat\omega^7 = \alpha^2 \wedge \hat\omega^6 + \eta_{18} \wedge \hat\omega^1 + \eta_{19} \wedge \hat\omega^3 + \eta_{20} \wedge \hat\omega^6 + \eta_{21} \wedge \hat\omega^7,\nonumber\\
d\alpha^1 = 0,\nonumber\\
d\alpha^2 = -\alpha^1 \wedge \alpha^2,\nonumber\\
d\alpha^3 = -\alpha^1 \wedge \alpha^3,\nonumber
\end{gather}
where $\eta_i$ are semi-basic 1-forms with respect to the projection $\Sigma_7 \times G \ra \Sigma_7$.
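Note that the last three equations in \eqref{G-lifted-coframe} follow directly from the expressions for the $\alpha^i$; for instance,
\begin{gather*}
d\alpha^2 = d\left(\frac{da_2}{a_1}\right) = -\frac{1}{a_1{}^2}\, da_1 \wedge da_2 = -\alpha^1 \wedge \alpha^2,
\end{gather*}
and similarly $d\alpha^1 = 0$ and $d\alpha^3 = -\alpha^1 \wedge \alpha^3$.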
The structure equations for the lifted forms $\hat\omega^i$ can be written
\begin{gather}
d\hat\omega^i = a^i{}_{\rho j} \alpha^\rho \wedge \hat\omega^j + \frac{1}{2} \hgam{i}{jk} \hat\omega^j \wedge \hat\omega^k,
\label{gen-lifted-streqns}
\end{gather}
where $a^i{}_{\rho j}$ are constants (c.f.\ Maurer--Cartan form) and $\hgam{i}{jk}$ is def\/ined as in~\eqref{gamma-transform}.
\begin{definition}
The degree of indeterminacy $r^{(1)}$ of a lifted coframe is the number of free variables in the set of transformations $\alpha^\rho \mapsto \alpha^\rho + \lambda^\rho{}_i \hat\omega^i$ which preserve the structure equations for $d\hat\omega^i$.
\end{definition}
For later use, we note the following:
\begin{lemma} For our lifted coframe $\bm\Theta = \{ \bm{\hat\omega}, \bm\alpha \}$ on $\Sigma_7 \times G$ satisfying \eqref{G-lifted-coframe}, we have $r^{(1)} = 0$.
\label{lem:indeterminacy}
\end{lemma}
\begin{proof} From the $d\hat\omega^1$, $d\hat\omega^2$, $d\hat\omega^3$ equations in \eqref{G-lifted-coframe}, we must have
\begin{gather*}
\alpha^1 \mapsto \alpha^1 + \lambda \hat\omega^1, \qquad
\alpha^2 \mapsto \alpha^2 + \lambda \hat\omega^2, \qquad
\alpha^3 \mapsto \alpha^3 + \epsilon \lambda\hat\omega^3.
\end{gather*}
However, to preserve the form of $d\hat\omega^i$, $i=4,5,6,7$, we must have $\lambda=0$. Since there are no free variables, then $r^{(1)}=0$.
\end{proof}
The goal in Cartan's solution algorithm is to reduce to an $\{e\}$-structure so that Theorem~\ref{equiv-soln} can be invoked. This amounts to essentially adapting the coframes on the base, i.e.\ f\/ixing a~map $g : \Sigma_7 \ra G$. Using Lemma~\ref{Gardner-lemma}, coef\/f\/icients in the structure equations are candidates for normalization, from which the structure group $G$ can be subsequently reduced. However, we only use those coef\/f\/icients which are not af\/fected by the choice of any map $g : \Sigma_7 \ra G$. Note that pulling the Maurer--Cartan forms back to the base by such a map will express each $\alpha^\rho$ in terms of the new coframe $\bm{\hat\omega}$ (pulled back to the base). This motivates the following def\/inition.
\begin{definition} Given a lifted coframe, {\em Lie algebra valued compatible absorption} refers to redef\/ining the right-invariant 1-forms $\alpha^\rho$ by
$\hat\alpha^\rho = \alpha^\rho + \lambda^\rho{}_i \hat\omega^i$, where $\lambda^\rho{}_i$ are functions on the bundle.
The terms involving the coef\/f\/icients $\hgam{i}{jk}$ which cannot be eliminated by means of Lie algebra valued compatible absorption are called {\em torsion terms} and the corresponding coef\/f\/icients are referred to as {\em torsion coefficients}.
\end{definition}
From \eqref{G-lifted-coframe}, the $d\hat\omega^5$ and $d\hat\omega^7$ structure equations indicate that $\hat\gamma^5{}_{56}$, $\hat\gamma^5{}_{57}$, $\hat\gamma^7{}_{47}$, $\hat\gamma^7{}_{57}$ are torsion coef\/f\/icients. Using \eqref{StrEqns123}, \eqref{StrEqns4567}, and the tensor transformation law \eqref{gamma-transform} for the $\gamma$'s, we see that there is a well-def\/ined $G$-action on $\mathbb{R}^4$ (i.e.\ the range of $(\gam{5}{56},\gam{5}{57},\gam{7}{47}, \gam{7}{57})$) given by the formulas
\begin{equation}
\begin{array}{|c|c|c|c|}\hline
& G^0\mbox{-action by } g=M(a_1,a_2,a_3) & R\mbox{-action} & S\mbox{-action}\\ \hline
\hat\gamma^5{}_{56} & \frac{1}{a_1} (\gamma^5{}_{56} - \gamma^5{}_{57} a_2 + a_3 \epsilon) & -\gamma^7{}_{47} & \gamma^5{}_{56}\\
\hat\gamma^7{}_{47} & \frac{1}{a_1} (\gamma^7{}_{47} - a_2 - \gamma^7{}_{57} a_3) & \gamma^5{}_{56} & -\gamma^7{}_{47}\\
\hat\gamma^5{}_{57} & \gamma^5{}_{57} & \epsilon\gamma^7{}_{57} & -\gamma^5{}_{57}\\
\hat\gamma^7{}_{57} & \gamma^7{}_{57} & \epsilon\gamma^5{}_{57} & -\gamma^7{}_{57}\\ \hline
\end{array}
\label{gam-main-transform}
\end{equation}
We can always normalize $\hat\gamma^5{}_{56}$ to zero by using the $G^0$-action and setting
\begin{gather}
a_3 = \epsilon(-\gam{5}{56} + \gam{5}{57} a_2).
\label{a3-normalize}
\end{gather}
The matrix factorization
\begin{gather*}
M(a_1,a_2,\epsilon(-\gam{5}{56} + \gam{5}{57} a_2)) = M(a_1,a_2,\epsilon \gam{5}{57} a_2)M(1,0,-\epsilon\gam{5}{56})
\end{gather*}
indicates that we can normalize $\gamma^5{}_{56}$ to 0 for the base coframe via
\begin{gather*}
\bar\omega^3 = -\gam{5}{56} \omega^1 + \omega^3, \qquad
\bar\omega^5 = -\epsilon\gam{5}{56} \omega^4 + \omega^5.
\end{gather*}
This change of coframe is {\em admissible} in the sense that it preserves the form of the structure equations in \eqref{StrEqns123} and \eqref{StrEqns4567}. (We henceforth drop the bars.) Thus, we have the normal form $\Gamma = (\gam{5}{56}=0,\gam{5}{57},\gam{7}{47}, \gam{7}{57})$. In general, however, this is a normalization of {\em nonconstant type} since $\Gamma$ still may depend on $x \in \Sigma_7$. Pointwise, we def\/ine the reduced structure group $G_\Gamma$ as the stabilizer of $\Gamma$, i.e.\ it is the subgroup of $G$ preserving the structure equations together with the normalization given by $\Gamma$. Clearly, the 1-parameter subgroup $\{ M(1,0,a_3) : a_3 \in \mathbb{R} \}$ yields a 1-dimensional orbit through $\Gamma$, and so $\dim(G_\Gamma) \leq 2$ since $\dim(G)=3$.
The algorithm continues by means of further normalizations and reductions of the structure group until one of two possibilities occurs:
\begin{enumerate}\itemsep=0pt
\item[1)] the structure group has been reduced to the identity, i.e.\ get an $\{e\}$-structure on $\Sigma_7$, or
\item[2)] the structure group has {\em not} been reduced to the identity but the structure group acts trivially on the torsion coef\/f\/icients.
\end{enumerate}
By Theorem \ref{equiv-soln}, the former possibility yields a symmetry group of dimension at most seven. In the latter case, the next step in the algorithm is to prolong the problem to the space $\Sigma_7 \times G_\Gamma$. Here, we have abused notation and written $G_\Gamma$ also for the structure group in the latter possibility above. Since, by Lemma \ref{lem:indeterminacy}, $r^{(1)}=0$ with respect to the lifted coframe on $\Sigma_7 \times G$, it is clear that we must have $r^{(1)}=0$ for the lifted coframe on $\Sigma_7 \times G_\Gamma$. Finally, we invoke the following standard theorem (Proposition~12.1 in \cite{Olver1995}) written here in our notation:
\begin{proposition} \label{prop:prolong}
Let $\bm{\hat\omega}$, $\bm{\hat{\bar\omega}}$ be lifts of coframes $\bm\omega$, $\bm{\bar\omega}$ having the same structure group $G_\Gamma$, no group dependent torsion coefficients, and $r^{(1)}=0$. Let $\bm{\hat\alpha}$, $\bm{\hat{\bar\alpha}}$ be modified Maurer--Cartan forms obtained by performing a full Lie algebra-valued compatible absorption. Denote $\bm\Theta = \{ \bm{\hat\omega}, \bm{\hat\alpha} \}$, $\bm{\bar\Theta} = \{ \bm{\hat{\bar\omega}}, \bm{\hat{\bar\alpha}} \}$. Then there exists $\phi : \Sigma_7 \ra \bar\Sigma_7$ satisfying $\phi^*\bm{\bar\omega} = g\bm\omega$ for some $g : \Sigma_7 \ra G_\Gamma$ if and only if there exists $\Phi : \Sigma_7 \times G_\Gamma \ra \bar\Sigma_7 \times G_\Gamma$ satisfying $\Phi^*\bm{\bar\Theta} = \bm\Theta$.
\end{proposition}
In other words, we have prolonged to an $\{e\}$-structure on $\Sigma_7 \times G_\Gamma$. Since $\dim(G_\Gamma) \leq 2$ for any choice of $\Gamma$, then the symmetry group of the coframe is at most nine-dimensional. Thus, we have proven:
\begin{theorem} The (contact) symmetry group of any generic hyperbolic equation is finite dimensional and has maximal dimension~$9$.
\end{theorem}
In fact, this upper bound is sharp. We will give explicit normal forms for all contact-equivalence classes of generic hyperbolic equations with 9-dimensional symmetry, along with their symmetry generators and structure equations.
Def\/ine
\begin{gather*}
m := \gamma^5{}_{57} \in C^\infty(\Sigma_7), \qquad
n := \gamma^7{}_{57} \in C^\infty(\Sigma_7),
\end{gather*}
and note that although $m$ and $n$ are $G^0$-invariant, they are {\em not} $G$-invariant. However, along each $G$-orbit
the product $mn$ is invariant.
We def\/ine two functions which will play an important role in the classif\/ications to follow. Def\/ine
\begin{gather*}
\Delta_1 = mn + \epsilon, \qquad \Delta_2 = m^2 - \epsilon n^2
\end{gather*}
Note that $\Delta_1$ is a contact invariant, and $\Delta_2$ is a relative contact invariant: it is $G^+$-invariant, but under the $R$-action, $\hat\Delta_2 = -\epsilon \Delta_2$.
\begin{corollary}\label{mn-epsilon} If a generic hyperbolic equation has 9-dimensional symmetry group, then \mbox{$\Delta_1\!=\!0$}.
\end{corollary}
\begin{proof} Under the assumption of maximal symmetry, all torsion coef\/f\/icients must be constant. Thus, $\hat{m}$, $\hat{n}$ and consequently $m$, $n$ must be constant. If $\Delta_1 \neq 0$, then there is a {\em unique} solution to the linear system
\begin{gather}
\mat{cc}{m & -\epsilon\\ 1 & n}\mat{c}{a_2\\a_3} = \mat{c}{\gam{5}{56} \\ \gam{7}{47}},
\label{mn-linear-sys}
\end{gather}
(the coef\/f\/icient matrix has determinant $mn + \epsilon = \Delta_1 \neq 0$), which yields the normalizations $\hgam{5}{56} = \hgam{7}{47} = 0$ and a {\em two}-dimensional reduction of the initial structure group $G$. Consequently, the stabilizer $G_\Gamma$ would be at most 1-dimensional and the symmetry group would be at most 8-dimensional. Thus, we must have $\Delta_1 = 0$.
\end{proof}
\section{Complete structure equations}
\label{complete-str-eqs}
In Appendix \ref{Vranceanu-reduction}, we provide details of Vranceanu's reduction of the generic hyperbolic structure equations which allowed him to isolate the maximally symmetric and two sets of submaximally symmetric structures.
\begin{theorem} \label{mn-str-eqns}
Let $K^0 = \{ {\rm diag}(a_1^2,a_1,a_1,a_1,1,a_1,1) : a_1 > 0 \} \subset G$. Consider a coframe $\{ \omega^i \}_{i=1}^7$ on $\Sigma_7$ satisfying the generic hyperbolic structure equations \eqref{StrEqns123} and \eqref{StrEqns4567}, and the corresponding lifted coframe on $\Sigma_7 \times K^0 \ra \Sigma_7$. If:
\begin{enumerate}\itemsep=0pt
\item[1)] all torsion coefficients on which $K^0$ acts nontrivially are constants, and
\item[2)] $K^0$ cannot be reduced to the identity,
\end{enumerate}
then the structure equations can be put in the form
\begin{gather}
d\omega^1 = \omega^3 \wedge \omega^6 + \omega^2 \wedge \omega^4,\nonumber \\
d\omega^2 = \omega^4 \wedge \omega^5 + \omega^3 \wedge \omega^7 + \omega^2 \wedge \left( -\frac{3n}{2} \omega^5 + \frac{m}{2} \omega^7 \right),\nonumber\\
d\omega^3 = \omega^6 \wedge \omega^7 + \epsilon \omega^2 \wedge \omega^5 + \omega^3 \wedge \left( -\frac{n}{2} \omega^5 + \frac{3m}{2} \omega^7 \right),\nonumber\\
d\omega^4 = \epsilon \omega^5 \wedge \omega^6 + \omega^2 \wedge \left(B \omega^5 + \gam{4}{27} \omega^7\right) + \omega^4 \wedge \left(\frac{3n}{2} \omega^5 - \frac{m}{2} \omega^7\right), \label{Vranceanu-red-coframe}\\
d\omega^5 = m \omega^5 \wedge \omega^7,\nonumber \\
d\omega^6 = - \omega^4 \wedge \omega^7 + \omega^3 \wedge \left(\gam{6}{35} \omega^5 + \epsilon B \omega^7\right) + \omega^6 \wedge \left(\frac{n}{2} \omega^5 - \frac{3m}{2} \omega^7\right),\nonumber \\
d\omega^7= n \omega^5 \wedge \omega^7,\nonumber
\end{gather}
where $m,n,B \in C^\infty(\Sigma_7)$,
\begin{gather}
dm = m_5 \omega^5 + m_7 \omega^7, \qquad
dn = n_5 \omega^5 + n_7 \omega^7, \qquad m_{57} = \Parder{m_5}{\omega^7} = \Parder{}{\omega^7} \left( \Parder{m}{\omega^5} \right), \quad \mbox{etc.}\nonumber\\
dB = \epsilon \left( - 4m\Delta_1 - 2n\epsilon B - 6 mm_5 - m n_7 + nm_7 + \frac{3}{2} m_{57} + \frac{1}{2} n_{77}\right)\omega^5 \nonumber\\
\phantom{dB =}{} + \left(4n\Delta_1 + 2mB - 6 n n_7 - n m_5 + mn_5 - \frac{1}{2} m_{55} - \frac{3}{2} n_{75}\right) \omega^7
\label{gamma425}
\end{gather}
and
\begin{gather*}
\gam{4}{27} = mn + \epsilon - \frac{3}{2} n_7 - \frac{1}{2} m_5, \qquad
\gam{6}{35} = mn + \epsilon + \frac{1}{2} n_7 + \frac{3}{2} m_5.
\end{gather*}
Finally, the integrability conditions for \eqref{Vranceanu-red-coframe} (i.e.\ $d^2\omega^i=0$ for all $i$) reduce to the integrability conditions for $dm$, $dn$, $dB$ as given above.
\end{theorem}
\begin{remark}
All structures admitting a 9-dimensional symmetry group are included in \eqref{Vranceanu-red-coframe} (since $K^0$ cannot be reduced to the identity).
\end{remark}
\begin{remark}
For all valid structures arising from \eqref{Vranceanu-red-coframe}, the function $\Delta_3 = B := \gam{4}{25}$ is a~relative contact invariant: it is $G^+$-invariant, and under the $R$-action, $\hat\Delta_3 = \epsilon \Delta_3$.
\end{remark}
\begin{corollary} \label{HK-reduced} For all valid structures arising from \eqref{Vranceanu-red-coframe}, the original $G$-structure on $\Sigma_7$ can be reduced to an $H$-structure, where $H = H^0 \rtimes D_8$ and
\begin{gather}
H^0 = \left\{ \mat{ccccccc}{
a_1{}^2 & 0 & 0 & 0 & 0 & 0 & 0\\
a_1 a_2 & a_1 & 0 & 0 & 0 & 0 & 0\\
m a_1 a_2 & 0 & a_1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & a_1 & 0 & 0 & 0\\
0 & 0 & 0 & \epsilon m a_2 & 1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & a_1 & 0\\
0 & 0 & 0 & 0 & 0 & a_2 & 1
} : (a_1,a_2) \in \mathbb{R}^+ \times \mathbb{R} \right\}.
\label{H0-group}
\end{gather}
Moreover, wherever $\Delta_1 \neq 0$ or $B \neq 0$, there is a further reduction to a $K$-structure, where $K = K^0 \rtimes D_8$.
\end{corollary}
\begin{proof}
For all valid structures satisfying \eqref{Vranceanu-red-coframe}, $\gam{5}{56} = \gam{7}{47} = 0$, so from the $G$-action described in \eqref{gam-main-transform}, the stabilizer $G_\Gamma$ of $\Gamma = (\gam{5}{56},\gam{7}{47},\gam{5}{57},\gam{7}{57}) = (0,0,m,n)$ is contained in $H$ (since we can always keep $\hgam{5}{56}=0$ using $a_3 = \epsilon m a_2$).
If $\Delta_1 \neq 0$, then $a_2=a_3=0$ is the unique solution to \eqref{mn-linear-sys} and $G_\Gamma \subset K$.
Alternatively, suppose $B \neq 0$. Note that $\hgam{4}{15}$ and $\hgam{6}{17}$ are torsion coef\/f\/icients, and for the structure equations~\eqref{Vranceanu-red-coframe}, we have $\gam{4}{15}=\gam{6}{17} = 0$, and the transformation laws (under $H^0$):
\begin{gather*}
\hgam{4}{15} = \frac{-Ba_2}{a_1}, \qquad \hgam{6}{17} = \frac{-B\epsilon m a_2}{a_1}.
\end{gather*}
Consequently, we can normalize $\hgam{4}{15} = \hgam{6}{17} = 0$ and reduce the connected component of the structure group to $K^0$ by setting $a_2=0$. The discrete part of the structure group will preserve this reduction since
\begin{gather*}
R\mbox{-action}: \quad \hgam{4}{15} = -\gam{6}{17}, \quad \hgam{6}{17} = \gam{4}{15},\\
S\mbox{-action}: \quad \hgam{4}{15} = -\gam{4}{15}, \quad \hgam{6}{17} = \gam{6}{17}.\tag*{\qed}
\end{gather*}\renewcommand{\qed}{}
\end{proof}
Let us now examine in detail the case when $m$, $n$ are {\em constants}. Then \eqref{gamma425} becomes
\begin{gather}
dB = -2\left( 2\epsilon m\Delta_1 + n B \right)\omega^5 + 2\left(2n\Delta_1 + mB \right) \omega^7.
\label{gamma425-mn-const}
\end{gather}
Applying $d$ to \eqref{gamma425-mn-const} and simplifying, we obtain the integrability condition
\begin{gather*}
0 = -12\epsilon\Delta_1 \Delta_2 \omega^5 \wedge \omega^7.
\end{gather*}
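Indeed, writing $dB = A_5 \omega^5 + A_7 \omega^7$ with $A_5 = -2( 2\epsilon m\Delta_1 + n B)$ and $A_7 = 2(2n\Delta_1 + mB)$, we have $dA_5 = -2n\, dB$ and $dA_7 = 2m\, dB$, so using $d\omega^5 = m \omega^5 \wedge \omega^7$ and $d\omega^7 = n \omega^5 \wedge \omega^7$ one f\/inds
\begin{gather*}
0 = d^2 B = \left( 3m A_5 + 3n A_7 \right) \omega^5 \wedge \omega^7 = -12 \Delta_1 \left( \epsilon m^2 - n^2 \right) \omega^5 \wedge \omega^7,
\end{gather*}
where $\epsilon m^2 - n^2 = \epsilon \Delta_2$ since $\epsilon^2 = 1$.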
\begin{corollary} Suppose $m$, $n$ are constants. Then $\Delta_1 \Delta_2 = 0$ if and only if \eqref{Vranceanu-red-coframe} are valid structure equations. Moreover, in this case:
\begin{enumerate}\itemsep=0pt
\item[1)] $\sigma= -n\omega^5 + m\omega^7$ is closed, so $\sigma = dh$ for some function $h \in C^\infty(\Sigma_7)$;
\item[2)] $m=0$ iff $n=0$ iff $\sigma=0$ iff $h$ is constant;
\item[3)] if $\Delta_1 = 0$, then $n = -\frac{\epsilon}{m}$, and $dB = 2B \sigma$, so $B = be^{2h}$, where $b$ is an arbitrary constant;
\item[4)] if $\Delta_2 = 0$, then:
\begin{itemize}\itemsep=0pt
\item if $\epsilon = -1$, then $m=n=0$, and $B$ is an arbitrary constant;
\item if $\epsilon = 1$, then letting $n = \epsilon_1 m$, $\epsilon_1=\pm 1$ we have $dB = 2( 2(m^2 + \epsilon_1) + B)\sigma$. If $m\neq 0$, then $B = -2(m^2+\epsilon_1) + be^{2h}$, where $b$ is an arbitrary constant. If $m=0$, then $B$ is an arbitrary constant;
\end{itemize}
\item[5)] If $\Delta_1 = \Delta_2 = 0$, then $\epsilon=1$, and $(m,n)=(1,-1)$ or $(-1,1)$.
\end{enumerate}
All of the above structures have a symmetry group with dimension at least seven.
\end{corollary}
\begin{proof} We prove only the f\/inal assertion as the others are straightforward to prove. Let $G_\Gamma$~be the reduced structure group for which there is no group dependent torsion. By construction (c.f.\ Theo\-rem~\ref{mn-str-eqns}), we must have $K^0 \subset G_\Gamma$, and by Proposition \ref{prop:prolong} we prolong to an $\{e\}$-structure on~$\Sigma_7 \times G_\Gamma$. If $B$ is constant, then by Theorem \ref{equiv-soln} the symmetry group has dimension $\dim(\Sigma_7 \times G_\Gamma) \geq 8$. If $B$ is nonconstant, then by Corollary \ref{HK-reduced}, $G_\Gamma \subset K$. Note that $\hat{B} = B$, so equation \eqref{gamma425-mn-const} implies that on $\Sigma_7 \times G_\Gamma$, we have
\begin{gather*}
dB = -2\left( 2\epsilon m\Delta_1 + n B \right) \hat\omega^5 + 2\left(2n\Delta_1 + mB \right) \hat\omega^7.
\end{gather*}
Thus, the coframe derivatives of $B$ are functions of $B$. Hence, if $B$ is nonconstant, then the rank of the lifted coframe $\bm\Theta$ is 1 and by Theorem \ref{equiv-soln} the symmetry group has dimension at least $\dim(\Sigma_7 \times G_\Gamma) - {\rm rank}(\bm\Theta) \geq 8-1=7$.
\end{proof}
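For illustration, the formula in item 3) follows directly: when $\Delta_1 = 0$ and $m$, $n$ are constants, \eqref{gamma425-mn-const} reduces to
\begin{gather*}
dB = -2nB \omega^5 + 2mB \omega^7 = 2B \sigma = 2B\, dh,
\end{gather*}
so $d\left( B e^{-2h} \right) = e^{-2h} \left( dB - 2B\, dh \right) = 0$, and hence $B = be^{2h}$ for some constant $b$.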
\begin{remark}
In the case $\Delta_2=0$, $\epsilon = 1$, we note that $\epsilon_1$ is a contact invariant.
\end{remark}
Certain values of $m$, $n$, $B$ lead to equivalent structures owing to the presence of the $D_8$ discrete subgroup of the original structure group $G$.
Suppose $\Delta_1 = 0$, so $n = -\frac{\epsilon}{m}$. Then
\begin{gather*}
R\mbox{-action}: \quad \hat{m} = -\frac{1}{m}, \quad \hat{B} = \epsilon B,\\
S\mbox{-action}: \quad \hat{m} = -m, \quad \hat{B} = B.
\end{gather*}
In this case, by choosing a representative element $m \in (0,1]$, we can reduce $D_8$ to $\mathbb{Z}_2 = \langle R^2 \rangle$. If $\epsilon = 1$, no further reduction occurs. If $\epsilon = -1$ and $B \neq 0$, we choose a representative out of $\{ B, -B \}$ to reduce the discrete subgroup to the identity. A similar argument is used in the case $\Delta_1 \neq 0$, where $\Delta_2=0$, $n=\epsilon_1 m$, and
\begin{gather*}
R\mbox{-action}: \quad \hat{m} = \epsilon \epsilon_1 m, \quad \hat{B} = \epsilon B,\\
S\mbox{-action}: \quad \hat{m} = -m, \quad \hat{B} = B.
\end{gather*}
The results are organized in Table \ref{streqn-classification}
according to the dimension of the symmetry group of the resulting $\{e\}$-structures on $\Sigma_7 \times G_\Gamma$.
\begin{table}[h]
\centering
\caption{All generic hyperbolic structures for which $m,n$ are constants and $K^0 \subset G_\Gamma$.}
\label{streqn-classification}
\vspace{-5mm}
\begin{align*}
\begin{array}{|c|c|c|c|c|c|c|} \hline
\mbox{Sym.\ grp.} & \Delta_1 & \Delta_2 & (\epsilon, m) & n & B & \mbox{Str.\ grp. } G_\Gamma\\ \hline\hline
9 & 0 & \neq 0 & \{\pm 1\} \times (0,1] & -\frac{\epsilon}{m} & 0 & H^0 \rtimes \langle R^2 \rangle\\
& & & \mbox{except } (1,1) & & & \\
9 & 0 & 0 & (1,1) & -1 & 0 & H^0 \rtimes \langle R^2 \rangle\\ \hline\hline
8 & \neq 0 & 0 & (-1,0) & 0 & b > 0 & K^0 \rtimes \langle R^2,S \rangle\\
8 & \neq 0 & 0 & (-1,0) & 0 & 0 & K^0 \rtimes D_8\\
8 & \neq 0 & 0 & (1,0) & 0 & b \in \mathbb{R} & K^0 \rtimes D_8 \\ \hline
8 & \neq 0 & 0 & \{1\} \times (0,\infty) & m & -2(m^2+1) & K^0 \rtimes \langle R \rangle\\
8 & \neq 0 & 0 & \{1\} \times (0,\infty) & -m & -2(m^2-1) & K^0 \rtimes \langle R^2 \rangle\\ \hline\hline
7 & 0 & \neq 0 & \{-1\} \times (0,1] & \frac{1}{m} & be^{2h},\ b > 0 & K^0\\
7 & 0 & \neq 0 & \{1\} \times (0,1) & -\frac{1}{m} & be^{2h},\ b \in \mathbb{R}^\times & K^0 \rtimes \langle R^2 \rangle\\ \hline
7 & 0 & 0 & (1,1) & -1 & be^{2h},\ b \in \mathbb{R}^\times & K^0 \rtimes \langle R^2 \rangle\\ \hline
7 & \neq 0 & 0 & \{1\} \times (0,\infty) & m & -2(m^2+1) + be^{2h},\ b \in \mathbb{R} & K^0 \rtimes \langle R \rangle\\
7 & \neq 0 & 0 & \{1\} \times (0,\infty) & -m & -2(m^2-1) + be^{2h},\ b \in \mathbb{R} & K^0 \rtimes \langle R^2 \rangle\\ \hline
\end{array}
\end{align*}
($h$ is a nonconstant function such that $dh = -n\omega^5 + m\omega^7$)
\end{table}
\begin{remark}
Vranceanu explicitly derived the following constant torsion cases:
\begin{itemize}\itemsep=0pt
\item 9-dim. symmetry: $\epsilon = 1$, $\Delta_1=0$, $B=0$;
\item 8-dim. symmetry:
\begin{enumerate}\itemsep=0pt
\item[1)] $\epsilon = 1$, $\Delta_1\neq 0$, $\Delta_2 =0$, $m=n=0$,
\item[2)] $\epsilon = 1$, $\Delta_1\neq 0$, $\Delta_2 =0$, $n=\pm m$, $B = -2(m^2\pm 1)$.
\end{enumerate}
\end{itemize}
\end{remark}
\begin{theorem}
All contact-inequivalent generic hyperbolic structures for which:{\samepage
\begin{enumerate}\itemsep=0pt
\item[1)] $K^0$ is a subgroup of the structure group, and
\item[2)] $m$, $n$ are constants,
\end{enumerate}
are displayed in Table~{\rm \ref{streqn-classification}}.}
\end{theorem}
For ease of reference, we state below the structure equations explicitly for each of the cases above. For the maximally symmetric cases, we state the structure equations for both the base coframe $\{ \omega^1, \dots, \omega^7 \}$ and the lifted coframe on $\Sigma_7 \times G_\Gamma$. In the submaximally symmetric cases, we only display structure equations for the lifted coframe. (One can obtain the structure equations on the base simply by setting $\hat\alpha^1 = 0$ and removing all hats from the remaining variables.)
In each case, we assume that $G_\Gamma$ and all parameters are as in Table~\ref{streqn-classification}. Note that $d\hat\omega^i$ are determined by~\eqref{gen-lifted-streqns}. After possibly performing some Lie algebra-valued compatible absorption, $\hat\alpha^\rho = \alpha^\rho + \lambda^\rho{}_i \hat\omega^i$, the structure equations $d\hat\alpha^\rho$ are determined by the integrability conditions $d^2\hat\omega^i=0$. (We only display the f\/inal results.)
For those coframes whose structure equations depend explicitly on the (nonconstant) function~$h$, we have $m\neq 0$ (c.f.\ Table~\ref{streqn-classification}) and the symmetry algebra is determined by restricting to the level set $h=h_0$, where $h_0$ is a constant. (Note: We will abuse notation and identify $h \in C^\infty(\Sigma_7)$ with its pullback to the bundle.) On this level set, we have $0 = dh = -n \hat\omega^5 + m\hat\omega^7$. We can choose (the pullback of) $\{ \hat\omega^1, \dots, \hat\omega^6, \hat\alpha^1 \}$ as a coframe on each level set, and the corresponding structure equations will have constant coef\/f\/icients. Thus, these are Maurer--Cartan equations for a local Lie group. A well-known fact is that the isomorphism type of the symmetry algebra of a coframe determined in this way is independent of the level set chosen. Consequently, we make the canonical choice and restrict to the level set $h=0$ in these cases.
The structure constants for the (contact) symmetry algebra for each of the structures can be read of\/f from the structure equations for the coframe (or its pullback to the level set $h=0$ if~$h$ appears explicitly). Only the symmetry algebras appearing in the 9-dimensional case will be studied in further detail in this article.
\subsection[Case 1: $\Delta_1 = 0$, $B = 0$]{Case 1: $\boldsymbol{\Delta_1 = 0}$, $\boldsymbol{B = 0}$}
This branch consists of precisely all maximally symmetric generic hyperbolic equations.
Parameters: \qquad $(\epsilon,m) \in \{\pm 1\} \times (0,1]$.
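The structure equations below are obtained from \eqref{Vranceanu-red-coframe} by setting $B = 0$ and $n = -\frac{\epsilon}{m}$ (i.e.\ $\Delta_1 = 0$), with $m$, $n$ constant; in particular, the torsion coef\/f\/icients then reduce to $\gam{4}{27} = \gam{6}{35} = \Delta_1 = 0$.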
Base coframe:
\begin{gather}
d\omega^1 = \omega^2 \wedge \omega^4 + \omega^3 \wedge \omega^6,\nonumber\\
d\omega^2 = \omega^4 \wedge \omega^5 + \omega^3 \wedge \omega^7 + \omega^2 \wedge \left( \frac{3\epsilon}{2m} \omega^5 + \frac{m}{2} \omega^7 \right),\nonumber\\
d\omega^3 = \omega^6 \wedge \omega^7 + \epsilon\omega^2 \wedge \omega^5 + \omega^3 \wedge \left( \frac{\epsilon}{2m} \omega^5 + \frac{3m}{2} \omega^7 \right),\nonumber\\
d\omega^4 = \epsilon \omega^5 \wedge \omega^6 - \omega^4 \wedge \left( \frac{3\epsilon}{2m} \omega^5 + \frac{m}{2} \omega^7 \right), \label{9dim-streqns}\\
d\omega^5 = m \omega^5 \wedge \omega^7,\nonumber\\
d\omega^6 = - \omega^4 \wedge \omega^7 - \omega^6 \wedge \left( \frac{\epsilon}{2m} \omega^5 + \frac{3m}{2} \omega^7 \right),\nonumber\\
d\omega^7 = -\frac{\epsilon}{m} \omega^5 \wedge \omega^7.\nonumber
\end{gather}
Lifted coframe on $\Sigma_7 \times G_\Gamma$:
\begin{gather*}
d\hat\omega^1 = 2\hat\alpha^1 \wedge \hat\omega^1 + \hat\omega^2 \wedge \hat\omega^4 + \hat\omega^3 \wedge \hat\omega^6,\\
d\hat\omega^2 = \hat\alpha^2 \wedge \hat\omega^1 + \hat\alpha^1 \wedge \hat\omega^2 + \hat\omega^4 \wedge \hat\omega^5 + \hat\omega^3 \wedge \hat\omega^7 + \hat\omega^2 \wedge \left( \frac{3\epsilon}{2m} \hat\omega^5 + \frac{m}{2} \hat\omega^7 \right),\\
d\hat\omega^3 = m \hat\alpha^2 \wedge \hat\omega^1 + \hat\alpha^1 \wedge \hat\omega^3 + \hat\omega^6 \wedge \hat\omega^7 + \epsilon\hat\omega^2 \wedge \hat\omega^5 + \hat\omega^3 \wedge \left( \frac{\epsilon}{2m} \hat\omega^5 + \frac{3m}{2} \hat\omega^7 \right),\\
d\hat\omega^4 = \hat\alpha^1 \wedge \hat\omega^4 + \epsilon \hat\omega^5 \wedge \hat\omega^6 - \hat\omega^4 \wedge \left( \frac{3\epsilon}{2m} \hat\omega^5 + \frac{m}{2} \hat\omega^7 \right), \\
d\hat\omega^5 = \epsilon m \hat\alpha^2 \wedge \hat\omega^4 + m \hat\omega^5 \wedge \hat\omega^7,\\
d\hat\omega^6 = \hat\alpha^1 \wedge \hat\omega^6 - \hat\omega^4 \wedge \hat\omega^7 - \hat\omega^6 \wedge \left( \frac{\epsilon}{2m} \hat\omega^5 + \frac{3m}{2} \hat\omega^7 \right),\\
d\hat\omega^7 = \hat\alpha^2 \wedge \hat\omega^6 - \frac{\epsilon}{m} \hat\omega^5 \wedge \hat\omega^7,\\
d\hat\alpha^1 = \frac{1}{2} \hat\alpha^2 \wedge (\hat\omega^4 + m \hat\omega^6), \\
d\hat\alpha^2 = \hat\alpha^2 \wedge \left(\hat\alpha^1 + \frac{3}{2} \left(\frac{\epsilon}{m} \hat\omega^5 + m \hat\omega^7\right)\right).
\end{gather*}
\subsection[Case 2: $\Delta_2 = 0$, $B$ constant]{Case 2: $\boldsymbol{\Delta_2 = 0}$, $\boldsymbol{B}$ constant}
\label{8d-str-eqs}
This branch contains two families of equations with 8-dimensional symmetry. All coef\/f\/icients in both sets of structure equations are constants.
\subsubsection[Case 2a: $m=n=0$]{Case 2a: $\boldsymbol{m=n=0}$}
\vspace{-5mm}
\begin{gather*}
d\hat\omega^1 = 2 \hat\alpha^1 \wedge \hat\omega^1 + \hat\omega^3 \wedge \hat\omega^6 + \hat\omega^2 \wedge \hat\omega^4, \\
d\hat\omega^2 = \hat\alpha^1 \wedge \hat\omega^2 + \hat\omega^4 \wedge \hat\omega^5 + \hat\omega^3 \wedge \hat\omega^7, \\
d\hat\omega^3 = \hat\alpha^1 \wedge \hat\omega^3 + \hat\omega^6 \wedge \hat\omega^7 + \epsilon \hat\omega^2 \wedge \hat\omega^5, \\
d\hat\omega^4 = \hat\alpha^1 \wedge \hat\omega^4 + \epsilon \hat\omega^5 \wedge \hat\omega^6 + b\hat\omega^2 \wedge \hat\omega^5 +\epsilon \hat\omega^2 \wedge \hat\omega^7, \\
d\hat\omega^5 = 0, \\
d\hat\omega^6 = \hat\alpha^1 \wedge \hat\omega^6 - \hat\omega^4 \wedge \hat\omega^7 + \epsilon \hat\omega^3 \wedge \hat\omega^5 + \epsilon b \hat\omega^3 \wedge \hat\omega^7, \\
d\hat\omega^7 = 0, \\
d\hat\alpha^1 = 0.
\end{gather*}
\subsubsection[Case 2b: $n=\epsilon_1 m \neq 0$ (and $\epsilon = 1$)]{Case 2b: $\boldsymbol{n=\epsilon_1 m \neq 0}$ (and $\boldsymbol{\epsilon = 1}$)}
\vspace{-5mm}
\begin{gather*}
d\hat\omega^1 = 2 \hat\alpha^1 \wedge \hat\omega^1 + \hat\omega^3 \wedge \hat\omega^6 + \hat\omega^2 \wedge \hat\omega^4, \\
d\hat\omega^2 = \hat\alpha^1 \wedge \hat\omega^2 + \hat\omega^4 \wedge \hat\omega^5 + \hat\omega^3 \wedge \hat\omega^7 - \hat\omega^2 \wedge \left( \frac{3\epsilon_1 m}{2} \hat\omega^5 - \frac{m}{2} \hat\omega^7 \right),\\
d\hat\omega^3 = \hat\alpha^1 \wedge \hat\omega^3 + \hat\omega^6 \wedge \hat\omega^7 + \epsilon \hat\omega^2 \wedge \hat\omega^5 - \hat\omega^3 \wedge \left( \frac{\epsilon_1 m}{2} \hat\omega^5 - \frac{3m}{2} \hat\omega^7 \right),\\
d\hat\omega^4 = \hat\alpha^1 \wedge \hat\omega^4 + \epsilon \hat\omega^5 \wedge \hat\omega^6 + (m^2+\epsilon_1) \hat\omega^2 \wedge \left(-2 \hat\omega^5 +\epsilon_1 \hat\omega^7\right) + \hat\omega^4 \wedge \left(\frac{3\epsilon_1 m}{2} \hat\omega^5 - \frac{m}{2} \hat\omega^7\right), \\
d\hat\omega^5 = m \hat\omega^5 \wedge \hat\omega^7, \\
d\hat\omega^6 = \hat\alpha^1 \wedge \hat\omega^6 - \hat\omega^4 \wedge \hat\omega^7 + (m^2+\epsilon_1) \hat\omega^3 \wedge \left( \epsilon_1 \hat\omega^5 - 2 \hat\omega^7\right) + \hat\omega^6 \wedge \left(\frac{\epsilon_1 m}{2} \hat\omega^5 - \frac{3m}{2} \hat\omega^7\right), \\
d\hat\omega^7 = \epsilon_1 m \hat\omega^5 \wedge \hat\omega^7,\\
d\hat\alpha^1 = 0.
\end{gather*}
\subsection[Case 3: $B$ nonconstant]{Case 3: $\boldsymbol{B}$ nonconstant}
\label{7d-str-eqs}
This branch contains two families of equations with 7-dimensional symmetry. Note that the case $\Delta_1=\Delta_2=0$, $\epsilon=m=-n=1$ is contained in both families.
\subsubsection[Case 3a: $\Delta_1 = 0$, $B$ nonconstant]{Case 3a: $\boldsymbol{\Delta_1 = 0}$, $\boldsymbol{B}$ nonconstant}
\vspace{-5mm}
\begin{gather*}
d\hat\omega^1 = 2\hat\alpha^1 \wedge \hat\omega^1 + \hat\omega^2 \wedge \hat\omega^4 + \hat\omega^3 \wedge \hat\omega^6, \\
d\hat\omega^2 = \hat\alpha^1 \wedge \hat\omega^2 + \hat\omega^4 \wedge \hat\omega^5 + \hat\omega^3 \wedge \hat\omega^7 + \hat\omega^2 \wedge \left( \frac{3\epsilon}{2m} \hat\omega^5 + \frac{m}{2} \hat\omega^7 \right), \\
d\hat\omega^3 = \hat\alpha^1 \wedge \hat\omega^3 + \hat\omega^6 \wedge \hat\omega^7 + \epsilon\hat\omega^2 \wedge \hat\omega^5 + \hat\omega^3 \wedge \left( \frac{\epsilon}{2m} \hat\omega^5 + \frac{3m}{2} \hat\omega^7 \right),\\
d\hat\omega^4 = \hat\alpha^1 \wedge \hat\omega^4 + \epsilon \hat\omega^5 \wedge \hat\omega^6 + be^{2h} \hat\omega^2 \wedge \hat\omega^5 - \hat\omega^4 \wedge \left(\frac{3\epsilon}{2m} \hat\omega^5 + \frac{m}{2} \hat\omega^7\right), \\
d\hat\omega^5 = m \hat\omega^5 \wedge \hat\omega^7,\\
d\hat\omega^6 = \hat\alpha^1 \wedge \hat\omega^6 - \hat\omega^4 \wedge \hat\omega^7 + \epsilon be^{2h} \hat\omega^3 \wedge \hat\omega^7 - \hat\omega^6 \wedge \left(\frac{\epsilon}{2m} \hat\omega^5 + \frac{3m}{2} \hat\omega^7\right), \\
d\hat\omega^7 = - \frac{\epsilon}{m} \hat\omega^5 \wedge \hat\omega^7,\\
d\hat\alpha^1 = 0.
\end{gather*}
On the level set $\{ h = 0 \}$: In this case, $\hat\omega^7 = -\frac{\epsilon}{m^2} \hat\omega^5$.
\begin{gather*}
d\hat\omega^1 = 2\hat\alpha^1 \wedge \hat\omega^1 + \hat\omega^2 \wedge \hat\omega^4 + \hat\omega^3 \wedge \hat\omega^6, \\
d\hat\omega^2 = \hat\alpha^1 \wedge \hat\omega^2 + \left( \frac{\epsilon}{m} \hat\omega^2 - \frac{\epsilon}{m^2} \hat\omega^3 + \hat\omega^4 \right) \wedge \hat\omega^5, \\
d\hat\omega^3 = \hat\alpha^1 \wedge \hat\omega^3 + \left( \epsilon\hat\omega^2 - \frac{\epsilon}{m} \hat\omega^3 - \frac{\epsilon}{m^2} \hat\omega^6 \right) \wedge \hat\omega^5,\\
d\hat\omega^4 = \hat\alpha^1 \wedge \hat\omega^4 + \left(b \hat\omega^2 - \frac{\epsilon}{m} \hat\omega^4 - \epsilon \hat\omega^6 \right) \wedge \hat\omega^5, \\
d\hat\omega^5 = 0,\\
d\hat\omega^6 = \hat\alpha^1 \wedge \hat\omega^6 - \frac{1}{m^2} ( b \hat\omega^3 - \epsilon \hat\omega^4 - \epsilon m \hat\omega^6 ) \wedge \hat\omega^5, \\
d\hat\alpha^1 = 0.
\end{gather*}
\subsubsection[Case 3b: $\Delta_2 = 0$, $B$ nonconstant]{Case 3b: $\boldsymbol{\Delta_2 = 0}$, $\boldsymbol{B}$ nonconstant}
\vspace{-5mm}
\begin{gather*}
d\hat\omega^1 = 2 \hat\alpha^1 \wedge \hat\omega^1 + \hat\omega^3 \wedge \hat\omega^6 + \hat\omega^2 \wedge \hat\omega^4, \\
d\hat\omega^2 = \hat\alpha^1 \wedge \hat\omega^2 + \hat\omega^4 \wedge \hat\omega^5 + \hat\omega^3 \wedge \hat\omega^7 - \hat\omega^2 \wedge \left( \frac{3\epsilon_1 m}{2} \hat\omega^5 - \frac{m}{2} \hat\omega^7 \right),\\
d\hat\omega^3 = \hat\alpha^1 \wedge \hat\omega^3 + \hat\omega^6 \wedge \hat\omega^7 + \hat\omega^2 \wedge \hat\omega^5 - \hat\omega^3 \wedge \left( \frac{\epsilon_1 m}{2} \hat\omega^5 - \frac{3m}{2} \hat\omega^7 \right),\\
d\hat\omega^4 = \hat\alpha^1 \wedge \hat\omega^4 + \hat\omega^5 \wedge \hat\omega^6 + (-2(m^2+\epsilon_1)+be^{2h}) \hat\omega^2 \wedge \hat\omega^5 +(\epsilon_1 m^2+1) \hat\omega^2 \wedge \hat\omega^7 \\
\phantom{d\hat\omega^4 =}{} + \hat\omega^4 \wedge \left(\frac{3\epsilon_1 m}{2} \hat\omega^5 - \frac{m}{2} \hat\omega^7\right), \\
d\hat\omega^5 = m \hat\omega^5 \wedge \hat\omega^7, \\
d\hat\omega^6 = \hat\alpha^1 \wedge \hat\omega^6 - \hat\omega^4 \wedge \hat\omega^7 + (\epsilon_1 m^2+1) \hat\omega^3 \wedge\hat\omega^5 + (-2(m^2+\epsilon_1)+be^{2h}) \hat\omega^3 \wedge \hat\omega^7 \\
\phantom{d\hat\omega^6 =}{} + \hat\omega^6 \wedge \left(\frac{\epsilon_1 m}{2} \hat\omega^5 - \frac{3m}{2} \hat\omega^7\right), \\
d\hat\omega^7 = \epsilon_1 m \hat\omega^5 \wedge \hat\omega^7,\\
d\hat\alpha^1 = 0.
\end{gather*}
On the level set $\{ h = 0 \}$: In this case, $\hat\omega^7 = \epsilon_1 \hat\omega^5$.
\begin{gather*}
d\hat\omega^1 = 2 \hat\alpha^1 \wedge \hat\omega^1 + \hat\omega^3 \wedge \hat\omega^6 + \hat\omega^2 \wedge \hat\omega^4, \\
d\hat\omega^2 = \hat\alpha^1 \wedge \hat\omega^2 + ( \hat\omega^4 + \epsilon_1 \hat\omega^3 - \epsilon_1 m \hat\omega^2 ) \wedge \hat\omega^5,\\
d\hat\omega^3 = \hat\alpha^1 \wedge \hat\omega^3 + \epsilon_1 (\hat\omega^6 + \epsilon_1 \hat\omega^2 + m \hat\omega^3 ) \wedge \hat\omega^5 ,\\
d\hat\omega^4 = \hat\alpha^1 \wedge \hat\omega^4 + (- \hat\omega^6 + (-(m^2+\epsilon_1)+b) \hat\omega^2 + \epsilon_1 m \hat\omega^4 ) \wedge \hat\omega^5, \\
d\hat\omega^5 = 0,\\
d\hat\omega^6 = \hat\alpha^1 \wedge \hat\omega^6 + \epsilon_1 (- \hat\omega^4 + (-(m^2+\epsilon_1)+b) \hat\omega^3 - m \hat\omega^6 ) \wedge \hat\omega^5, \\
d\hat\alpha^1 = 0.
\end{gather*}
\section{The maximally symmetric case}
\label{maxsym-case}
\subsection{A coframing in local coordinates}
For the remainder of the paper we focus on the maximally symmetric generic hyperbolic structures. In Appendix~\ref{maxsym-param}, we outline how Vranceanu arrived at an explicit coframe $\{ \omega^i \}_{i=1}^7$ on~$\Sigma_7$ given in local coordinates which satisf\/ies the structure equations \eqref{9dim-streqns}. In local coordinates $(x,y,z,p,q,u,v)$ on $\Sigma_7$, the coframe is given by
\begin{gather}
\omega^1 = dz - pdx - qdy, \nonumber\\
\omega^2 = \left( \frac{\epsilon m^2}{6} - \frac{ m \alpha v^3}{3u^3} + \frac{\alpha v^2}{2u^2} \right)\omega^6 + \left( -\frac{\epsilon m}{3} - \frac{\alpha v^3}{3u^3} \right) \omega^4 - u^{-3/2} (dp + vdq),\nonumber\\
\omega^3 = \left( \frac{\epsilon m^2}{6} - \frac{ m \alpha v^3}{3u^3} + \frac{\alpha v^2}{2u^2} \right) \omega^4 + \left( -\frac{\epsilon m^3}{3} - \frac{ m^2 \alpha v^3}{3u^3} + \frac{ m \alpha v^2}{u^2} - \frac{\alpha v}{u} \right) \omega^6 \nonumber\\
\phantom{\omega^3 =}{} - m u^{-3/2} (dp + v dq) + u^{-1/2} dq, \label{9d-explicit-coframe}\\
\omega^4 = u^{3/2} dx + m \sqrt{u}(dy - vdx),
\qquad \omega^5 = \frac{\epsilon m( du - m dv)}{u}, \nonumber\\
\omega^6 = -\sqrt{u} (dy-vdx),
\qquad \omega^7 = \frac{dv}{u},\nonumber
\end{gather}
which is valid on the open set $u > 0$, and where $\alpha = 1 - \epsilon m^4$.
The coordinates $(x,y,z,p,q)$ are identif\/ied with the corresponding coordinates on $J^1(\mathbb{R}^2,\mathbb{R})$.
Note that, in the case $\Delta_1=0$, $\alpha$ is a relative contact invariant: since $m \neq 0$, we have $\alpha = -\epsilon m^2 \left( m^2 - \frac{\epsilon}{m^2} \right) = -\epsilon m^2 \Delta_2$, and $\Delta_2$ is a relative contact invariant. Since the contact-inequivalent structures are parametrized by $(\epsilon, m) \in \{\pm 1\} \times (0,1]$, we have $\alpha \in [0,1) \cup (1,2]$.
\subsection{Normal forms}
Let us determine how the coordinates $(u,v)$ on $\Sigma_7$ are related to the standard 2-jet coordinates $(x,y,z,p,q,r,s,t) \in J^2(\mathbb{R}^2,\mathbb{R})$. Let $\chi : \mathbb{R}^2 \ra \Sigma_7$ be any integral manifold of $I_F$ with independence condition $\chi^*(dx \wedge dy) \neq 0$. Without loss of generality, we identify the coordinates $(x,y)$ on $\mathbb{R}^2$ with the $(x,y)$ coordinates on $\Sigma_7$. The composition $i_F \circ \chi$ is then an integral manifold of the contact system $\contact{2}$ and on $\mathbb{R}^2$ we can write
\begin{gather}
dp = r dx + s dy, \qquad dq = sdx + tdy, \label{dp-dq}
\end{gather}
where for convenience $p$ is identif\/ied with $(i_F \circ \chi)^* p$, and similarly for the coordinates $q$, $r$, $s$,~$t$. Substituting \eqref{dp-dq} into the conditions $0 = \chi^* \omega^2 = \chi^* \omega^3$, and extracting the coef\/f\/icients of $dx$ and $dy$, we obtain the relations
\begin{gather*}
0 = 6vs+6r+2\epsilon m u^3-3\epsilon m^2u^2v-\alpha v^3, \\
0 = 2vt+2s+\epsilon m^2u^2+\alpha v^2, \\
0 = -6su+ m(6sv + 6r - \alpha v^3)+3v\epsilon m^3u^2+3\alpha v^2u -\epsilon m^2u^3, \\
0 = -2tu+ m (2tv+2s+\alpha v^2)-\epsilon m^3u^2-2\alpha vu,
\end{gather*}
or equivalently, using the coordinate $w = u - m v$ instead of $u$, we have
\begin{gather}
r = -\frac{1}{3} (\epsilon m w^3 + v^3 ),
\qquad s = -\frac{1}{2} ( \epsilon m^2 w^2 - v^2),
\qquad t = - (\epsilon m^3 w+ v). \label{rst-param}
\end{gather}
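As a consistency check, substituting \eqref{rst-param} (with $u = w + mv$ and $\alpha = 1 - \epsilon m^4$) into, e.g., the second of the four relations above gives
\begin{gather*}
2vt+2s+\epsilon m^2u^2+\alpha v^2 = -2v\left(\epsilon m^3 w + v\right) - \left(\epsilon m^2 w^2 - v^2\right) + \epsilon m^2 (w+mv)^2 + \left(1 - \epsilon m^4\right) v^2 = 0,
\end{gather*}
and the remaining relations are verif\/ied similarly.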
Thus, our PDE is of the form
\begin{gather*}
F(r,s,t) = 0,
\end{gather*}
and we have a nondegenerate parametrization $i_F : \Sigma_7 \ra J^2(\mathbb{R}^2,\mathbb{R})$ (for $u = w + mv > 0$).
Consider the case $\alpha = 0$, i.e.\ $(\epsilon, m) = (1,1)$. In this case, it is straightforward to eliminate both parameters $w$, $v$ and obtain the equation
\begin{gather}
rt - s^2 - \frac{t^4}{12} = 0.
\label{special-maxsym-eqn}
\end{gather}
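Indeed, for $(\epsilon, m) = (1,1)$ the parametrization \eqref{rst-param} reads
\begin{gather*}
r = -\frac{1}{3} \left( w^3 + v^3 \right), \qquad s = -\frac{1}{2} \left( w^2 - v^2 \right), \qquad t = -(w+v),
\end{gather*}
and a direct expansion conf\/irms that $rt - s^2 - \frac{t^4}{12}$ vanishes identically in $(w,v)$.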
Now consider the general case $\alpha \neq 0$. Let us write $u = -\frac{1}{\tilde{u}}$, $v = \tilde{v}$ and rewrite \eqref{rst-param} as
\begin{gather*}
\tilde{u} t = \epsilon m^3 - \alpha \tilde{u} \tilde{v}, \\
\tilde{u}^2 s = -\frac{1}{2} \epsilon m^2 - \epsilon m^3 \tilde{u}\tilde{v} + \frac{1}{2} \alpha (\tilde{u} \tilde{v})^2
= -\frac{\epsilon m^2}{2\alpha} + \frac{\tilde{u}^2 t^2}{2\alpha},\\
\tilde{u}^3 r = \frac{1}{3} \epsilon m + \epsilon m^2 \tilde{u} \tilde{v} + \epsilon m^3 (\tilde{u} \tilde{v})^2 - \frac{1}{3} \alpha (\tilde{u} \tilde{v})^3
= \frac{m(\epsilon+m^4)}{3\alpha^2} - \frac{\epsilon m^2 \tilde{u} t}{\alpha^2} + \frac{\tilde{u}^3 t^3}{3\alpha^2},
\end{gather*}
and so using $\nu = (\epsilon m^3 - \alpha \tilde{u} \tilde{v})^{-1}$ as a new parameter, we arrive at
\begin{gather*}
2\alpha s - t^2 = -\epsilon m^2 \nu^2 t^2, \qquad
3\alpha^2 r = m(\epsilon+m^4) \nu^3 t^3 - 3\epsilon m^2 \nu^2 t^3 + t^3.
\end{gather*}
Eliminating the parameter $\nu$, we obtain
\begin{gather}
(\epsilon+m^4)^2 (2\alpha s - t^2)^3 + \epsilon m^4 (3\alpha^2 r - 6\alpha st + 2t^3)^2 = 0. \label{gen-eqn-alpha}
\end{gather}
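As a check on the elimination, set $\mu = \nu t$. The previous display gives $2\alpha s - t^2 = -\epsilon m^2 \mu^2$ and, since $6\alpha s t = 3t^3 - 3\epsilon m^2 \mu^2 t$, also $3\alpha^2 r - 6\alpha s t + 2t^3 = m(\epsilon+m^4) \mu^3$. Substituting into the left-hand side of \eqref{gen-eqn-alpha} yields
\begin{gather*}
(\epsilon+m^4)^2 \left( -\epsilon m^2 \mu^2 \right)^3 + \epsilon m^4 \left( m(\epsilon+m^4) \mu^3 \right)^2 = (\epsilon+m^4)^2 m^6 \mu^6 \left( \epsilon - \epsilon^3 \right) = 0.
\end{gather*}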
Finally, use the scaling $\bar{x} = \frac{1}{\alpha} x$, which induces
\begin{gather*}
(\bar{r},\bar{s},\bar{t}) = \left( \alpha^2 r, \alpha s, t\right)
\end{gather*}
to eliminate $\alpha$ from \eqref{gen-eqn-alpha}. Dropping bars and letting $a=m^4$, we obtain
\begin{gather}
(\epsilon + a)^2 \left(2 s - t^2 \right)^3 + \epsilon a \left( 3r - 6st + 2t^3 \right)^2 = 0. \label{general-maxsym-eqn}
\end{gather}
Note that in the case $\epsilon=1$ considered by Vranceanu, the $st$ term in his equation is missing a factor of~2.
\begin{theorem} \label{param-maxsym-eqn} The contact-equivalence classes of maximally symmetric generic hyperbolic PDE are parametrized by $(\epsilon, a) \in \{ \pm 1 \} \times (0,1]$. Normal forms from each equivalence class are given by \eqref{special-maxsym-eqn} in the case $(\epsilon,a)=(1,1)$ and \eqref{general-maxsym-eqn} otherwise.
\end{theorem}
\begin{remark}
Letting $\epsilon=a=1$ in \eqref{general-maxsym-eqn}, we have $F= 4 \left(2 s - t^2 \right)^3 + \left( 3r - 6st + 2t^3 \right)^2 = 0$, and
\begin{gather*}
\Delta = F_r F_t - \frac{1}{4} F_s{}^2 = -36 (2s - t^2) F.
\end{gather*}
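Indeed, writing $A = 2s - t^2$ and $G = 3r - 6st + 2t^3$, so that $F = 4A^3 + G^2$, one computes $F_r = 6G$, $F_s = 24A^2 - 12tG$, $F_t = -24tA^2 + 12(t^2-s)G$, and hence $F_r F_t - \frac{1}{4} F_s{}^2 = -144 A^4 - 36 A G^2 = -36AF$.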
On the equation manifold (and hence on $\Sigma_7$), we have $\Delta=0$ and consequently, this limiting equation is parabolic.
\end{remark}
\subsection{Nine-dimensional symmetry algebras}
The calculations leading to \eqref{rst-param} are quite long, and consequently, to conf\/irm the validity of~\eqref{rst-param} (and, in turn, Theorem \ref{param-maxsym-eqn}), it is useful to describe the nine-dimensional (contact) symmetry algebra explicitly for the normal forms given in the previous section. Calculating the symmetry algebra is, however, a nontrivial task: the standard Lie method of calculating symmetries (by working in $J^2(\mathbb{R}^2,\mathbb{R})$ on the equation locus) is highly impractical owing to the complexity of the equations. In Appendix~\ref{9d-sym-alg}, we describe how the symmetry algebra was found by an alternative method.
In order to give a unif\/ied description of the symmetry algebras, we work with the normal forms~\eqref{special-maxsym-eqn} and~\eqref{gen-eqn-alpha} as these arise from the parametrization~\eqref{rst-param}.
\begin{proposition} \label{9d-syms} Any equation of the form $F(r,s,t)=0$ admits the symmetries
\begin{gather*}
X_1 = \parder{x}, \qquad X_2 =\parder{y}, \qquad X_3 =\parder{z}, \qquad X_4 =x\parder{z}, \qquad X_5 =y\parder{z},\\
X_6=x\parder{x} + y\parder{y} + 2z\parder{z}.
\end{gather*}
The equations \eqref{special-maxsym-eqn} and \eqref{gen-eqn-alpha} have the following additional symmetries:
\begin{gather*}
X_7 = y\parder{y} + 3z\parder{z},
\qquad X_8 = x \parder{y} - \frac{\alpha}{2} y^2 \parder{z},
\qquad X_9= x^2 \parder{x} + xy \parder{y} + \left(xz-\frac{\alpha}{6} y^3\right) \parder{z}.
\end{gather*}
In particular, all of these symmetries are projectable point symmetries.
\end{proposition}
(Recall that a {\em point} symmetry here is a vector f\/ield on $J^0(\mathbb{R}^2,\mathbb{R})$. A point symmetry is {\em projectable} if it projects to a vector f\/ield on the base $\mathbb{R}^2$.)
The normalization of \eqref{gen-eqn-alpha} to \eqref{general-maxsym-eqn} is carried out by letting $\bar{x} = \frac{1}{\alpha} x$ from which we get:
\begin{corollary} \label{cor:general-maxsym-eqn}
The generic hyperbolic equation \eqref{general-maxsym-eqn} has symmetry generators $X_1,\dots, X_6$ as in Proposition~{\rm \ref{9d-syms}} as well as
\begin{gather*}
X_7 = y\parder{y} + 3z\parder{z},
\qquad X_8 = x \parder{y} - \frac{1}{2} y^2 \parder{z},
\qquad X_9= x^2 \parder{x} + xy \parder{y} + \left(xz-\frac{1}{6} y^3\right) \parder{z}.
\end{gather*}
\end{corollary}
We will denote the corresponding abstract Lie algebras as $\mathfrak{g}_\alpha$ and express their commutator relations in a canonical basis. Let
\begin{gather*}
(e_1,e_2,e_3,e_4,e_5,e_6,e_7,e_8,e_9) = (X_2,X_3,X_4,X_5,-X_8,X_7,X_1,-2X_6+X_7,-X_9).
\end{gather*}
The commutator relations in this basis are
\begin{gather*}
\begin{array}{c|cccccc|ccc}
& e_1 & e_2 & e_3 & e_4 & e_5 & e_6 & e_7 & e_8 & e_9 \\ \hline
e_1 & \cdot & \cdot & \cdot & e_2 & \alpha e_4 & e_1 & \cdot & -e_1 & e_5 \\
e_2 & & \cdot & \cdot & \cdot & \cdot & 3 e_2 & \cdot & -e_2 & -e_3 \\
e_3 & & & \cdot & \cdot & \cdot & 3e_3 & -e_2 & e_3 & \cdot \\
e_4 & & & & \cdot & e_3 & 2e_4 & \cdot & \cdot & \cdot \\
e_5 & & & & & \cdot & e_5 & e_1 & e_5 & \cdot \\
e_6 & & & & & & \cdot & \cdot & \cdot &\cdot \\ \hline
e_7 & & & & & & & \cdot & -2e_7 & e_8\\
e_8 & & & & & & & & \cdot & -2e_9\\
e_9 & & & & & & & & & \cdot
\end{array}
\end{gather*}
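As a sample check, in the realization above we have $e_1 = \parder{y}$, $e_4 = y\parder{z}$ and $e_5 = -X_8 = -x\parder{y} + \frac{\alpha}{2}y^2\parder{z}$, so that $[e_1,e_5] = \alpha y \parder{z} = \alpha e_4$, in agreement with the table.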
In the case $\alpha \neq 0$, redef\/ining
\begin{gather*}
(\bar{e}_2, \bar{e}_3, \bar{e}_4 ) = (\alpha e_2, \alpha e_3, \alpha e_4)
\end{gather*}
and dropping the bars, we have the same commutator relations as above except $\alpha$ has been normalized to~1. Thus, in the case $\alpha \neq 0$, {\em all symmetry algebras are isomorphic}. (This is also obvious from the fact that the symmetry generators in Corollary \ref{cor:general-maxsym-eqn} are independent of~$\alpha$.)
Let $\mathfrak{g}_1$ denote the abstract symmetry algebra in the case $\alpha \neq 0$, although this is a slight abuse of notation since $\alpha \in (0,1) \cup (1,2]$ in this case. We calculate for $\mathfrak{g} = \mathfrak{g}_\delta$ ($\delta=0,1$),
\begin{alignat*}{3}
& \text{Killing form:} && \kappa = {\rm diag}\left(0,0,0,0,0,24,\left( \begin{array}{ccc} 0 & 0 & 6\\ 0 & 12 & 0\\ 6 & 0 & 0 \end{array}\right)\right),&\\
& \text{derived subalgebra:} & & \mathfrak{g}^{(1)} = \langle e_1, e_2, e_3, e_4, e_5, e_7, e_8, e_9 \rangle, & \\
& \text{radical:} && \gothic{r} = (\mathfrak{g}^{(1)})^{\perp_\kappa} = \langle e_1, e_2, e_3, e_4, e_5, e_6 \rangle, &\\
& \text{(semi-simple) Levi factor:}\quad && \mathfrak{g}_{ss} = \langle e_7, e_8, e_9 \rangle \cong \gothic{sl}(2,\mathbb{R}), &\\
& \text{Levi decomposition:} & & \mathfrak{g} = \gothic{r} \rtimes \mathfrak{g}_{ss}, &\\
& \text{nilradical:} && \gothic{n} = \langle e_1, e_2, e_3, e_4, e_5 \rangle, & \\
& \text{derived series of} \ \gothic{r}: && \gothic{r}^{(1)} = \gothic{n}, \quad \gothic{r}^{(2)} = \langle e_2, e_3, \delta e_4 \rangle, \quad
\gothic{r}^{(\infty)} = \gothic{r}^{(3)} = 0, & \\
& \text{lower central series of} \ \gothic{r}: && \gothic{r}^\infty = \gothic{r}^{1} = \gothic{n}. &
\end{alignat*}
An isomorphism between two Lie algebras must restrict to an isomorphism of their radicals and the corresponding derived f\/lags of the radicals. Since $\gothic{r}^{(2)}$ is two-dimensional for $\mathfrak{g}_0$ and three-dimensional for $\mathfrak{g}_1$, then we must have $\mathfrak{g}_0 \not\cong \mathfrak{g}_1$.
\begin{theorem} The contact symmetry algebra of any maximally symmetric generic hyperbolic PDE is:
\begin{enumerate}\itemsep=0pt
\item[1)] nine-dimensional,
\item[2)] contact-equivalent to a (projectable) point symmetry algebra.
\end{enumerate}
Moreover, there are exactly two isomorphism classes of Lie algebras (represented by $\mathfrak{g}_0$ and $\mathfrak{g}_1$) that arise as such symmetry algebras.
\end{theorem}
We remark that Mubarakzjanov has classif\/ied all f\/ive-dimensional real solvable Lie algebras (labelled by~$g_{5,*}$)~\cite{Mubar5-1963} and all six-dimensional non-nilpotent real solvable Lie algebras (labelled by~$g_{6,*}$)~\cite{Mubar6-1963}. The nilradicals of $\mathfrak{g}_0$ and $\mathfrak{g}_1$ can be identif\/ied in the former classif\/ication as:
\begin{gather*}
\gothic{n}_0 \cong g_{5,1}: \quad (\bar{e}_1,\bar{e}_2,\bar{e}_3,\bar{e}_4,\bar{e}_5) = (e_2,-e_3,e_1,e_5,e_4),\\
\gothic{n}_1 \cong g_{5,3}: \quad (\bar{e}_1,\bar{e}_2,\bar{e}_3,\bar{e}_4,\bar{e}_5) = (e_3,e_4,-e_2,e_1,e_5).
\end{gather*}
The radicals of $\mathfrak{g}_0$ and $\mathfrak{g}_1$ can be identif\/ied in the latter classif\/ication as
\begin{gather*}
\gothic{r}_0 \cong g_{6,54}: \quad (\bar{e}_1,\bar{e}_2,\bar{e}_3,\bar{e}_4,\bar{e}_5,\bar{e}_6)= \left(e_2,-e_3,e_1,e_5,e_4,\frac{1}{3} e_6\right), \quad\mbox{param.: } (\lambda,\gamma) = \left( 1, \frac{2}{3} \right),\\
\gothic{r}_1 \cong g_{6,76}: \quad (\bar{e}_1,\bar{e}_2,\bar{e}_3,\bar{e}_4,\bar{e}_5,\bar{e}_6) = \left(e_3,e_4,-e_2,e_1,e_5,\frac{1}{3} e_6 \right), \quad\mbox{param.: } h = 1.
\end{gather*}
Let us be more explicit about the direct verif\/ication of Proposition~\ref{9d-syms} from the point of view of external symmetries, internal symmetries, and symmetries of the lifted coframe on $\Sigma_7 \times H$.
\subsubsection{External symmetries}
Given any vector f\/ield $X$ on $J^0(\mathbb{R}^2,\mathbb{R})$, there is a corresponding prolonged vector f\/ield $X^{(2)}$ on~$J^2(\mathbb{R}^2,\mathbb{R})$. This prolongation is uniquely determined by the condition that $\Lieder{X^{(2)}} \contact{2} \subset \contact{2}$, where $\contact{2}$ is the contact system on $J^2(\mathbb{R}^2,\mathbb{R})$. See \eqref{prolongation} for the standard prolongation formula. For the vector f\/ields in Proposition \ref{9d-syms}, we have
\begin{gather*}
X_1^{(2)} = X_1, \qquad X_2^{(2)} = X_2, \qquad X_3^{(2)} = X_3, \\
X_4^{(2)} = X_4 + \parder{p}, \qquad X_5^{(2)} = X_5 + \parder{q}, \qquad
X_6^{(2)} = X_6 + p\parder{p} + q\parder{q}, \\
X_7^{(2)} = X_7 + 3p \parder{p} + 2q \parder{q} + 3r \parder{r} + 2s \parder{s} + t \parder{t},\\
X_8^{(2)} = X_8 - q \parder{p} - \alpha y \parder{q} - 2s \parder{r} - t \parder{s} - \alpha \parder{t},\\
X_9^{(2)} = X_9 + (z - xp - yq) \parder{p} - \frac{\alpha}{2} y^2 \parder{q} - (3xr+2ys) \parder{r} - (2xs+yt) \parder{s} - (xt+\alpha y) \parder{t}.
\end{gather*}
For \eqref{special-maxsym-eqn} or \eqref{gen-eqn-alpha}, we verify the external symmetry condition
\begin{gather*}
\Lieder{X_i{}^{(2)}} F =0 \qbox{whenever} F=0.
\end{gather*}
Clearly this is satisf\/ied by $X_i^{(2)}$, $i=1,\dots,6$ since they have no components in the $\parder{r}$, $\parder{s}$, $\parder{t}$ direction and since $F=F(r,s,t)$ for \eqref{special-maxsym-eqn} and~\eqref{gen-eqn-alpha}. For the remaining vector f\/ields we have
\begin{gather*}
\eqref{special-maxsym-eqn}: \qquad \Lieder{X_7^{(2)}} F = 4 F, \qquad
\Lieder{X_8^{(2)}} F = 0, \qquad
\Lieder{X_9^{(2)}} F = -4x F,\\
\eqref{gen-eqn-alpha}: \qquad \Lieder{X_7^{(2)}} F = 6 F, \qquad
\Lieder{X_8^{(2)}} F = 0, \qquad
\Lieder{X_9^{(2)}} F = -6x F,
\end{gather*}
and so the external symmetry condition is satisf\/ied.
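These scalings can also be seen directly: the $\parder{r}$, $\parder{s}$, $\parder{t}$ components of $X_7^{(2)}$ form the Euler-type f\/ield $3r\parder{r} + 2s\parder{s} + t\parder{t}$, and each monomial of \eqref{special-maxsym-eqn} (respectively \eqref{gen-eqn-alpha}) is homogeneous of weight $4$ (respectively $6$) when $r$, $s$, $t$ are assigned the weights $3$, $2$, $1$.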
\subsubsection{Internal symmetries}
The symmetry generators $X_i^{(2)}$ are all tangent to the equation manifold $F=0$, so they induce (via the parametrization \eqref{rst-param}) corresponding vector f\/ields $Z_i$ on $\Sigma_7$. Letting $X_i^{(1)} = (\pi^2_1)_* X_i^{(2)}$ denote the projection onto $J^1(\mathbb{R}^2,\mathbb{R})$, and identifying the coordinates $(x,y,z,p,q)$ on $J^1(\mathbb{R}^2,\mathbb{R})$ with corresponding coordinates on $\Sigma_7$, we have
\begin{gather*}
Z_i = X_i^{(1)}, \quad i=1,\dots,6, \qquad
Z_7 = X_7^{(1)} + w\parder{w} + v\parder{v}, \\
Z_8 = X_8^{(1)} + \parder{v} - m\parder{w}, \qquad
Z_9 = X_9^{(1)} - (m y + xw) \parder{w} + (y-xv) \parder{v},
\end{gather*}
with $u = w + mv$. One can verify directly that these vector f\/ields satisfy the internal symmetry condition
\begin{gather*}
\Lieder{Z_i} I_F \subset I_F,
\end{gather*}
where $I_F = \langle \omega^1, \omega^2, \omega^3 \rangle$ is given by the explicit coframing \eqref{9d-explicit-coframe}.
\subsubsection[Symmetries of the lifted coframe on $\Sigma_7 \times H'$, where $H' = H^0 \rtimes \langle R^2 \rangle$]{Symmetries of the lifted coframe on $\boldsymbol{\Sigma_7 \times H'}$, where $\boldsymbol{H' = H^0 \rtimes \langle R^2 \rangle}$}
The lifted coframe $\bm{\hat\omega} = \{ \hat\omega^1, \dots, \hat\omega^7, \hat\alpha^1, \hat\alpha^2 \}$ on $\Sigma_7 \times H'$ is parametrized by
\begin{gather*}
\hat\omega^1 = a_1{}^2 \omega^1,
\qquad \hat\omega^2 = a_1 \omega^2 + a_1 a_2 \omega^1,
\qquad \hat\omega^3 = a_1 \omega^3 + m a_1 a_2 \omega^1,\\
\hat\omega^4 = a_1 \omega^4,
\qquad \hat\omega^5 = \omega^5 + \epsilon m a_2 \omega^4,
\qquad \hat\omega^6 = a_1 \omega^6, \qquad
\hat\omega^7 = \omega^7 + a_2 \omega^6,\\
\hat\alpha^1 = \frac{da_1}{a_1} + \frac{a_2}{2a_1} (\hat\omega^4 + m \hat\omega^6), \qquad
\hat\alpha^2 = \frac{da_2}{a_1} + \frac{3a_2}{2a_1} \left( \frac{\epsilon}{ m} \hat\omega^5 + m\hat\omega^7 \right) - \frac{a_2{}^2}{2a_1{}^2} (\hat\omega^4 + m \hat\omega^6),
\end{gather*}
and by construction $\bm{\hat\omega}$ is an $\{e\}$-structure on $\Sigma_7 \times H'$, so that a symmetry is by def\/inition a~map $\Phi : \Sigma_7 \times H' \ra \Sigma_7 \times H'$ such that
\begin{gather*}
\Phi^* \bm{\hat\omega}^i = \bm{\hat\omega}^i, \qquad i=1,\dots, 9,
\end{gather*}
with inf\/initesimal analogue
\begin{gather*}
{\cal L}_{\hat{Z}} \bm{\hat\omega}^i = 0, \qquad i=1,\dots, 9.
\end{gather*}
Explicitly, these lifted vector f\/ields are given by
\begin{gather*}
\hat{Z}_i = Z_i, \quad i = 1,\dots,5, \qquad
\hat{Z}_6 = Z_6 - a_1 \parder{a_1} - a_2 \parder{a_2}, \qquad
\hat{Z}_7 = Z_7 - \frac{3a_1}{2} \parder{a_1} - \frac{3a_2}{2} \parder{a_2},\! \\
\hat{Z}_8 = Z_8, \qquad
\hat{Z}_9 = Z_9 - \frac{x a_1}{2} \parder{a_1} + \left( \frac{1}{(w+m v)^{3/2}} - \frac{x a_2}{2}\right) \parder{a_2}.
\end{gather*}
\subsection[Amp\`ere contact transformations and $3z_{xx} (z_{yy})^3 + 1=0$]{Amp\`ere contact transformations and $\boldsymbol{3z_{xx} (z_{yy})^3 + 1=0}$}
After deriving the normal form
\begin{gather}
rt - s^2 - \frac{t^4}{12} = 0, \label{special-maxsym-eqn2}
\end{gather}
which appeared in \eqref{special-maxsym-eqn}, Vranceanu remarks that if one makes an {\em Amp\`ere contact transformation}, then \eqref{special-maxsym-eqn2} can be reduced to the simpler form
\begin{gather}
rt^3 + \frac{1}{12} = 0. \label{special-maxsym-eqn3}
\end{gather}
The notion of an Amp\`ere contact transformation is never def\/ined in Vranceanu's paper and does not appear to be common terminology in the literature. This terminology is, however, referred to brief\/ly in recent work by Stormark (see page~275 in \cite{Stormark2000}). Namely, Stormark def\/ines it as the genuine (i.e.\ non-point) contact transformation $\Phi$ of $J^1(\mathbb{R}^2,\mathbb{R})$ given by
\begin{gather*}
(\bar{x},\bar{y},\bar{z},\bar{p},\bar{q}) = \left(p,y,z-px, -x, q\right)
\end{gather*}
which is clearly contact since
\begin{gather*}
d\bar{z} - \bar{p} d\bar{x} - \bar{q} d\bar{y} = d(z-px) + x dp - q dy = dz - p dx - q dy.
\end{gather*}
This is essentially akin to the Legendre transformation from Hamiltonian mechanics, but only acting with respect to the $x$, $z$, $p$ variables.
For our purposes, we consider the corresponding Legendre-like transformation acting with respect to the $y$, $z$, $q$ variables, namely
\begin{gather*}
(\bar{x},\bar{y},\bar{z},\bar{p},\bar{q}) = \left(x,q,z-qy, p, -y\right).
\end{gather*}
The prolongation of this transformation to $J^2(\mathbb{R}^2,\mathbb{R})$ satisf\/ies
\begin{gather*}
d\bar{p} - \bar{r} d\bar{x} - \bar{s} d\bar{y}
= dp - \bar{r} dx - \bar{s} dq \equiv rdx + sdy - \bar{r} dx - \bar{s}( sdx + tdy) \qquad \mod \contact{2}\\
\phantom{ d\bar{p} - \bar{r} d\bar{x} - \bar{s} d\bar{y}}{} \equiv (r - s\bar{s} - \bar{r}) dx + (s - t\bar{s}) dy \qquad \mod \contact{2},\\
d\bar{q} - \bar{s} d\bar{x} - \bar{t} d\bar{y}
= -dy - \bar{s} dx - \bar{t} dq \equiv -dy - \bar{s} dx - \bar{t} (sdx + tdy) \qquad \mod \contact{2}\\
\phantom{d\bar{q} - \bar{s} d\bar{x} - \bar{t} d\bar{y}}{} \equiv -(\bar{s} + s\bar{t}) dx - (1 + t\bar{t}) dy \qquad \mod \contact{2},
\end{gather*}
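Setting the coef\/f\/icients of $dx$ and $dy$ in these congruences to zero yields $r - s\bar{s} - \bar{r} = 0$, $s - t\bar{s} = 0$, $\bar{s} + s\bar{t} = 0$ and $1 + t\bar{t} = 0$,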
and hence
\begin{gather*}
(\bar{r},\bar{s},\bar{t}) = \left(\frac{rt-s^2}{t}, \frac{s}{t}, -\frac{1}{t} \right).
\end{gather*}
Consequently
\begin{gather*}
0 = rt - s^2 - \frac{t^4}{12} = -\frac{\bar{r}}{\bar{t}} - \frac{1}{12 \bar{t}^4} = -\frac{1}{\bar{t}^4} \left(\bar{r}\bar{t}^3 + \frac{1}{12} \right) \quad\Rightarrow\quad \bar{r}\bar{t}^3 + \frac{1}{12} = 0.
\end{gather*}
By applying the subsequent scaling $x = \frac{1}{2} \bar{x}$ (and hence $(r,s,t) = (4\bar{r},2\bar{s},\bar{t})$), we are led to the equation
\begin{gather}
3rt^3+1=0, \label{rt3}
\end{gather}
which was investigated by Goursat \cite{Goursat1898} who recognized its Darboux integrability.
Since \eqref{rt3} is contact-equivalent to \eqref{special-maxsym-eqn2}, it is clear that \eqref{rt3} is hyperbolic of generic type with $\Delta_1 = \Delta_2 = 0$ and $\epsilon = a = 1$. The standard Lie algorithm to calculate symmetries can be applied for this equation in a straightforward manner. Its contact symmetry algebra consists of (projectable) point symmetries $X_1,\dots, X_6$ as in Proposition~\ref{9d-syms} as well as
\begin{gather}
X_7 = xy\parder{z}, \qquad
X_8 = 2y\parder{y} + 3z\parder{z}, \qquad
X_9 = x^2\parder{x} + xz\parder{z}.
\label{rt3-sym-alg}
\end{gather}
We note that the vector f\/ields $X_7$, $X_8$, $X_9$ have prolongations
\begin{gather*}
X_7^{(2)} = X_7 + y\parder{p} + x\parder{q} + \parder{s},\\
X_8^{(2)} = X_8 + 3p\parder{p} + q\parder{q} + 3r\parder{r} + s\parder{s} - t\parder{t},\\
X_9^{(2)} = X_9 + (z-xp)\parder{p} + xq\parder{q} - 3xr\parder{r} + (q-xs)\parder{s} + xt\parder{t}.
\end{gather*}
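Since $F = 3rt^3+1$ depends only on $r$ and $t$, one checks immediately that $\Lieder{X_7^{(2)}} F = 0$, while $\Lieder{X_8^{(2)}} F = 9rt^3 - 9rt^3 = 0$ and $\Lieder{X_9^{(2)}} F = x\left( -9rt^3 + 9rt^3 \right) = 0$, conf\/irming that the generators \eqref{rt3-sym-alg} are indeed symmetries of \eqref{rt3}.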
\subsection{Darboux integrability}
\begin{definition} For a hyperbolic PDE $F=0$, $I_F$ is said to be Darboux-integrable (at level two) if each of $C(I_F,dM_1)$ and $C(I_F,dM_2)$ contains a completely integrable subsystem of rank two that is independent from $I_F$.
\end{definition}
Recall that for our adapted coframe as in Theorem \ref{generic-hyp-str-eqns}, we have
\begin{gather*}
C(I_F,dM_1)^{(2)} = \{ \omega^4, \omega^5 \} \qquad\mbox{and}\qquad C(I_F,dM_2)^{(2)} = \{ \omega^6, \omega^7 \}.
\end{gather*}
\begin{theorem} \label{thm:Darboux-int} Given a generic hyperbolic PDE $F=0$ with (maximal) $9$-dimensional symmetry group, the second derived systems $C(I_F,dM_1)^{(2)}$ and $C(I_F,dM_2)^{(2)}$:
\begin{enumerate}\itemsep=0pt
\item[1)] are completely integrable, and hence $I_F$ is Darboux integrable, and
\item[2)] contain rank one completely integrable subsystems.
\end{enumerate}
\end{theorem}
\begin{proof}
Referring to the maximally symmetric structure equations \eqref{9dim-streqns}, we have that
\begin{gather*}
d\omega^4 \equiv d\omega^5 \equiv 0 \mod C(I_F,dM_1)^{(2)}, \qquad
d\omega^6 \equiv d\omega^7 \equiv 0 \mod C(I_F,dM_2)^{(2)}.
\end{gather*}
Hence, the rank two systems $C(I_F,dM_i)^{(2)}$, $i=1,2$, are completely integrable and $I_F = \{ \omega^1, \omega^2, \omega^3 \}$ is Darboux integrable. Moreover, since
\begin{gather*}
d\omega^5 = m \omega^5 \wedge \omega^7, \qquad
d\omega^7 = -\frac{\epsilon}{m} \omega^5 \wedge \omega^7,
\end{gather*}
then the rank one subsystems $\{ \omega^5 \}$ and $\{ \omega^7\}$ are also completely integrable.
\end{proof}
Abstractly, Darboux's integration method for these systems proceeds as follows. Darboux integrability of $I_F$ implies the existence of completely integrable subsystems $J_i \subset C(I_F,dM_i)$. Applying the Frobenius theorem to each subsystem $J_i$, there exist local functions $f_i$, $g_i$ called {\em Riemann invariants} such that
\begin{gather*}
J_1 = \{ df_1, dg_1 \} \subset C(I_F,dM_1), \qquad
J_2 = \{ df_2, dg_2 \} \subset C(I_F,dM_2).
\end{gather*}
If $\varphi_1$, $\varphi_2$ are arbitrary functions, then restricting to any submanifold determined by
\begin{gather*}
S: \quad g_1 = \varphi_1 (f_1), \qquad g_2 = \varphi_2 (f_2),
\end{gather*}
the structure equations \eqref{hyp-str-eqns} become
\begin{gather*}
d\tilde{\omega}^i \equiv 0 \quad \mod \tilde{I}_F, \qquad i=1,2,3,
\end{gather*}
where $\tilde{I}_F = \{ \tilde{\omega}^1, \tilde{\omega}^2, \tilde{\omega}^3 \}$ is the restriction of $I_F$ to $S$. Hence, $\tilde{I}_F$ is completely integrable, and so there exist local functions $h_1$, $h_2$, $h_3$ on $S$ such that
\begin{gather*}
\tilde{I}_F = \{ dh_1, dh_2, dh_3 \}.
\end{gather*}
Hence, these functions $h_1$, $h_2$, $h_3$ are f\/irst integrals of $\tilde{I}_F$, and together with the constraint $S$ determine f\/irst integrals of $I_F$.
Explicitly, from our parametrization of the coframe $\{ \omega^i \}_{i=1}^7$ on $\Sigma_7$ (c.f.\ \eqref{9d-explicit-coframe}), we have:
\begin{gather*}
\omega^4 = u^{3/2} dx + m\sqrt{u} (dy - vdx) = \sqrt{u} (wdx + mdy)
= \sqrt{u}(d(my+wx) - xdw),\\
\omega^5 = \frac{\epsilon m}{u} (du - mdv) = \frac{\epsilon m}{u} dw, \\
\omega^6 = - \sqrt{u} (dy - vdx) = -\sqrt{u} (d(y-vx) + xdv),\\
\omega^7 = \frac{dv}{u},
\end{gather*}
and so
\begin{gather*}
C(I_F,dM_1)^{(2)} = \{ \omega^4, \omega^5 \} = \{ dw, d(my+wx) \},\\
C(I_F,dM_2)^{(2)} = \{ \omega^6, \omega^7 \} = \{ dv, d(y-vx) \},
\end{gather*}
where $w = u - mv$. Thus,
\begin{gather*}
w, \quad my+wx, \qquad\mbox{and}\qquad v, \quad y-vx
\end{gather*}
are Riemann invariants and, in principle, Darboux's integration method may be applied to f\/ind solutions or f\/irst integrals to the original equation. In \cite[Corollary~5.9]{GK1993}
Gardner--Kamran
asserted that hyperbolic equations of generic type do not have Riemann invariants. As f\/irst remarked by Eendebak~\cite{Eendebak2006}, this statement is incorrect and the equation $3rt^3+1=0$ is a~counterexample. Moreover, as described above, all maximally symmetric generic hyperbolic equations have Riemann invariants.
We refer the reader to page~130 in Goursat \cite{Goursat1898} for the implementation of Darboux's method to the equation $3rt^3+1=0$.
The implementation of Darboux's method in the case $(\epsilon,a) \neq (1,1)$ appears to be computationally quite dif\/f\/icult.
Let us comment on Darboux integrability for the submaximally symmetric cases described in Table~\ref{streqn-classification}. Recall that the structure equations listed in Sections \ref{8d-str-eqs} and \ref{7d-str-eqs} are those for the lifted coframe. To obtain the structure equations for the corresponding base coframe, we simply set $\hat\alpha^1 = 0$ and remove all hats. For all these cases we have either that
\begin{gather*}
C(I_F,dM_1)^{(3)} = \{ \omega^4, \omega^5 \}, \qquad C(I_F,dM_2)^{(3)} = \{ \omega^6, \omega^7 \}
\end{gather*}
and hence $I_F$ is Darboux integrable, or
\begin{gather*}
C(I_F,dM_1)^{(3)} = \{ \omega^5 \}, \qquad C(I_F,dM_2)^{(3)} = \{ \omega^7 \}
\end{gather*}
and $I_F$ is not Darboux integrable. We list the possibilities in Table~\ref{table:Darboux-int}.
Moreover, among these submaximally symmetric cases, all those which are Darboux integrable have one-dimensional subsystems of $C(I_F,dM_1)^{(2)}$ and $C(I_F,dM_2)^{(2)}$ which are completely integrable (namely, $\{ \omega^5 \}$ and $\{ \omega^7 \}$, respectively). Thus, the converse of Theorem \ref{thm:Darboux-int} is clearly {\em false}.
\begin{table}[h]
\centering
\caption{Darboux integrability of submaximally symmetric generic hyperbolic PDE.}
\label{table:Darboux-int}
\vspace{1mm}
\begin{tabular}{|c|c|}\hline
Case & Darboux integrable? \\ \hline\hline
2a & no\\
2b & no in general; yes if $(m,\epsilon_1) = (1,-1)$\\\hline
3a & yes\\
3b & no in general; yes if $(m,\epsilon_1) = (1,-1)$\\ \hline
\end{tabular}
\end{table}
\section{Concluding remarks}
\label{genhyp:conclusions}
Let us summarize some of the main results of this paper:
\begin{itemize}\itemsep=0pt
\item We derived relative invariants $I_1$, $I_2$ (see Theorem \ref{thm:hyp-contact-inv}) given parametrically in terms of an arbitrary hyperbolic equation $F(x,y,z,z_x,z_y,z_{xx},z_{xy},z_{yy}) = 0$. Their vanishing/non\-va\-ni\-shing distinguishes the three types of hyperbolic equations.
\item In the generic case, the $\epsilon$ contact invariant is given parametrically as $\epsilon = {\rm sgn}(I_1 I_2) = \pm 1$.
\item In the abstract analysis of the generic hyperbolic structure equations, we identif\/ied relative contact invariants $m$, $n$, $B$ and $\Delta_1 = mn + \epsilon$, $\Delta_2 = m^2 - \epsilon n^2$ which played a key role in the classif\/ication of various generic hyperbolic structures admitting nine, eight, and seven-dimensional symmetry along with the corresponding complete structure equations.
\item We integrated the maximally symmetric structure equations, leading to normal forms for all contact-equivalence classes of maximally symmetric generic hyperbolic equations.
\item Nine-dimensional symmetry algebras for these normal forms for generic hyperbolic equations are given explicitly. There are exactly two such nonisomorphic algebras.
\item For any maximally symmetric generic hyperbolic equation, the second derived systems of $C(I_F,dM_i)$, $i=1,2$ are rank 2 and completely integrable. Hence, all maximally symmetric generic hyperbolic equations are Darboux integrable.
\end{itemize}
We conclude with some possible points for future investigation:
\begin{enumerate}\itemsep=0pt
\item Maximally symmetric equations: (1)~Do ``simpler'' normal forms exist? (2)~Implement Darboux's integration method in the general case. (3)~Investigate the existence of conservation laws. (4)~Study the local solvability of these equations.
\item Submaximally symmetric equations: Integrate the structure equations given in Sections~\ref{8d-str-eqs} and~\ref{7d-str-eqs} and f\/ind normal forms for the corresponding PDE equivalence classes. Address similar questions as above.
\item The submaximally symmetric structures that we have derived here (see Table~\ref{streqn-classification} and Sections~\ref{8d-str-eqs} and~\ref{7d-str-eqs}) share the common property that $m$, $n$ are constants and $K^0$ is a subgroup of the structure group. Are there any other reductions of the initial 3-dimensional structure group that lead to valid structures?
\item In this article, we have carried out a detailed analysis of the generic (7-7) case. Hyperbolic equations of Goursat (6-7) type are equally poorly understood.
Some preliminary results on structure equations were stated in \cite{GK1993}, but to our knowledge, Vranceanu's student Petrescu \cite{Petrescu1938} has written the only paper which has made a more detailed investigation into the contact geometry of the Goursat class. Recasting Petrescu's results for a contemporary audience and building upon his work would make for a natural sequel to our paper.
\end{enumerate}
\addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\begin{array}}{\begin{array}}
\newcommand{\end{array}}{\end{array}}
\newcommand{\nonumber \\}{\nonumber \\}
\newenvironment{frcseries}{\fontfamily{frc}\selectfont}{}
\newcommand{\textfrc}[1]{{\frcseries#1}}
\newcommand{\mathfrc}[1]{\text{\scriptsize \bf\textfrc{#1}}}
\def \label {\label}
\def\alpha{\alpha}
\def\beta{\beta}
\def\lambda{\lambda}
\def\gamma{\gamma}
\def\zeta{\zeta}
\def\delta{\delta}
\def\theta{\theta}
\def\sigma{\sigma}
\def\epsilon{\epsilon}
\defP{P}
\def\Theta{\Theta}
\def\Lambda{\Lambda}
\def\Gamma{\Gamma}
\def\Omega{\Omega}
\newcommand{\vartheta}{\vartheta}
\newcommand{\varphi}{\varphi}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\phi}{\phi}
\def\theta{\theta}
\def\chi{\chi}
\def\epsilon{\epsilon}
\def{\cal{F}}{{\cal{F}}}
\def\Theta{\Theta}
\def{\muN}{{\muN}}
\def{\lambda\sigma}{{\lambda\sigma}}
\def{\vec x}{{\vec x}}
\def{\cal A}{{\cal A}}
\def{\cal L}{{\cal L}}
\def{\cal G}{{\cal G}}
\def{\cal M}{{\cal M}}
\def{\cal P}{{\cal P}}
\def{\cal J}{{\cal J}}
\def{\cal L}{{\cal L}}
\def{\cal H}{{\cal H}}
\def{\hat{L}}{{\hat{L}}}
\def{\hat{\phi}}{{\hat{\phi}}}
\def{\hat{K}}{{\hat{K}}}
\def{\hat{Z}}{{\hat{Z}}}
\def{\hat{A}}{{\hat{A}}}
\def{\hat{B}}{{\hat{B}}}
\def{\hat{\Omega}}{{\hat{\Omega}}}
\def{\hat{\rho}}{{\hat{\rho}}}
\def{\tilde{\nabla}}{{\hat{\nu}}}
\def{\bar{1}}{{\bar{1}}}
\def{\bar{2}}{{\bar{2}}}
\def{\lambda^1_+}{{\lambda^1_+}}
\def{\lambda^1_-}{{\lambda^1_-}}
\def{\lambda^{\bar{1}}_+}{{\lambda^{\bar{1}}_+}}
\def{\lambda^{\bar{1}}_-}{{\lambda^{\bar{1}}_-}}
\def{\hat{N}}{{\hat{N}}}
\def{\bf{e}}{{\bf{e}}}
\font\mybb=msbm10 at 11pt
\font\mybbb=msbm10 at 17pt
\def\bb#1{\hbox{\mybb#1}}
\def\bbb#1{\hbox{\mybbb#1}}
\def\bb{Z} {\bb{Z}}
\def\bb{R} {\bb{R}}
\def\bb{E} {\bb{E}}
\def\bb{H} {\bb{H}}
\def\bb{C} {\bb{C}}
\def\bb{I} {\bb{I}}
\def{\tilde{X}} {{\tilde{X}}}
\def\kappa{\kappa}
\def{\cal D}{{\cal D}}
\def{\cal I}{{\cal I}}
\def{\cal S}{{\cal S}}
\def \tilde{\nabla} {\tilde{\nabla}}
\def\hat{\tn}{\hat{\tilde{\nabla}}}
\def\check{\tn}{\check{\tilde{\nabla}}}
\def{\cal{O}}(\alpha'^0){{\cal{O}}(\alpha'^0)}
\def{\cal{O}}(\alpha'){{\cal{O}}(\alpha')}
\def{\cal{O}}(\alpha'^2){{\cal{O}}(\alpha'^2)}
\def{\buildrel \circ \over W}{{\buildrel \circ \over W}}
\def{\tilde{\nabla}} {{\tilde{\nabla}}}
\def{\hn^{[0]}}{} {{{\tilde{\nabla}}^{[0]}}{}}
\begin{document}
\begin{titlepage}
\begin{center}
\vspace*{-1.0cm}
\hfill DMUS--MP--16/07 \\
\vspace{2.0cm} {\Large \bf Anomaly Corrected Heterotic Horizons} \\[.2cm]
\vskip 2cm
A. Fontanella$^1$,~J. B. Gutowski$^1$ and G. Papadopoulos$^2$
\\
\vskip .6cm
\begin{small}
$^1$\textit{Department of Mathematics,
University of Surrey \\
Guildford, GU2 7XH, UK. \\
Email: [email protected] \\
Email: [email protected]}
\end{small}\\*[.6cm]
\begin{small}
$^2$\textit{ Department of Mathematics, King's College London
\\
Strand, London WC2R 2LS, UK.\\
E-mail: [email protected]}
\end{small}\\*[.6cm]
\end{center}
\vskip 3.5 cm
\begin{abstract}
\vskip1cm
We consider supersymmetric near-horizon geometries in heterotic supergravity
up to two loop order in sigma model perturbation theory. We identify the conditions for the horizons to admit
enhancement of supersymmetry. We show that solutions which undergo
supersymmetry enhancement exhibit an $\mathfrak{sl}(2, \bb{R})$ symmetry,
and we describe the geometry of their horizon sections.
We also prove a modified Lichnerowicz type theorem, incorporating
$\alpha'$ corrections,
which relates Killing spinors to zero modes of near-horizon Dirac operators.
Furthermore, we demonstrate that there
are no $AdS_2$ solutions in heterotic supergravity up to second order in $\alpha'$
for which the fields are smooth and the internal space is smooth and compact without boundary. We investigate a class
of nearly supersymmetric horizons, for which the gravitino Killing spinor equation is satisfied on the
spatial cross sections but not the dilatino one, and present a description of their geometry.
\end{abstract}
\end{titlepage}
\setcounter{section}{0}
\newsection{Introduction}
The effect of higher order corrections to supergravity solutions
is of considerable interest, perhaps most notably for our understanding of
quantum corrections to black holes. This is important in determining how string theory may resolve black hole singularities, as well as for investigating the properties of
black holes away from the limit $\alpha' \rightarrow 0$.
In higher dimensions the four dimensional uniqueness theorems
\cite{israel, carter, hawking, robinson1, israel2, mazur, robinson}
no longer hold, and there are exotic types of black hole
solutions, such as the five dimensional black rings \cite{Emparan:2001wn}. For
ten and eleven dimensional supergravity, it is expected that there
is a particularly rich structure of black objects, and the classification
of these is ongoing. Progress has recently been
made in the classification of the near-horizon geometries
of supersymmetric black holes. Near-horizon
geometries of extremal black holes in supergravity are known to
generically undergo supersymmetry enhancement. This has been proven by analysing
the global properties of such solutions via generalized Lichnerowicz theorems
\cite{lichner11, lichneriib, lichneriia1, lichneriia2},
and making use of index theory arguments \cite{atiyah1}. One consequence of the supersymmetry enhancement is that
all such near-horizon geometries exhibit an $\mathfrak{sl}(2,\bb{R})$ symmetry. However, it is not apparent that
these properties persist after including string theory corrections.
There are several approaches to investigate how $\alpha'$ corrections can change the event horizons of
black holes. Many black holes have $AdS_p \times S^q$ near-horizon geometries and as it is expected that the symmetries of such backgrounds
persist in quantum theory, only the radii of the sphere and $AdS$ receive $\alpha'$ corrections. However, we
expect that exotic black holes in higher dimensions need not necessarily have
such near horizon geometries.
Another approach, in the context of supersymmetric black holes
in four and five dimensions,
is to assume that the corrected near horizon geometries
undergo an enhancement of supersymmetry in the near-horizon limit,
which simplifies considerably the analysis of the Killing spinor equations.
It is known that all supersymmetric $D=4$ and $D=5$ supergravity black holes undergo supersymmetry
enhancement in the near-horizon limit \cite{Kallosh:1992ta, Ferrara:1996dd, Gibbons:1993sv}.
In particular, the five dimensional BMPV black hole \cite{Breckenridge:1996is} undergoes supersymmetry enhancement
from $N=4$ to $N=8$ (maximal supersymmetry) in the near-horizon limit \cite{Chamseddine:1996pi}.
Also, the supersymmetric asymptotically $AdS_5$ black hole
of \cite{Gutowskiads5bh} undergoes supersymmetry enhancement
from $N=2$ to $N=4$ (half-maximal supersymmetry) in the near-horizon limit.
However it is not clear in general why one expects that
the $\alpha'$ corrections should preserve this property.
The first systematic classification of supersymmetric near-horizon geometries
in a higher derivative theory in five dimensions \cite{Hanaki:2006pj} was done in
\cite{Gutowski:2011nk},
in which the only assumption made was that the solutions should preserve
the minimal amount of supersymmetry. The five dimensional theory reduces to
ungauged five-dimensional supergravity coupled to arbitrarily many vector multiplets
when the higher derivative corrections are set to zero. In this limit, it is known
that near-horizon geometries are maximally supersymmetric with constant scalars
\cite{Gutowski:2004bj}, which is consistent with the standard picture of the attractor mechanism.
In contrast, when higher derivative terms are turned on, the list of
near-horizon geometries determined in \cite{Gutowski:2011nk} includes not only
the maximally supersymmetric geometries (which were classified in \cite{Castro:2008ne}),
but also a set of regular non-maximally supersymmetric solutions, on making use
of a result of \cite{Manton:2012fv}. Although it is unclear if these particular
near-horizon geometries can be extended to a full black hole solution, the existence
of such a solution proves that for certain supergravity theories, the presence of
higher derivative terms can change how supersymmetry is enhanced for near-horizon solutions.
In this paper, we consider how higher derivative corrections to ten dimensional supergravity
affect the geometry and supersymmetry of near-horizon solutions.
We shall begin this work by investigating the heterotic theory, which includes $\alpha'$ corrections
up to two loops in sigma model perturbation theory.
This choice is motivated by two factors. Firstly, from the perspective
of the standard supergravity, much more is known about
the geometric structure of generic supersymmetric solutions,
and near-horizon geometries. In particular, as a consequence of
the spinorial geometry classification techniques developed in
\cite{class1, class2} which were then combined with
a global analysis of near-horizon geometries in \cite{hethor},
there exists a full classification of all possible supersymmetric
near-horizon geometries in the heterotic supergravity.
Secondly, the structure of higher derivative correction terms
in the field equations, and in the Killing spinor equations,
is significantly simpler for the heterotic theory when
compared to the types of terms which arise in type II supergravity
\cite{hetpap, Callan:1991at, Howe:1992tg, gsw}, and associated references.
The method we shall use to prove our results is first to solve the Killing spinor equations along the
near-horizon lightcone directions, and then simplify
the remaining conditions as much as possible using both
the local field equations and Bianchi identities, as well as global analysis.
For the global analysis, we shall assume that the spatial cross-section of
the event horizon is smooth and compact, without boundary,
and that all near-horizon fields are also smooth.
As a result of this analysis, we find that there are no $AdS_2$ solutions (at zero and
first order in $\alpha'$) to heterotic supergravity,
which completes the classification of
heterotic AdS solutions in \cite{lichnerads4}.
We also show that all of the conditions of supersymmetry
reduce to a pair of gravitino KSEs and a pair of algebraic KSEs on the spatial horizon sections. The latter are associated
to the dilatino KSE.
Throughout, we allow for all near-horizon data, including the spinors, to receive
$\alpha'$ corrections.
Using these conditions, we show
that there is automatic supersymmetry enhancement
at both zero and first order in $\alpha'$
in the case for which there exists a negative
light-cone chirality Killing spinor $\eta_-$ up to ${\cal{O}}(\alpha'^2)$ which does not vanish at zeroth order in $\alpha'$.
In this case the supersymmetry enhancement is obtained via
the same mechanism as for the near-horizon geometries
considered in \cite{hethor} without $\alpha'$ corrections, and the solution admits an $\mathfrak{sl}(2,\bb{R})$
symmetry. Such horizons admit 2, 4, 6 and 8 Killing spinors and their geometry is similar to that
of horizons with vanishing anomaly contribution examined in \cite{hethor}.
The remaining case, for which the negative
light-cone chirality spinors vanish at zeroth order in $\alpha'$,
remains open. We have investigated global aspects of these
solutions by considering $\alpha'$ corrections to the global
analysis carried out in \cite{hethor}, and also by constructing
generalized Lichnerowicz theorems analogous to those
proven in \cite{lichner11, lichneriib, lichneriia1, lichneriia2},
again incorporating $\alpha'$ corrections. However, in both cases,
there is an undetermined sign in the ${\cal{O}}(\alpha'^2)$ terms appearing, which precludes the extension
of the maximum principle arguments to first order in $\alpha'$.
We also consider a class of near-horizon solutions which are ``nearly'' supersymmetric. These are not supersymmetric but some of their KSEs are satisfied. This
is motivated by the existence of WZW-type solutions to the heterotic theory with constant dilaton. It is known that such solutions
solve the gravitino KSE but not the dilatino one. In the present case, we consider horizons for which one of the gravitino KSEs is satisfied\footnote{Such solutions
are not supersymmetric, and furthermore the spacetime gravitino KSE is not necessarily satisfied.} on the spatial horizon section
up to order ${\cal{O}}(\alpha'^2)$, but not the other one or the algebraic KSEs. After some assumptions on the form of the fields, we give a complete
description of the geometry of such solutions.
This paper is organized as follows. In section 2, we present the fields of heterotic near horizon geometries and we integrate up the KSEs along the lightcone directions.
In sections 3 and 4, we identify the independent KSEs by examining the various cases that can occur
and in the process, prove that there are no $AdS_2$ solutions.
In section 5, we determine the conditions under which the horizons exhibit supersymmetry enhancement, and in section 6
we give the geometry of the horizon sections.
In section 7, we generalize the global analysis of near-horizon geometries presented in \cite{hethor} to include $\alpha'$ corrections. However, because of an ${\cal{O}}(\alpha'^2)$ sign ambiguity, it is not
possible to prove that the horizon section admits a $G_2$ structure compatible with a connection with skew-symmetric torsion, as is the
case at zeroth order in $\alpha'$. We also generalize
the Lichnerowicz type theorems to higher orders in $\alpha'$. Once again, an ${\cal{O}}(\alpha'^2)$ sign ambiguity means that
it is not possible to prove that zero modes of the horizon Dirac equation (at zero and
first order in $\alpha'$) satisfy the Killing spinor equations to the same order in
$\alpha'$, although the algebraic Killing spinor equation involving the 2-form gauge field is satisfied
to the required order in $\alpha'$. In sections 8 and 9, we examine the geometry of nearly supersymmetric horizons focusing on those
that admit a solution to the gravitino KSE on the horizon spatial section, and in section 10 we give our conclusions.
The paper contains several appendices. In appendix A, we summarize some key formulae that are used throughout
in the computations of the paper and present the field equations of the theory. In appendix B, we provide the details of part of the proof
to identify the independent KSEs on the spatial horizon section. In appendix C, we present a formula which relates
the gravitino KSE to the gaugino KSE which is instrumental in the investigation of the geometry of nearly supersymmetric horizons.
In appendix D, we present further detail of the proof of the Lichnerowicz type theorem for the heterotic theory, and
in Appendix E, we describe how $AdS_{n+1}$ can be written as a warped
product over $AdS_n$, and describe how such constructions are inconsistent with
our assumptions on the global structure and regularity of the solutions.
\newsection{Supersymmetric heterotic near-horizon geometries}
\subsection{Near horizon fields}
The metric near a smooth Killing horizon expressed in Gaussian null co-ordinates
\cite{isen, gnull} can be written as
\begin{eqnarray}
ds^2 = 2 {\bf{e}}^+ {\bf{e}}^- + \delta_{ij} {\bf{e}}^i {\bf{e}}^j~,~~~
\label{nearhormetr}
\end{eqnarray}
where we have used the frame
\begin{eqnarray}
\label{nhbasis}
{\bf{e}}^+ &=& du~,~~~
{\bf{e}}^- = dr + r h - {1 \over 2} r^2 \Delta du~,~~~
{\bf{e}}^i = e^i{}_J dy^J~,
\end{eqnarray}
$i,j=1, \dots , 8$, $u,r$ are the lightcone coordinates, and the 1-form $h$, scalar $\Delta$
and ${\bf{e}}^i$ depend only on the coordinates $y^I$, $I=1, \dots ,8$, transverse to the lightcone. The black hole stationary
Killing vector field is identified with $\partial_u$.
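For orientation, substituting the frame ({\ref{nhbasis}}) into ({\ref{nearhormetr}}) gives the metric in the familiar Gaussian null form
\begin{eqnarray}
ds^2 = 2 du \Big( dr + r h_I dy^I - {1 \over 2} r^2 \Delta \, du \Big) + \delta_{ij} e^i{}_I e^j{}_J dy^I dy^J~,
\end{eqnarray}
so that the horizon is located at $r=0$, and $h$ and $\Delta$ encode the first two terms in the expansion of the metric away from the horizon.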
The induced metric on ${\cal S}$ is
\begin{eqnarray}
ds_{\cal{S}}^2 = \delta_{ij} {\bf{e}}^i {\bf{e}}^j
\end{eqnarray}
and ${\cal S}$ is taken to be compact, connected and without boundary. We denote the Levi-Civita connection of ${\cal{S}}$ by ${\tilde{\nabla}}$, and the Levi-Civita connection of the $D=10$ spacetime by
$\nabla$.
For the other heterotic fields, we assume that the dilaton $\Phi$, the real 3-form
$H$, and the non-abelian gauge potential $A$ admit well-defined near-horizon limits, and that
$\partial_u$ is a symmetry of the full solution:
\begin{eqnarray}
{\cal{L}}_{\partial_u} \Phi=0, \qquad {\cal{L}}_{\partial_u} H = 0, \qquad {\cal{L}}_{\partial_u}A=0~.
\end{eqnarray}
In particular, this means that $\Phi=\Phi(y)$, and also
\begin{eqnarray}
\label{threef}
H = {\bf{e}}^+ \wedge {\bf{e}}^- \wedge N+r {\bf{e}}^+ \wedge Y+W~,
\end{eqnarray}
where $N$, $Y$ and $W$ are $u,r$-independent 1, 2 and 3-forms
on ${\cal{S}}$ respectively, and we do not assume $dH=0$.
Moreover,
\begin{eqnarray}
A= r {\cal{P}} {\bf{e}}^+ + {\cal{B}}
\end{eqnarray}
where ${\cal{P}}$ and ${\cal{B}}$
are an $r,u$-independent $G$-valued scalar and 1-form on ${\cal{S}}$, respectively.
The non-abelian 2-form field strength $F$ is given by
\begin{eqnarray}
F = dA + A \wedge A~.
\end{eqnarray}
Our conventions for the heterotic theory including $\alpha'$ corrections
are consistent with those of \cite{hetpap}. We assume that the near-horizon data
admit a Taylor series expansion in $\alpha'$.
We denote this expansion by
\begin{eqnarray}
\Delta = \Delta^{[0]} + \alpha' \Delta^{[1]} + {\cal{O}}(\alpha'^2)
\end{eqnarray}
and similarly for all near-horizon data, including spinors. For the supersymmetric solutions, we shall assume
that there is at least one zeroth order in $\alpha'$ Killing spinor, $\epsilon^{[0]} \neq 0$.
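Throughout, a condition which is stated to hold up to ${\cal{O}}(\alpha'^2)$ is understood order by order in this expansion; that is, for any quantity $X = X^{[0]} + \alpha' X^{[1]} + {\cal{O}}(\alpha'^2)$,
\begin{eqnarray}
X = {\cal{O}}(\alpha'^2) \quad \Longleftrightarrow \quad X^{[0]} = 0~~{\rm and}~~X^{[1]} = 0~.
\end{eqnarray}
In particular, products are expanded accordingly, so a quantity whose zeroth order part vanishes contributes to quadratic expressions only at ${\cal{O}}(\alpha'^2)$.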
\subsection{Supersymmetry }
In the previous treatments
of heterotic near-horizon geometries \cite{hethor}, it was assumed that the anomaly vanishes and
so the Bianchi identity $dH=0$ was used to further
simplify the structure of the 3-form.
Here, we shall not
take $dH=0$ as there is a non-trivial contribution from the heterotic anomaly, and so the 3-form takes the more general form
given in ({\ref{threef}}).
We remark that the KSEs of
heterotic supergravity have been solved in \cite{class1}
and \cite{class2}, and so, the solutions to the KSEs which
we consider here correspond to a subclass of the solutions
in \cite{class1, class2}. However for horizons the global assumptions on the spatial section ${\cal S}$, like compactness, allow the derivation
of additional conditions
on the spinors and on the geometry. So it is particularly
useful to re-solve the KSEs, decomposing the spinors into
positive and negative lightcone chiralities adapted for
the Gaussian null basis (\ref{nhbasis}), $\epsilon=\epsilon_+ + \epsilon_-$, where
\begin{eqnarray}
\Gamma_\pm \epsilon_\pm =0, \qquad \Gamma_{+-} \epsilon_\pm
= \pm \epsilon_\pm \ .
\end{eqnarray}
We shall then extract from the KSEs the conditions imposed on $\epsilon_\pm$ that will be useful when applying the global conditions on ${\cal S}$.
\subsubsection{The Gravitino Equation}
We begin by considering the gravitino equation
\begin{eqnarray}
\label{grav}
\hat\nabla_M\epsilon\equiv\nabla_M \epsilon -{1 \over 8}H_{M N_1 N_2} \Gamma^{N_1 N_2}
\epsilon= {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
First, on examining the $M=-$ component of ({\ref{grav}})
we find that
\begin{eqnarray}
\epsilon_+ = \phi_+ + {\cal{O}}(\alpha'^2) , \qquad
\epsilon_- = \phi_- + {1 \over 4} r (h-N)_i \Gamma_- \Gamma^i \phi_+ + {\cal{O}}(\alpha'^2)~,
\label{grav2}
\end{eqnarray}
where $\partial_r \phi_\pm=0$.
Next, on examining the $M=+$ component of ({\ref{grav}}),
we find
\begin{eqnarray}
\phi_- = \eta_- + {\cal{O}}(\alpha'^2) , \qquad \phi_+ = \eta_+ + {1 \over 4}u (h+N)_i
\Gamma_+ \Gamma^i \eta_- + {\cal{O}}(\alpha'^2)~,
\label{grav3}
\end{eqnarray}
where $\partial_r \eta_\pm = \partial_u \eta_\pm=0$.
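Combining ({\ref{grav2}}) and ({\ref{grav3}}), the $u$- and $r$-dependence of the Killing spinor is completely determined by the pair of spinors $(\eta_+, \eta_-)$ on ${\cal S}$,
\begin{eqnarray}
\epsilon &=& \eta_+ + {u \over 4} (h+N)_i \Gamma_+ \Gamma^i \eta_- + \eta_-
\nonumber \\
&&+ {r \over 4} (h-N)_i \Gamma_- \Gamma^i \Big( \eta_+ + {u \over 4} (h+N)_j \Gamma_+ \Gamma^j \eta_- \Big) + {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and it is this dependence, $\epsilon = \epsilon(\eta_+, \eta_-)$, which is referred to in what follows.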
In addition, the $M=+$ component of ({\ref{grav}}) implies a number of algebraic conditions:
\begin{eqnarray}
\label{alg1}
\bigg({1 \over 2} \Delta +{1 \over 8}(h^2-N^2)
-{1 \over 8}(dh+Y+h \wedge N)_{ij} \Gamma^{ij} \bigg) \phi_+= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{alg2}
\bigg(-{1 \over 2} \Delta -{1 \over 8}(h^2-N^2)
-{1 \over 8}(dh+Y+ h \wedge N)_{ij} \Gamma^{ij} \bigg) \eta_-= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{alg3}
\bigg({1 \over 4} (\Delta h_i - \partial_i \Delta)\Gamma^i
-{1 \over 32} (dh+Y)_{ij}\Gamma^{ij} (h-N)_k \Gamma^k \bigg)
\phi_+= {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
We remark that ({\ref{alg1}}) and ({\ref{alg2}}) are equivalent
to
\begin{eqnarray}
\label{alg4a}
{1 \over 2} \Delta +{1 \over 8}(h^2-N^2)= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
\begin{eqnarray}
\label{alg4b}
(dh+Y+ h \wedge N)_{ij} \Gamma^{ij} \phi_+= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{alg5b}
(dh+Y+ h \wedge N)_{ij} \Gamma^{ij} \eta_-= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
respectively. Furthermore, using these conditions,
({\ref{alg3}}) can also be rewritten as
\begin{eqnarray}
\label{alg6}
\bigg({1 \over 4} (\Delta h_j - \partial_j \Delta)
-{1 \over 8}(h-N)^k \big(dh+Y+2 h \wedge N)_{jk} \bigg) \Gamma^j \phi_+= {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Next, we consider the $M=i$ components of ({\ref{grav}}).
This implies
\begin{eqnarray}
\label{par1}
{\tilde{\nabla}}_i \phi_+ + \bigg({1 \over 4}(N-h)_i -{1 \over 8} W_{ijk}
\Gamma^{jk} \bigg) \phi_+= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{par2}
{\tilde{\nabla}}_i \eta_- + \bigg({1 \over 4}(h-N)_i -{1 \over 8} W_{ijk}
\Gamma^{jk} \bigg) \eta_-= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
together with the algebraic condition
\begin{eqnarray}
\label{alg7}
\bigg({\tilde{\nabla}}_i (h-N)_j + {1 \over 2}(h_i N_j - h_j N_i)
-{1 \over 2}(h_i h_j -N_i N_j)
\nonumber \\
-(dh-Y)_{ij} -{1 \over 2} W_{ijk}(h-N)^k \bigg)
\Gamma^j \phi_+= {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
These conditions exhaust the content of ({\ref{grav}}).
\subsubsection{Dilatino and Gaugino KSEs}
Next, again ignoring ${\cal{O}}(\alpha'^2)$ terms, we consider the dilatino KSE
\begin{eqnarray}
\label{akse1}
\bigg(\Gamma^M \nabla_M \Phi -{1 \over 12}H_{N_1 N_2 N_3}
\Gamma^{N_1 N_2 N_3} \bigg) \epsilon = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
On making use of the previous conditions, it is straightforward
to show that the dilatino KSE is equivalent to
the following three conditions
\begin{eqnarray}
\label{aksecon1}
\bigg(\Gamma^i {\tilde{\nabla}}_i \Phi +{1 \over 2} N_i \Gamma^i -{1 \over 12} W_{ijk} \Gamma^{ijk} \bigg) \phi_+= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{aksecon2}
\bigg(\Gamma^i {\tilde{\nabla}}_i \Phi -{1 \over 2} N_i \Gamma^i -{1 \over 12} W_{ijk} \Gamma^{ijk} \bigg) \eta_-= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{aksecon2b}
\bigg( \big(\Gamma^i {\tilde{\nabla}}_i \Phi -{1 \over 2} N_i \Gamma^i -{1 \over 12} W_{ijk}
\Gamma^{ijk} \big) (h-N)_\ell \Gamma^\ell + Y_{ij} \Gamma^{ij} \bigg)
\phi_+= {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
It remains to consider the gaugino KSE
\begin{eqnarray}
\label{akse2}
F_{MN} \Gamma^{MN} \epsilon = {\cal{O}}(\alpha')~.
\end{eqnarray}
This implies the following conditions
\begin{eqnarray}
\label{akseconaux1}
\bigg(2 {\cal{P}} + {\tilde{F}}_{ij} \Gamma^{ij} \bigg) \phi_+= {\cal{O}}(\alpha')~,
\end{eqnarray}
and
\begin{eqnarray}
\label{akseconaux2}
\bigg(-2 {\cal{P}}+{\tilde{F}}_{ij} \Gamma^{ij} \bigg) \eta_- = {\cal{O}}(\alpha')~,
\end{eqnarray}
and
\begin{eqnarray}
\label{akseconaux2b}
\bigg( {1 \over 4}\big(-2 {\cal{P}} + {\tilde{F}}_{ij} \Gamma^{ij}\big)
(h-N)_\ell \Gamma^\ell
+2\big(h {\cal{P}}+ {\cal{P}} {\cal{B}}
- {\cal{B}} {\cal{P}}-d {\cal{P}}\big)_i \Gamma^i \bigg) \phi_+ = {\cal{O}}(\alpha')~,
\end{eqnarray}
where
\begin{eqnarray}
{\tilde{F}}=d {\cal{B}} + {\cal{B}} \wedge {\cal{B}}~.
\end{eqnarray}
The conditions ({\ref{akseconaux1}}) and ({\ref{akseconaux2}}) imply that
\begin{eqnarray}
{\cal{P}}= {\cal{O}}(\alpha')~,
\end{eqnarray}
and so $F={\tilde{F}} + {\cal{O}}(\alpha')$. Therefore ({\ref{akse2}}) is equivalent
to
\begin{eqnarray}
\label{aksecon3}
{\tilde{F}}_{ij} \Gamma^{ij} \phi_+= {\cal{O}}(\alpha')~,
\end{eqnarray}
and
\begin{eqnarray}
\label{aksecon4}
{\tilde{F}}_{ij} \Gamma^{ij} \eta_-={\cal{O}}(\alpha')~,
\end{eqnarray}
and
\begin{eqnarray}
\label{aksecon4b}
{\tilde{F}}_{ij} \Gamma^{ij} (h-N)_\ell \Gamma^\ell \phi_+ = {\cal{O}}(\alpha')~.
\end{eqnarray}
In order to simplify these conditions further,
we shall first consider the two cases for which either $\phi_+^{[0]} \equiv 0$
or $\phi_+^{[0]} \not \equiv 0$.
\newsection{Solutions with $\phi_+^{[0]} \equiv 0$}
Suppose that there exists a Killing spinor $\epsilon$ with
$\epsilon^{[0]} \not \equiv 0$, but $\phi_+^{[0]} \equiv 0$.
Such a spinor must therefore have $\eta_-^{[0]} \not \equiv 0$, and hence
it follows that
\begin{eqnarray}
h^{[0]}+N^{[0]}=0 \ .
\end{eqnarray}
Then ({\ref{par2}}) implies that
\begin{eqnarray}
\label{partrans}
d \parallel \eta_-^{[0]} \parallel^2 = - \parallel \eta_-^{[0]} \parallel^2
h^{[0]} \ .
\end{eqnarray}
In particular, this condition implies that if
$\eta_-^{[0]}$ vanishes at any point on the horizon
section, then $\eta_-^{[0]}=0$ everywhere.
So, $\eta_-^{[0]}$ must be everywhere non-vanishing.
On taking the divergence of ({\ref{partrans}}), and
making use of the $N_1=+, N_2=-$ component of the 2-form gauge potential field equation ({\ref{geq1}}), one obtains the following condition
\begin{eqnarray}
{\hn^{[0]}}{}^i {\hn^{[0]}}{}_i \parallel \eta_-^{[0]} \parallel^2 - \big(2 {\tilde{\nabla}}^i \Phi^{[0]} + \parallel \eta_-^{[0]} \parallel^{-2} {\hn^{[0]}}{}^i \parallel \eta_-^{[0]} \parallel^2 \big) {\hn^{[0]}}{}_i \parallel \eta_-^{[0]} \parallel^2 =0 \ .
\end{eqnarray}
As $ \parallel \eta_-^{[0]} \parallel^2$ is nowhere vanishing, an application of the maximum principle
implies that $ \parallel \eta_-^{[0]} \parallel^2=const.$, and hence ({\ref{partrans}})
gives that
\begin{eqnarray}
h^{[0]}=0, \qquad N^{[0]}=0 \ .
\end{eqnarray}
These conditions, together with ({\ref{alg4a}}), imply that
\begin{eqnarray}
\Delta={\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
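Indeed, as $h^{[0]} = N^{[0]} = 0$, both $h^2$ and $N^2$ are already of order $\alpha'^2$, and so ({\ref{alg4a}}) gives
\begin{eqnarray}
\Delta = -{1 \over 4} \big( h^2 - N^2 \big) + {\cal{O}}(\alpha'^2) = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}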
Then the dilaton field equation ({\ref{deq}}) implies that
\begin{eqnarray}
{\tilde{\nabla}}^i {\tilde{\nabla}}_i (e^{-2 \Phi}) ={1 \over 6} e^{-2 \Phi} W_{ijk} W^{ijk} + {\cal{O}}(\alpha')~,
\end{eqnarray}
and hence it follows that
\begin{eqnarray}
\Phi^{[0]}=const, \qquad W^{[0]}=0 \ .
\end{eqnarray}
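To see this, note that integrating the zeroth order part of this equation over ${\cal{S}}$, which is compact without boundary, annihilates the left-hand-side and leaves
\begin{eqnarray}
0 = {1 \over 6} \int_{{\cal{S}}} e^{-2 \Phi^{[0]}}\, W^{[0]}_{ijk} W^{[0] ijk}~,
\end{eqnarray}
where the integral is taken with respect to the zeroth order metric on ${\cal{S}}$. As the integrand is non-negative, $W^{[0]}=0$; the remaining condition ${\tilde{\nabla}}^{[0] i} {\tilde{\nabla}}^{[0]}_i (e^{-2 \Phi^{[0]}})=0$ then implies, on applying the maximum principle, that $\Phi^{[0]}$ is constant.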
Furthermore, this then implies that
\begin{eqnarray}
H=du \wedge dr \wedge N +r du \wedge Y + W + {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and hence
\begin{eqnarray}
dH = du \wedge dr \wedge (dN-Y)-r du \wedge dY +dW + {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
As the $ruij$ component on the RHS of the Bianchi identity is ${\cal{O}}(\alpha'^2)$
this implies that
\begin{eqnarray}
Y=dN+{\cal{O}}(\alpha'^2)
\end{eqnarray}
and in particular, $Y^{[0]}=0$.
Next consider the gauge equations. The $+-$ component of the 2-form gauge potential field equations ({\ref{geq1}}) is
\begin{eqnarray}
\label{dfree1}
{\tilde{\nabla}}^i N_i = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Also, the $u$-dependent part of ({\ref{par3}}) implies that
\begin{eqnarray}
{\tilde{\nabla}}_i (h+N)_j \Gamma^j \eta_- = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
which gives that
\begin{eqnarray}
\label{udepsimp1}
{\tilde{\nabla}}_i(h+N)_j = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Taking the trace of this expression, and using ({\ref{dfree1}}), yields
\begin{eqnarray}
\label{dfree2}
{\tilde{\nabla}}^i h_i = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Next, recall that the gravitino KSE ({\ref{par4}}) implies
\begin{eqnarray}
\label{dnsq1}
{\tilde{\nabla}}_i \parallel \eta_- \parallel^2 = -{1 \over 2}(h-N)_i \parallel \eta_- \parallel^2 + {\cal{O}}(\alpha'^2)
\end{eqnarray}
Taking the divergence yields, together with ({\ref{dfree1}}) and ({\ref{dfree2}}) the condition
\begin{eqnarray}
{\tilde{\nabla}}^i {\tilde{\nabla}}_i \parallel \eta_- \parallel^2 = {\cal{O}}(\alpha'^2)
\end{eqnarray}
which implies that $ \parallel \eta_- \parallel^2= const + {\cal{O}}(\alpha'^2)$.
Substituting back into ({\ref{dnsq1}}) gives the condition
$N=h+{\cal{O}}(\alpha'^2)$, and hence ({\ref{udepsimp1}}) implies
that
\begin{eqnarray}
{\tilde{\nabla}}_i h_j = {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
So, to summarize, for this class of solutions, we have obtained the following
conditions on the fields
\begin{eqnarray}
\label{bossimp1}
N=h+{\cal{O}}(\alpha'^2), && \quad h^{[0]}=0, \quad Y={\cal{O}}(\alpha'^2), \quad {\tilde{\nabla}}_i h_j= {\cal{O}}(\alpha'^2),
\nonumber \\
\Delta = {\cal{O}}(\alpha'^2), && \quad H^{[0]}=0, \quad \Phi^{[0]}=const~,
\end{eqnarray}
and it is straightforward to check that the generic conditions on
$\phi_+$ then simplify to
\begin{eqnarray}
\label{par3bb}
{\tilde{\nabla}}_i \phi_+ -{1 \over 8}W_{ijk} \Gamma^{jk} \phi_+= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{auxalg1bbb}
\bigg(\Gamma^i {\tilde{\nabla}}_i \Phi +{1 \over 2} h_i \Gamma^i -{1 \over 12} W_{ijk} \Gamma^{ijk} \bigg) \phi_+= {\cal{O}}(\alpha'^2)
\end{eqnarray}
and
\begin{eqnarray}
\label{auxalg1cbb}
{\tilde{F}}_{ij} \Gamma^{ij} \phi_+ = {\cal{O}}(\alpha') \ .
\end{eqnarray}
The generic conditions on $\eta_-$ also simplify to
\begin{eqnarray}
\label{par4bb}
{\tilde{\nabla}}_i \eta_- -{1 \over 8}W_{ijk} \Gamma^{jk} \eta_-= {\cal{O}}(\alpha'^2)
\end{eqnarray}
and
\begin{eqnarray}
\label{auxalg2bbb}
\bigg(\Gamma^i {\tilde{\nabla}}_i \Phi -{1 \over 2} h_i \Gamma^i -{1 \over 12} W_{ijk} \Gamma^{ijk} \bigg) \eta_-= {\cal{O}}(\alpha'^2)
\end{eqnarray}
and
\begin{eqnarray}
\label{auxalg2cbb}
{\tilde{F}}_{ij} \Gamma^{ij} \eta_- = {\cal{O}}(\alpha') \ .
\end{eqnarray}
In the next section, we shall consider the case for which there exists a Killing spinor with
$\phi_+^{[0]} \not \equiv 0$.
It will be shown that the conditions ({\ref{bossimp1}}) on the bosonic fields
and the simplified KSEs listed above correspond to special cases
of the corresponding conditions on the fields and simplified KSEs
of $\phi_+^{[0]} \not \equiv 0$. In particular,
this will allow the KSEs for $\phi_+^{[0]} \equiv 0$
and $\phi_+^{[0]} \not \equiv 0$ to be written in a unified way.
\newsection{Solutions with $\phi_+^{[0]} \not \equiv 0$}
Suppose that there exists a Killing
spinor $\epsilon$, with $\epsilon^{[0]} \not \equiv 0$ and
$\phi_+^{[0]} \not \equiv 0$. Then consider ({\ref{par1}}); this implies that
\begin{eqnarray}
\label{pt1}
{\tilde{\nabla}}_i \parallel \phi_+ \parallel^2 = {1 \over 2}(h_i-N_i)\parallel \phi_+ \parallel^2 + {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and ({\ref{alg7}}) gives that
\begin{eqnarray}
\label{alg7b}
{\tilde{\nabla}}_i (h-N)_j + {1 \over 2}(h_i N_j - h_j N_i)
-{1 \over 2}(h_i h_j -N_i N_j)
\nonumber \\
-(dh-Y)_{ij} -{1 \over 2} W_{ijk}(h-N)^k = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Taking the divergence of ({\ref{pt1}}), and using ({\ref{par1}})
together with the trace of ({\ref{alg7b}}), we find that
\begin{eqnarray}
\label{lapsq1}
{\tilde{\nabla}}^i {\tilde{\nabla}}_i \parallel \phi_+ \parallel^2 - h^i {\tilde{\nabla}}_i \parallel \phi_+ \parallel^2 = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
An application of the maximum principle (see e.g. \cite{maxp})
then yields the condition
\begin{eqnarray}
{\tilde{\nabla}}_i \parallel \phi_+ \parallel^2= {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
To see this, note that to zeroth order in $\alpha'$,
({\ref{lapsq1}}) implies that ${\tilde{\nabla}}^{[0]}_i \parallel \phi_+^{[0]}\parallel^2=0$, on applying the maximum principle.
Then ({\ref{pt1}}) and ({\ref{alg7b}}) imply that $N^{[0]}=h^{[0]}$ and $Y^{[0]}=dh^{[0]}$; and from ({\ref{alg4a}}) we also have $\Delta^{[0]}=0$.
Then it is useful to consider the field equations of the 2-form gauge potential
({\ref{geq1}}), which imply that
\begin{eqnarray}
\label{bcx1}
{\tilde{\nabla}}^i \bigg( e^{-2 \Phi} h_i \bigg)= {\cal{O}}(\alpha')~,
\end{eqnarray}
and
\begin{eqnarray}
\label{bcx2}
e^{2 \Phi} {\tilde{\nabla}}^j \big(e^{-2 \Phi} dh_{ji}\big)
+{1 \over 2} W_{ijk} dh^{jk} + h^j dh_{ji}= {\cal{O}}(\alpha')~,
\end{eqnarray}
and the Einstein equations imply that
\begin{eqnarray}
\label{bcx3}
{\tilde{R}}_{ij} + {\tilde{\nabla}}_{(i} h_{j)} -{1 \over 4} W_{imn} W_j{}^{mn}
+2 {\tilde{\nabla}}_i {\tilde{\nabla}}_j \Phi = {\cal{O}}(\alpha')~.
\end{eqnarray}
Using ({\ref{bcx1}}), ({\ref{bcx2}}) and ({\ref{bcx3}})
it follows that{\footnote{We remark that
the condition ({\ref{bcx4}}) was also obtained
in \cite{hethor}. In that case, a bilinear matching condition
was imposed in order to find $N^{[0]}=h^{[0]}, Y^{[0]}=dh^{[0]}$.
Here we do not assume such a bilinear matching condition, but nevertheless
we find the same condition.}}
\begin{eqnarray}
\label{bcx4}
&&{\tilde{\nabla}}^i {\tilde{\nabla}}_i h^2 + (h-2 d \Phi)^j {\tilde{\nabla}}_j h^2
= 2 {\tilde{\nabla}}^{(i} h^{j)} {\tilde{\nabla}}_{(i} h_{j)}
\cr
&&~~~~~~~~~~~~~~~~
+{1 \over 2}(dh - i_h W)_{ij} (dh-i_h W)^{ij} + {\cal{O}}(\alpha')~.
\nonumber \\
\end{eqnarray}
In particular, ({\ref{bcx4}}) implies that ${\tilde{\nabla}}^{[0] i} h^{[0]}_i =0$
on applying the maximum principle.
It follows from ({\ref{lapsq1}}) that
\begin{eqnarray}
{\tilde{\nabla}}^{[0]i} {\tilde{\nabla}}^{[0]}_i \langle \phi_+^{[0]}, \phi_+^{[1]} \rangle
- h^{[0]i} {\tilde{\nabla}}^{[0]}_i \langle \phi_+^{[0]}, \phi_+^{[1]} \rangle =0~.
\end{eqnarray}
On multiplying this condition by $\langle \phi_+^{[0]}, \phi_+^{[1]} \rangle$
and integrating by parts, using ${\tilde{\nabla}}^{[0] i} h^{[0]}_i =0$, one finds that
${\tilde{\nabla}}^{[0]}_i \langle \phi_+^{[0]}, \phi_+^{[1]} \rangle =0$ as well.
So, it follows that ${\tilde{\nabla}}_i \parallel \phi_+ \parallel^2 = {\cal{O}}(\alpha'^2)$.
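For completeness, we record the integration by parts used here. Writing $f = \langle \phi_+^{[0]}, \phi_+^{[1]} \rangle$, one has
\begin{eqnarray}
0 = \int_{{\cal S}} f \Big( {\tilde{\nabla}}^{[0] i} {\tilde{\nabla}}^{[0]}_i f - h^{[0] i} {\tilde{\nabla}}^{[0]}_i f \Big)
= - \int_{{\cal S}} {\tilde{\nabla}}^{[0] i} f \, {\tilde{\nabla}}^{[0]}_i f
+ {1 \over 2} \int_{{\cal S}} \big( {\tilde{\nabla}}^{[0] i} h^{[0]}_i \big) f^2~,
\end{eqnarray}
and the last term vanishes on using ${\tilde{\nabla}}^{[0] i} h^{[0]}_i =0$, so that ${\tilde{\nabla}}^{[0]}_i f = 0$.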
Then, ({\ref{pt1}}) also implies that $N=h+{\cal{O}}(\alpha'^2)$.
Substituting these conditions back into ({\ref{alg4a}}),
we find that $\Delta^{[1]}=0$ as well, so $\Delta={\cal{O}}(\alpha'^2)$.
Also, ({\ref{alg7b}}) implies that
\begin{eqnarray}
Y -dh= {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
To summarize the conditions on the bosonic fields:
we have shown that for solutions with $\phi_+^{[0]} \not \equiv 0$, we must have
\begin{eqnarray}
\label{bossimp2}
\Delta={\cal{O}}(\alpha'^2), \qquad N=h+{\cal{O}}(\alpha'^2), \qquad Y=dh+{\cal{O}}(\alpha'^2)
\end{eqnarray}
which implies that
\begin{eqnarray}
H = d ({\bf{e}}^- \wedge {\bf{e}}^+) + W + {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
The field equation ({\ref{geq1}}) of the 2-form gauge potential
can then be rewritten in terms of the near-horizon data
as
\begin{eqnarray}
\label{geq1a}
{\tilde{\nabla}}^i \big( e^{-2 \Phi} h_i \big)= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
\begin{eqnarray}
\label{geq1b}
e^{2 \Phi} {\tilde{\nabla}}^j \big(e^{-2 \Phi} dh_{ji}\big)
+{1 \over 2} W_{ijk} dh^{jk} + h^j dh_{ji}= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{geq1c}
e^{2 \Phi} {\tilde{\nabla}}^k \big(e^{-2 \Phi} W_{kij}\big)
+ dh_{ij} - h^k W_{kij} = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
In addition, ${\cal{P}}={\cal{O}}(\alpha') $ and so $F= {\tilde{F}} + {\cal{O}}(\alpha')$.
The $i,j$ component of the Einstein equation then simplifies to
\begin{eqnarray}
\label{einsp}
{\tilde{R}}_{ij} + {\tilde{\nabla}}_{(i} h_{j)} -{1 \over 4} W_{imn} W_j{}^{mn}
+2 {\tilde{\nabla}}_i {\tilde{\nabla}}_j \Phi
\nonumber \\
+{\alpha' \over 4} \bigg(-2 dh_{i \ell}
dh_j{}^\ell + \check {\tilde{R}}_{i \ell_1, \ell_2 \ell_3}
\check {\tilde{R}}_j{}^{\ell_1, \ell_2 \ell_3}
- {\tilde{F}}_{i\ell}{}^{ab} {\tilde{F}}_j{}^\ell{}_{ab} \bigg) ={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Furthermore, the dilaton field equation can be written as
\begin{eqnarray}
\label{deqsimp1}
{\tilde{\nabla}}^i {\tilde{\nabla}}_i \Phi - h^i {\tilde{\nabla}}_i \Phi -2 {\tilde{\nabla}}^i \Phi {\tilde{\nabla}}_i \Phi -{1 \over 2} h_i h^i
+{1 \over 12} W_{ijk} W^{ijk}
\nonumber \\
+{\alpha' \over 16} \big(2 dh_{ij} dh^{ij}
+ {\tilde{F}}_{ij}{}^{ab} {\tilde{F}}^{ij}{}_{ab}
- \check {\tilde{R}}_{\ell_1 \ell_2, \ell_3 \ell_4}
\check {\tilde{R}}^{\ell_1 \ell_2, \ell_3 \ell_4} \big) = {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
On making use of the conditions (\ref{bossimp2}) on the bosonic fields, the KSEs on
$\phi_+$ then simplify further to
\begin{eqnarray}
\label{par3}
{\tilde{\nabla}}_i \phi_+ -{1 \over 8}W_{ijk} \Gamma^{jk} \phi_+= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
\begin{eqnarray}
\label{auxalg1}
dh_{ij} \Gamma^{ij} \phi_+= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
\begin{eqnarray}
\label{auxalg1b}
\bigg(\Gamma^i {\tilde{\nabla}}_i \Phi +{1 \over 2} h_i \Gamma^i -{1 \over 12} W_{ijk} \Gamma^{ijk} \bigg) \phi_+= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{auxalg1c}
{\tilde{F}}_{ij} \Gamma^{ij} \phi_+ = {\cal{O}}(\alpha') \ .
\end{eqnarray}
Furthermore, the KSEs on $\eta_-$ also simplify to
\begin{eqnarray}
\label{par4}
{\tilde{\nabla}}_i \eta_- -{1 \over 8}W_{ijk} \Gamma^{jk} \eta_-= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
\begin{eqnarray}
\label{auxalg2}
dh_{ij} \Gamma^{ij} \eta_-= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
\begin{eqnarray}
\label{auxalg2b}
\bigg(\Gamma^i {\tilde{\nabla}}_i \Phi -{1 \over 2} h_i \Gamma^i -{1 \over 12} W_{ijk} \Gamma^{ijk} \bigg) \eta_-= {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\label{auxalg2c}
{\tilde{F}}_{ij} \Gamma^{ij} \eta_- = {\cal{O}}(\alpha') \ .
\end{eqnarray}
In both cases above, (\ref{par3}) and (\ref{par4}) are a consequence of the gravitino KSE, (\ref{auxalg1b}) and (\ref{auxalg2b}) are associated to the dilatino KSE,
while (\ref{auxalg1c}) and (\ref{auxalg2c}) are derived from the gaugino KSE. The two additional conditions (\ref{auxalg1}) and (\ref{auxalg2}) can be thought of
as integrability conditions.
\subsection{Independent KSEs}
The KSEs we have stated in the previous sections (\ref{par3bb})-(\ref{auxalg2cbb}) and (\ref{par3})-(\ref{auxalg2c}) are not all independent.
It turns out that the independent KSEs are
\begin{eqnarray}
\label{gravsimp}
\hat{\tn}\eta_\pm\equiv {\tilde{\nabla}}_i \eta_\pm - {1 \over 8} W_{ijk} \Gamma^{jk} \eta_\pm = {\cal{O}}(\alpha'^2)
\end{eqnarray}
and
\begin{eqnarray}
\label{algsimpmax}
\bigg(\Gamma^i {\tilde{\nabla}}_i \Phi \pm {1 \over 2} h_i \Gamma^i -{1 \over 12} W_{ijk} \Gamma^{ijk} \bigg) \eta_\pm = {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
This is the case irrespective of whether $\phi_+^{[0]} \equiv 0$ or $\phi_+^{[0]} \not\equiv 0$, though the
conditions on the bosonic fields are somewhat different. The proof of this independence of the KSEs requires the use of field equations and Bianchi identities
and it is rather involved. The details can be found in appendix B.
\newsection{Supersymmetry enhancement}
A key ingredient in the investigation of heterotic horizons is that supersymmetry is always enhanced. As a result, horizons preserve
2, 4, 6 and 8 supersymmetries \cite{hethor}. However this is based on a global argument which we shall see does not necessarily
apply to ${\cal{O}}(\alpha'^2)$.
As a result we shall seek some alternative conditions which guarantee that supersymmetry is enhanced. In particular we shall show that
if there exists a Killing spinor $\epsilon=\epsilon(\eta_+, \eta_-)$ up to ${\cal{O}}(\alpha'^2)$, i.e.\ $\eta_-$
solves (\ref{gravsimp}) and (\ref{algsimpmax}) up to ${\cal{O}}(\alpha'^2)$,
such that $\eta_-^{[0]} \neq 0$, and the horizon has $h^{[0]} \neq 0$, then there is automatic supersymmetry enhancement.
To prove this, it suffices to demonstrate that $h$ leaves all fields invariant and that it is covariantly constant with respect
to the connection with torsion $\hat{\tn}$ on ${\cal S}$. Indeed, first note that ({\ref{udepa}}) implies that
\begin{eqnarray}
\label{niceh}
\hat{\tn}_ih_j\equiv {\tilde{\nabla}}_i h_j - {1 \over 2} W_{ijk} h^k= {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
In particular, to both zeroth and first order in $\alpha'$,
$h$ defines an isometry on ${\cal{S}}$, with $h^2=const+{\cal{O}}(\alpha'^2)$.
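Indeed, the symmetric part of ({\ref{niceh}}) gives ${\tilde{\nabla}}_{(i} h_{j)} = {\cal{O}}(\alpha'^2)$, as $W$ is skew-symmetric, while contracting ({\ref{niceh}}) with $h^j$ gives
\begin{eqnarray}
{\tilde{\nabla}}_i h^2 = 2 h^j {\tilde{\nabla}}_i h_j = W_{ijk} h^j h^k + {\cal{O}}(\alpha'^2) = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}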
Then the gauge equation ({\ref{geq1a}})
implies
\begin{eqnarray}
\label{phlie}
{\cal{L}}_h \Phi = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Also, the $u$-dependent part of ({\ref{auxalg1c}}) implies
\begin{eqnarray}
\label{extraalg3}
(i_h {\tilde{F}})_i \Gamma^i \eta_-= {\cal{O}}(\alpha')~,
\end{eqnarray}
which implies that $i_h {\tilde{F}}= {\cal{O}}(\alpha')$. So
in the gauge for which $i_h {\cal{B}}=0$, one has
\begin{eqnarray}
{\cal{L}}_h {\tilde{F}} = {\cal{O}}(\alpha') \ .
\end{eqnarray}
Next we consider ${\cal{L}}_h W$, where
\begin{eqnarray}
\label{lie3}
{\cal{L}}_h W = -{\alpha' \over 2} \bigg( {\rm tr}\big( (i_h \check R) \wedge \check R\big) \bigg)+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
because $dh=i_h W+{\cal{O}}(\alpha'^2)$. To evaluate this expression, note first that the integrability conditions of
\begin{eqnarray}
\hat{\tn}_i \eta_-={\cal{O}}(\alpha'^2), \qquad \hat{\tn}_i(h_\ell \Gamma^\ell \eta_-)={\cal{O}}(\alpha'^2)
\end{eqnarray}
are
\begin{eqnarray}
{\hat {\tilde{R}}}_{ijpq} \Gamma^{pq} \eta_-={\cal{O}}(\alpha'^2), \qquad
{\hat {\tilde{R}}}_{ijpq} \Gamma^{pq} (h_\ell \Gamma^\ell \eta_-)={\cal{O}}(\alpha'^2)
\end{eqnarray}
from which we obtain the condition
\begin{eqnarray}
h^\ell {\hat {\tilde{R}}}_{ij\ell q} ={\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and hence, as a consequence of ({\ref{curvcross}}),
\begin{eqnarray}
h^\ell \check{\tilde{R}}_{\ell qij} ={\cal{O}}(\alpha')~.
\end{eqnarray}
Moreover,
\begin{eqnarray}
h^\ell \check {\tilde{R}}_{\ell q+-} = h^i (dh)_{i q} ={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
It follows that the contribution of $i_h \check R$ to the RHS of ({\ref{lie3}}) is at least ${\cal{O}}(\alpha')$, and hence
\begin{eqnarray}
{\cal{L}}_h W={\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
So, we have shown that to both zero and first order in $\alpha'$,
the Lie derivative of the metric on ${\cal{S}}$, as well as $h, \Phi$ and $W$ with respect to $h$
vanishes, and the Lie derivative of ${\tilde{F}}$ with respect to $h$ vanishes to zeroth order
in $\alpha'$.
Supersymmetry is therefore enhanced, because if $\eta_+$ satisfies ({\ref{gravsimp}})
and ({\ref{algsimpmax}}), then so does $\eta_-' = \Gamma_- h_i \Gamma^i \eta_+$. Conversely, if $\eta_-$ satisfies
({\ref{gravsimp}})
and ({\ref{algsimpmax}}), then so does $\eta_+'= \Gamma_+ h_i \Gamma^i \eta_-$.
The proof of this makes use of the conditions
({\ref{niceh}}), together with ({\ref{phlie}}) and ({\ref{auxalg1}})
and ({\ref{auxalg2}}), and the reasoning is identical to that used
in \cite{hethor}.
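For instance, for the gravitino condition a short computation, using $\Gamma_- \Gamma^i = - \Gamma^i \Gamma_-$ and $[\Gamma^i, \Gamma^{kl}] = 2 \delta^{ik} \Gamma^l - 2 \delta^{il} \Gamma^k$, gives
\begin{eqnarray}
\hat{\tn}_j \big( \Gamma_- h_i \Gamma^i \eta_+ \big)
= \Gamma_- \big( \hat{\tn}_j h_i \big) \Gamma^i \eta_+ + \Gamma_- h_i \Gamma^i \, \hat{\tn}_j \eta_+ = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
on using ({\ref{niceh}}) and ({\ref{gravsimp}}) for $\eta_+$; the algebraic condition ({\ref{algsimpmax}}) for $\eta_-'$ follows in a similar way from the conditions listed above.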
This establishes a 1-1 correspondence between
spinors $\eta_+$ and $\eta_-$ satisfying ({\ref{gravsimp}})
and ({\ref{algsimpmax}}), so the number of supersymmetries preserved
is always even.
Next we wish to
determine whether a similar supersymmetry enhancement argument holds for $\eta_+$ spinors. In particular if there exists a solution to ({\ref{gravsimp}})
and ({\ref{algsimpmax}}) with $\eta_+^{[0]} \neq 0$ and $h^{[0]} \neq 0$, does this
imply that the number of $\eta_+$ solutions is equal to the number of $\eta_-$ solutions?
This does not follow
from a local analysis of ({\ref{gravsimp}})
and ({\ref{algsimpmax}}), because there is no analogue of
the condition ({\ref{udepa}}) acting on $\eta_+$.
Nevertheless, in \cite{hethor} a global analysis was used in
order to establish such a correspondence, by computing the
Laplacian of $h^2$ and applying a maximum principle argument,
in order to obtain ({\ref{niceh}}) to zeroth order in $\alpha'$.
We shall revisit this analysis in section \ref{hsq} including
the $\alpha'$ corrections.
\newsection{Geometry}
It is a consequence of the results of \cite{hethor}, see also section \ref{hsq}, that
horizons with non-trivial fluxes preserve an even number of supersymmetries up to ${\cal O}(\alpha')$. Furthermore we have also demonstrated that such horizons
with $\eta_-$ Killing spinors preserve an even number of supersymmetries up to ${\cal O}(\alpha'^2)$. It is straightforward to see
that horizons with more than 8 supersymmetries are trivial, ie the rotation $h$ vanishes. Therefore, the heterotic horizons of interest preserve
2,4,6 and 8 supersymmetries.
Up to ${\cal O}(\alpha')$, the investigation of the geometry of all such horizons is identical to that given in \cite{hethor} for heterotic horizons
with closed 3-form field strength. Here we shall describe the geometry of the horizons that admit a $\eta_-$ Killing spinor up to ${\cal O}(\alpha'^2)$. We have seen that
for such horizons $h$ is parallel with respect to the connection with torsion up to ${\cal O}(\alpha'^2)$. Because of this, the geometry of such horizons is very similar to that
of horizons with closed 3-form flux. The only differences between the geometries of the two cases are located in the modified Bianchi identity for the 3-form flux.
As the two cases are similar, the description of the geometry will be brief.
\subsection{Horizons with $G_2$ structure}
Such horizons admit two supersymmetries up to ${\cal O}(\alpha'^2)$. In particular $h$ satisfies (\ref{niceh}).
The spacetime locally can be described as a (principal) $SL(2, \bb{R})$ fibration over a 7-dimensional manifold $B^7$
which admits a metric $d\tilde s_{(7)}^2$ and a 3-form $\tilde H_{(7)}$ such that the connection $\hat{\tilde\nabla}^{(7)}$ with torsion $\tilde H_{(7)}$
has holonomy contained in $G_2$.
The spacetime metric and 3-form flux can be written as
\begin{eqnarray}
ds^2&=&\eta_{ab} \lambda^a \lambda^b+d\tilde s_{(7)}^2+{\cal O}(\alpha'^2)~,~~~
\cr
H&=&CS(\lambda)+\tilde H_{(7)}+{\cal O}(\alpha'^2)~,
\end{eqnarray}
where $CS(\lambda)$ is the Chern-Simons form\footnote{Note that $CS(\lambda)= du\wedge dr\wedge h+r du\wedge dh+k^{-2} h\wedge dh$.} of the principal bundle connection,
\begin{eqnarray}
\lambda^- &=& {\bf{e}}^-~,~~~
\lambda^+ = {\bf{e}}^+ - {1 \over 2} k^2 u^2 {\bf{e}}^- -u h~,~~~
\lambda^1 = k^{-1} \big(h+ k^2 u {\bf{e}}^-\big)~,
\label{g2vbi}
\end{eqnarray}
$k^2=h^2$ is constant up to ${\cal O}(\alpha'^2)$
and
\begin{eqnarray}
\tilde H_{(7)}=k \varphi+ e^{2\Phi} \star_7d\big( e^{-2\Phi} \varphi\big)+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
The 3-form $\varphi$ is the fundamental $G_2$ form and it is related to the fundamental $Spin(7)$ form $\phi$ of the $\eta_+$ Killing spinor via $\varphi=k^{-1} i_h\phi+{\cal{O}}(\alpha'^2)$.
The vector fields associated to $\lambda^-, \lambda^+, \lambda^1$ satisfy an $\mathfrak{sl}(2,\bb{R})$ algebra. The dilaton $\Phi$ depends only on the coordinates of $B^7$.
To find solutions, one has to solve the remaining equations
\begin{eqnarray}
&&d[e^{-2\Phi}\star_7\varphi]={\cal O}(\alpha'^2)~,~~~
\cr
&&k^{-2}\,dh\wedge dh+ d\tilde H_{(7)}=-{\alpha'\over4} \bigg(-2 dh\wedge dh+ \mathrm {tr}( \check R_{(8)}\wedge \check R_{(8)}- F\wedge F)\bigg)+{\cal O}(\alpha'^2)~,~~~
\cr
&&(dh)_{ij}={1\over2} \star_7\varphi_{ij}{}^{kl}
(dh)_{kl}+{\cal O}(\alpha'^2)~,~~~~F_{ij}={1\over2} \star_7\varphi_{ij}{}^{kl}
F_{kl}+{\cal O}(\alpha'^2)~.
\label{g2cons}
\end{eqnarray}
The first condition in (\ref{g2cons}) is required for $B^7$ to admit a $G_2$ structure compatible with a connection with skew-symmetric torsion. The second condition
is the anomalous Bianchi identity of the 3-form field strength written in terms of $B^7$ data. The curvature $\check R_{(8)}$ is that of the near horizon section ${\cal S}$
with metric and skew symmetric torsion given by
\begin{eqnarray}
d\tilde s_{(8)}^2= k^{-2} h\otimes h+d\tilde s_{(7)}^2+{\cal{O}}(\alpha'^2)~,~~~\tilde H_{(8)}= k^{-2} h\wedge dh+\tilde H_{(7)}+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
As $\check R_{(8)}$ is invariant under $h$ and $i_h \check R_{(8)}={\cal{O}}(\alpha'^2)$, it descends to $B^7$. Finally, the last two equations in (\ref{g2cons})
imply that both $dh$ and $F$ are $\mathfrak{g}_2$ instantons on $B^7$.
\subsection{Horizons with $SU(3)$ structure}
Such horizons preserve 4 supersymmetries. Locally the spacetime is a principal bundle with fibre $SL(2, \bb{R})\times U(1)$ over a K\"ahler with torsion (KT) manifold $B^6$
with Hermitian form $\omega_{(6)}$.
The metric and 3-form field strength of the spacetime can be written as
\begin{eqnarray}
ds^2=\eta_{ab} \lambda^a \lambda^b+ d\tilde s^2_{(6)}+{\cal O}(\alpha'^2)~,~~~H&=&CS(\lambda)+\tilde H_{(6)}+{\cal O}(\alpha'^2)~,
\end{eqnarray}
where $\lambda^a$, $a=+,-,1,6$, are the components of the principal bundle connection, whose $a=+,-,1$ components are as in (\ref{g2vbi}), and
\begin{eqnarray}
\lambda^6=k^{-1} \ell
\end{eqnarray}
which is along the $\mathfrak{u}(1)$ direction in the Lie algebra. $h^2=k^2$ is constant up to ${\cal O}(\alpha'^2)$. The curvature of the principal bundle connection
$\lambda^a$ is expressed in terms of $dh$ and $d\ell$, which are 2-forms on $B^6$, and these are required to satisfy
\begin{eqnarray}
dh^{2,0}=d\ell^{2,0}={\cal{O}}(\alpha'^2)~,~~~dh_{ij} \omega_{(6)}^{ij}={\cal{O}}(\alpha'^2)~,~~~d\ell_{ij} \omega_{(6)}^{ij}=-2 k^2+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
ie $h$ is a $\mathfrak{su}(3)$ instanton on $B^6$ while $\ell$ is a $\mathfrak{u}(3)$ instanton on $B^6$.
The KT manifold $B^6$ is in addition conformally balanced, ie
\begin{eqnarray}
\theta_{\omega_{(6)}}=2d\Phi+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where $\theta$ is the Lee form and the torsion is
\begin{eqnarray}
\tilde H_{(6)}=-i_I d\omega+{\cal{O}}(\alpha'^2) =e^{2 \Phi} \star_6 d [e^{-2\Phi} \omega_{(6)}]+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
The dilaton $\Phi$ depends only on the coordinates of $B^6$. The gauge connection is a $\mathfrak{su}(3)$ instanton on $B^6$, i.e.
\begin{eqnarray}
F^{2,0}={\cal{O}}(\alpha')~,~~~F_{ij} \omega_{(6)}^{ij}={\cal{O}}(\alpha')~.
\end{eqnarray}
To find examples of such horizons, two additional conditions should be satisfied. One is the restriction that
\begin{eqnarray}
\hat{\tilde R}_{(6)}{}_{ij} \omega_{(6)}^{ij}=-2 k^2 d\ell+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
This arises from the requirement that the $U(3)$ structure on $B^6$ lifts to an $SU(3)$ structure on the spacetime or, equivalently, on the spatial horizon section ${\cal S}$. The other is
the anomalous Bianchi identity which now reads
\begin{eqnarray}
&&k^{-2} dh\wedge dh+k^{-2} d\ell\wedge d\ell+ d\Big(e^{2 \Phi}\star_6 d [e^{-2\Phi} \omega]\Big)=
\cr~~~~~~~~~~~&&-{\alpha'\over4} \bigg(-2 dh\wedge dh+ \mathrm {tr}( \check R_{(8)}\wedge \check R_{(8)}- F\wedge F)\bigg)+{\cal O}(\alpha'^2)~,
\end{eqnarray}
where $\check R_{(8)}$ is the curvature of the connection with torsion on ${\cal S}$, whose metric and torsion are now given by
\begin{eqnarray}
d\tilde s^2&=&k^{-2} (h\otimes h+\ell\otimes\ell)+d\tilde s_{(6)}^2+{\cal{O}}(\alpha'^2)~,
\nonumber \\
\tilde H&=& k^{-2} (h\wedge dh+\ell\wedge d\ell)+\tilde H_{(6)}+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Note that $\hat\nabla_{(8)}$ has holonomy contained in $SU(3)$ and so $\check R_{(8)}$ is a well defined form on $B^6$.
\subsection{Horizons with $SU(2)$ structure and 6 supersymmetries}
The spacetime is locally a $SL(2,\bb{R})\times SU(2)$ principal fibration over a 4-dimensional anti-self-dual Weyl Einstein manifold $B^4$
with metric $d\mathring s^2_{(4)}$ and quaternionic K\"ahler structure 2-forms $\omega^{r'}_{(4)}$. The spacetime metric and 3-form field strength can be expressed as
\begin{eqnarray}
ds^2=\eta_{ab} \lambda^a\lambda^b+ \delta_{r's'} \lambda^{r'} \lambda^{s'}+e^{2\Phi} d\mathring s^2_{(4)}+{\cal{O}}(\alpha'^2)~,~~~H=CS(\lambda)+\tilde H_{(4)}+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where $\tilde H_{(4)}=-\mathring\star_{4} de^{2\Phi}$, the principal bundle connection $\lambda^a$ for $a=+,-,1$ coincides with that
of (\ref{g2vbi}) while
\begin{eqnarray}
\lambda^{r'}=k^{-1} \ell^{r'}~,
\end{eqnarray}
are the components along the $\mathfrak{su}(2)$ subalgebra of the fibre. Furthermore the dilaton depends only on the coordinates of $B^4$, while $dh$ as well as the
curvature $({\cal F}^{\rm sd})^{r'}$ of $\lambda^{r'}$ are 2-forms on $B^4$. In addition, we have that
\begin{eqnarray}
dh^{\rm sd}={\cal{O}}(\alpha'^2)~, ~~~({\cal F}^{\rm sd})^{r'}={k\over4}\omega_{(4)}^{r'}+{\cal{O}}(\alpha'^2)~,~~~F^{\rm sd}={\cal{O}}(\alpha')
\label{6conx}
\end{eqnarray}
and $dh^{\rm ad}$, $({\cal F}^{\rm ad})^{r'}$ and $F^{\rm ad}$ are not restricted, where the self-dual and anti-self dual components are appropriately denoted.
Geometrically, the set up is such that the $SO(4)=SU(2)\cdot SU(2)$ structure of $B^4$, when lifted to the 7-dimensional manifold which is the principal
bundle with fibre $SU(2)$, reduces to $SU(2)$ as required by supersymmetry.
The only remaining condition to find solutions is
\begin{eqnarray}
&&\mathring{\nabla}^2 e^{2\Phi}=-{1\over2} ({\cal F}^{\rm ad})_{ij}^{r'}({\cal F}^{\rm ad})^{ij}_{r'}-{k^{-2}\over2}
dh_{ij} dh^{ij}+{3\over 8} k^2 e^{4\Phi}
\cr&&~~~~~~~~~~~~~~~+{\alpha'\over8} \bigg(-2 dh_{ij} dh^{ij}+ \mathrm {tr}( \check R_{(8) ij} \check R_{(8)}{}^{ij}- F_{ij} F^{ij})\bigg)+{\cal O}(\alpha'^2)~.
\label{6horcon}
\end{eqnarray}
Again $\check R_{(8)}$ is the curvature of the connection with torsion of the horizon section ${\cal S}$ which has metric and 3-form field strength
\begin{eqnarray}
d\tilde s^2&=&k^{-2} h\otimes h+\delta_{r's'} \lambda^{r'} \lambda^{s'}+ e^{2\Phi} d\mathring s^2_{(4)}+{\cal{O}}(\alpha'^2)~,
\nonumber \\
\tilde H &=& k^{-2} h\wedge dh+CS(\lambda^{r'})+\tilde H_{(4)}+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
As $\hat\nabla_{(8)}$ has holonomy contained in $SU(2)$, $\check R_{(8)}$ is a 2-form on $B^4$. For more details on the geometry of heterotic backgrounds
that preserve 6 supersymmetries and have $SU(2)$ holonomy see \cite{compgp, hethor}.
\subsection{Horizons with $SU(2)$ structure and 8 supersymmetries}
This class of horizons has a similar geometry to those of the previous section that preserve 6 supersymmetries. The differences are that
\begin{eqnarray}
({\cal F}^{\rm sd})^{r'}={\cal{O}}(\alpha'^2)~,
\end{eqnarray}
so ${\cal F}^{r'}$ is an anti-self dual instanton on $B^4$ which now is a hyper-K\"ahler manifold with respect to the metric $d\mathring s^2_{(4)}$. Furthermore
the equation for the dilaton (\ref{6horcon}) now reads
\begin{eqnarray}
&&\mathring{\nabla}^2 e^{2\Phi}=-{1\over2} {\cal F}_{ij}^{r'}{\cal F}^{ij}_{r'}-{k^{-2}\over2}
dh_{ij} dh^{ij}
\cr&&~~~~~~~~~~~+{\alpha'\over8} \bigg(-2 dh_{ij} dh^{ij}+ \mathrm {tr}( \check R_{(8) ij} \check R_{(8)}{}^{ij}- F_{ij} F^{ij})\bigg)+{\cal O}(\alpha'^2)~.
\label{8horcon}
\end{eqnarray}
Therefore at zeroth order, a partial integration argument reveals that
\begin{eqnarray}
dh={\cal O}(\alpha')~,~~~{\cal F}^{r'}={\cal O}(\alpha')~.
\end{eqnarray}
Thus, up to a local isometry, the spacetime is $AdS_3\times S^3\times T^4$ or $AdS_3\times S^3\times K_3$ and the dilaton is constant. One does not expect additional $\alpha'$ corrections
to the geometry in the case that $\check R_{(8)}$ is identified with $F$, though additional corrections are expected otherwise. In the absence of 5-branes, consistency
requires that the Pontryagin number of the tangent bundle of $B^4$ cancels that of the gauge bundle which is the vanishing condition for the global anomaly.
\newsection{Global Properties}
\subsection{ Maximum principle on $h^2$} \label{hsq}
We shall revisit the global analysis of
\cite{hethor} by calculating the Laplacian of $h^2$,
but including also $\alpha'$ correction terms. Then we shall
examine the conditions imposed on the geometry by
this expression. To avoid the trivial case when $h^2={\cal{O}}(\alpha'^2)$, we take
$h^{[0]} \neq 0$.
Next we calculate the Laplacian of $h^2$ to find
that
\begin{eqnarray}
\label{lap1}
&&{\tilde{\nabla}}^i {\tilde{\nabla}}_i h^2 + (h-2 d \Phi)^j {\tilde{\nabla}}_j h^2
= 2 {\tilde{\nabla}}^{(i} h^{j)} {\tilde{\nabla}}_{(i} h_{j)}
+{1 \over 2}(dh - i_h W)_{ij} (dh-i_h W)^{ij}
\nonumber \\
&&~~-{\alpha' \over 4} h^i h^j
\bigg(-2 dh_{i \ell}
dh_j{}^\ell + \check {\tilde{R}}_{i \ell_1 \ell_2 \ell_3}
\check {\tilde{R}}_j{}^{\ell_1 \ell_2 \ell_3}
- {\tilde{F}}_{i\ell}{}^{ab} {\tilde{F}}_j{}^\ell{}_{ab} \bigg) + {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
In computing this expression, we
made use of the Einstein equation ({\ref{einsp}})
together with the gauge field equations ({\ref{geq1a}}) and
({\ref{geq1b}}).
We remark that the calculation proceeds
in exactly the same way as in \cite{hethor}; the $\alpha'$ terms
in ({\ref{lap1}}) originate from the $\alpha'$ terms in
$2 h^i h^j {\tilde{R}}_{ij}$.
It should be noted
that in order to fully control ${\cal{O}}(\alpha'^2)$ terms in this expression,
one would need to know the Einstein equations up to and including
$\alpha'^2$.
To begin, we consider ({\ref{lap1}}) to zeroth order in $\alpha'$.
We then re-obtain the conditions found in \cite{hethor} via a maximum
principle argument, i.e.
\begin{eqnarray}
\label{firstiso}
h^2&=& {\rm const} + {\cal{O}}(\alpha')~,~~~
{\tilde{\nabla}}_{(i} h_{j)}= {\cal{O}}(\alpha')~,~~~
dh-i_h W = {\cal{O}}(\alpha')
\end{eqnarray}
In particular, it follows from these conditions that
\begin{eqnarray}
i_h dh=O(\alpha') \ ,
\end{eqnarray}
and also
\begin{eqnarray}
{\cal{L}}_h \Phi = {\cal{O}}(\alpha'), \qquad {\cal{L}}_h W = {\cal{O}}(\alpha') \ .
\end{eqnarray}
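One way to see that $i_h dh = {\cal{O}}(\alpha')$ is via the identity
\begin{eqnarray}
i_h dh = {\cal{L}}_h h - d (h^2) = {\cal{O}}(\alpha')~,
\end{eqnarray}
since both terms on the right-hand-side vanish to zeroth order in $\alpha'$ as a consequence of ({\ref{firstiso}}); similarly, ${\cal{L}}_h \Phi = {\cal{O}}(\alpha')$ follows from ({\ref{geq1a}}) on using ${\tilde{\nabla}}^i h_i = {\cal{O}}(\alpha')$.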
Furthermore, it also follows that if $\eta_+$ satisfies
({\ref{gravsimp}}), then $\Gamma_- h_i \Gamma^i \eta_+$ also satisfies
({\ref{gravsimp}}) to zeroth order in $\alpha'$. The integrability conditions
therefore imply that
\begin{eqnarray}
{\hat{\tilde{R}}}_{ijmn} h^m \Gamma^n \phi_+ = {\cal{O}}(\alpha')~,
\end{eqnarray}
and hence
\begin{eqnarray}
\check {\tilde{R}}_{mnij} h^m = {\cal{O}}(\alpha')~.
\end{eqnarray}
On substituting these conditions back into ({\ref{lap1}}) one finds that
the remaining content of ({\ref{lap1}}) is
\begin{eqnarray}
\label{lap2}
{\tilde{\nabla}}^i \bigg( e^{-2 \Phi} {\tilde{\nabla}}_i h^2 \bigg) + e^{-2 \Phi} h^j {\tilde{\nabla}}_j h^2
= {\alpha' \over 2} e^{-2 \Phi} h^i h^j {\tilde{F}}_{i\ell}{}^{ab} {\tilde{F}}_j{}^\ell{}_{ab}
+ {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
On integrating the ${\cal{O}}(\alpha')$ part of ({\ref{lap2}}) over the zeroth order
horizon section, one finds that
\begin{eqnarray}
i_h {\tilde{F}} = {\cal{O}}(\alpha')~,
\end{eqnarray}
and furthermore
\begin{eqnarray}
h^2 = {\rm const} + {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
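To see this, note that integrating the ${\cal{O}}(\alpha')$ part of the left-hand-side of ({\ref{lap2}}) over the zeroth order horizon section gives zero, on using ({\ref{geq1a}}) and the constancy of $(h^2)^{[0]}$, whereas the right-hand-side is non-negative,
\begin{eqnarray}
\int_{{\cal S}} e^{-2 \Phi^{[0]}}\, h^{[0] i} h^{[0] j}\, {\tilde{F}}^{[0]}_{i \ell}{}^{ab} {\tilde{F}}^{[0]}_j{}^{\ell}{}_{ab}
= \int_{{\cal S}} e^{-2 \Phi^{[0]}} \parallel i_h {\tilde{F}}^{[0]} \parallel^2 \geq 0~,
\end{eqnarray}
for compact gauge group. Hence this integral vanishes, giving $i_h {\tilde{F}} = {\cal{O}}(\alpha')$, and the remaining content of ({\ref{lap2}}) is an equation to which the maximum principle applies, yielding $h^2 = {\rm const} + {\cal{O}}(\alpha'^2)$.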
It should be noted however that ({\ref{lap1}}) does not in general imply ({\ref{niceh}}). In particular, the conditions obtained from
the analysis of the properties of $h^2$ are not sufficient to
imply that if $\eta_+$, with $\eta_+^{[0]} \neq 0$, satisfies
({\ref{gravsimp}})
and ({\ref{algsimpmax}}), then $\eta_-'' = \Gamma_- h_i \Gamma^i
\eta_+$ also satisfies ({\ref{gravsimp}})
and ({\ref{algsimpmax}}). Thus although ({\ref{lap1}}) implies the horizons exhibit supersymmetry enhancement at ${\cal{O}}(\alpha')$, it does not imply
the same at ${\cal{O}}(\alpha'^2)$.
\subsection{ Lichnerowicz Type Theorems}
Next we shall investigate whether it is possible to identify Killing spinors with the zero modes of a suitable Dirac-like operator, by constructing a
generalized Lichnerowicz type theorem which incorporates the near-horizon fluxes.
Such Lichnerowicz type theorems have been established for near-horizon geometries
in D=11 supergravity \cite{lichner11}, type IIB \cite{lichneriib} and type IIA supergravity (both massive and massless) \cite{lichneriia1, lichneriia2},
as well as for $AdS$ geometries in ten and eleven dimensional supergravity \cite{lichnerads1, lichnerads2, lichnerads3, lichnerads4}.
To begin, let us first define the modified connection with torsion and the modified horizon Dirac operator, respectively
\begin{align}
{\nabla}^{(\kappa)}_{i} \equiv \hat{\tn}_{i} + \kappa \, \Gamma_{i} {\cal A} \ , \qquad\qquad\quad
{\cal D} \equiv \Gamma^{i} \hat{\tn}_{i} + q \, {\cal A} \ ,
\end{align}
where $\kappa, q \in \mathbb{R}$, and
\begin{align}
\notag
\hat{\tn}_i \eta_{\pm} &= \tilde{\nabla}_i \eta_{\pm} - \frac{1}{8} W_{ijk}\Gamma^{jk} \eta_{\pm} \ , \\
{\cal A} &= W_{ijk}\Gamma^{ijk} - 12\Gamma^i \tilde{\nabla}_i \Phi \mp 6 \Gamma^i h_i \ .
\end{align}
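Note that, since the horizon section ${\cal S}$ is eight-dimensional, $\Gamma^{i} \Gamma_{i} = 8$ and so the above definitions immediately relate the two operators as
\begin{eqnarray}
\Gamma^{i} {\nabla}^{(\kappa)}_{i} = {\cal D} + (8 \kappa - q)\, {\cal A} \ , \nonumber
\end{eqnarray}
i.e.\ they differ only by a multiple of ${\cal A}$.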
It is clear that if $\eta_{\pm}$ is a Killing spinor, i.e.
\begin{eqnarray}
\hat{\tn}_i \eta_{\pm} = {\cal{O}}(\alpha'^2), \qquad {\rm and} \qquad {\cal A} \eta_\pm = {\cal{O}}(\alpha'^2) \ ,
\end{eqnarray}
then ${\cal D} \eta_{\pm} = {\cal{O}}(\alpha'^2)$ also. Here we want to investigate the extent to
which the converse is true. We shall show that if ${\cal D} \eta_{\pm} = {\cal{O}}(\alpha'^2)$, then
\begin{eqnarray}
\label{killsp1}
\hat{\tn}_i \eta_{\pm} = {\cal{O}}(\alpha'), \qquad {\rm and} \qquad {\cal A} \eta_\pm = {\cal{O}}(\alpha') \ ,
\end{eqnarray}
and moreover
\begin{eqnarray}
\label{killsp2}
{{dh}}_{ij} \Gamma^{ij} \eta_\pm = {\cal{O}}(\alpha'), \qquad {\rm and} \qquad {{\tilde{F}}}^{ab}_{ij}\Gamma^{ij} \eta_\pm
= {\cal{O}}(\alpha') \ .
\end{eqnarray}
In order to obtain this result, we begin by considering the following functional
\begin{eqnarray}
\label{I functional}
{\cal I} \equiv \int_{{\cal S}} e^{c\Phi} \bigg( \langle {\nabla}^{(\kappa)}_{i} \eta_{\pm} , {\nabla}^{(\kappa)i} \eta_{\pm} \rangle
- \langle{\cal D} \eta_{\pm} , {\cal D} \eta_{\pm} \rangle \bigg) \ ,
\end{eqnarray}
where $c \in \mathbb{R}$, and we assume that all of the field equations hold. After some algebra, which is described in appendix D, we find
\begin{align}
\label{final_I}
\notag
{\cal I} = &\left(8\kappa^2 - \frac{1}{6} \kappa \right) \int_{{\cal S}} e^{-2 \Phi} \parallel {\cal A}\, \eta_{\pm} \parallel^2
+ \int_{{\cal S}} e^{-2\Phi} \langle \eta_{\pm}, \Psi {\cal D} \eta_{\pm} \rangle \\
&- \frac{\alpha'}{64} \int_{{\cal S}} e^{-2\Phi} \left( 2 \parallel \slashed{dh}\, \eta_{\pm} \parallel^2 + \parallel \slashed{\tilde{F}} \eta_{\pm} \parallel^2 - \langle \check{\tilde{R}}_{\ell_1\ell_2,\, ij}\Gamma^{\ell_1\ell_2}\eta_{\pm}, \check{\tilde{R}}^{ ij}_{\ell_3\ell_4,}\Gamma^{\ell_3\ell_4}\eta_{\pm}\rangle \right) + {\cal{O}}(\alpha'^2)\ ,
\end{align}
which holds if and only if $q= \frac{1}{12} + {\cal{O}}(\alpha'^2)$ and $c = -2 +{\cal{O}}(\alpha'^2)$; here $\Psi$ is defined as follows
\begin{eqnarray}
\Psi \equiv 2\left(\kappa - \frac{1}{12}\right) {\cal A}^{\dagger} -2 \Gamma^{i}\tilde{\nabla}_{i} \Phi - \frac{1}{6} \Gamma^{\ell_1\ell_2\ell_3}W_{\ell_1\ell_2\ell_3} + {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
The values of $q$ and $c$ are fixed by requiring that certain terms in the
functional ({\ref{I functional}}), which cannot be rewritten in terms of
the Dirac operator ${\cal{D}}$, or ${\cal{A}}^\dagger {\cal{A}}$, and which have no fixed sign, should vanish.
If ${\cal D} \eta_{\pm} = {\cal{O}}(\alpha'^2)$, the zeroth order part of ({\ref{I functional}}) is manifestly non-negative, while on the right-hand-side of (\ref{final_I}) the $\Psi$ term is ${\cal{O}}(\alpha'^2)$ and, for $0 < \kappa < \frac{1}{48}$, the coefficient $8\kappa^2 - \frac{1}{6}\kappa$ is negative, so both sides must vanish at zeroth order. Hence the part of (\ref{final_I}) which is of zeroth order in $\alpha'$ implies that if $0 < \kappa < \frac{1}{48}$, then
\begin{eqnarray}
\label{Dirac->Killing}
\label{gravitino + alg}
{\cal D} \eta_{\pm} = {\cal{O}}(\alpha'^2) \quad \Longrightarrow ({\ref{killsp1}})
\end{eqnarray}
and establishes the first part of the theorem. Next
the integrability condition of $\hat{\tn}\eta_{\pm} = {\cal{O}}(\alpha')$ is
\begin{eqnarray}
\hat{\tilde{R}}_{mn, \ell_1\ell_2}\Gamma^{\ell_1\ell_2}\eta_{\pm} = {\cal{O}}(\alpha') \ ,
\end{eqnarray}
which in turn implies that
\begin{eqnarray}
\check{\tilde{R}}_{\ell_1\ell_2, mn}\Gamma^{\ell_1\ell_2} \eta_{\pm} = {\cal{O}}(\alpha') \ .
\end{eqnarray}
Hence we shall neglect the term in (\ref{final_I}) which is quadratic in
$\check{\tilde{R}}$, as this term is ${\cal{O}}(\alpha'^3)$.
Then, assuming (\ref{Dirac->Killing}), the part of (\ref{final_I})
which is first order in $\alpha'$ further implies ({\ref{killsp2}}).
This completes the proof.
\newsection{Nearly supersymmetric horizons }
\subsection{Description of the backgrounds}
We have proven that for near horizon geometries the necessary and sufficient conditions
imposed by supersymmetry on the spinors can be reduced
to ({\ref{gravsimp}}) and ({\ref{algsimpmax}}).
In this section, we shall consider
the case for which the
supersymmetry is explicitly partially broken, in the sense that the
gravitino KSE ({\ref{gravsimp}})
admits solutions but the dilatino one ({\ref{algsimpmax}}) does not. We also assume that the fields satisfy
\begin{eqnarray}
\Delta = {\cal{O}}(\alpha'^2), \qquad H = d ({\bf{e}}^- \wedge {\bf{e}}^+) +W + {\cal{O}}(\alpha'^2) \ .
\label{nearh}
\end{eqnarray}
These conditions were previously obtained via the supersymmetry
analysis; here we shall assume them.
In particular, all of the conditions obtained from
the global analysis of the Laplacian of $h^2$ in Section 7
remain true. As a consequence of this,
\begin{eqnarray}
\label{zeroiso}
\hat{\tn}_i h_j = {\cal{O}}(\alpha')~.
\end{eqnarray}
However we do not assume that $\hat{\tn} h = {\cal{O}}(\alpha'^2)$.
One consequence of these assumptions is that none of the spacetime Killing spinor equations are satisfied even at ${\cal{O}}(\alpha')$.
In particular, the spacetime gravitino KSE requires in addition the condition that $dh_{ij}\Gamma^{ij}\eta_+={\cal{O}}(\alpha')$ which is
not one of our requirements. In what follows, we shall investigate the consequences of the above assumptions on the geometry of
the spatial horizon sections ${\cal S}$. We shall also comment on the special case where $\hat{\tn} h = {\cal{O}}(\alpha'^2)$.
\subsection{Additional parallel spinors}
A key property of backgrounds that satisfy the gravitino KSE but not the dilatino one is the existence of
additional parallel spinors, see also appendix C. In the present context, to show this we focus
on the spinor $\eta_+$;
a similar analysis can be undertaken for the $\eta_-$ spinors.
To proceed, it will be useful to define
\begin{eqnarray}
{{\cal A}} = W_{ijk} \Gamma^{ijk} -12 \Gamma^i {\tilde{\nabla}}_i \Phi -6 h_i \Gamma^i~,
\end{eqnarray}
so that the algebraic condition ({\ref{algsimpmax}}) on $\eta_+$ is equivalent to ${\cal{A}} \eta_+ = {\cal{O}}(\alpha'^2)$.
We then note the useful identity
\begin{eqnarray}
{\tilde{\nabla}}_i W_{\ell_1 \ell_2 \ell_3} \Gamma^{\ell_1 \ell_2 \ell_3} \eta_+ &=&
{\tilde{\nabla}}_i ({\cal A} \eta_+) -{1 \over 8} W_{i \ell_1 \ell_2} \Gamma^{\ell_1 \ell_2}
({\cal A} \eta_+)
\nonumber \\
&+&3 W_{\ell_1 \ell_2 q} W_{i \ell_3}{}^q \Gamma^{\ell_1 \ell_2 \ell_3} \eta_+
-\big(6 {\tilde{\nabla}}^m \Phi+3 h^m\big) W_{mi\ell} \Gamma^\ell \eta_+
\nonumber \\
&+&\big(12 \Gamma^\ell {\tilde{\nabla}}_i {\tilde{\nabla}}_\ell \Phi +6 {\tilde{\nabla}}_i h_\ell \Gamma^\ell\big) \eta_+ \ .
\end{eqnarray}
The integrability conditions of ({\ref{gravsimp}}) imply that
\begin{eqnarray}
\label{ksenil1}
{1 \over 6} \bigg({\tilde{\nabla}}_i ({\cal A} \eta_+) -{1 \over 8} W_{i \ell_1 \ell_2}
\Gamma^{\ell_1 \ell_2} ({\cal A} \eta_+) \bigg)
-{\alpha' \over 8} ({\tilde{F}}_{i \ell})_{ab} \Gamma^\ell ({\tilde{F}}_{q_1 q_2})^{ab}
\Gamma^{q_1 q_2} \eta_+
\nonumber \\
-{\alpha' \over 16} dh_{i \ell} \Gamma^\ell dh_{q_1 q_2} \Gamma^{q_1 q_2} \eta_+ = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and hence
\begin{eqnarray}
{1 \over 6} \langle \eta_+, \Gamma^i {\tilde{\nabla}}_i({\cal A} \eta_+) -{1 \over 8} W_{\ell_1 \ell_2 \ell_3}
\Gamma^{\ell_1 \ell_2 \ell_3} ({\cal A} \eta_+) \rangle
+{\alpha' \over 8} \langle ({\tilde{F}}_{\ell_1 \ell_2})_{ab} \Gamma^{\ell_1 \ell_2} \eta_+
, ({\tilde{F}}_{q_1 q_2})^{ab} \Gamma^{q_1 q_2} \eta_+ \rangle
\nonumber \\
+{\alpha' \over 16} \langle dh_{\ell_1 \ell_2} \Gamma^{\ell_1 \ell_2} \eta_+,
dh_{q_1 q_2} \Gamma^{q_1 q_2} \eta_+ \rangle = {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
Integrating this expression over ${\cal{S}}$ yields the conditions
\begin{eqnarray}
\label{F_dh_cond}
{\tilde{F}}_{ij} \Gamma^{ij} \eta_+ = {\cal{O}}(\alpha'), \qquad
dh_{ij} \Gamma^{ij} \eta_+ = {\cal{O}}(\alpha')~,
\end{eqnarray}
and substituting these conditions back into ({\ref{ksenil1}}) then implies
that
\begin{eqnarray}
{\tilde{\nabla}}_i ({\cal A} \eta_+) -{1 \over 8} W_{i \ell_1 \ell_2} \Gamma^{\ell_1 \ell_2} ({\cal A} \eta_+)={\cal{O}}(\alpha'^2) \ .
\label{naeta}
\end{eqnarray}
Equation ({\ref{naeta}}) states that the spinor $\tau_+={\cal A} \eta_+$ is also $\hat{\tilde\nabla}$-parallel. As $\tau_+$ has opposite chirality from $\eta_+$, it cannot be identified as an additional Killing spinor within the heterotic theory. Nevertheless, it is instrumental in the description of the geometry of ${\cal S}$.
\subsection{Nearly supersymmetric horizons with $G_2$ holonomy}
\subsubsection{A symmetry of the horizon section}
Suppose that we consider solutions
for which there exists a single solution $\eta_+$ to
the gravitino KSE
\begin{eqnarray}
\label{covcon1}
\hat{\tn} \eta_+ = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
for which $\big({\cal{A}} \eta_+\big)^{[0]} \neq 0$.
This implies that the horizon section
${\cal{S}}^{[0]}$ at zeroth order in $\alpha'$ admits a $G_2$ structure.
We begin by defining $\tau_+ = {\cal{A}} \eta_+$, with $\tau_+^{[0]} \neq 0$. It will be particularly useful to define
\begin{eqnarray}
\label{vecV}
V_i = \langle \eta_+, \Gamma_i \tau_+ \rangle~.
\end{eqnarray}
In what follows we shall show that $V$ is a symmetry of all the fields of the spatial
horizon section.
As $\tau_+^{[0]} \neq 0$, this implies that $V^{[0]} \neq 0$.
In addition, as $\eta_+$ and $\tau_+$ satisfy
\begin{eqnarray}
\label{eta_tau}
\hat{\tn} \eta_+ = {\cal{O}}(\alpha'^2)~, \qquad \hat{\tn} \tau_+
={\cal{O}}(\alpha'^2)~,
\end{eqnarray}
it follows that
\begin{eqnarray}
\label{parallelV}
\hat{\tn} V = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
so that $V^2=const. + {\cal{O}}(\alpha'^2)$, and $V$ is an isometry of
${\cal{S}}$ to both zero and first order in $\alpha'$.
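This can be seen directly: because the torsion 3-form $W$ is totally antisymmetric, it drops out of both the symmetrized derivative of $V$ and the derivative of its norm, so that ({\ref{parallelV}}) gives
\begin{eqnarray}
{\tilde{\nabla}}_{(i} V_{j)}= \hat{\tn}_{(i} V_{j)} ={\cal{O}}(\alpha'^2)~,~~~
{\tilde{\nabla}}_i V^2 = 2 V^j \hat{\tn}_{i} V_{j} ={\cal{O}}(\alpha'^2)~. \nonumber
\end{eqnarray}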
Next, we consider the relationship of $V$ to $h$. In particular, the
spinors $h_i \Gamma^i {\cal{A}} \eta_+$ and $V_i \Gamma^i {\cal{A}} \eta_+$
are both parallel with respect to $\hat{\tn}$ at zeroth order in $\alpha'$. As we have assumed that ({\ref{covcon1}}) admits only one
solution, there must be a nonzero constant $c$ such that
\begin{eqnarray}
V = ch+ {\cal{O}}(\alpha')~.
\end{eqnarray}
In addition, we have
\begin{eqnarray}
{\cal L}_V W = i_V dW + {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
because $dV=i_V W + {\cal{O}}(\alpha'^2)$. Also, as $V=ch+{\cal{O}}(\alpha')$ it follows that
\begin{eqnarray}
{\cal L}_V W = c i_h dW + {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
As a consequence of ({\ref{zeroiso}}), one has that $i_h dh={\cal{O}}(\alpha')$,
and from the global analysis of the Laplacian of $h^2$, we find
$i_h {\tilde{F}}={\cal{O}}(\alpha')$ as well as $\check{\tilde{R}}_{mnij} h^m = {\cal{O}}(\alpha')$.
These conditions imply that
\begin{eqnarray}
{\cal L}_V W = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and so $W$ is invariant.
Next we consider ${\cal L}_V \Phi$. As $V = c h + {\cal{O}}(\alpha')$ it follows that
\begin{eqnarray}
{\cal L}_V dh = c {\cal L}_h dh + {\cal{O}}(\alpha') = {\cal{O}}(\alpha')~.
\end{eqnarray}
Also we have
\begin{eqnarray}
{\cal L}_V {\tilde{R}}_{ij,pq} = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and
\begin{eqnarray}
\big({\cal L}_V {\tilde{F}}\big)_{ij}{}^a{}_b \tilde{F}^{ijb}{}_a
= {\cal{O}}(\alpha')~,
\end{eqnarray}
which follows from
\begin{eqnarray}
{\cal L}_V {\tilde{F}} = c [{\tilde{F}}, i_h {\cal{B}}]+ {\cal{O}}(\alpha')~.
\end{eqnarray}
Hence we have
\begin{eqnarray}
{\cal L}_V \bigg( \alpha' \big(-2dh_{ij} dh^{ij} + \check{\tilde{R}}_{ij,pq}
\check{\tilde{R}}^{ij,pq} - ({\tilde{F}}_{ij})^{ab} ({\tilde{F}}^{ij})_{ab} \big) \bigg) = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
So, on taking the Lie derivative of the trace of ({\ref{einsp}}) with respect to $V$ we find
\begin{eqnarray}
\label{L_trEins}
{\cal L}_V \bigg( {\tilde{\nabla}}^i h_i +2 {\tilde{\nabla}}_i {\tilde{\nabla}}^i \Phi \bigg) ={\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and hence, as a consequence of the field equation ({\ref{geq1a}}), we find
\begin{eqnarray}
\label{liex1}
{\cal L}_V \bigg( h^i {\tilde{\nabla}}_i \Phi + {\tilde{\nabla}}^i {\tilde{\nabla}}_i \Phi \bigg) = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Also, on taking the Lie derivative of the dilaton field equation
({\ref{deqsimp1}}), we get
\begin{eqnarray}
\label{liex2}
{\cal L}_V \bigg(-h^i {\tilde{\nabla}}_i \Phi -2 {\tilde{\nabla}}_i \Phi {\tilde{\nabla}}^i \Phi + {\tilde{\nabla}}^i {\tilde{\nabla}}_i \Phi \bigg) = {\cal{O}}(\alpha'^2)
\end{eqnarray}
On taking the sum of ({\ref{liex1}}) and ({\ref{liex2}}), we find
\begin{eqnarray}
{\cal L}_V \bigg( {\tilde{\nabla}}^i {\tilde{\nabla}}_i \Phi - {\tilde{\nabla}}^i \Phi {\tilde{\nabla}}_i \Phi \bigg) = {\cal{O}}(\alpha'^2)
\end{eqnarray}
and hence if $f= {\cal L}_V \Phi$ we have
\begin{eqnarray}
\label{laplx3}
{\tilde{\nabla}}_i {\tilde{\nabla}}^i f -2 {\tilde{\nabla}}^i \Phi {\tilde{\nabla}}_i f = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
We know ${\cal L}_h \Phi = {\cal{O}}(\alpha')$ as a consequence of the analysis
of the Laplacian of $h^2$, so $f=\alpha' f^{[1]}+ {\cal{O}}(\alpha'^2)$.
Then, on multiplying the ${\cal{O}}(\alpha')$ part of ({\ref{laplx3}}) by $e^{-2\Phi^{[0]}} f^{[1]}$ and integrating by parts over ${\cal{S}}^{[0]}$, one finds that
\begin{eqnarray}
\int_{{\cal{S}}^{[0]}} e^{-2 \Phi^{[0]}} {\tilde{\nabla}}_i f^{[1]} {\tilde{\nabla}}^i f^{[1]} = 0~,
\end{eqnarray}
so $f^{[1]}=\beta$ for constant $\beta$, and so
\begin{eqnarray}
{\cal L}_V \Phi = \beta \alpha' + {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Since $\Phi$ attains a global maximum on ${\cal{S}}$,
at this point ${\cal L}_V \Phi= V^i {\tilde{\nabla}}_i \Phi =0$ to all orders in $\alpha'$, for any $V$.
This fixes $\beta=0$, so
\begin{eqnarray}
\label{L_Phi}
{\cal L}_V \Phi = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
which proves the invariance of $\Phi$.
Next, we consider ${\cal L}_V h$. On taking the Lie derivative of the field equation of the 2-form
gauge potential ({\ref{geq1c}}) we find
\begin{eqnarray}
\label{Lie_gauge}
d ({\cal L}_V h)_{ij} - ({\cal L}_V h)^k W_{ijk} = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and on taking the Lie derivative of the Einstein equation
({\ref{einsp}}) we get
\begin{eqnarray}
\label{Lie_einst}
{\tilde{\nabla}}_{(i} ({\cal L}_V h)_{j)} = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where we have used
\begin{eqnarray}
{\cal L}_h \bigg( {\tilde{F}}_{i\ell}{}^{ab} {\tilde{F}}_j{}^\ell{}_{ab} \bigg) = {\cal{O}}(\alpha')~.
\end{eqnarray}
It follows that
\begin{eqnarray}
\hat{\tn} ({\cal L}_V h)_j = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
As $V=ch+ {\cal{O}}(\alpha')$, it is convenient to write
\begin{eqnarray}
{\cal L}_V h = \alpha' \Lambda+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where
\begin{eqnarray}
\hat{\tn} \Lambda = {\cal{O}}(\alpha')~.
\end{eqnarray}
As $\Lambda_j \Gamma^j {\cal{A}} \eta_+$ and $h_j \Gamma^j {\cal{A}} \eta_+$
are both parallel with respect to $\hat{\tn}$ at zeroth order in $\alpha'$,
it follows as a consequence of (ii) that we must have
\begin{eqnarray}
\Lambda = b h + {\cal{O}}(\alpha')~,
\end{eqnarray}
for constant $b$. It is also useful to compute
\begin{eqnarray}
\label{hL_h}
h^i ({\cal L}_V h_i) = h^i \bigg( V^j {\tilde{\nabla}}_j h_i + h_j {\tilde{\nabla}}_i V^j \bigg)
= {1 \over 2} {\cal L}_V h^2 + h^i h^j {\tilde{\nabla}}_i V_j = {\cal{O}}(\alpha'^2)~,
\end{eqnarray}
which follows because $h^2 = \mathrm{const} + {\cal{O}}(\alpha'^2)$, and $\hat{\tn} V = {\cal{O}}(\alpha'^2)$.
This implies that $b=0$, and hence
\begin{eqnarray}
{\cal L}_V h = {\cal{O}}(\alpha'^2)~.
\end{eqnarray}
So $V$ is a symmetry of the full solution to both zeroth and first order in
$\alpha'$.
\subsubsection{Geometry}
We have shown that $V$ is a symmetry of the backgrounds up to ${\cal O}(\alpha'^2)$. To investigate further the geometry
of the horizon section ${\cal S}$, let us first consider the consequences of the existence of the $\eta_+$ Killing spinor.
As the isotropy group of $\eta_+$ in $Spin(8)$ is $Spin(7)$, the fundamental self-dual 4-form $\phi$ of $Spin(7)$ on ${\cal S}$ is $\hat{\tilde \nabla}$-parallel.
It is known that in such a case, the torsion 3-form $W$ can be uniquely determined in terms of $\phi$ and the metric without any additional
conditions on the $Spin(7)$ structure of ${\cal S}$ \cite{ivanovspin7}. Next the condition $\hat{\tilde \nabla}\tau_+={\cal{O}}(\alpha'^2)$ with $\tau_+={\cal A} \eta_+$ is equivalent
to requiring that
\begin{eqnarray}
\hat{\tilde \nabla}_i\big(( 2d\Phi+h)_j-(\theta_\phi)_j\big)={\cal O}(\alpha'^2)~,
\end{eqnarray}
where $\theta_\phi$ is the Lee form of $\phi$, see \cite{class1}. As a result, $2d\Phi+h-\theta_\phi$ is a parallel 1-form. If it is not linearly dependent on $V$, it will give rise
to an additional solution of the gravitino KSE on ${\cal S}$. As we have assumed that there is strictly one parallel spinor of the same chirality as $\eta_+$, we have to require that
\begin{eqnarray}
2d\Phi+h-\theta_\phi=\lambda V +{\cal{O}}(\alpha'^2)~,
\label{phivi}
\end{eqnarray}
for some constant $\lambda$ which is non-zero in the nearly supersymmetric case; for $\lambda=0$ the dilatino KSE would be satisfied as well.
Let us next turn to investigate the $G_2$ structure on ${\cal S}$. As $V$ is an isometry on ${\cal S}$ and $i_VW=dV$, setting $V^2=\ell^2+{\cal{O}}(\alpha'^2)$ for $\ell$ constant, we can decompose
the metric and 3-form as
\begin{eqnarray}
d\tilde s^2={1\over\ell^2} V\otimes V+ds^2_{(7)}+{\cal{O}}(\alpha'^2)~,~~~W=\ell^{-2}V\wedge dV+W_{(7)}+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where $ds^2_{(7)}$ is the metric on the space orthogonal to $V$ and $i_VW_{(7)}=0$. The data $(ds^2_{(7)}, W_{(7)})$ can be thought of (locally) as the metric and
torsion on the space of orbits $M^7$ of $V$. To see this, observe that ${\cal L}_V W_{(7)}=0$ and, as $i_V W_{(7)}=0$, $W_{(7)}$ descends to a 3-form on the space of orbits.
The spatial horizon section ${\cal S}$ admits a $G_2$ structure with fundamental form $\varphi={\ell}^{-1} i_V\phi$ as $\hat{\tilde \nabla} \varphi={\cal{O}}(\alpha'^2)$. The question is
whether this $G_2$ structure descends on the space of orbits of $V$. First observe that $i_V\varphi=0$. So it remains to investigate whether ${\cal L}_V\varphi={\cal{O}}(\alpha'^2)$.
For this notice that under $G_2$ representations $dV$ decomposes as $dV=dV^{\bf 7}+ dV^{\bf 14}+{\cal{O}}(\alpha'^2)$ because $i_V dV={\cal{O}}(\alpha'^2)$. Then use (\ref{bianx}) together with
$\hat{\tilde \nabla} \varphi=\hat{\tilde \nabla} V={\cal{O}}(\alpha'^2)$ and $i_V dW={\cal O}(\alpha'^2)$ to show that
\begin{eqnarray}
\hat{\tilde \nabla} dV^{\bf 7}={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
As $dV^{\bf 7}$ is a vector in ${\cal S}$ orthogonal to $V$, if it does not vanish it will generate an additional $\hat{\tilde \nabla}$-parallel spinor on ${\cal S}$ of the same chirality as $\eta_+$. As we have restricted
the number of such spinors to one, we have to set $dV^{\bf 7}={\cal{O}}(\alpha'^2)$. It has been shown in \cite{class1} that a $\hat{\tilde \nabla}$-parallel $k$-form $\alpha$ is invariant under the action of a
$\hat{\tilde \nabla}$-parallel vector $V$,
iff the rotation $i_VW$ leaves the form invariant. As $i_V W=dV+{\cal{O}}(\alpha'^2)$ and $dV$ takes values in $\mathfrak{g}_2$, we conclude that
\begin{eqnarray}
{\cal L}_V \varphi={\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and so $M^7$ admits a $G_2$ structure compatible with a connection with skew-symmetric torsion given by the data $(ds^2_{(7)}, W_{(7)})$. In such a
case $W_{(7)}$ can be determined uniquely in terms of $\varphi$ and $ds^2_{(7)}$ provided
a certain geometric constraint is satisfied \cite{ivanovg2}.
It remains to explore (\ref{phivi}) from the perspective of $M^7$. Let us decompose $h=V+ h^\perp$, where $g(V, h^\perp)=0$. Then (\ref{phivi}) can be written as
\begin{eqnarray}
&&\ell^{-1} g(V, h)-{1\over 6} ( W_{(7)})_{ijk} \varphi^{ijk}=\lambda\ell +{\cal{O}}(\alpha'^2)~,
\cr
&& 2d\Phi+h^\perp-\theta_\varphi={\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where $\theta_\varphi$ is the Lee form of $\varphi$ on $M^7$. The former determines the singlet part of $ W_{(7)}$ in terms of $V$ and $h$ while
the latter imposes the dilatino KSE on $M^7$.
\newsection{Nearly supersymmetric horizons with additional parallel spinors}
\subsection{Nearly supersymmetric horizons with $SU(3)$ holonomy}
\subsubsection{Symmetries of horizon section}\label{symsu3}
Suppose there are exactly two
linearly independent spinors $\eta^{(1)}_+$, $\eta^{(2)}_+$ such that
\begin{eqnarray}
\label{grav_SU(3)}
\hat{\tn} \eta^{(a)}_+ = {\cal{O}}(\alpha'^2), \qquad a=1,2 \ ,
\end{eqnarray}
for which $\big({\cal{A}} \eta^{(a)}_+\big)^{[0]} \neq 0$, ($a=1, 2$).
It follows that the
horizon section ${\cal{S}}^{[0]}$ admits a $SU(3)$ structure at zeroth order in $\alpha'$.
We set $\tau^{(a)}_+ = {\cal A} \eta^{(a)}_+$ which are non-vanishing spinors that satisfy
\begin{eqnarray}
\label{gravtau}
\hat{\tn} \tau^{(a)}_+ = {\cal{O}}(\alpha'^2)~, \qquad a=1,2 \ .
\end{eqnarray}
Using these we define the 1-form and 2-form spinor bilinears $V$ and $\omega$
by
\begin{eqnarray}
V_i = \langle \eta^{(1)}_+ , \Gamma_i \tau^{(1)}_+ \rangle~, \qquad
\omega_{ij} = \langle \eta^{(1)}_+ , \Gamma_{ij} \eta^{(2)}_+ \rangle~,
\end{eqnarray}
and also let
\begin{eqnarray}
\tilde{V} = i_{V} \omega~.
\end{eqnarray}
Observe that, as $\hat{\tn}$ is compatible with the metric and with the spinor inner product, bilinears of $\hat{\tn}$-parallel spinors are themselves $\hat{\tn}$-parallel; in particular
\begin{eqnarray}
\label{par_cond}
\hat{\tn} V = {\cal{O}}(\alpha'^2)~, \qquad \hat{\tn} \omega = {\cal{O}}(\alpha'^2)~, \qquad \hat{\tn} \tilde{V} = {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
We also define ${\tilde{h}}$ by
\begin{eqnarray}
\tilde{h} = i_h \omega \ ,
\end{eqnarray}
which satisfies
\begin{eqnarray}
\label{covdd1}
\hat{\tn} \tilde{h}= {\cal{O}}(\alpha') \ .
\end{eqnarray}
The main task below is to show that both $V$ and $\tilde V$ leave invariant all the fields
on ${\cal S}$, and that they generate an $\bb{R}\oplus \bb{R}$ Lie algebra.
As $V$ and $\tilde V$ are $\hat{\tn}$-parallel, they are Killing. Next consider the invariance
of $W$.
The spinors $V^j \Gamma_j {\cal A} \eta^{(a)}_+$, $h^j \Gamma_j {\cal A} \eta^{(a)}_+$ and $\tilde{h}^j \Gamma_j {\cal A} \eta^{(a)}_+$ are all parallel with respect to $\hat{\tn}$ to zeroth order in $\alpha'$. In order for ({\ref{grav_SU(3)}})
to have exactly two solutions, we must have
\begin{eqnarray}
V = ch + \tilde{c} \tilde{h} + {\cal{O}}(\alpha') \ ,
\end{eqnarray}
for some constants $c$, $\tilde{c}$. Thus
\begin{eqnarray}
{\cal L}_V W = c i_h dW + \tilde{c} i_{\tilde{h}} dW + {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
To continue,
since the two spinors $\eta^{(1)}_+$ and $\eta^{(2)}_+$ must satisfy (\ref{F_dh_cond}), it follows that, at zeroth order in $\alpha'$, $\tilde{F}$ and $dh$ are $(1,1)$ traceless with respect to
the almost complex structure obtained from $\omega$.
This, together with the conditions $i_h dh = {\cal{O}}(\alpha')$ and $i_h \tilde{F} = {\cal{O}}(\alpha')$,
which follow from the global analysis of the Laplacian of $h^2$, implies that
\begin{eqnarray}
\label{ht_dh_F}
i_{\tilde{h}} dh = {\cal{O}}(\alpha') \ , \qquad\qquad i_{\tilde{h}} \tilde{F} = {\cal{O}}(\alpha') \ ,
\end{eqnarray}
and hence
\begin{eqnarray}
i_V dh = {\cal{O}}(\alpha') \ , \qquad \qquad i_V \tilde{F}= {\cal{O}}(\alpha') \ .
\end{eqnarray}
It is also useful to consider the spinors $\eta^{(a)}_{+}$ and $\tilde{h}_{\ell}\Gamma^{\ell}\eta^{(a)}_{+}$. The integrability conditions of
\begin{eqnarray}
\hat{\tn}\eta^{(a)}_{+} = {\cal{O}}(\alpha'^2) \ , \qquad\qquad \hat{\tn}\left(\tilde{h}_{\ell}\Gamma^{\ell}\eta^{(a)}_{+}\right) = {\cal{O}}(\alpha') \ ,
\end{eqnarray}
are
\begin{eqnarray}
\hat {\tilde{R}}_{ij, pq} \Gamma^{pq} \eta^{(a)}_{+} = {\cal{O}}(\alpha'^2) \ , \qquad\qquad
\hat {\tilde{R}}_{ij, pq} \Gamma^{pq} \left(\tilde{h}_{\ell}\Gamma^{\ell}\eta^{(a)}_{+}\right) = {\cal{O}}(\alpha') \ ,
\end{eqnarray}
which imply
\begin{eqnarray}
\label{ht_R}
\tilde{h}^p \check{\tilde{R}}_{pq, ij} = {\cal{O}}(\alpha') \ .
\end{eqnarray}
It follows that $i_h dW={\cal{O}}(\alpha'^2)$ and $i_{\tilde{h}} dW={\cal{O}}(\alpha'^2)$, as
a consequence of the Bianchi identity, and therefore $i_V dW= {\cal{O}}(\alpha'^2)$.
Thus we have shown that
\begin{eqnarray}
{\cal L}_V W = {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
This proves the invariance of $W$.
Next we consider ${\cal L}_V \Phi$. It follows from ({\ref{covdd1}}) that
\begin{eqnarray}
i_{\tilde{h}} d\tilde{h} = {\cal{O}}(\alpha') \ ,
\end{eqnarray}
and also
\begin{eqnarray}
{\cal L}_{\tilde{h}} W = {\cal{O}}(\alpha') \ .
\end{eqnarray}
Since $\tilde{h}$ is an isometry of ${\cal S}$ to zeroth order in $\alpha'$, we also have
\begin{eqnarray}
{\cal L}_{\tilde{h}} \tilde{R}_{ij, pq} = {\cal{O}}(\alpha') \ .
\end{eqnarray}
On taking the Lie derivative of the trace of (\ref{einsp}) with respect to $\tilde{h}$, we find
\begin{eqnarray}
{\cal L}_{\tilde{h}} \left( \tilde{\nabla}_i \tilde{\nabla}^i \Phi \right) = {\cal{O}}(\alpha') \ ,
\end{eqnarray}
which is equivalent, if $g = {\cal L}_{\tilde{h}} \Phi$, to
\begin{eqnarray}
\label{lap_g}
\tilde{\nabla}_i \tilde{\nabla}^i g = {\cal{O}}(\alpha') \ .
\end{eqnarray}
On multiplying the zeroth order part of (\ref{lap_g}) by $g^{[0]}$ and integrating by parts over ${\cal S}^{[0]}$, we find
\begin{eqnarray}
\int_{{\cal S}^{[0]}} \tilde{\nabla}_i g^{[0]} \tilde{\nabla}^i g^{[0]} = 0 \ ,
\end{eqnarray}
so $g^{[0]} = \gamma$, for constant $\gamma$. Thus
\begin{eqnarray}
{\cal L}_{\tilde{h}} \Phi = \gamma + {\cal{O}}(\alpha') \ .
\end{eqnarray}
Since $\Phi$ must attain a global maximum on ${\cal S}$, at this point ${\cal L}_{\tilde{h}} \Phi = 0$ to all orders in $\alpha'$. This fixes the constant $\gamma = 0$, and so
\begin{eqnarray}
\label{L_ht}
{\cal L}_{\tilde{h}} \Phi = {\cal{O}}(\alpha') \ ,
\end{eqnarray}
which implies
\begin{eqnarray}
\label{L_Phi_al}
{\cal L}_V \Phi = {\cal{O}}(\alpha') \ .
\end{eqnarray}
As $V= ch + \tilde{c}\tilde{h} + {\cal{O}}(\alpha')$, it follows that
\begin{eqnarray}
\label{L_dh}
{\cal L}_V dh = c{\cal L}_h dh + \tilde{c}{\cal L}_{\tilde{h}} dh + {\cal{O}}(\alpha') = {\cal{O}}(\alpha') \ .
\end{eqnarray}
Since $V$ is an isometry of ${\cal S}$ to first order in $\alpha'$, we have
\begin{eqnarray}
\label{L_R}
{\cal L}_V \tilde{R}_{ij, pq} = {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
Also we have
\begin{eqnarray}
\label{L_F}
({\cal L}_V\tilde{F})_{ij}{}^a{}_b \tilde{F}^{ij\, b}{}_a = {\cal{O}}(\alpha') \ ,
\end{eqnarray}
which follows from
\begin{eqnarray}
{\cal L}_V \tilde{F} = c [{\tilde{F}}, i_h {\cal{B}}] + \tilde{c} [{\tilde{F}}, i_{\tilde{h}} {\cal{B}}] + {\cal{O}}(\alpha')
\ .
\end{eqnarray}
Using the conditions (\ref{L_Phi_al}), (\ref{L_dh}), (\ref{L_R}) and (\ref{L_F}), we follow the analysis of the $G_2$ case in the previous section, from equation (\ref{L_trEins}) to (\ref{L_Phi}), and conclude that
\begin{eqnarray}
{\cal L}_V \Phi = {\cal{O}}(\alpha'^2) \ ,
\end{eqnarray}
which proves the invariance of the dilaton $\Phi$.
Next we consider ${\cal L}_V h$. Equations (\ref{Lie_gauge}) and (\ref{Lie_einst}), which have been established in the previous section, hold here as well after using in addition that
\begin{eqnarray}
{\cal L}_{\tilde{h}}\bigg( \tilde{F}_{i\ell}{}^{ab}\tilde{F}_j{}^{\ell}{}_{ab} \bigg) = {\cal{O}}(\alpha') \ .
\end{eqnarray}
Then it follows that
\begin{eqnarray}
\hat{\tn}_i \left({\cal L}_V h \right)_j = {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
Furthermore we notice that
\begin{eqnarray}
{\cal L}_{\tilde{h}} h = {\cal{O}}(\alpha') \ .
\end{eqnarray}
As $V = ch + \tilde{c}\tilde{h} + {\cal{O}}(\alpha')$, it is convenient to write
\begin{eqnarray}
{\cal L}_V h = \alpha' \Psi + {\cal{O}}(\alpha'^2) \ ,
\end{eqnarray}
where
\begin{eqnarray}
\hat{\tn} \Psi = {\cal{O}}(\alpha') \ .
\end{eqnarray}
Then it follows that the spinors $\Psi_j \Gamma^j {\cal A} \eta_+$, $h_j \Gamma^j {\cal A} \eta_+$ and $\tilde{h}_j \Gamma^j {\cal A} \eta_+$ are all parallel with respect to $\hat{\tn}$ at zeroth order in $\alpha'$. In order for
({\ref{grav_SU(3)}}) to admit exactly two solutions, we must have
\begin{eqnarray}
\Psi = b h + \tilde{b}\tilde{h} + {\cal{O}}(\alpha') \ ,
\end{eqnarray}
for constants $b$ and $\tilde{b}$. Then using $i_h{\cal L}_V h = {\cal{O}}(\alpha'^2)$, which has been computed in (\ref{hL_h}), and $h^2 = const. + {\cal{O}}(\alpha'^2)$, it follows that $b = {\cal{O}}(\alpha')$ and therefore
\begin{eqnarray}
{\cal L}_V h = \alpha' \tilde{b} \tilde{h} + {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
Next we consider the symmetries generated by $\tilde{V}$. Since $V = ch + \tilde{c}\tilde{h} + {\cal{O}}(\alpha')$, then we have
\begin{eqnarray}
\tilde{V} = c\tilde{h} - \tilde{c} h + {\cal{O}}(\alpha') \ .
\end{eqnarray}
Since $V$ and $\omega$ are both parallel with respect to $\hat{\tn}$ to first order in $\alpha'$, we also have
\begin{eqnarray}
\hat{\tn} \tilde{V} = {\cal{O}}(\alpha'^2) \ .
\end{eqnarray}
Then the analysis undertaken for $V$ holds as well for $\tilde{V}$, because the only properties of $V$ used through the analysis are that $V$, at zeroth order in $\alpha'$, is a linear combination of $h$ and $\tilde{h}$ with constant coefficients, and $V$ is parallel with respect to $\hat{\tn}$ to first order in $\alpha'$. Thus we argue in a similar way that
\begin{eqnarray}
{\cal L}_{\tilde{V}} W = {\cal{O}}(\alpha'^2) \ , \qquad {\cal L}_{\tilde{V}} \Phi = {\cal{O}}(\alpha'^2) \ , \qquad {\cal L}_{\tilde{V}} h = \alpha' \tilde{q} \tilde{h} + {\cal{O}}(\alpha'^2)\ ,
\end{eqnarray}
for a constant $\tilde{q}$.
Finally, $V$ and $\tilde V$ commute up to ${\cal{O}}(\alpha'^2)$. To see this, observe that since $i_V \tilde V=0$ and $i_V W=dV+{\cal{O}}(\alpha'^2)$, we have that
\begin{eqnarray}
{\cal L}_{\tilde V} V=i_{\tilde V} i_V W+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Using (\ref{bianx}) adapted to ${\cal S}$ as well as $i_V dW=i_{\tilde V} dW={\cal{O}}(\alpha'^2)$, we conclude that
\begin{eqnarray}
\hat{\tilde \nabla} i_{\tilde V} i_V W={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Therefore the vector $i_{\tilde V} i_V W$ is $\hat{\tilde \nabla}$-parallel and moreover is orthogonal to both $V$ and $\tilde V$. So if it is non-zero, it will generate
additional $\hat{\tn}$-parallel $\eta_+$ spinors on ${\cal S}$. As we have restricted those to be strictly two, we conclude that $i_{\tilde V} i_V W$ vanishes and so
\begin{eqnarray}
[V, \tilde V]={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
In particular as $i_V \tilde V=0$, we have that
\begin{eqnarray}
i_V d\tilde V=i_{\tilde V} dV={\cal{O}}(\alpha'^2)~.
\label{vdtv}
\end{eqnarray}
This concludes the examination of symmetries of ${\cal S}$.
\subsubsection{Geometry}
It is clear from the examination of the symmetries of the fields on ${\cal S}$ and in particular (\ref{par_cond}) and (\ref{vdtv}) that we can set
\begin{eqnarray}
d\tilde s^2&=&\ell^{-2} V\otimes V+ \ell^{-2} \tilde V\otimes \tilde V+ ds^2_{(6)}+{\cal{O}}(\alpha'^2)~
\nonumber \\
W&=&\ell^{-2} V\wedge dV+\ell^{-2} \tilde V \wedge d\tilde V+ W_{(6)}+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where $V^2=\tilde V^2=\ell^2+{\cal{O}}(\alpha'^2)$ and $\ell$ is constant, $ds^2_{(6)}$ is the metric in the orthogonal complement of $V$ and $\tilde V$ and $i_VW_{(6)}=i_{\tilde V} W_{(6)}={\cal{O}}(\alpha'^2)$.
From construction ${\cal S}$ admits an $SU(3)$ structure. We shall now investigate whether this (locally) descends on the space of orbits $M^6$ of $V$ and $\tilde V$.
First the data $(ds^2_{(6)}, W_{(6)})$ define a Riemannian geometry on $M^6$ with skew-symmetric torsion. In particular for the torsion this follows from
$i_VW_{(6)}=i_{\tilde V} W_{(6)}={\cal{O}}(\alpha'^2)$ and ${\cal L}_V W_{(6)}= {\cal L}_{\tilde V} W_{(6)}={\cal{O}}(\alpha'^2)$.
Next consider the reduction of the (almost) Hermitian form $\omega$. Choosing without loss of generality $V$ and $\tilde V$ orthogonal, one can write
\begin{eqnarray}
\omega=\ell^{-2} V \wedge \tilde V+\omega_{(6)}+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where $i_V\omega_{(6)}=i_{\tilde V}\omega_{(6)}={\cal{O}}(\alpha'^2)$. For $\omega_{(6)}$ to descend to a Hermitian structure on $M^6$, it must be invariant under the action of both
$V$ and $\tilde V$. Observe that $\hat{\tilde \nabla} \omega_{(6)}={\cal{O}}(\alpha'^2)$ and also $\hat{\tilde \nabla} V=\hat{\tilde \nabla} \tilde V={\cal{O}}(\alpha'^2)$. Thus $\omega_{(6)}$ is invariant
iff the rotations $i_V W=dV+{\cal{O}}(\alpha'^2)$ and $i_{\tilde V} W=d{\tilde V}+{\cal{O}}(\alpha'^2)$ leave $\omega_{(6)}$ invariant \cite{class1}. In turn this implies that the (2,0) and (0,2) parts of
the rotations which we denote with $[dV]^{2,0}$ and $[d\tilde V]^{2,0}$, respectively, must vanish. Using (\ref{bianx}), $\hat{\tilde \nabla}\omega_{(6)}={\cal{O}}(\alpha'^2)$ and $i_VdW=i_{\tilde V} dW={\cal{O}}(\alpha'^2)$, we find
that
\begin{eqnarray}
\hat{\tilde \nabla}[i_{ V} W]^{2,0}=\hat{\tilde \nabla}[i_{\tilde V} W]^{2,0}={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
As ${\cal S}$ has an $SU(3)$ structure compatible with $\hat{\tilde \nabla}$, contracting with the (3,0)-form both $[i_{V} W]^{2,0}$ and $[i_{\tilde V} W]^{2,0}$
give rise to vector fields in ${\cal S}$ orthogonal to both $V$ and $\tilde V$ which are $\hat{\tilde \nabla}$-parallel. Thus the requirement of strictly
two $\eta_+$ $\hat{\tn}$-parallel spinors leads to setting $[i_{V} W]^{2,0}=[i_{\tilde V} W]^{2,0}={\cal{O}}(\alpha'^2)$ which in turn implies that
\begin{eqnarray}
{\cal L}_V \omega_{(6)}= {\cal L}_{\tilde V} \omega_{(6)}={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Thus $M^6$ admits an almost Hermitian structure compatible with a connection $\hat{\tilde \nabla}^{(6)}$ with skew-symmetric torsion $W_{(6)}$. It is well known
that in such a case $W_{(6)}$ is determined in terms of the almost complex structure on $M^6$ and the metric, see e.g.\ \cite{howegp}.
To find whether $M^6$ inherits an $SU(3)$ structure as well, let us investigate whether the (3,0) fundamental $SU(3)$ form $\chi$ of ${\cal S}$ descends to $M^6$.
It can always be arranged such that $i_V \chi=i_{\tilde V}\chi=0$. So it remains to see whether $\chi$ is invariant under the action of $V$ and $\tilde V$.
For this a similar argument to that explained above for $\omega_{(6)}$ leads to the assertion that $\chi$ is invariant iff
the $\omega$-traces $i_{V} W\cdot \omega$ and $i_{\tilde V} W\cdot \omega$ of $i_{V} W$ and $i_{\tilde V} W$, respectively, vanish. Furthermore,
an application of (\ref{bianx}) implies that both $i_{V} W\cdot \omega$ and $i_{\tilde V} W\cdot \omega$ are constant but not necessarily zero.
Thus $M^6$ has generically a $U(3)$ structure instead of an $SU(3)$ one.
It remains to investigate the rest of the content of the conditions $\hat{\tilde \nabla} \tau_+^{(a)}={\cal{O}}(\alpha'^2)$. First consider the (3,0) part of $W_{(6)}$
denoted by $W_{(6)}^{3,0}$. An application of (\ref{bianx}) using that $dW$ is a (2,2) form yields that
\begin{eqnarray}
\hat{\tilde \nabla} W_{(6)}^{3,0}={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Thus $W_{(6)}^{3,0}$ is another globally defined $\hat{\tilde \nabla}$-parallel (3,0)-form on ${\cal S}$ and so it can either be set to zero or be identified with $\chi$. In the
former case, the complex structure on $M^6$ is integrable and so $M^6$ is a KT manifold \cite{hkt}.
Writing $h=\lambda_1 V+ \lambda_2 \tilde V+h^\perp$, where $h^\perp$ is orthogonal to both $V$ and $\tilde V$ and $\lambda_1$ and $\lambda_2$ are constants, we find using (\ref{bianx}) that
\begin{eqnarray}
\hat{\tilde\nabla} \big(2 d\Phi+ h^\perp-\theta_{\omega_{(6)}}\big)={\cal{O}}(\alpha'^2)~.
\label{covtheta}
\end{eqnarray}
Now if $2 d\Phi+ h^\perp-\theta_{\omega_{(6)}}$ is non-vanishing then, since it is orthogonal to $V$ and $\tilde V$, it will give rise to more than two $\eta_+$ $\hat{\tn}$-parallel
spinors
on ${\cal S}$. Since we have assumed that there are just two, we set
\begin{eqnarray}
2 d\Phi+ h^\perp-\theta_{\omega_{(6)}}={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
This concludes the investigation of geometry.
\subsection{Nearly supersymmetric horizons with $SU(2)$ holonomy}
\subsubsection{Assumptions and definitions}
It is known that if one requires the existence of an additional $\hat{\tilde\nabla}$-parallel spinor $\eta_+$ to those of the $SU(3)$ backgrounds on ${\cal S}$, then the isotropy algebra
of all the parallel spinors reduces to $\mathfrak{su}(2)$. As a result, ${\cal S}$ admits 8 $\hat{\tilde\nabla}$-parallel spinors and the holonomy group reduces to $SU(2)$.
To describe the geometry of backgrounds with exactly 8 such spinors, we consider four linearly independent spinors $\eta^{(a)}_+$, and impose the condition
\begin{eqnarray}
\label{grav_SU(2)}
\hat{\tn} \eta^{(a)}_+ = {\cal{O}}(\alpha'^2), \qquad a=0,1,2,3 \ ,
\end{eqnarray}
for which $\big({\cal{A}} \eta^{(a)}_+\big)^{[0]} \neq 0$, ($a=0, 1, 2, 3$).
It follows that the
horizon section ${\cal{S}}^{[0]}$ admits a $SU(2)$ structure at zeroth order in $\alpha'$. We continue by setting $\tau^{(a)}_+ = {\cal A} \eta^{(a)}_+$. These are non-vanishing and satisfy
\begin{eqnarray}
\label{gravtausu2}
\hat{\tn} \tau^{(a)}_+ = {\cal{O}}(\alpha'^2), \qquad a=0,1,2,3 \ .
\end{eqnarray}
Furthermore, we also define 1-form and 2-form spinor bilinears $V^{(a)}$ and $\omega_r$, respectively,
by
\begin{eqnarray}
V_i\equiv V^{(0)}_i = \langle \eta^{(0)}_+ , \Gamma_i \tau^{(0)}_+ \rangle, \qquad
(\omega_r)_{ij} = \langle \eta^{(0)}_+ , \Gamma_{ij} \eta^{(r)}_+ \rangle~,~~~r=1,2,3~,
\end{eqnarray}
and also let
\begin{eqnarray}
\tilde{V}_r = i_{V} \omega_r~.
\end{eqnarray}
In fact $\omega_r$ together with the metric and $W$ define an almost HKT structure \cite{hkt} on ${\cal S}$ as
\begin{eqnarray}
\label{par_condsu2}
\hat{\tn} V = {\cal{O}}(\alpha'^2) , \qquad \hat{\tn} \omega_r = {\cal{O}}(\alpha'^2) , \qquad \hat{\tn} \tilde{V}_r = {\cal{O}}(\alpha'^2) \ ,
\end{eqnarray}
and the almost complex structures associated to $\omega_r$ satisfy the algebra of unit quaternions.
These follow from (\ref{naeta}) and the $\mathfrak{su}(2)$ isotropy of the parallel spinors.
\subsubsection{Symmetries of the horizon section}
It is clear from (\ref{par_condsu2}) that the $V^{(a)}$, where $V^{(0)}=V$ and $V^{(r)}=\tilde V_r\equiv V_r$, generate isometries on ${\cal S}$ and that
\begin{eqnarray}
i_aW=dV^{(a)}+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where $i_a$ denotes inner-derivation with respect to $V^{(a)}$. Without loss of generality we choose $g(V^{(a)}, V^{(b)})=\ell^2 \delta^{ab}+{\cal{O}}(\alpha'^2)$
for $\ell$ constant.
An investigation similar to the one explained in section \ref{symsu3} reveals that
\begin{eqnarray}
&&{\cal L}_a \Phi={\cal{O}}(\alpha'^2)~,~~~{\cal L}_a W={\cal{O}}(\alpha'^2)~,~~~{\cal L}_a h={\cal{O}}(\alpha')~,~~~i_adh={\cal{O}}(\alpha')~,~~~
\cr
&&i_a{\tilde{F}}={\cal{O}}(\alpha')~.
\end{eqnarray}
Next let us consider the commutator $[V^{(a)}, V^{(b)}]=i_a i_b W$. An application of (\ref{bianx}) together with the conditions above reveal that
\begin{eqnarray}
\hat{\tilde\nabla}[V^{(a)}, V^{(b)}]={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
Thus the commutator is either linearly dependent on the $V^{(a)}$ or it will lead to a further reduction of the holonomy of $\hat{\tilde\nabla}$ to $\{1\}$.
In the latter case, the horizon section ${\cal S}$ will admit more than four $\eta_+$ $\hat{\tn}$-parallel spinors, violating our assumptions. Thus, we conclude that
\begin{eqnarray}
[V^{(a)}, V^{(b)}]=f^{ab}{}_c V^{(c)}+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
for some constants $f$ with $\ell^2 f^{ab}{}_c= i_ai_bi_c W+{\cal{O}}(\alpha'^2)$. As $f$ is skew-symmetric, the Lie algebra spanned by $V^{(a)}$ is a metric (compact) Lie algebra.
As it has dimension 4, it is either isomorphic to $\oplus^4\mathfrak{u}(1)$ or to $\mathfrak{u}(1)\oplus \mathfrak{su}(2)$.
Therefore the horizon section ${\cal S}$ can be viewed locally as a fibration with fibre either $\times^4U(1)$ or $U(1)\times SU(2)$ over the space of orbits $M^4$ of $V^{(a)}$.
We shall determine the geometry of ${\cal S}$ by specifying the geometry of $M^4$.
\subsubsection{ Geometry}
To simplify the analysis, we choose, up to an $\mathfrak{so}(4)$ rotation, $V$ to be along a $\mathfrak{u}(1)$ direction in either
$\oplus^4\mathfrak{u}(1)$ or $\mathfrak{u}(1)\oplus \mathfrak{su}(2)$. This in particular implies that $i_0 i_r W={\cal{O}}(\alpha'^2)$. Then the
metric and torsion of ${\cal S}$ can be written as
\begin{eqnarray}
d\tilde s^2=\ell^{-2} \delta_{ab} V^{(a)}\otimes V^{(b)}+d\tilde s^2_{(4)}+{\cal{O}}(\alpha'^2)~,~~~ W=\ell^{-2} V\wedge dV+CS(V_r)+ W_{(4)}+{\cal{O}}(\alpha'^2)
\nonumber \\
\end{eqnarray}
where $V^{(a)}$ is viewed as a principal bundle connection and $CS(V_r)$ is the Chern-Simons form which for the $\oplus^4\mathfrak{u}(1)$ case is
\begin{eqnarray}
CS(V_r)=\ell^{-2}\sum_r V_r\wedge dV_r~.
\end{eqnarray}
The data $(ds^2_{(4)}, W_{(4)})$ define a geometry on $M^4$ with skew-symmetric torsion.
First, let us investigate the reduction of the almost HKT structure of ${\cal S}$ on $M^4$. For this observe that
\begin{eqnarray}
\omega_r= \ell^{-2} V\wedge V_r+{\ell^{-2}\over2} \epsilon_r{}^{st}V_s\wedge V_t+\omega_r^{(4)}+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where $i_a \omega_r^{(4)}={\cal{O}}(\alpha'^2)$. Next consider ${\cal L}_a \omega^{(4)}_r$. As both $V^{(a)}$ and $\omega^{(4)}_r$ are $\hat{\tilde \nabla}$-parallel, ${\cal L}_a \omega^{(4)}_r$
is specified by the properties of the rotation $i_a W$. In particular, if the rotation $i_a W$ leaves $\omega^{(4)}_r$ invariant, the Lie derivative vanishes.
Next let us investigate the two cases $\oplus^4\mathfrak{u}(1)$ and $\mathfrak{u}(1)\oplus \mathfrak{su}(2)$
separately. In the abelian case, as $i_ai_bW={\cal{O}}(\alpha'^2)$, $i_a W$ is a 2-form on $M^4$. Furthermore ${\cal L}_a \omega^{(4)}_r$ vanishes iff the self-dual part, $i_a W^{\rm sd}$, of $i_aW$ is zero. However in general
this may not be the case. An application of (\ref{bianx}) implies that
\begin{eqnarray}
\hat{\tilde \nabla} i_a W^{\rm sd}={\cal{O}}(\alpha'^2)~,
\end{eqnarray}
and so there exist some constants $u$ such that
\begin{eqnarray}
i_a W^{\rm sd}= u_a{}^r \omega_r^{(4)}+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
otherwise the holonomy of $\hat{\tilde \nabla}$ will be reduced further and it will admit more than four $\eta_+$ parallel spinors.
Then
\begin{eqnarray}
{\cal L}_a \omega_r^{(4)}=2u_a{}^s\epsilon_{sr}{}^t \omega^{(4)}_t+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
The identity $[{\cal L}_a, {\cal L}_b]={\cal L}_{[V^{(a)}, V^{(b)}]}$ gives
\begin{eqnarray}
(u_a^r u_b^s-u_b^r u_a^s)={\cal{O}}(\alpha'^2)~.
\end{eqnarray}
The covariant constancy condition on $M^4$ now reads
\begin{eqnarray}
\hat{\tilde \nabla}^{(4)} \omega^{(4)}_r=2 \ell^{-2} V^{(a)} u_a^s \epsilon_{sr}{}^t \omega^{(4)}_t+{\cal{O}}(\alpha'^2)~,
\end{eqnarray}
where now $V^{(a)}$ should be thought of as the pull-back of the principal bundle connection $V^{(a)}$ by a local section.
It is clear that the relevant connection that determines the geometry of $M^4$ is $ Z^s=V^{(a)} u_a^s$.
If $u_a^r=0$, $M^4$ is a HKT manifold. It is easy to see this as $\omega_r$ are covariantly constant with respect to a connection with skew-symmetric torsion
and all three almost complex structures are integrable. The latter follows because of dimensional reasons. Otherwise one of the 3-vectors $u_a$ must be non-zero. Without
loss of generality take $u_0\not=0$. In such a case the above equation can be solved as $(u_a^r)=(u_0^r, u_0^r v_s)$, where $v_s=|u_0|^{-2} \sum_r u^r_s u_0^r $.
Using these data, the covariant constancy condition of $\omega^{(4)}_r$ on $M^4$ can be written as
\begin{eqnarray}
\hat{\tilde \nabla}^{(4)} \omega^{(4)}_r=2\ell^{-2} (V^0+V^p v_p) u_0^s \epsilon_{sr}{}^t \omega^{(4)}_t+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
It is clear from this that $M^4$ is a KT manifold with respect to the Hermitian form $|u_0|^{-1} u_0^r \omega_r$. In fact $M^4$ is an (almost)\footnote{In the definition
of QKT structure in \cite{qkt} an additional integrability condition was considered.} QKT manifold \cite{qkt}
for which the holonomy of the $Sp(1)$ connection has been reduced to $U(1)$.
Next let us turn to examine the non-abelian $\mathfrak{u}(1)\oplus \mathfrak{su}(2)$ case. It is easy to see that
\begin{eqnarray}
(u_a^r u_b^s-u_b^r u_a^s)={1\over2}f^{ab}{}_c u_c^t \epsilon_{tr}{}^s+{\cal{O}}(\alpha'^2)~.
\end{eqnarray}
If the 3-vector $u_0\not=0$, then all the rest of the components of $u$ vanish. In such a case, $M^4$ is a KT manifold. This class of solutions
includes the WZW type of solution $AdS_3\times S^3\times M^4$ where $M^4=S^1\times S^3$ with the bi-invariant metric and constant dilaton.
Such a horizon is not supersymmetric but it is nearly supersymmetric.
It remains to consider the case $u_0=0$. One can then show that $\det u\not=0$ and so $(u^r_s)$ is invertible. Thus $Z^s= V^{(a)} u_a^s$ takes values
in the $\mathfrak{sp}(1)$ Lie algebra. $M^4$ is a QKT manifold, see also \cite{compgp}.
To conclude we remark that in all HKT and KT cases, there is an analogue of the condition (\ref{covtheta}) for every Hermitian form $\omega_r$
that determines these structures. If the associated $2d\Phi+h^\perp-\theta_r$ forms do not vanish, then the holonomy of the connection with
torsion reduces to $\{1\}$ and the number of parallel spinors enhances to 16. The solutions are group manifolds. The solution $AdS_3\times S^3\times S^3\times S^1$ mentioned above
belongs to the class where the holonomy of the connection with torsion is $\{1\}$.
There is an analogue
of this in the QKT case but in such a case the condition from the perspective of $M^4$ twists with $\mathfrak{sp}(1)$. If $2d\Phi+h^\perp-\theta_r$ do not vanish,
again the holonomy of the connection with torsion on ${\cal S}$ reduces to $\{1\}$. However now some of the data like the Hermitian forms
are not (bi-)invariant under the action of the group. It would be of interest to explore this further to see whether there are actual solutions.
We conclude the examination of the geometry of nearly supersymmetric backgrounds in the $G_2$, $SU(3)$ and $SU(2)$ cases by pointing out that they exhibit an $\mathfrak{sl}(2,\bb{R})$ symmetry up to order ${\cal{O}}(\alpha')$ but not up to order ${\cal{O}}(\alpha'^2)$.
For the latter, $h$ must be a symmetry of the theory up to the same order and so it can be identified with $V$. The description of the
geometry of this special class of nearly supersymmetric backgrounds is very similar to the one we have given above. The only difference
is that now we can identify $h$ with $V$.
\newsection{Conclusions}
We have investigated the supersymmetric
near-horizon geometry of heterotic black holes up to and including two loops in sigma model perturbation theory.
Using a combination of
local and global techniques, together with the bosonic field equations and Bianchi identities, we have proven that the conditions
obtained from the KSEs are equivalent to a pair of gravitino equations ({\ref{gravsimp}}) and a pair of algebraic conditions, related to the dilatino KSE,
({\ref{algsimpmax}}), which are required to hold at zeroth and first order
in $\alpha'$. In particular, we have shown that the KSE related to the gaugino
is implied by the other KSEs and field equations.
In all cases, we have also shown that
there are no regular $AdS_2$ solutions with a compact internal space without boundary, by demonstrating that $\Delta={\cal{O}}(\alpha'^2)$.
This is not in contradiction with the fact that one can locally
write $AdS_3$ as a warped product over $AdS_2$ \cite{strominger}, see also appendix E. This is because our assumptions
on the internal space of $AdS_2$ are violated in such a case.
Furthermore, we have demonstrated that horizons that admit an $\eta_-$ Killing spinor up to order ${\cal{O}}(\alpha'^2)$, which does not vanish at zeroth order in $\alpha'$, exhibit
supersymmetry enhancement via
the same mechanism as described in \cite{hethor}, and so preserve 2, 4, 6 and 8 supersymmetries. We have described the geometry
of such horizons in all cases and this is similar to that presented in \cite{hethor} for the horizons with $dH=0$.
We have also considered in some detail
the global properties of our solutions. The analysis of the global properties of $h^2$ proceeds in much the same way
as in the heterotic theory with $dH=0$. However in the presence of anomaly, the consequences of the global restrictions on the geometry of the horizons
are somewhat weaker. For example, it is only possible to prove that $h$ is an isometry
of the horizon section to zeroth order in $\alpha'$. So one cannot establish a direct algebraic relation
between $\eta_+$ and $\eta_-$ spinors to order ${\cal{O}}(\alpha'^2)$, and therefore it is not
possible to directly show that there is supersymmetry enhancement via
this mechanism, as was done in \cite{hethor} for the theory with $dH=0$.
We have also constructed generalized Lichnerowicz
type theorems, which relate spinors which are parallel with respect to a certain type of near-horizon supercovariant derivative,
to zero modes of near-horizon Dirac operators.
We have shown that if $\eta$ is a zero mode of the near-horizon
Dirac operator to both zero and first order in $\alpha'$,
then the Lichnerowicz theorems imply that $\eta$ only satisfies
the KSE ({\ref{gravsimp}}) and
({\ref{algsimpmax}}) to zero order in $\alpha'$. Hence, the
types of arguments used to show supersymmetry enhancement via
Lichnerowicz type theorems
in \cite{lichner11, lichneriib, lichneriia1, lichneriia2} also do
not work to the required order in $\alpha'$ for the heterotic theory.
Finally, we have examined a class of nearly supersymmetric horizons for which the gravitino KSE is allowed
to admit solutions on the spatial horizon section but not the rest of the KSEs. Such solutions
in general do not admit any spacetime Killing spinors, not even solutions of the spacetime gravitino KSE.
Under some conditions on the fluxes, we have investigated the geometry of the spatial horizon sections
using a combination of local and global techniques as well as the field equations.
We find that those with a $G_2$, $SU(3)$ and $SU(2)$ structure admit 1, 2 and 4 parallel vectors
on the spatial horizon sections with respect to the connection with torsion. The geometry on the orbit
spaces of these isometries is fully specified.
The spacetime of both the supersymmetric and the nearly supersymmetric
horizons considered here admits an $SL(2,\bb{R})$ symmetry at zeroth order in $\alpha'$.
In the supersymmetric case for which there is a $\eta_-$ Killing spinor to order ${\cal{O}}(\alpha'^2)$ such that $\eta_-$ does not vanish at zeroth order, $\eta_-^{[0]}\not=0$,
this symmetry persists at first order in $\alpha'$.
The nearly supersymmetric horizons also admit an $SL(2,\bb{R})$ symmetry provided that $h$ is parallel with respect to the connection with torsion up to ${\cal{O}}(\alpha'^2)$.
It is not apparent whether the properties of the heterotic horizons described here are going to persist to higher than two loops in sigma model perturbation theory.
It is likely though that the presence of an $\mathfrak{sl}(2,\bb{R})$ symmetry will persist after perhaps a suitable choice of a scheme in perturbation theory. There is no apparent reason
to hypothesize that such a symmetry can be anomalous at higher loops.
What happens to global properties of the horizons, for example
the Lichnerowicz type theorems, is less clear.
We have already seen that these theorems do not hold to the expected order in $\alpha'$ even at two loops. This can be taken as an indication
that additional higher order corrections may further weaken the consequences of such theorems.
\vskip 0.5cm
\noindent{\bf Acknowledgements} \vskip 0.1cm
\noindent AF is partially supported by the EPSRC grant FP/M506655. JG is supported by the STFC grant, ST/1004874/1. GP is partially supported by the STFC rolling grant ST/J002798/1.
\vskip 0.5cm
\vskip 0.5cm
\noindent{\bf Data Management} \vskip 0.1cm
\noindent No additional research data beyond the data presented and cited in this work are
needed to validate the research findings in this work.
\vskip 0.5cm
\newpage
\setcounter{section}{0}
\setcounter{subsection}{0}
Observations support the so-called transient period of acceleration (TPA)~\cite{ob}; accordingly, any realistic cosmological model should include component(s) with negative pressure. In this work we consider a spatially flat Friedmann-Robertson-Walker (FRW) cosmology where the content of the universe has two components with negative pressure and a pressureless component. This is the so-called \p consisting of a barotropic fluid with equation of state $p_{\ga}=(\ga-1)\rg$ where $0\leq\ga\leq2$, a pressureless dark matter (DM) density $\rdm$, and a dark-energy scalar field (DE) $\phi$ coupled to the exponential potential $V(\phi)=V_0\exp{(-\ka\la\phi)}$, with equation of state $p_{\phi}=\of\rf$, where $p_{\phi}=\dot{\phi}^2/2-V(\phi)$ and $\rf=\dot{\phi}^2/2+V(\phi)$. We assume that the three components are noninteracting. In this model, the barotropic fluid represents visible matter if $\ga\geq 1$ [radiation if $\ga=4/3$ or ordinary matter (baryons) if $\ga=1$]. For short, we will call this component matter ($\ga\geq 1$). In this cosmological 3-fluid model $-1\leq \of\leq 1$; however, from a physical point of view, there is no compelling reason to constrain the values of $\of$ to the interval $[-1,\,1]$; for instance, it has been established that the teleparallel DE~\cite{para1} cosmological model allows for $\of<-1$~\cite{para2}.
Existing analytical methods~\cite{3fluid} have failed~\cite{And2} to produce exact solutions to the \P. Apart from some trivial solutions (power law inflationary solutions where the scale factor of the universe evolves as $a\propto t^m$ for $m>0$), no exact solution to the \p seems to exist to our knowledge.
In Ref.~\cite{num} we restricted ourselves to ordinary matter, where $1\leq\ga\leq2$, and to positive exponential potentials, and resorted to a numerical approach by which we derived new solutions to the \P. The solutions were classified as hyperbolic or trigonometric according to the value of $\la$; this extends the classification made for the solutions to the cosmological 2-fluid problem~\cite{And} (consisting of a barotropic fluid plus a DE scalar field $\phi$ coupled to an exponential potential), which were first derived in~\cite{Russo}. For the whole range of $\la$, we were able to construct solutions with one TPA where the universe undergoes the deceleration-TPA-eternal deceleration expansions. No solutions with two or more TPA's were found. However, as we shall see later, solutions with one TPA and a late-time eternal acceleration expansion do exist for the range $0\leq \ga < 2/3$ (which were not reported in Ref.~\cite{num}), where the universe undergoes the deceleration-TPA-TPD-eternal acceleration expansions (TPD: transient period of deceleration). We shall also derive solutions with two TPA's and two TPD's for $\ga$ approaching $2/3$ from below. Solutions with many TPA's
and TPD's may exist too.
Phase-plane and -space analyses of autonomous differential equations lead to specific solutions that may provide the late-time attractors or the early-time repellers. Both types of solutions are interesting and provide rich insight into the evolution of the universe. Prior to the determination of the exact solutions to the cosmological 2-fluid problem~\cite{And}-~\cite{CTS}, phase-plane analyses of the 2-fluid problem were performed~\cite{p1}-~\cite{p4} (for a general procedure see~\cite{p3,psc}) and led to the discovery of potential-kinetic-scaling solutions, which are the unique late-time attractors whenever they exist for $\la^2>3\ga$.
We shall carry out a phase-space analysis of the autonomous differential equations governing the dynamics of the \P. Among the conclusions we reach are (1) the stability of the scalar field dominated solution for $\la^2\leq\min(3,3\ga)$, (2) the stability of the potential-kinetic-matter scaling solution for $\ga\leq 1$, which are quantitatively different from the corresponding results for the 2-fluid problem~\cite{p1}, and the existence of (3) new attractors (the potential-kinetic-DM scaling solution and the potential-kinetic-matter-DM scaling solution) and (4) new repellers and saddle points. In Sect.~\ref{sec2} we derive the autonomous differential equations of the \p and some other useful formulas. In Sect.~\ref{sec3} we discuss and extend the methods used for the stability analysis. In Sect.~\ref{sec4} we derive the critical points, investigate their stability and their cosmological implications, and construct numerically solutions with two TPA's and two TPD's as well as solutions with one TPA and a late-time eternal acceleration expansion. In Sect.~\ref{sec5} we discuss which physical scenarios are well fitted by the \P. We conclude in Sect.~\ref{sec6}.
\section{Autonomous differential equations of the \p \label{sec2}}
The three components being noninteracting, each fluid satisfies a conservation equation of the form $T^{\mu\nu}_{\text{i}}{}_{;\nu}=0$ where $T^{\mu\nu}_{\text{i}}$ is the corresponding stress-energy tensor ($\text{i}=\ga,\,\phi,\,\text{DM}$). Keeping the conservation equation relevant for our analysis (corresponding to $\text{i}=\ga$) and using a notation similar to that of~\cite{p1}, the dynamics of the three fluids in a spatially flat FRW universe, with a scale factor $a(t)$ and a Hubble parameter $H(t)=\dot{a}/a$, are governed by Eqs. (1) to (3) of~\cite{p1} upon slightly modifying the first equation by adding the contribution attributable to DM
\begin{align}
\label{1}& \dot{H}=-\frac{\ka^2}{2}(\rg+p_{\ga}+\rdm+\dot{\phi}^2)=-\frac{\ka^2}{2}(\ga\rg+\rdm+\dot{\phi}^2),\\
\label{2}& \dot{\rho}_{\ga}=-3H(\rg+p_{\ga})=-3H\ga\rg,\\
\label{3}& \ddot{\phi}=-3H\dot{\phi}-\frac{\dd V}{\dd \phi},
\end{align}
where $\dot{F}=\dd F/\dd t$. These equations are constrained by
\begin{equation}\label{4}
H^2=(\ka^2/3)[\rg +\rdm +(\dot{\phi}^2/2)+V].
\end{equation}
From now on we consider only positive potentials $V(\phi)=V_0\exp{(-\ka\la\phi)}$ where $\la>0$. The dimensionless variables
\begin{equation}\label{5}
x=\frac{\ka\dot{\phi}}{\sqrt{6}H}=\frac{\ka\phi'}{\sqrt{6}},\;y=\frac{\ka\sqrt{V}}{\sqrt{3}H},\;
z=\frac{\ka\sqrt{\rg}}{\sqrt{3}H},\;w=\frac{\ka\sqrt{\rdm}}{\sqrt{3}H},
\end{equation}
where $\dot{F}=HF'$ and $F'=\dd F/\dd N$ ($N\equiv \ln a$), reduce the system~\eqref{1}-~\eqref{3} to the following system of three linearly independent autonomous differential equations
\begin{align}
\label{6}&x'=\sqrt{\frac{3}{2}} \lambda y^2-3 x+\frac{3}{2} x [1+x^2-y^2+(\gamma -1) z^2],\\
\label{7}&y'=-\sqrt{\frac{3}{2}} \lambda x y+\frac{3}{2} y [1+x^2-y^2+(\gamma -1) z^2],\\
\label{8}&z'=-\frac{3}{2} \gamma z+\frac{3}{2} z [1+x^2-y^2+(\gamma -1) z^2],
\end{align}
where the variable $w$ is solved by
\begin{equation}\label{9}
x^2+y^2+z^2+w^2=1,
\end{equation}
which follows from~\eqref{4}. It is worth mentioning that the expression in the square brackets in the system~\eqref{6}-~\eqref{8} is positive or zero: $1+x^2-y^2+(\gamma -1) z^2=2x^2+\ga z^2+w^2$. To arrive at~\eqref{6}-~\eqref{8} we used
\begin{equation}\label{10a}
H'/H=-3(2x^2+\ga z^2+w^2)/2.
\end{equation}
The equation governing the motion of $w$ is independent of $\la$
\begin{equation}\label{10}
w'=3w [x^2-y^2+(\gamma -1) z^2]/2.
\end{equation}
In general (for all $\ga$), the constraint~\eqref{9} restricts the motion to within the unit solid 2-sphere centered at the origin: $x^2+y^2+z^2\leq 1$. A necessary formula for the stability analysis is readily derived upon combining~\eqref{6}, \eqref{7} and~\eqref{8} and setting $x^2+y^2+z^2=r^2$
\begin{equation}\label{11}
(r^2)'=3(r^2-1)(2x^2+\ga z^2-r^2).
\end{equation}
Another useful formula for the stability analysis and qualitative behavior of the solutions is derived upon eliminating the expression in the square brackets in~\eqref{8} and~\eqref{10}
\begin{equation*}
\frac{z'}{z}-\frac{w'}{w}=\frac{3}{2}(1-\ga)
\end{equation*}
leading to\footnote{Eq.~\eqref{12} is also derived upon combining Eqs. (18) and (19) of~\cite{3fluid}.}
\begin{equation}\label{12}
z^2=L^2w^2a^{3(1-\ga)},
\end{equation}
where $L>0$ is a constant of integration. For a pressureless barotropic fluid ($\ga=1$), Eq.~\eqref{12} reduces to $z^2=L^2w^2$ which leads, using~\eqref{9} and setting $\ell =L/\sqrt{L^2+1}<1$, to
\begin{equation}\label{13}
x^2+y^2+z^2/\ell^2=1.
\end{equation}
Thus for $\ga=1$, the motion happens on an ellipsoid of revolution around the $z$ axis in the phase space. The ellipsoid, which is inside the 2-sphere $x^2+y^2+z^2\leq 1$, does not contain all the trajectories for $\ga=1$; as we shall see, there are some equilibrium points inside and outside the ellipsoid; there are also other trajectories corresponding to $L=0$ ($\Rightarrow z=0$) and to $L=\infty$ ($\Rightarrow w=0$).
The relative densities are defined by $\Om_{\phi}\equiv x^2+y^2$, $\Om_{\ga}\equiv z^2$, $\Om_{\text{DM}}\equiv w^2$ and obey the conservation equation $\Om_{\phi}+\Om_{\ga}+\Om_{\text{DM}}=1$. Other relevant quantities are the parameter $\of=(x^2-y^2)/(x^2+y^2)$ which is constrained by $-1\leq \of \leq 1$ and the deceleration parameter $q\equiv -\ddot{a}/(aH^2)$ which is, by the field equations, the same as $-1-\dot{H}/H^2=-1-H'/H$ leading to
\begin{equation}\label{14}
2q=1+3[x^2-y^2+(\gamma -1) z^2].
\end{equation}
Combining~\eqref{11} and~\eqref{14} we arrive at
\begin{align}
\label{14a}&\Om_{\text{DM}}'=(2q-1)\Om_{\text{DM}}\\
\label{14b}&\Om_{\ga}'=[(2q-1)-3(\ga -1)]\Om_{\ga}\\
\label{14c}&\Om_{\phi}'=-(2q-1)(1-\Om_{\phi})+3(\ga -1)\Om_{\ga}\,.
\end{align}
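The closed system~\eqref{6}-\eqref{8}, supplemented by the constraint~\eqref{9} and the deceleration parameter~\eqref{14}, is straightforward to integrate numerically. The following minimal Python sketch is our own illustration (it is not the numerical code of Ref.~\cite{num}; the function names, parameter values and initial data are merely indicative) and advances the system in the e-folding variable $N$ while recording $2q$ and $\Om_{\text{DM}}$ along the solution curve:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(N, s, lam, gam):
    # Right-hand side of the autonomous system (6)-(8); N = ln a.
    x, y, z = s
    B = 1.0 + x*x - y*y + (gam - 1.0)*z*z   # bracket = 2x^2 + gam z^2 + w^2
    dx = np.sqrt(1.5)*lam*y*y - 3.0*x + 1.5*x*B
    dy = -np.sqrt(1.5)*lam*x*y + 1.5*y*B
    dz = -1.5*gam*z + 1.5*z*B
    return [dx, dy, dz]

def two_q(s, gam):
    # Twice the deceleration parameter, Eq. (14).
    x, y, z = s
    return 1.0 + 3.0*(x*x - y*y + (gam - 1.0)*z*z)

lam, gam = np.sqrt(6.3), 0.6666                     # illustrative parameter values
s0 = [-0.99, 1.0e-6, np.sqrt(1.0 - 0.99**2 - 1.0e-12)]   # x0, y0, z0
sol = solve_ivp(rhs, (0.0, 20.0), s0, args=(lam, gam),
                rtol=1e-10, atol=1e-12, dense_output=True)
N = np.linspace(0.0, 20.0, 2001)
q2 = np.array([two_q(sol.sol(n), gam) for n in N])  # 2q(N)
w2 = 1.0 - np.sum(sol.sol(N)**2, axis=0)            # Omega_DM from Eq. (9)
\end{verbatim}
Sign changes of the sampled $2q(N)$ locate the transitions between decelerated and accelerated expansion discussed in Sect.~\ref{sec4}.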
\section{The critical points (CP's) -- Lyapunov's Stability and Instability Theorems (LST and LIT) \label{sec3}}
The CP's are the equilibrium points ($x_c,\,y_c,\,z_c,\,w_c$) in the phase space obtained upon solving the nonlinear algebraic equations $x'=0$, $y'=0$, $z'=0$, and $w'=0$. To determine the stability of the CP's we proceed to the linearization of the system~\eqref{6}-~\eqref{8} setting $x=X+x_c$, $y=Y+y_c$, $z=Z+z_c$ ($w=W+w_c$) where the new variables ($X,\,Y,\,Z$) still obey the full nonlinear system~\eqref{6}-~\eqref{8}. Upon linearization, the system~\eqref{6}-~\eqref{8} is brought to the matrix form:
\begin{equation}\label{15}
(X',\,Y',\,Z')^{T}=J_c\cdot (X,\,Y,\,Z)^{T},
\end{equation}
where $(X,\,Y,\,Z)^{T}$ is the column matrix transpose of $(X,\,Y,\,Z)$ and $J_c$ is the $3\times3$ Jacobi matrix~\cite{b1}-~\cite{b3} evaluated at the CP ($x_c,\,y_c,\,z_c$):
\begin{equation}\label{16}
J_c=\begin{bmatrix}
J_{c\,11} & (-3 x_c+\sqrt{6} \lambda ) y_c & 3 (\gamma -1) x_c z_c \\
(3 x_c-\sqrt{\frac{3}{2}} \lambda ) y_c & J_{c\,22} & 3 (\gamma -1) y_c z_c \\
3 x_c z_c & -3 y_c z_c & J_{c\,33}
\end{bmatrix}
\end{equation}
where $2J_{c\,11}=3[-1+3 x_c^2-y_c^2+(\gamma -1) z_c^2]$, $2J_{c\,22}=3-\sqrt{6} \lambda x_c+3 x_c^2-9 y_c^2+3 (\gamma -1) z_c^2$, $2J_{c\,33}=3[x_c^2-y_c^2+(\gamma -1) (3 z_c^2-1)]$.
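The eigenvalues of $J_c$, on which the stability test recalled below relies, are easily computed numerically. The short sketch below is an illustrative script of ours (not part of any reference implementation); it assembles $J_c$ from~\eqref{16} and, when evaluated at the CP $D$ of Sect.~\ref{sec4}, reproduces the analytic set $\{(\la^2-3\ga)/2,\,\la^2-3,\,(\la^2-6)/2\}$ quoted there:
\begin{verbatim}
import numpy as np

def jacobian(xc, yc, zc, lam, gam):
    # Jacobi matrix J_c of Eq. (16) evaluated at the CP (xc, yc, zc).
    J11 = 1.5*(-1.0 + 3.0*xc**2 - yc**2 + (gam - 1.0)*zc**2)
    J22 = 0.5*(3.0 - np.sqrt(6.0)*lam*xc + 3.0*xc**2
               - 9.0*yc**2 + 3.0*(gam - 1.0)*zc**2)
    J33 = 1.5*(xc**2 - yc**2 + (gam - 1.0)*(3.0*zc**2 - 1.0))
    return np.array(
        [[J11, (-3.0*xc + np.sqrt(6.0)*lam)*yc, 3.0*(gam - 1.0)*xc*zc],
         [(3.0*xc - np.sqrt(1.5)*lam)*yc, J22,  3.0*(gam - 1.0)*yc*zc],
         [3.0*xc*zc, -3.0*yc*zc, J33]])

lam, gam = 1.0, 0.8                      # sample values with lam^2 < min(3, 3 gam)
xc, yc = lam/np.sqrt(6.0), np.sqrt(1.0 - lam**2/6.0)    # CP D
print(np.sort(np.linalg.eigvals(jacobian(xc, yc, 0.0, lam, gam))))
print(sorted([(lam**2 - 3*gam)/2, lam**2 - 3, (lam**2 - 6)/2]))
\end{verbatim}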
The test for stability of almost linear systems~\cite{b1} states that~\cite{b1,b3} if (1) \textsl{all} the eigenvalues of the nonsingular matrix $J_c$ ($\det J_c\neq 0$) have negative real parts, then the CP is asymptotically stable, but if (2) any eigenvalue has a positive real part, then the CP is unstable. If some eigenvalues are zero ($\det J_c= 0$) or have zero real parts (and still $\det J_c\neq 0$), we will employ appropriate arguments (among which Lyapunov's Theorems, LST~\cite{b1}-~\cite{b3}, and LIT~\cite{b1}) for the determination of the stability of the corresponding CP, as the above-mentioned test is no longer valid~\cite{b1,b3}. Mathematically speaking, we shall not make use of the notion of saddle points, since a saddle CP is generically unstable. However, physically, we shall distinguish between a repeller and a saddle point.
LST assumes the existence of a continuously differentiable function $U(X,\,Y,\,Z)$ that is positive definite in a neighborhood $\mathcal{D}$ of the CP and has an \textsl{isolated} minimum at the CP, which is the origin in the new coordinates $(X,\,Y,\,Z)$: $(X_{\text{CP}},\,Y_{\text{CP}},\,Z_{\text{CP}})=(0,\,0,\,0)$. If further the derivative of $U$ along a solution curve, $U'=\partial_{i}U(X^i)'$ with $i=1\to3$ and $X^1=X,\,X^2=Y,\,X^3=Z$, is negative definite on $\mathcal{D}$ (except at the origin): $U'(X,\,Y,\,Z)<0$, then the CP is asymptotically stable. We are not concerned with the case where the CP is stable~\cite{b1}-~\cite{b3}.
LIT~\cite{b1} for 2-dimensional systems generalizes to higher dimensional systems in a straightforward way. It consists in finding a function $U(X,\,Y,\,Z)$ that is continuous on a domain $\mathcal{D}$ containing the CP, which is assumed to be isolated. The Theorem assumes that $U(\text{CP})=0$ and that there is at least one point $P_0=(X_0,\,Y_0,\,Z_0)$ in each disc in $\mathcal{D}$ centered at the CP where $U(P_0)>0$. If $U'$ is positive definite on $\mathcal{D}$ (except at the origin): $U'(X,\,Y,\,Z)>0$, then the CP is unstable.
The intuition behind LIT is as follows. Assume the above conditions are satisfied. In any disc $D_{\epsilon}$ in $\mathcal{D}$ select a solution curve that starts at\footnote{In a general problem, use $t$ instead of $N$.} $P_0$: $X(N=0)=X_0,\,Y(N=0)=Y_0,\,Z(N=0)=Z_0$. If the solution curve evolves from $P_0$ to, say, $P_1=(X_1,\,Y_1,\,Z_1)$ we must have $U(P_1)>U(P_0)>0$ since $U$ is increasing along the solution curve. Now, since $U(P_0)>U(\text{CP})=0$, this solution curve won't reach the CP in a finite or an infinite time $N$ (otherwise $U$ would decrease). Thus the CP is not asymptotically stable (an asymptotically stable point is a CP where any solution curve starting in its vicinity ends up at it as $N\to\infty$). Furthermore, the solution curve must leave the disc $D_{\epsilon}$ since, otherwise, as $N\to\infty$, $U\to\infty$ too, which is not possible as the continuity of $U$ on $D_{\epsilon}$ implies that it is bounded there. Thus, the CP is not stable; it must be unstable. LST works, in a sense, the other way around in that its hypotheses ensure that the solution curve approaches the CP as $N\to\infty$.
In LIT, $U$ need not be zero at the CP since one can add any positive or negative constant to $U$ without modifying the condition of stability, and $U(P_0)$ need not be positive\footnote{The LST and LIT were first formulated to deal with the stability of autonomous differential equations where the CP is the origin of the new coordinates $(X,\,Y,\,Z)$. This is no longer the case in other coordinate systems such as $(x,\,y,\,z)$.}: It is sufficient to have $U(\text{CP})<U(P_0)$. A variant of LIT may be formulated as follows. If (1) $U(x,\,y,\,z)$ is continuous on a domain $\mathcal{D}$ containing the CP, which is assumed to be isolated, (2) in every disc centered at the CP [here the CP is not necessarily the origin of the coordinates $(x,\,y,\,z)$], there exists some point $P_0=(x_0,\,y_0,\,z_0)$ such that $U(\text{CP})>U(P_0)$, (3) $U'$ is negative definite on $\mathcal{D}$, then the CP is unstable.
In cosmology both LST and LIT are very useful. One is interested in stable CP's or attractors, at which different solution curves end up regardless of their initial conditions. One is also interested in unstable solutions or repellers, which represent starting or intermediate events.
The main difficulty in applying LST and LIT is that there is no general method for finding $U$. There are, however, some guidelines for that purpose~\cite{b3}. Nevertheless, the construction of $U$ may be greatly simplified by relying on the assumptions of LST and LIT concerning $\mathcal{D}$. This will be illustrated in the following discussion.
\section{The critical points (CP's) -- Stability analysis -- Cosmological implications \label{sec4}}
We have counted ten CP's labeled from $A$ to $J$. In the following we will provide the values of the CP's in the form $(x_c,\,y_c,\,z_c,\,w_c)$, determine their stability conditions and discuss their cosmological implications. The stability conditions are determined in terms of intervals of $\la$ and/or $\ga$ and are derived using the ``Hessian" test for stability as well as both LST and LIT. In particular, the LST and LIT are employed to determine the stability conditions at the endpoints of the intervals of $\la$ and/or $\ga$, a task generally overlooked, skipped or difficult without use of the theorems~\cite{para2,p1,p2,psc,int3,int4}. We summarize our results in Table~\ref{Tab1}. As was mentioned earlier, no distinction is made in the text between a saddle point and an unstable CP; this distinction appears only in Table~\ref{Tab1}.
Following the classification made for the analytic solutions to the 2-fluid problem~\cite{And}, the solutions with $\la^2<6$ are called hyperbolic and those with $\la^2>6$ are called trigonometric. Due to different conventions, the value of $\la$ used in~\cite{num}, $\la_{\text{num}}$, is related to the value of $\la$ used in this work by $\la_{\text{num}}=\sqrt{3}\la$.
\begin{table}[h]
{\footnotesize
\begin{tabular}{|@{}c@{}|l|l|l@{}|l|l|l|}
\hline
\textbf{CP} & $\pmb{(x_c,\,y_c,\,z_c,\,w_c)}$ & \textbf{Existence} & \textbf{Stability} & $\pmb{\of}$ & $\pmb{2q}$ & $\pmb{\Om_{\ga}+\Om_{\text{DM}}}$ \\
\hline
\hline
$A$ & $(0,\,0,\,0,\,1)$ & always & SP & $\nexists$ & 1 & 1 \\
\hline
$B_+$ & $(1,\,0,\,0,\,0)$ & always & Un & 1 & 4 & 0 \\
\hline
$B_-$ & $(-1,\,0,\,0,\,0)$ & always & Un & 1 & 4 & 0 \\
\hline
& & & $\la^2\leq\min(3,3\ga)$: AS & & & \\
$D$ & $(\la/\sqrt{6},\,\sqrt{1-(\la^2/6)},\,0,\,0)$ & $\la^2<6$ & & $\frac{\la^2}{3}-1$ & $\la^2-2$ & 0 \\
& & & $\min(3,3\ga)<\la^2<6$: SP & & & \\
\hline
& & & $\ga=0$: AS & & & \\
$E$ & $(0,\,0,\,1,\,0)$ & always & $0<\ga <2$: SP & $\nexists$ & $3\ga-2$ & 1 \\
& & & $\ga=2$: Un & & & \\
\hline
& $(\cos\ta,\,0,\,\sin\ta,\,0)$ & & & & & \\
$F$ & & $\ga=2$ & Un & 1 & 4 & $\sin^2\ta$ \\
& ($0<\ta<\pi$) & & & & & \\
\hline
& & & $0\leq\ga\leq\frac{2}{9}$ \& $3\ga <\la^2$: SN & & & \\
& & & $\frac{2}{9}<\ga\leq 1$ \& $3\ga <\la^2\leq\frac{24\ga^2}{9\ga-2}$: SN & & & \\
$G$ & $\Big(\sqrt{\frac{3}{2}}\frac{\ga}{\la},\,\sqrt{\frac{3}{2}}\frac{\sqrt{(2-\ga)\ga}}{\la},\,\frac{\sqrt{\la^2-3\ga}}{\la},\,0\Big)$ & $\la^2\geq 3\ga$ & $\frac{2}{9}<\ga\leq 1$ \& $\la^2>\frac{24\ga^2}{9\ga-2}$: SS & $\ga-1$ & $3\ga-2$ & $1-\frac{3\ga}{\la^2}$ \\
& & & $\ga\leq1$ \& $\la^2=3\ga\,$: AS & & & \\
& & & $1<\ga<2$ \& $3\ga \leq\la^2$: SP & & & \\
& & & $\ga=2$ \& $6 \leq\la^2$: Un & & & \\
\hline
& & & $\ga>1$ \& $3<\la^2\leq\frac{24}{7}$: SN & & & \\
$H$ & $\Big(\sqrt{\frac{3}{2}}\frac{1}{\la},\,\sqrt{\frac{3}{2}}\frac{1}{\la},\,0,\,\frac{\sqrt{\la^2-3}}{\la}\Big)$ & $\la^2\geq3$ & $\ga>1$ \& $\la^2>\frac{24}{7}$: SS & 0 & 1 & $1-\frac{3}{\la^2}$ \\
& & & $\ga=1$ \& $\la^2\geq3$: AS & & & \\
& & & $\ga<1$ \& $\la^2\geq3$: SP & & & \\
\hline
& $(0,\,0,\,\cos\ta,\,\sin\ta)$ & & & & & \\
$I$ & & $\ga=1$ & SP & $\nexists$ & 1 & 1 \\
& ($0<\ta<\pi/2$) & & & & & \\
\hline
& & $\ga=1$ & $3<\la^2<\frac{24}{7}$: SN & & & \\
$J$ & $\Big(\sqrt{\frac{3}{2}}\frac{1}{\la},\,\sqrt{\frac{3}{2}}\frac{1}{\la},\,z_c,\,\sqrt{1-\frac{3}{\la^2}-z_c^2}\Big)$ & \& & & 0 & 1 & $1-\frac{3}{\la^2}$ \\
& & $\la^2>3$ & $\la^2>\frac{24}{7}$: SS & & & \\
\hline
\end{tabular}}
\caption{{\footnotesize Existence and stability of the critical points. \textsc{Nomenclature:} ``CP" for ``Critical Point", ``$\nexists$" for ``undefined", ``AS" for ``Asymptotically Stable", ``Un" for ``Unstable", ``SP" for ``Saddle Point", ``SN" for ``Stable Node", ``SS" for ``Stable Spiral".}}\label{Tab1}
\end{table}
\subparagraph{\pmb{$A=(0,\,0,\,0,\,1)$}.} For $\ga \neq 1$, $J_c$ has at least one positive eigenvalue: $\{-3/2, \,3/2,\,3(1-\ga)/2\}$. This CP is unstable. For $\ga = 1$, $J_c$ is singular. However, it is straightforward to show that in this case the CP is also unstable. This is achieved upon linearizing~\eqref{6} and~\eqref{7}, in which case we obtain the eigenvalues $\mp 3/2$ of opposite signs.
Cosmologically, the only solution curves that may reach $A$ emanate from $B_{\pm}$ with $y\equiv 0$ and $z\equiv 0$; otherwise, some solution curves (only those emanating from $B_+$) may just get close to it, but do not cross it, as shown in Fig.~\ref{Fig1}. At this CP, all relative densities vanish except the DM one, $\Om_{\text{DM}}=1$; $\of$ is indeterminate, and the universe undergoes a decelerated expansion with $q=1/2$.
\subparagraph{\pmb{$B_+=(+1,\,0,\,0,\,0)$}, \pmb{$B_-=(-1,\,0,\,0,\,0)$}.} The matrix $J_c$ has the eigenvalues $\{3,\,(6-\epsilon\sqrt{6}\la)/2,\,3(2-\ga)/2\}$ where $\epsilon=1,\,-1$ for $B_+,\,B_-$, respectively, so they are generically unstable. They are also unstable in the special case $\ga =2$ where $J_c$ is singular\footnote{For $B_+$, if $\ga =2$ and $\la =\sqrt{6}$, we conclude that the CP is unstable upon applying LIT with $U=aX^2$ and $a>0$. The instability of the case $\ga =2$ and any $\la$ can also be established by considering~\eqref{11}, which becomes $(r^2)'=3(r^2-1)(r^2-2y^2)$. A solution curve that starts near the CP has $r<1$ (the CP is on the sphere $r=1$ and all solution curves are inside the sphere). Since $y^2=Y^2\ll r^2\approx 1$, we have $(r^2)'<0$ and thus the solution curve moves in the direction of decreasing $r$ and never returns to the CP where $r=1$, which is then unstable. This is a first application of a variant of LIT which was formulated in the previous section.\label{var}} as can be concluded from the linearization of~\eqref{6} and~\eqref{7}.
These are the repellers, as shown in Fig.~\ref{Fig1} and Fig.~\ref{Fig2}, with a dominant kinetic energy, a decelerated expansion $q=2$, and $\of=1$. For a steep potential, $\la>\sqrt{6}$, $B_+$ is a saddle point.
\subparagraph{\pmb{$D=(\la/\sqrt{6},\,\sqrt{1-(\la^2/6)},\,0,\,0)$}.} This CP exists for $\la^2<6$ (the case $\la^2=6$ leads to the previous case). The eigenvalues of $J_c$ are: $\{(\la^2-3\ga)/2,\,\la^2-3,\,(\la^2-6)/2\}$. In the case $\det J_c\neq 0$, the CP is asymptotically stable for $\la^2<\min(3,3\ga)$ and unstable for $\min(3,3\ga)<\la^2<6$ [this includes the cases ($\la^2=3$ and $\ga<1$) and ($\la^2=3\ga$ and $\ga>1$)]. If $\det J_c=0$, it is asymptotically stable in the cases ($\la^2=3$ and $\ga>1$) and ($\la^2=3\ga$ and $\ga<1$) upon linearizing~[\eqref{7} and~\eqref{8}] and [\eqref{6} and~\eqref{7}], respectively.
There remains the case $\la^2=3$ and $\ga=1$ where we expect the CP [in this case $D=(1/\sqrt{2},\,1/\sqrt{2},\,0,\,0)$] to be asymptotically stable. We apply LST and select $U$ of the form: $U=a(X+Y)^2+(b-a)Y^2+cZ^2$, which is positive definite if $0<a<b$ and $0<c$. The CP is an isolated minimum of $U$ with $U(CP)=0$. The directional derivative along the solution curves, $U'=\partial_{i}U(X^i)'$ with $i=1\to3$ and $X^1=X,\,X^2=Y,\,X^3=Z$, is evaluated using the r.h.s's of~\eqref{6}, \eqref{7} and~\eqref{8} after converting to new coordinates ($X,\,Y,\,Z$):
\begin{equation}
U'=F(X,Y,Z) \; \text{ with }\; F=-3(b-a)Y^2+O[(X^i)^3],
\end{equation}
which is negative definite in the vicinity of the CP. We choose $\mathcal{D}$ to be any neighborhood of the CP (including the CP), where $U'<0$, and not including other points of the surface $S:\, F(X,Y,Z)=0$. This way we make $U'$ negative definite\footnote{This evokes the pendulum problem~\cite{b2}. Even if $\mathcal{D}$ were to include other points on the surface $S$ (in which case $U'$ would be negative semidefinite), we would still conclude that the CP is asymptotically stable (and not just stable).} in $\mathcal{D}$. With these choices we satisfy the hypotheses of LST, so the CP in the case $\la^2=3$ and $\ga=1$ is asymptotically stable.
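The quadratic truncation of $U'$ used above can be checked with a computer algebra system. The following sketch is an illustrative verification of ours (based on the open-source SymPy library; the symbol names are our own choices): it expands $U'$ around the CP for $\la^2=3$ and $\ga=1$ and keeps the second-order terms only, the result being $-3(b-a)Y^2$, in agreement with the expression of $F$ above:
\begin{verbatim}
import sympy as sp

X, Y, Z, a, b, c, eps = sp.symbols('X Y Z a b c eps', real=True)
lam, gam = sp.sqrt(3), 1                          # case lambda^2 = 3, gamma = 1
x, y, z = 1/sp.sqrt(2) + X, 1/sp.sqrt(2) + Y, Z   # CP D is the origin of (X, Y, Z)
B = 1 + x**2 - y**2 + (gam - 1)*z**2              # bracket of Eqs. (6)-(8)
xp = sp.sqrt(sp.Rational(3, 2))*lam*y**2 - 3*x + sp.Rational(3, 2)*x*B
yp = -sp.sqrt(sp.Rational(3, 2))*lam*x*y + sp.Rational(3, 2)*y*B
zp = -sp.Rational(3, 2)*gam*z + sp.Rational(3, 2)*z*B
U = a*(X + Y)**2 + (b - a)*Y**2 + c*Z**2
Up = sp.diff(U, X)*xp + sp.diff(U, Y)*yp + sp.diff(U, Z)*zp   # U' along solutions
quad = sp.expand(Up.subs({X: eps*X, Y: eps*Y, Z: eps*Z})).coeff(eps, 2)
print(sp.simplify(quad))                          # equals -3*(b - a)*Y**2
\end{verbatim}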
For $\la^2\leq\min(3,3\ga)$, this CP is an attractor with a dominant scalar field component $\Om_{\phi}=1$, $\of=-1+\la^2/3\leq 0$ and, from~\eqref{14}, $2q=\la^2-2$, so that the expansion is accelerated for $\la^2<2$ and decelerated for $\la^2>2$.
\subparagraph{\pmb{$E=(0,\,0,\,1,\,0)$}.} From the set of the eigenvalues of $J_c$, $\{-3(2-\ga)/2,\,3\ga/2,\,3(\ga-1)\}$, the CP is generically unstable if $\det J_c\neq 0$.
Now, consider the case where $J_c$ is singular ($\det J_c=0$). In the special case $\ga =2$, the CP is unstable since the linearization of~\eqref{7} and~\eqref{8} leads to $Y'=3Y$, $Z'=3Z$. The same conclusion is achieved from $(X^2+Y^2+Z^2)'=3(Y^2+Z^2)+\cdots >0$. For $\ga =1$ the CP is also unstable by LIT or upon linearizing~\eqref{6} and~\eqref{7}, which results in the eigenvalues $\pm 3/2$ of opposite signs. The case $\ga=0$ is stable since we have $(X^2+Y^2+Z^2)'=-3(Y^2+Z^2)+\cdots <0$.
Thus, $E$ is a matter dominant attractor ($\Om_{\ga}=1$) if $\ga =0$, a saddle point if $0<\ga <2$, and a repeller if
$\ga=2$. With the parameter $\of$ remaining undetermined, the state of the universe at $E$ undergoes a decelerated expansion if $2/3<\ga \leq 2$ or an accelerated expansion if $0\leq \ga < 2/3$. This is a novel point because one may have a TPA without necessarily having (at the same time) a minimum kinetic energy and a maximum potential energy as in the case of the 2-fluid problem~\cite{Russo,num}. In fact, at $E$ both kinetic and potential energies are zero.
Thus if, for $0\leq \ga < 2/3$, a solution curve approaches the saddle point $E$ and then deviates toward a CP (here the CP $G$), the TPA there (at $E$) may last longer than the TPA occurring away from saddle points. This is because a saddle point behaves partly as an attractor and partly as a repeller. This is in fact the case in Fig.~\ref{Fig3}, which is a plot, for $\la =\sqrt{6.3}$ and $\ga= 0.6666$, of twice the deceleration parameter, $2q$, and the kinetic and potential relative densities, $x^2$ (dashed line) and $y^2$ (continuous line). The parameter $q$ crosses the $N$ axis at: $N_1=4.82333$, $N_2=7.984$, $N_3=10.3342$, $N_4=13.0676$ and $N_5=13.9537$. This solution has thus two TPA's and two TPD's: The first and second TPA's are observed in the intervals $N_1<N<N_2$ and $N_3<N<N_4$, respectively, and the first and second TPD's are observed in the intervals $N_2<N<N_3$ and $N_4<N<N_5$, respectively. The first TPA starts at $N=N_1$, which is the moment where $x^2\simeq 0$ and $y^2\simeq 0$ [$x(N_1)=-0.075$, $y(N_1)=0.106$, $z(N_1)=0.991$], that is, the corresponding point on the solution curve is near $E$. The graph of $2q$ continues to oscillate for $N>N_5$ below the line $q=0$; this is a sign that solutions with many TPA's and TPD's may exist if a careful choice of the parameters is made. Fig.~\ref{Fig4} is a similar plot with different inputs, $\la =\sqrt{6.3}$ and $\ga= 0.6$. It is a solution with one TPA and one TPD, depicting a case where the TPA starts at the moment where $x^2$ is minimum and $y^2$ is maximum.
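The crossing abscissas $N_1,\dots,N_5$ quoted above are obtained by locating the zeros of $2q(N)$ along the numerical solution. A minimal helper of this kind is sketched below (our own illustration; it assumes arrays of $N$ and $2q$ values sampled from an integration such as the one outlined in Sect.~\ref{sec2}):
\begin{verbatim}
import numpy as np

def tpa_tpd_intervals(N, q2):
    """Zeros of 2q(N) and the nature of the intervals between them.

    N, q2 : 1-D arrays of e-folding values and of 2q sampled on them.
    Returns the crossing locations (linear interpolation) and, for each
    interval between consecutive crossings, 'TPA' where 2q < 0 or
    'TPD' where 2q > 0.
    """
    s = np.sign(q2)
    i = np.where(s[:-1]*s[1:] < 0)[0]    # indices bracketing a sign change
    crossings = N[i] - q2[i]*(N[i+1] - N[i])/(q2[i+1] - q2[i])
    labels = ['TPA' if q2[(i[k] + i[k+1])//2] < 0 else 'TPD'
              for k in range(len(i) - 1)]
    return crossings, labels
\end{verbatim}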
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{Fig1a.eps} \includegraphics[width=0.4\textwidth]{Fig1b.eps}\\
\caption{\footnotesize{(a): Left panel. Case $\la =3$, $\ga= 1.5$. For these values of the parameters, $H$ is the unique attractor. Solutions starting at $B_+$ and $E$ (for this value of $\ga$, $E$ is a saddle point) get very close to $A$, which is a saddle point. All solutions starting in the vicinity of $B_{\pm}$ and $E$ spiral to $H$. Those curves, which start in the vicinity of $B_{\pm}$ with $y\equiv 0$ and $z\equiv 0$ or in the vicinity of $E$ with $x\equiv 0$ and $y\equiv 0$, end up at $A$. Since $A$ is unstable, any perturbations in the values of the coordinates cause the solution curve to continue its journey to $H$. (b): Right panel. Case $\la =3$, $\ga= 4/3$.}}\label{Fig1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Fig2.eps}\\
\caption{\footnotesize{Case $\la =1.8$, $\ga= 1$. For these values of the parameters, the vertical line $J$, through the end-points $H=(\sqrt{3/2}/\la,\,\sqrt{3/2}/\la,\,0)$ and $G=(\sqrt{3/2}/\la,\,\sqrt{3/2}/\la,\,\sqrt{1-(3/\la^2)})$, is the unique family of attractors ($H$ and $G$ are locally stable for $\la =1.8$, $\ga= 1$). $I$ is the line through $A=(0,\,0,\,0)$ and $E=(0,\,0,\,1)$. There is a curve starting in the vicinity of $B_-$ which converges to a point on the line $J$. There are three curves starting in the vicinity of $B_+$. The upper and lower curves approach the line $I$ then converge to different points on the line $J$. The intermediate curve, which corresponds to $y\equiv 0$ converges to the line $I$, which is a set of saddle points; any perturbation in the value of $y$ causes this curve to end up at any point on the line $J$. [In this caption the coordinates of the CP's have been given on the form $(x_c,\,y_c,\,z_c)$.]}}\label{Fig2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{Fig3.eps}\\
\caption{\footnotesize{Case $\la =\sqrt{6.3}$, $\ga= 0.6666$. For our initial conditions, at $N=0$, we took $x_0=-0.9999997$, $y_0=4.3569\times 10^{-12}$, $z_0=\sqrt{1-x_0^2-y_0^2}$. (Upper and lower left plots) Twice the deceleration parameter $2q$. (Lower right plot) The kinetic and potential relative densities, $x^2$ (dashed line) and $y^2$ (continuous line). The parameter $2q$ crosses the $N$ axis at: $N_1=4.82333$, $N_2=7.984$, $N_3=10.3342$, $N_4=13.0676$ and $N_5=13.9537$. This solution has thus two TPA's and two TPD's: The first and second TPA's are observed in the intervals $N_1<N<N_2$ and $N_3<N<N_4$, respectively, and the first and second TPD's are observed in the intervals $N_2<N<N_3$ and $N_4<N<N_5$, respectively. The first TPA starts at $N=N_1$, which is the moment where $x^2\simeq 0$ and $y^2\simeq 0$, that is the corresponding point on the solution curve is near $E$.}}\label{Fig3}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{Fig4.eps}\\
\caption{\footnotesize{Case $\la =\sqrt{6.3}$, $\ga= 0.6$. For our initial conditions, at $N=0$, we took $x_0=-0.999975$, $y_0=4.3569\times 10^{-7}$, $z_0=0.000479$. (Left plot) Twice the deceleration parameter $2q$. (Right plot) The kinetic and potential relative densities, $x^2$ (dashed line) and $y^2$ (continuous line). The parameter $2q$ crosses the $N$ axis at: $N_1=2.44678$, $N_2=2.94285$ and $N_3=6.14727$. This solution has thus one TPA and one TPD: The TPA is observed in the interval $N_1<N<N_2$ and the TPD is observed in the interval $N_2<N<N_3$. The TPA starts at $N_1$, which is the moment where $x^2$ is minimum and $y^2$ is maximum.}}\label{Fig4}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Fig5.eps}\\
\caption{\footnotesize{Case $\la =3$, $\ga= 2$. For these values of the parameters, $H$ is the unique attractor. The circle through $B_+$, $E$ and $B_-$ is the one-parameter family $F$ of repellers plus $B_{\pm}$. Any curve starting in the vicinity of this kinetic-matter repeller ends up at $H$.}}\label{Fig5}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Fig6.eps}\\
\caption{\footnotesize{Case $\la =\sqrt{6.3}$, $\ga= 0.6666$. Two solution curves starting from $E$ and $H$ and ending up at $G$.}}\label{Fig6}
\end{figure}
\subparagraph{\pmb{$F=(\cos\ta,\,0,\,\sin\ta,\,0)$}, \pmb{$0<\ta<\pi$}, \pmb{$\ga =2$}.} This unstable CP generalizes $B_{\pm}$ in that $\Om_{\ga}=\sin^2\ta$ may assume any value between 0 and 1; it also generalizes $E$.
As $J_c$ is singular, it is not possible to draw any conclusion concerning stability by linearization of the system~\eqref{6}-~\eqref{8} for this CP. With $\ga =2$, Eq.~\eqref{11} becomes $(r^2)'=3(r^2-1)(r^2-2y^2)$. A solution curve that starts near the CP has $r<1$ (the CP is on the sphere $r=1$ and all solution curves are inside the sphere). Since $y^2=Y^2\ll r^2\approx 1$, we have $(r^2)'<0$ and thus the solution curve moves in the direction of decreasing $r$ and never returns back to the CP where $r=1$, which is then unstable. This gives another application of a variant of LIT (see footnote~\ref{var}).
Since $\ta$ is not constrained, $F$ is a new one-parameter family of kinetic-matter repellers. With a stiff equation of state $\ga =2$, the initial density is shared between the barotropic fluid and DE, $\of =1$, and $q=2$ (decelerated expansion). Solution curves starting from $F$, which is represented by a semicircle in Fig.~\ref{Fig5}, reach $H$.
\subparagraph{\pmb{$G=(\sqrt{3/2}(\ga/\la),\,\sqrt{3/2}\sqrt{(2-\ga)\ga}/\la,\,\sqrt{\la^2-3\ga}/\la,\,0)$}.} The corresponding solution for the 2-fluid problem~\cite{p1} is a potential-kinetic scaling solution, the stability of which does not depend on the value of $\ga$. For the \p we rather have a potential-kinetic-matter scaling solution, whose stability depends on $\ga$, as we shall see shortly.
The eigenvalues depend on both ($\la,\,\ga$): $(3/4)\{4(\ga-1),\,\ga-2-\al,\,\ga-2+\al\}$ where we define $\al\equiv \sqrt{(2-\ga)[24\ga^2+\la^2(2-9\ga)]}/\la$. This CP exists for $\la^2\geq 3\ga$ only. If $\det J_c\neq 0$, it is asymptotically stable for $\ga<1$ and $\la^2> 3\ga$. Furthermore, this CP is (a) a stable node for $0\leq\ga\leq 2/9$ for all\footnote{This subcase exists also for the 2-fluid problem but was not derived in~\cite{p1,para2}. In this subcase ($0\leq\ga\leq 2/9$), $\la^2$ need not be smaller than $24\ga^2/(9\ga-2)$.} $\la^2> 3\ga$ (the case $\ga=0$ leads to $E$ discussed above), (b) a stable node for $2/9<\ga<1$ and $3\ga<\la^2\leq 24\ga^2/(9\ga-2)$, and (c) a stable spiral if $2/9<\ga<1$ and $\la^2> 24\ga^2/(9\ga-2)$. The CP is unstable for $1<\ga<2$ and $\la^2> 3\ga$.
If $\det J_c=0$, this CP is asymptotically stable for $\ga=1$ and $\la^2> 3$ since the linearization of~\eqref{6} and~\eqref{7} provides two negative eigenvalues: $-(3/4)(1\pm \bt)$ with $\bt\equiv \sqrt{24-7\la^2}/\la$ (a stable node for $3<\la^2\leq 24/7$ and a stable spiral for $\la^2>24/7$). For the remaining cases where $J_c$ is singular ($\ga = 2$ or $\la^2= 3\ga$), the CP is unstable for $\ga=2$ and $\la^2\geq 6$ since near it we establish: $(X^2+Y^2+Z^2)'=6(\sqrt{\la^2-6}Z+\sqrt{6}X)^2/\la^2+\cdots >0$. For $\la^2= 3\ga$, the eigenvalues which result from the linearization of~\eqref{6} and~\eqref{7} are proportional to $\ga-2$ and $\ga-1$, ensuring asymptotic stability for $\ga<1$ and instability for $1<\ga<2$. For the case $\ga=1$ and $\la^2=3$, where $\det J_c=0$, we recover the CP $D=(1/\sqrt{2},\,1/\sqrt{2},\,0,\,0)$ which has been shown to be asymptotically stable.
The requirement $\ga\leq 1$, needed to ensure asymptotic stability of the CP, results in $\of=\ga -1\leq 0$, while in the case of the 2-fluid problem $\of$, still given by the same formula, may have both signs. $\Om_{\phi}$ and $\Om_{\ga}$ depend on both ($\la,\,\ga$): $\Om_{\phi}=3\ga/\la^2$, $\Om_{\ga}=1-\Om_{\phi}$. With $2q=1+3(\ga-1)$, the state of the universe approaching this stable point may undergo a decelerated expansion if $2/3<\ga \leq1$ or an accelerated expansion if $0\leq \ga < 2/3$. $G$ is a saddle point for $1<\ga<2$.
\subparagraph{\pmb{$H=(\sqrt{3/2}/\la,\,\sqrt{3/2}/\la,\,0,\,\sqrt{\la^2-3}/\la)$}.} Here again the eigenvalues depend on both parameters ($\la,\,\ga$): $(3/4)\{6(1-\ga),\,-1-\bt,\,-1+\bt\}$. This CP exists for $\la^2\geq 3$ only and it is asymptotically stable for $\ga>1$ and $\la^2>3$. The CP is (a) a stable node if $3<\la^2\leq24/7$ (with $\ga>1$) or (b) a stable spiral if $\la^2>24/7$ (with $\ga>1$). If $\det J_c=0$, it is asymptotically stable for $\ga=1$ and $\la^2>3$ as the linearization of~\eqref{6} and~\eqref{7} leads to the same eigenvalues $(3/4)\{-1-\bt,\,-1+\bt\}$. For the case $\ga=1$ and $\la^2=3$ we recover the CP $D=(1/\sqrt{2},\,1/\sqrt{2},\,0,\,0)$ which has been shown to be asymptotically stable.
The relevant parameters are $\Om_{\phi}=3/\la^2$ which depends only on $\la$, $\of=0$, $\Om_{\text{DM}}=1-\Om_{\phi}$, and $q=1/2$.
For the 2-fluid problem, $G$ is the unique attractor for $\la^2\geq 3\ga$ (for all $\ga$)~\cite{p1}. Since $G$ depends on $\ga$, the end-behavior of the solution depends on the nature of the barotropic fluid. We have seen that, for the \P, $G$ is no longer stable for $1<\ga\leq 2$ but $H$ is, which is a new attractor and does not depend on $\ga$. Thus, no matter what the barotropic fluid equation of state is, the universe's evolution ends up at the same state provided $1<\ga\leq 2$. As we shall see below, for $\ga=1$ ($\la^2\geq 3\ga$), there is a line (a one-parameter family) of attractors all represented by the CP $J$.
\subparagraph{\pmb{$I=(0,\,0,\,\cos\ta,\,\sin\ta)$}, \pmb{$0<\ta<\pi/2$}, \pmb{$\ga =1$}.} $J_c$ is singular, however, the eigenvalues which result from the linearization of~\eqref{6} and~\eqref{7} are $\pm 3/2$, ensuring instability.
This unstable CP generalizes $E$ and $A$ in that $\Om_{\ga}$ may assume any value constrained by $\Om_{\ga}+\Om_{\text{DM}}=1$, with $q=1/2$, while $\of$ remains undetermined. Since $\ta$ is not constrained, $I$ is a new one-parameter family of saddle points where only matter and DM are the nonvanishing components. $I$ is represented by a vertical line in the phase diagram, which is the line $AE$ of Fig.~\ref{Fig2}.
\subparagraph{\pmb{$J=(\sqrt{3/2}/\la,\,\sqrt{3/2}/\la,\,z_c,\,\sqrt{1-(3/\la^2)-z_c^2})$}, \pmb{$\ga =1$}.} The CP exists only for $\la^2>3$. With $\det J_c=0$, the CP is however asymptotically stable since the linearization of~\eqref{6} and~\eqref{7} provides the two negative eigenvalues: $-(3/4)(1\pm \bt)$ (a stable node for $3<\la^2\leq 24/7$ and a stable spiral for $\la^2>24/7$).
Since $z_c$ is a free parameter, $J$ is a new one-parameter family of attractors in which all components coexist: It is a potential-kinetic-matter-DM scaling solution where $\of=0$, $q=1/2$, $\Om_{\phi}=3/\la^2$, and $\Om_{\ga}+\Om_{\text{DM}}=1-(3/\la^2)$. $\Om_{\ga}$ and $\Om_{\text{DM}}$ are both arbitrary and smaller than $1-(3/\la^2)$. According to the latest and most accurate observations~\cite{obs}, $\Om_{0\,\phi}=0.721\pm 0.015$ (corresponding to $\la^2 =4.161$), thus $\Om_{0\,\ga}\leq 0.279\pm 0.015$.
In Fig.~\ref{Fig2}, $I$ is any point on the line through $A$ and $E$. Only solution curves emanating from $B_{\pm}$ and $I$ (including $A$ and $E$) may reach $J$, which is represented by a vertical line in the phase diagram, this is the line $HG$ of Fig.~\ref{Fig2}.
\section{Fitting the 3-fluid model \label{sec5}}
As stated in the Introduction, any realistic model should include at least one component with a negative pressure to account for a TPA~\cite{ob}. For that purpose, many theoretical models have been suggested, the simplest of which is the so-called $\Lambda$CDM, where the component with negative pressure is a vacuum energy (the cosmological constant). This model results in a constant DE equation of state, $\om_{\text{DE}}=-1$, and the coincidence problem. The next generation of models, which consider two noninteracting fluids~\cite{And,Russo,p1,p2,2mod}, introduced a scalar field (quintessence) to generalize the $\Lambda$CDM model. These models emerged to tackle the coincidence problem and to provide a variable DE equation of state. Models where ordinary matter and DE interact have also emerged~\cite{psc,int1}.
While there is no observational evidence of the existence of any interaction between the two dark components, some authors, arguing that the amounts of DE and DM are comparable at the present age of the universe, have anticipated such an interaction and formulated 2- and 3-fluid problems with DE-DM~\cite{int3,int4,int2} or DE-matter-radiation~\cite{psc} interaction terms. Of course, these models reduce to the \p with no interaction terms if appropriate constraints are further imposed. In Ref.~\cite{int4}, the authors considered a DE-DM interaction with baryons uncoupled and radiation redshifted away or neglected. Thus, their model applies to the epoch beyond the matter-radiation decoupling, which corresponds to a redshift $\text{z}_{\text{dec}}=1099.9$~\cite{books}, and it reduces to the \p upon setting the DE-DM interaction coupling constant $\bt=0$~\cite{int4} (with this constraint, the model corresponds to ours with $\ga=1$, as we shall see below in this section). In contrast, the authors of Ref.~\cite{psc}, considering again a flat FRW, included radiation in, but dropped baryons from, their DE-DM-radiation model to allow for a deeper investigation of the universe's dynamics during the radiation dominant era. They considered a non-minimal and non-constant coupling, inspired by Scalar-Tensor Theories (STT), the value of which depends on the trace of the energy-momentum tensor of the background component. Since radiation is traceless, it remains decoupled from the DE-DM system they investigated. The authors presented a general procedure for dynamical analysis of the STT inspired DE-DM interactions. Their model reduces to the \p if their DE-DM interaction coupling function~\cite{psc} $\chi(\phi)=\text{constant}$ and their DM parameter $\ga_{\text{~\cite{psc}}}=1$ (with these constraints, the model corresponds to ours with $\ga=4/3$, as we shall see below in this section).
In most of the above-mentioned models, the potential functions associated with DE and/or the interaction terms have been derived or chosen, relying partly on some physical assumptions, so that the problems remain analytically tractable (even though no nontrivial exact analytic solutions have been found so far), among which we find the \p we are considering here with no interaction terms. However, by neglecting all types of interactions, particularly that of visible matter, we restrict the application of the 3-fluid model to beyond the epoch of matter-radiation decoupling ($\text{z}_{\text{dec}}=1099.9$~\cite{books}). Thus, for $\text{z}<\text{z}_{\text{dec}}$, the model fits well the following three physical scenarios based solely on the value of $\ga$.
\begin{enumerate}
\item $\ga=4/3$. In this case the components of the universe are regrouped in a way that the barotropic fluid represents radiation, the DM and baryons together make up the pressureless component with a total relative density $\Om_{0}=0.279$, a baryonic density $\Om_{0\,b}=0.04-0.05$ and a DE density $\Om_{0\,\phi}=0.721$~\cite{obs} at the present age.
The only stable attractor corresponding to this application is $H$, provided $\la^2>3$. The application has three saddle points: $G$ (provided $\la^2\geq 4$), $E$, and $A$. In this case, the model describes the evolution of the universe starting from $E$ (or from $G$ if $\la$ is large), where radiation is dominant, passing through or approaching $A$, where the pressureless component (matter) is dominant, and ending up at $H$, where the universe content is shared between DE and DM (which becomes dominant for large $\la$), as Fig.~\ref{Fig1} depicts.
\item $\ga=1$. The epoch of matter-radiation equality~\cite{books} corresponds to a redshift $\text{z}_{\text{eq}}=24000\Om_0h^2-1=3470.2$ where we take $h=0.72$. With $\text{z}<\text{z}_{\text{dec}}$ it is a good approximation to neglect radiation. It is now easy to see that the pressureless barotropic fluid represents baryons. In fact, at late times (but well before formation of structures), as the temperature drops by the effect of the expansion, baryons behave as a nonrelativistic ``monoatomic" ideal gas with pressure $p_b=n_bk_{\text{B}}T_b$ and energy density $\rho_b=m_bc^2n_b+3n_bk_{\text{B}}T_b/2$, that is, the sum of rest-mass and kinetic energy densities, provided $k_{\text{B}}T_b/(m_bc^2)\ll 1$. As far as the nonrelativistic approximation is valid ($k_{\text{B}}T_b/(m_bc^2)\ll 1$), the equation of state for baryons reduces to $p_b\approx 0$ and $\rho_b\approx m_bc^2n_b$ where $n_b$ is the number density and $m_b$ is the rest mass. (If baryons have different masses, we sum over all baryons). Here again we have two pressureless components, the DM and baryons, with a total relative density $\Om_{0}=0.279$ at the present age.
To this application correspond four attractors, $D$ if $\la^2=3$, $G$ and $J$ if $\la^2>3$, and $H$ if $\la^2\geq3$, and one saddle point $A$. In this case, the model describes the evolution of the universe starting from any point near the line through $A$ and $E$ (representing $I$), where pressureless matter dominates, and ending up on the line through $H$ and $G$ (representing $J$), as shown in Fig.~\ref{Fig2}.
\item $\ga<2/3$. We have seen that this is the case where the universe may undergo (at least) two TPA's and two TPD's. In this case the barotropic fluid, like the scalar field, has a negative pressure too. Arguing that each component with negative pressure causes a TPA to occur in the history of the universe, we may consider the barotropic fluid as another source of DE. Both sources of DE acting together can be understood as a rough approximation to a more general and elaborate source of DE.
To this application correspond two attractors: $D$ if $\la^2=3$ or $\la^2=3\ga$, and $G$ if $\la^2>3\ga$. The attractor $G$ is rather a scaling solution of these two sources of DE. For instance, we may have an evolution from $E$ to $G$ or from $H$ (unstable in this case) to $G$ as Fig.~\ref{Fig6} shows.
However, to have a faithful description of the evolution of the universe one should introduce an ordinary matter or baryonic component $\rho_b$ (radiation may be neglected). For a pressureless matter component all that one needs is to add the extra equation
\begin{equation}\label{ex}
u'=3u [x^2-y^2+(\gamma -1) z^2]/2,\qquad (\ga<2/3),
\end{equation}
to the system~\eqref{6} to~\eqref{8} with $u=\ka\sqrt{\rho_b}/(\sqrt{3}H)$ and $x^2+y^2+z^2+u^2+w^2=1$.
\end{enumerate}
These are the known cases where the barotropic fluid has applications in the context of a \P. The case of a kination or stiff matter, which corresponds to $\ga=2$, may be relevant at early times if interactions are taken into consideration. However, some authors argue that interactions could still be neglected in this case and considered a (massless and free) kination along with a DE-scalar-field component with exponential potential~\cite{C}. The case $\ga=2$ would generalize the investigation made in~\cite{C} by including a non-interacting DM component. Specifically, this generalizes the two repellers $B_{\pm}$, which correspond to singularities in the scalar field, to the semi-circle $B_+EB_-$ of Fig.~\ref{Fig5} and generalizes the scaling solution $a(t)\propto t^{2/\la^2}$ of~\cite{C}, which now becomes stable for $\la^2\leq 3$ (table~\ref{Tab1}, the CP $D$).
\section{Concluding remarks \label{sec6}}
We have generalized the results obtained in~\cite{p1} and derived new ones. The conclusions we could reach are: (1) The scalar field dominated solution is stable for $\la^2\leq\min(3,3\ga)$ (this was stable for $\la^2<3\ga$~\cite{p1}). (2) The potential-kinetic-matter scaling solution is stable for $\ga\leq 1$ (its corresponding solution~\cite{p1} is a potential-kinetic scaling one, the stability of which does not depend on $\ga$). This constituted the main solution derived in~\cite{p1}. With the inclusion of DM, this solution is no longer stable for $\ga>1$ and no longer an attractor; rather, it is a saddle point (table~\ref{Tab1}, the CP $G$) and thus a transient potential-kinetic-radiation (taking $\ga=4/3$) equilibrium point. Such a possibility is not offered in the 2-fluid problem. Further results are the derivation of (3) new attractors (the potential-kinetic-DM scaling solution and the potential-kinetic-matter-DM scaling solution), (4) new repellers and saddle points, and (5) solutions with one and two TPA's and one and two TPD's.
We have obtained attractor solutions where both DE and DM coexist and the late-time density is shared according to $\Om_{\phi}=3/\la^2$ and $\Om_{\phi}+\Om_{\text{DM}}=1$ in a way independent of the value of $\ga >1$. The case of a pressureless barotropic fluid ($\ga=1$) is more interesting and has a one-parameter family of attractors where all components coexist with, as before, $\Om_{\phi}=3/\la^2$ but $\Om_{\ga}+\Om_{\text{DM}}=1-(3/\la^2)$. New one-parameter families of matter-DM saddle points and kinetic-matter repellers were also derived. The ten CP's may be grouped into families as follows.
(1) Repellers. These include $B_{\pm}$ and $F$ and they are represented by the semicircle of Fig.~\ref{Fig5}, which includes $E$ if $\ga =2$. Eqs.~\eqref{14a}-~\eqref{14c} imply $\Om_{\text{DM}}'=3\Om_{\text{DM}}$ and $\Om_{\ga}'=3(2-\ga)\Om_{\ga}$. Thus, for $B_{\pm}$ both relative densities, $\Om_{\text{DM}}$ and $\Om_{\ga}$, increase at the beginning of the evolution against $\Om_{\phi}$ which starts decreasing. This applies to $F$ too with the exception that $\Om_{\ga}$ has a stationary value at the beginning of the evolution. In Fig.~\ref{Fig1} we plot three solution curves, two of which come very close to $A$ and then converge to $H$.
(2) Saddle points. If $\ga=1$, these include all the points on the line through $A$ and $E$ (representing $I$). For $0<\ga <2$, $E$ is a saddle point. $D$, $G$ and $H$ behave under some parameter restrictions as saddle points too, as shown in table~\ref{Tab1}. They all have different values of $q$. $\of$ remains undetermined. For $\ga=1$, as solution curves approach $I$, as shown in Fig.~\ref{Fig2}, all relative densities tend to become stationary as their derivatives vanish there by~\eqref{14a}-~\eqref{14c}. Thus, $I$ is a turning point. This is almost obvious from Fig.~\ref{Fig2} where the two curves, which start from $B_+$ and converge to two different values of $J$ (here $J$ is a one-parameter family of attractors which is a vertical line through $H$ and $G$ in the phase diagram), have their maximum values of $z$ ($\Om_{\ga}=z^2$) in the vicinity of $I$.
We have also noticed that, for $0\leq \ga < 2/3$, a TPA occurs as the state of the universe approaches the intermediate state defined by the saddle point $E$ (where both kinetic and potential energies are zero), which lasts longer than other TPA's occurring away from saddle points (where the kinetic energy has a minimum and the potential energy has a maximum). To our knowledge, such a conclusion was never discussed in other \P s with interactions.
(3) Attractors. $J$ and $G$ form a set of attractors for $\ga\leq 1$ and $\la^2 >3\ga$. $J$ is a one-parameter family of new attractors represented by a vertical line in the phase diagram which extends from the point $H=(\sqrt{3/2}/\la,\,\sqrt{3/2}/\la,\,0)$ to the point $G=(\sqrt{3/2}/\la,\,\sqrt{3/2}/\la,\,\sqrt{1-(3/\la^2)})$. For $\ga>1$, we have the potential-kinetic-DM scaling solution, $H$, which is a new attractor where the end-behavior of the universe's evolution does not depend on the barotropic fluid equation of state. To our knowledge, this point was never discussed in other \P s with interactions. $G$, the potential-kinetic-matter scaling solution, is stable for $\ga \leq 1$, but the universe approaching this late-time state undergoes a decelerated expansion, as it should, only if $2/3<\ga\leq 1$.
It is straightforward to see that the CP's correspond to power-law solutions for the scale factor $a(t)$, as is the case with the CP's of the 2-fluid problem~\cite{p1,p2}. From~\eqref{10a} we obtain $a(t)\propto t^m$ with $m=2/(6x_c^2+3\ga z_c^2+3w_c^2)$, which reduces to the 2-fluid expression $m=2/(6x_c^2+3\ga z_c^2)$~\cite{p2} when $w_c=0$.
For the case $\ga =1$, it is interesting to give a qualitative description of the solution curves which lie on the ellipsoid~\eqref{13}. For $\ga =1$, Eq.~\eqref{11} implies $(r^2)'=3(r^2-1)(x^2-y^2)$. Thus, curves with higher kinetic energy densities ($x^2>y^2$) move upward on the ellipsoid, in the direction of decreasing $r$, i.e., decreasing $\Om_{\phi}$ and increasing $\Om_{\ga}=L^2\Om_{\text{DM}}$, and those with lower kinetic energy densities ($x^2<y^2$) move downward in the direction of increasing $\Om_{\phi}$ and decreasing $\Om_{\ga}=L^2\Om_{\text{DM}}$. The only critical point that lies on the ellipsoid is the point $J_{\text{ellipsoid}}=(\sqrt{3/2}/\la,\,\sqrt{3/2}/\la,\,\ell \sqrt{1-3/\la^2})$, which also lies on the line $J$ through the points $H$ and $G$, with $w_c=\sqrt{1-\ell^2}\sqrt{1-3/\la^2}$ where $0<\ell<1$ is still a free parameter. $J_{\text{ellipsoid}}$ lies on the segment of the ellipse that joins the points $(1/\sqrt{2},\,1/\sqrt{2},\,0)$ and $(0,\,0,\,\ell)$. All solution curves end up, directly or spiraling, at $J_{\text{ellipsoid}}$. Thus, for $\la^2>24/7$, since $J$ is a stable spiral, the three relative densities, ($\Om_{\phi},\,\Om_{\ga},\,\Om_{\text{DM}}$) undergo oscillations around their average values, ($3/\la^2,\,\ell^2(1-3/\la^2),\,(1-\ell^2)(1-3/\la^2)$), respectively.
This cosmological model of three fluids, consisting of a barotropic fluid with an equation-of-state parameter $\gamma-1$, a pressureless DM fluid, plus a scalar field $\phi$ coupled to exponential potential $V=V_0\exp{(-\kappa\lambda\phi)}$, offers more possibilities for alleviating the coincidence problem: The late-time state is a decelerated expansion if $\ga >2/3$, $\of\leq 0$, and the late-time relative densities are constant (but depend on $\la$) or arbitrary with their values determined through observations only. The model fits well the three physical scenarios: $\ga=4/3$, $\ga=1$ and $\ga<2/3$ as discussed in Sect.~\ref{sec5}.
High-speed flows (e.g. hypersonic flows \cite{Anderson}) and space plasmas \cite{Poedts} are typically characterized by strong shocks, shock/shock and/or shock/diffusion-layer interactions. The numerical simulation of such flow problems may require extremely fine meshes over narrow regions of the physical domain in order to resolve the steep gradients occurring in the flow field. The high-gradient regions are not known to the analyst a priori. Thus, a-posteriori Adaptive Mesh Refinement (AMR) techniques represent a quite effective and established procedure to better capture the relevant flow features and to improve the overall quality of the numerical results. In particular, AMR allows for aligning grid cells with flow discontinuities (e.g. shocks, contact surfaces) in hypersonic flows \cite{Kleb2007} and for tackling the large disparity of scales (ranging from mega-meters to the ion and electron scales) within the same computational domain for space weather simulations \cite{Muller2011}, respectively, at the price of an increased algorithmic complexity. AMR is driven by physics-based sensors and can involve h-refinement and/or r-refinement.
\begin{itemize}
\item \textbf{h-refinement}\\
The method consists of locally increasing the mesh resolution by adding or removing points, for instance via recursive cell subdivision or local re-meshing \cite{r-h-refinement}. This technique is relatively complex to implement, especially on unstructured grids, and deeply affects the parallelization, requiring load-balancing methods, e.g. the Dynamic Domain Decomposition \cite{Masaharu2013}, to keep good performance and equidistribute the workload among the involved processors.
\item \textbf{r-refinement}\\
The r-refinement consists of repositioning the mesh points while keeping their number and connectivity frozen. This method is much more easily parallelizable than h-refinement and is therefore highly desirable in large-scale simulations, since it naturally preserves the load balancing among processes \cite{whyr,mario}. While h-refinement is often used in hypersonic flow and astrophysical plasma applications, r-refinement is much less consolidated. This is likely due to two main reasons:
\begin{enumerate}
\item Most hypersonic flow codes use cartesian meshes with high aspect ratio to improve the heat flux prediction and to reduce spurious entropy \cite{Kleb2007}, while r-refinement performs best on unstructured meshes (with triangles in 2D and tetrahedral in 3D).
\item State-of-the-art r-refinement typically relies upon the solution of pseudo-elastic systems (associated to the given mesh) \cite{L}, requiring the use of efficient Linear System Solvers (LSS) and increasing the overall complexity of the method.
\end{enumerate}
\end{itemize}
Fig.\ref{fig:AMR} shows a comparison between the two approaches applied on a simple Cartesian grid.
\begin{figure}[H]
\centering
\includegraphics[width=.5\textwidth]{Fig1}
\caption{Initial mesh (left), after h-refinement (middle), after r-refinement (right).}
\label{fig:AMR}
\end{figure}
In this work, we developed a novel, robust and efficient r-refinement algorithm in which the local physical characteristics are the main driver of the adaptation method. The resulting algorithm has been implemented into the COOLFluiD platform \cite{Kimpe,COOLFluiDAiaa}, a world-class open source framework for multi-physics modeling and simulations, particularly of hypersonic flows \cite{GaricanoHF,PanesiTCNEQ}, radiation \cite{DuarteMC}, laboratory \cite{ZhangLabo} and space plasmas \cite{lagunatwofluid,laguna2017effect,maneva2017multi,ALVAREZLAGUNA,lani2014gpu}. The selection of different monitor variables can help resolve different features in the final solution, according to the needs of the modeler (e.g. density or pressure). The developed AMR algorithm works on triangular, quadrilateral and tetrahedral cells, is fully parallel, implemented as a standalone module and totally physics-independent, letting the user decide which physical quantity to use as a monitor for driving the adaptation according to the application.
After giving an overview of state-of-the-art r-refinement techniques in Sec.\ref{sec::stateoftheart}, a high-level description of the mesh adaptation algorithm is given in Sec.\ref{sec:prob statement}.
Details about the definition of the network of fictitious springs upon which the algorithm relies and the corresponding stiffness computations are given in Sec.\ref{sec:math}. Numerical results are presented in Sec.\ref{sec:results}, showing the good performance of the developed method on a variety of application scenarios. Finally, Sec.\ref{sec:MQI} and Sec.\ref{sec:RSI} propose and demonstrate novel mesh quality indicator and refinement stop indicator concepts, respectively.
\section{State-of-the-art r-refinement}
\label{sec::stateoftheart}
R-refinement (a.k.a. mesh fitting) techniques are usually developed as error- or geometry-based. Blom \cite{L} investigates the linear spring analogy, first introduced by Batina \cite{Batina}, in which fictitious springs are attached to the grid edges with a stiffness chosen to be inversely proportional to the length of the supporting edge. He showed that the linear spring analogy frequently produces negative cell volumes and becomes unreliable when the mesh points undergo large displacements. Farhat \cite{T,farhat3D} proposes the torsional spring analogy to upgrade the linear spring analogy concept and to mitigate the appearance of invalid triangulations by attaching a torsional stiffness to each mesh vertex, in order to counterbalance the change of the angle at the vertex. This approach appears to be robust but complex, especially in 3D AMR simulations. A simpler model is proposed by Zeng and Ethier \cite{ST}, i.e. the semi-torsional spring analogy for triangular and tetrahedral meshes, where the simplicity of the linear spring implementation is preserved and corrected by a factor reflecting the local geometrical properties of the triangular element. Finally, for 3D test cases, Markou \cite{OST} proposed the ortho-semi-torsional spring analogy, forcing the validity of the tetrahedral element by preventing the corner vertex from crossing the opposite face. Detailed reviews of multiple mesh deformation methods, their advantages, disadvantages, and computational complexity can be found in \cite{joliT}.
\section{Problem statement}
\label{sec:prob statement}
Let $n$ $\in$ $\mathbb{N}$ be the number of the nodes in a mesh $\mathcal{M}$ and let $\textbf{P}=\{\textbf{P}_\textbf{1}, \textbf{P}_\textbf{2}...\textbf{P}_\textbf{n}\}$ be the set of the nodes positions inside $\mathcal{M}$ \footnote{Depending on the dimensions of the problem $\textbf{P}_\textbf{i}$=\{$x_i$; $y_i$\} or $\textbf{P}_\textbf{i}$=\{$x_i$; $y_i$; $z_i$\}}.\\
Let $\textbf{L}$ be the incidence matrix defined as in \cite{firasMS}:
\label{eq:Lij}
$$
L_{ij}= \left\{
\begin{array}{ll}
1, \mbox { ~~~~ if nodes \textit{i} and \textit{j} are edge-connected}\\
0, \mbox { ~~~~ otherwise.}
\end{array}
\right.
$$\\
We want to equidistribute the mesh nodes according to a positive scalar function $W = W(x)$ to achieve an optimal mesh \cite{mario}. For the 1D case \cite{EulerLagrange}, between node positions $x_i$ and $x_{i+1}$ we have:
\begin{equation}
\int_{x_{i}}^{x_{i+1}} W(x) dx = \text{constant}.
\end{equation}
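As a concrete 1D illustration (a minimal sketch of ours, not taken from any reference implementation; the function name is our own), the equidistribution condition can be enforced by inverting the cumulative integral of $W$: node $i$ is placed where the cumulative weight reaches the fraction $i/(n-1)$ of its total, $i=0,\dots,n-1$.
\begin{verbatim}
import numpy as np

def equidistribute_1d(W, a, b, n_nodes, n_quad=2001):
    # Place n_nodes points in [a, b] so that the integral of W between
    # consecutive nodes is constant (1D equidistribution principle).
    s = np.linspace(a, b, n_quad)
    dcum = 0.5*(W(s[1:]) + W(s[:-1]))*np.diff(s)    # trapezoidal increments
    cum = np.concatenate(([0.0], np.cumsum(dcum)))  # cumulative weight
    targets = np.linspace(0.0, cum[-1], n_nodes)    # equal shares of the total
    return np.interp(targets, cum, s)               # invert the cumulative map

# Example: a weight peaked at x = 0.5 clusters the nodes around that location.
x = equidistribute_1d(lambda t: 1.0 + 50.0*np.exp(-200.0*(t - 0.5)**2),
                      0.0, 1.0, 21)
\end{verbatim}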
For the multidimensional case, let \{$\textbf{P}_\textbf{i}$,$\textbf{P}_\textbf{j}$\} be a set of two node positions such that $L_{ij}=1$, and let $\textbf{r}(s)$ be the edge parametrization obeying Eq.\ref{eq:r}:
\begin{equation}
\label{eq:r}
\textbf{r}(s)=\textbf{P}_\textbf{i}+s(\textbf{P}_\textbf{j}-\textbf{P}_\textbf{i}),
\end{equation}
where $s \in [0,1]$.\\
Then, in order to equidistribute the mesh nodes, the line integral $I$, expressed in Eq.\ref{eq:LineIntegral}, must be constant:
\begin{equation}
\label{eq:LineIntegral}
I=\int_0^1 W(\textbf{r}(s))\cdot r'(s) ds = \text{constant}
\end{equation}
Eq.\ref{eq:LineIntegral} follows from the Euler-Lagrange equation associated with the minimization of the energy, which reads:
\begin{equation}
\label{eq:energy}
E_{ij}=L_{ij} \int_0^1 W(\textbf{r}(s)) (\textbf{P}_\textbf{j}-\textbf{P}_\textbf{i})^2 ds,
\end{equation}
where the incidence matrix $\textbf{L}$ is artificially added to ensure that only edge-connected nodes contribute to the energy function $E$.
\begin{proof}
The Euler-Lagrange equation \cite{EulerLagrange} may be written as:
\begin{equation}
\left( \frac{\partial}{\partial \textbf{r}} - \frac{d}{ds} \left( \frac{\partial}{\partial \textbf{r}'} \right) \right)E =0.
\end{equation}
Using Eq.\ref{eq:r}, we obtain $\textbf{r}' = (\textbf{P}_\textbf{j}-\textbf{P}_\textbf{i})$. Hence, the energy equation may be re-written as:
\begin{equation}
\label{eq:energy1}
E_{ij}=L_{ij} \int_0^1 W(\textbf{r}(s)) (\textbf{r}')^2 ds,
\end{equation}
and applying the chain rule:
\begin{align}
\begin{split}
\frac{\partial E}{\partial \textbf{r}} &= \frac{\partial E}{\partial s} \frac{\partial s}{\partial \textbf{r} } \\
&= \frac{\partial E}{\partial s} \frac{1}{\textbf{r}'} .
\end{split}
\end{align}
Therefore, after dropping the incidence matrix, the Euler-Lagrange equation can be expressed as:
\begin{align}
\begin{split}
\frac{1}{\textbf{r}'} \frac{\partial E}{\partial s} - \frac{d}{ds} \left( \frac{\partial E}{\partial \textbf{r}'} \right) & = \frac{1}{\textbf{r}'} \frac{\partial }{\partial s} \left(\int_0^1 W(\textbf{r}(s)) (\textbf{r}')^2 ds \right) - \frac{d}{ds} \left( \frac{\partial}{\partial \textbf{r}'}\left(\int_0^1 W(\textbf{r}(s)) (\textbf{r}')^2 ds \right) \right) \\
&= \frac{\partial}{\partial s} \left(\int_0^1 W(\textbf{r}(s)) (\textbf{r}') ds \right) - \frac{d}{ds} \left(\int_0^1 2W(\textbf{r}(s)) (\textbf{r}') ds \right)\\
& = \frac{d}{d s} \left(\int_0^1 W(\textbf{r}(s)) (\textbf{r}') ds \right) - 2\frac{d}{ds} \left(\int_0^1 W(\textbf{r}(s)) (\textbf{r}') ds \right)\\
& = - \frac{d}{d s} \left(\int_0^1 W(\textbf{r}(s)) (\textbf{r}') ds \right) = 0,
\end{split}
\end{align}
hence, $ \int_0^1 W(\textbf{r}(s)) (\textbf{r}') ds$ is a constant.
\end{proof}
Since we are considering a cell-centered Finite Volume method, the weight function $W$ can be considered constant between two edge-connected nodes, such that $W = W_{ij}$. Hence, the energy equation can be simplified into:
\begin{equation}
\label{eq:itttt}
E_{ij}=L_{ij} W_{ij} (\textbf{P}_\textbf{j}-\textbf{P}_\textbf{i})^2,
\end{equation}
which is analogous to the spring potential energy equation:
\begin{equation}
\label{potentialEnergy}
V =c^{t} k |\Delta \textbf{x}|^2,
\end{equation}
where $V$ is the potential energy, $c^{t}$ a constant factor, $k$ the spring stiffness, and $|\Delta \textbf{x}|$ the displacement. Algebraically identifying each term of Eq.\ref{eq:itttt} with Eq.\ref{potentialEnergy} leads to a stiffness coefficient equal to $W_{ij}$ and an equilibrium spring length equal to zero.\\
The simplest optimization problem consists of finding the equilibrium positions of adjacent nodes in the mesh $\mathcal{M}$ based on a network of springs \cite{pedro, firasMS}:
\begin{equation}
\frac{\partial E}{\partial \textbf{P}}=0 ~~~~~~~ \& ~~~~~~~ \frac{\partial^2 E}{\partial \textbf{P}^2}>0.
\end{equation}
\subsection{Linear system assembly and solution}
The optimization of the mesh node positions is formulated through the assembly and solution of a linear system, following these main algorithmic steps:
\begin{enumerate}
\item The analytic Jacobian (the gradient of the energy with respect to the node position) is set to zero:
\begin{equation}
\frac{\partial E_{ij}}{\partial \textbf{P}_\textbf{i}}=-2L_{ij} W_{ij} (\textbf{P}_\textbf{j}-\textbf{P}_\textbf{i})=0.
\end{equation}
\item After simplifying the constant and collecting the contributions of each node, we obtain:
\begin{equation}
\sum_{j=1}^{n}L_{ij} W_{ij} (\textbf{P}_\textbf{j}-\textbf{P}_\textbf{i})=0.
\end{equation}
\item The resulting linear system can be expressed as:
\begin{equation}
\label{eq:AP=0}
\textbf{AP}=0,
\end{equation}
where
$$
A_{ij}= \left\{
\begin{array}{ll}
-L_{ij} W_{ij}, \mbox { ~~~~~~~~if } i\ne j\\
\sum_{j=1}^{n} L_{ij} W_{ij}, \mbox { ~~if } i=j.
\end{array}
\right.
$$\\
\item Solving the linear system with an iterative solver, namely the Generalized Minimal RESidual (GMRES) algorithm complemented by a parallel Additive Schwarz Preconditioner, as provided by the PETSc toolkit \cite{petsc1,petsc2,petsc3,petsc4}.
\end{enumerate}
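The following is a minimal sketch, in C++, of how the matrix $\textbf{A}$ of Eq.\ref{eq:AP=0} can be assembled in triplet (COO) form from an edge list and the corresponding weights; it is meant only to illustrate the structure of $A_{ij}$ and is not the actual PETSc-based implementation used in this work (all names are illustrative).
\begin{verbatim}
// Sketch (illustrative names): assembly of the stiffness matrix A of AP = 0
// in triplet (COO) form from an edge list and the edge weights W_ij.
#include <vector>
#include <utility>

struct Triplet { int row, col; double val; };

std::vector<Triplet> assembleStiffness(int nNodes,
                                       const std::vector<std::pair<int,int>>& edges,
                                       const std::vector<double>& weight)
{
    std::vector<Triplet> A;
    std::vector<double> diag(nNodes, 0.0);

    for (std::size_t e = 0; e < edges.size(); ++e) {
        const int i = edges[e].first;
        const int j = edges[e].second;
        const double w = weight[e];      // W_ij, clamped to be >= 0 beforehand
        A.push_back({i, j, -w});         // off-diagonal entry: -L_ij W_ij
        A.push_back({j, i, -w});
        diag[i] += w;                    // diagonal: sum of incident weights
        diag[j] += w;
    }
    for (int i = 0; i < nNodes; ++i)
        A.push_back({i, i, diag[i]});
    return A;
}
\end{verbatim}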
When the weight function $W_{ij}$ is a linear combination of the mesh node positions, the optimal solution can be found in a single step. However, in this work, the weight functions depend on both physical and geometrical variables and are thus non-linear in space. In order to mitigate the resulting nonlinear effects, we apply the following measures:
\begin{itemize}
\item The nodal positions of the mesh $\mathcal{M}$ are computed and updated every $m$ flow field iterations to limit the stiffness of the process and enable the stabilization of the flow field solution.
\item An under-relaxation factor $\omega$, which behaves analogously to a mesh velocity, is also added to the mesh adaptation solver to smooth the nodal displacement and to mitigate, for certain cases, the overlap of cells. However, since the under-relaxation factor negatively affects the convergence rate, a trade-off between the flow solver convergence and the pseudo-elastic convergence rate was sought and found at $\omega$ =$\mathcal{O}(10^{-2})$.
\item $W_{ij} \ge 0$ is imposed in order to preserve the defining property of a weight function and of a stiffness coefficient.
\end{itemize}
As a result, the nodal re-positioning obeys the following relation:
\begin{equation}
\label{eq:reposition}
\textbf{P}^{k+m}=(1-\omega)\textbf{P}^{k}+\omega \textbf{D},
\end{equation}
where \textbf{D} is the nodal displacement computed from Eq.(\ref{eq:AP=0}).
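As an illustration, a minimal sketch of the under-relaxed update of Eq.\ref{eq:reposition} is given below (2D, with illustrative types and names, not the solver's actual data structures).
\begin{verbatim}
// Sketch of the under-relaxed nodal update P^{k+m} = (1 - w) P^k + w D.
#include <vector>

struct Node { double x, y; };

void relaxPositions(std::vector<Node>& P, const std::vector<Node>& D, double omega)
{
    for (std::size_t i = 0; i < P.size(); ++i) {
        P[i].x = (1.0 - omega) * P[i].x + omega * D[i].x;
        P[i].y = (1.0 - omega) * P[i].y + omega * D[i].y;
    }
}
\end{verbatim}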
\subsection{Boundary Conditions}
Two types of boundary conditions are defined:
\begin{itemize}
\item Dirichlet (i.e. locked node), where the node position is kept constant: $P_i^m$=$P_i^0$;
\item Neumann (i.e. moving node on the boundary), where only the tangential displacement is allowed, i.e. $\frac{\partial \textbf{P}_\textbf{i} \cdot \textbf{n}_\textbf{i}}{\partial \textbf{x}}=0 $, where $\textbf{n}_\textbf{i}$ is the boundary face normal vector.
\end{itemize}
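A possible realization of this boundary treatment is sketched below under the assumption that a unit outward normal is available for each moving boundary node: Dirichlet nodes are locked, while for Neumann nodes the normal component of the displacement is removed so that only tangential motion remains (names and containers are illustrative).
\begin{verbatim}
// Sketch of the boundary treatment on the nodal displacement field (2D).
#include <vector>

struct Vec2 { double x, y; };

void applyBoundaryConditions(std::vector<Vec2>& displacement,
                             const std::vector<int>& lockedNodes,
                             const std::vector<int>& movingBoundaryNodes,
                             const std::vector<Vec2>& boundaryNormal) // unit normals
{
    for (int i : lockedNodes) {              // Dirichlet: no motion at all
        displacement[i].x = 0.0;
        displacement[i].y = 0.0;
    }
    for (std::size_t k = 0; k < movingBoundaryNodes.size(); ++k) {
        const int i = movingBoundaryNodes[k];
        const Vec2 n = boundaryNormal[k];
        const double dn = displacement[i].x * n.x + displacement[i].y * n.y;
        displacement[i].x -= dn * n.x;       // Neumann: keep tangential part only
        displacement[i].y -= dn * n.y;
    }
}
\end{verbatim}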
\section{Numerical \& Mathematical formulation of the Spring Network}
\label{sec:math}
\subsection{Linear Spring analogy}
The weight function introduced in Sec.\ref{sec:prob statement} is computed as:
\begin{equation}
\label{eq:k_lin}
W_{ij}=|U_j-U_i|,
\end{equation}
where $U_i$ is a user-defined flow field state variable related to the node $i$, e.g. density or pressure.
The absolute value ensures the positivity of the weight function and guarantees the minimization of the system's potential energy. $W_{ij}$ in Eq.\ref{eq:k_lin} is referred to as the linear stiffness coefficient between two edge-connected nodes $i$ and $j$, denoted $k_{ij}^{L}$.\\
During the simulation of extreme conditions, the mesh adaptation creates highly distorted cells due to the large node displacements and high physical gradients. Therefore, the linear spring coefficient needs to be truncated and bounded. The lower and upper bounds, referred to respectively as the minimum percentile (minPer) and the maximum percentile (maxPer), are computed via the $P^2$ algorithm \cite{p2}. This dynamic method estimates the p-percentile as the observations are generated\footnote{e.g. the median is the 0.5-percentile}. The algorithm is independent of the size of the data set since it does not store the full sample history, and therefore requires only a small, fixed amount of storage. The percentile values allow for controlling the stability and the convergence rate of the flow solver.
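A minimal sketch of the truncated linear stiffness computation is given below, assuming the lower and upper bounds have already been estimated (e.g. by the running $P^2$ percentile estimator); all names are illustrative, not the actual implementation.
\begin{verbatim}
// Sketch: linear stiffness k^L_ij = |U_j - U_i| per edge, truncated to the
// [lowerBound, upperBound] values corresponding to minPer and maxPer.
#include <vector>
#include <cmath>
#include <algorithm>
#include <utility>

std::vector<double> linearStiffness(const std::vector<std::pair<int,int>>& edges,
                                    const std::vector<double>& U,
                                    double lowerBound, double upperBound)
{
    std::vector<double> k(edges.size());
    for (std::size_t e = 0; e < edges.size(); ++e) {
        const double w = std::fabs(U[edges[e].second] - U[edges[e].first]);
        k[e] = std::clamp(w, lowerBound, upperBound);  // truncation to the bounds
    }
    return k;
}
\end{verbatim}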
\subsection{Issues related to the linear spring analogy}
A major drawback of the linear spring analogy appears when the mesh motions and deformations are of large amplitude, leading to invalid elements (e.g. negative volumes or areas, grid line crossovers) \cite{T, ST}. This is due essentially to the design of a linear spring: the stiffness coefficient $k_{ij}^{L}$ between two neighbouring nodes acts only in tension and compression along the connecting edge. Hence, when a mesh cell is experiencing an inversion or a near-inversion state, there is no geometric information about its angles, area (2D) or volume (3D), so the node can move freely, possibly producing node overlap and edge crossover. In a solid-mechanics analogy, the nodes can be regarded as articulated ball joints with no blocking moment at each node. In order to illustrate the issues related to the linear spring analogy, we consider what happens in the adapted mesh of an axisymmetric double cone test case (see Sec.\ref{sec:DC} for details on the configuration).
As shown in Fig.\ref{fig:dist}, the linear mesh refinement is not well suited to handle high-aspect-ratio meshes, leading to localized edge crossovers close to the wall, inside the boundary layer region.
\begin{figure}[H]
\centering
\includegraphics[width=.4\textwidth]{Fig2.png}
\caption{Distorted mesh -- Issues related to linear spring analogy}
\label{fig:dist}
\end{figure}
\subsection{Torsional spring analogy}
The linear spring analogy concept can be upgraded by introducing, in the dynamic mesh, a vertex-attached torsional spring in order to add angular stiffness. The torsional spring concept strongly mitigates, by means of local geometrical information, the inversion or near-inversion of the elements \cite{T}.\\
Let $\mathcal{T}_{ijk}$ denote a triangle and let $\theta_i^{ijk}$ be the angle between the edges $ij$ and $ik$ inside $\mathcal{T}_{ijk}$ (see Fig.\ref{fig:triangleTors}). The torsional spring coefficient $C_{i}^{ijk}$ attached to vertex $i$ is then expressed as:
\begin{equation}
\label{eq:C}
C_{i}^{ijk} = \frac{1}{\sin^2(\theta_i^{ijk})}.
\end{equation}
Eq.\ref{eq:C} preserves the validity of the element, i.e.
\begin{equation}
\theta_i^{ijk} \rightarrow 0 ~\text{or}~ \pi \quad \Rightarrow \quad C_{i}^{ijk} \rightarrow \infty.
\end{equation}
\begin{figure}[H]
\centering{\includegraphics[scale=0.5]{Fig3}}
\caption{Torsional spring analogy \cite{T}}
\label{fig:triangleTors}
\end{figure}
Let $N$ denote the number of mesh elements attached to the vertex $i$. The torsional spring constant of each triangle $\mathcal{T}$ connected to the vertex $i$ contributes to the overall stiffness. Therefore, the torsional spring stiffness $C_{i}$ attached to each vertex $i$ becomes:
\begin{equation}
\label{eq:Ctot}
C_{i} = \sum_{m=1}^{N}\frac{1}{\sin^2(\theta_i^{m})}.
\end{equation}
Reference \cite{ST} shows that this model is expensive in terms of memory and computational time, especially for 3D simulations. In fact, within this spring concept, the torque system resulting from the torsional springs associated with each vertex needs to be transformed into linear forces on the nodes in order to be compatible with the linear spring analogy and to contribute to the global edge stiffness. In addition, \cite{joliT} shows that the complexity of the torsional spring method, i.e. $\mathcal{O}(n_e^3+n_v^3)$, is much higher than that of the linear one, i.e. $\mathcal{O}(n_e^3)$, where $n_e$ and $n_v$ are the number of edges and vertices of the considered mesh. Hence, a simpler model is embraced and introduced in the following.
\subsection{Semi-torsional spring analogy}
\label{sec:semi}
\subsubsection{Mathematical formulation}
This model is based on adding to the existing linear spring stiffness coefficient $k^L$ a correction factor, denoted $k^{ST}$, which reflects the local geometry of the triangular mesh element.
The total stiffness of the mesh network related to each edge $ij$ will be \cite{ST}:
\begin{equation}
k_{ij}=k_{ij}^{L}+k_{ij}^{ST},
\end{equation}
and
\begin{equation}
\label{eq:kST}
k_{ij}^{ST}= \textsc{p}\sum_{m=1}^{N} \frac{1}{\sin^2(\theta_{ij}^m)},
\end{equation}
where \textsc{p} denotes a user-defined parameter, $N$ the number of elements attached to the edge $ij$, and $\theta_{ij}$ the angle facing the edge $ij$.\\
\subsubsection{Including the physics}
The mesh r-adaptive algorithms are physics-based: the flow field state variables define the linear stiffness coefficients. Therefore, the formulation of the semi-torsional stiffness must incorporate both physical and geometrical properties. Hence, the factor \textsc{p} will be a function of the local physical characteristics.
\subsubsection{2D formulation}
For the 2D case, the expression of the semi-torsional spring coefficient becomes:
\begin{equation}
k_{ij}^{ST}=\textsc{p} \left( \frac{1}{\sin^2(\theta_1)}+\frac{1}{\sin^2(\theta_2)} \right),
\end{equation}
where $\theta_1$ and $\theta_2$ are the angles defined in Fig.\ref{fig:semi}.
A simpler computation of the $k_{ij}^{ST}$ is based on the following expression:
\begin{equation}
\label{eq:kSTsimple}
k_{ij}^{ST}= \textsc{p} \left(\frac{l_{kj}^2 l_{ki}^2}{4 A_{ijk}^2}+\frac{l_{lj}^2 l_{li}^2}{4 A_{ijl}^2}\right),
\end{equation}
where $l_{ij}$ is the distance between nodes $i$ and $j$ and $A_{ijk}$ is the area of the triangular element $ijk$, computed through the cross product using the formula:
\begin{equation}
A_{ijk} = \frac{1}{2} ||\vec{ki} \times \vec{kj}||.
\end{equation}
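A minimal sketch of the contribution of one triangle to Eq.\ref{eq:kSTsimple} is given below; the total $k_{ij}^{ST}$ is obtained by summing this contribution over the two triangles sharing the edge $ij$ and multiplying by \textsc{p} (illustrative names, not the actual implementation).
\begin{verbatim}
// Sketch: contribution l_kj^2 l_ki^2 / (4 A_ijk^2) of triangle ijk to k^ST_ij,
// with the triangle area obtained from the 2D cross product ki x kj.
#include <cmath>

struct Point2 { double x, y; };

double semiTorsionalContribution(const Point2& Pi, const Point2& Pj, const Point2& Pk)
{
    const double lki2 = (Pi.x - Pk.x)*(Pi.x - Pk.x) + (Pi.y - Pk.y)*(Pi.y - Pk.y);
    const double lkj2 = (Pj.x - Pk.x)*(Pj.x - Pk.x) + (Pj.y - Pk.y)*(Pj.y - Pk.y);
    // twice the signed area of triangle ijk
    const double twoA = (Pi.x - Pk.x)*(Pj.y - Pk.y) - (Pi.y - Pk.y)*(Pj.x - Pk.x);
    const double A2   = 0.25 * twoA * twoA;        // A_ijk^2
    return (lkj2 * lki2) / (4.0 * A2);             // equals 1 / sin^2(theta_ij)
}
\end{verbatim}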
\begin{figure}[H]
\centering{\includegraphics[scale=0.6]{Fig4}}
\caption{Semi-torsional analogy: 2D triangular case \cite{ST}}
\label{fig:semi}
\end{figure}
\subsubsection{3D formulation}
The probability of creating negative cell volumes increases for 3D tetrahedral elements, since the corner vertex can easily cross the opposite face. The idea is to generalize the semi-torsional spring analogy to tetrahedral elements \cite{ST}.
The concept is based on inserting a triangle inside the tetrahedral cell, as shown in Fig.\ref{fig:STanalogy3Dtetra1}. This triangle is the starting point for computing $k^{ST}$. Eq.\ref{eq:kST} remains valid, where the angle $\theta_{ij}^m$ is the angle facing the edge, as presented in Fig.\ref{fig:STanalogy3Dtetra}:
\begin{figure}[H]
\captionsetup{justification=centering}
\centering
\begin{minipage}{.42\linewidth}
\includegraphics[width=\linewidth]{Fig5.png}
\caption{Inserted triangle \cite{joliT}}
\label{fig:STanalogy3Dtetra1}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.30\linewidth}
\includegraphics[width=\linewidth]{Fig6.png}
\caption{Facing edge angle definition \cite{joliT} }
\label{fig:STanalogy3Dtetra}
\end{minipage}
\end{figure}
Eq.\ref{eq:3Dksemi} expresses the semi-torsional spring constant within the cell $\mathcal{H}_{m}$ attached to the edge $ij$:
\begin{equation}
\label{eq:3Dksemi}
k^{ST}_{ij}=\textsc{p} \frac{d_{jp}^2 d_{ip}^2}{A_{ijp}^2}.
\end{equation}
\subsection{Ortho-semi-torsional spring analogy}
For some 3D test cases, the stiffness network provided by the semi-torsional spring coefficients is not sufficient and needs to be upgraded. A proposed solution is the ortho-semi-torsional spring analogy \cite{OST}. The stiffness of an edge $qs$ is then described as:
\begin{equation}
\label{eq:TOT}
k_{qs}^{total}=k^{OST}_{qs}+k^{ST}_{qs}+k^{L}_{qs}.
\end{equation}
The goal of this concept is to construct an additional fictitious spring, so that the mesh stiffness increases and the validity of the elements is ensured.
Let $i$ be the projection of the corner vertex $s$ on the opposite face (see Fig.\ref{fig:OSTtetra}).
The projection forms geometry-based linear springs $k_{si}=\frac{1}{d_{si}}$ and $k_{qi}=\frac{1}{d_{qi}}$, where $d_{\alpha \beta}$ denotes the distance between the points $\alpha$ and $\beta$.\\
\begin{figure}[H]
\centering{\includegraphics[scale=0.5]{Fig7}}
\caption{Ortho-semi-torsional spring analogy for 3D tetrahedral mesh \cite{OST}}
\label{fig:OSTtetra}
\end{figure}
The contribution of $k_{si}$ to the edge $qs$ is computed through the following procedure:
\begin{itemize}
\item compute $d_{tot}=d_{qs}+d_{ps}+d_{rs}$ and $\lambda_{si} = \lambda_{qi} = \frac{d_{qs}}{d_{tot}}$, the linear allocation parameter,
\item compute $k^{OST}$ according to the following relation:
\end{itemize}
\begin{equation}
\label{eq:OST}
k^{OST}=\textsc{p}_1 \left(\frac{k_{si}}{\lambda_{si}^\textsc{a}}+\frac{k_{qi}}{\lambda_{qi}^\textsc{a}}\right)^\textsc{b},
\end{equation}
where the constants \textsc{a} and \textsc{b} control the contribution of $k^{OST}$ to the global stiffness network and $\textsc{p}_1$ incorporates physical characteristics of the flow field. Choosing $\textsc{a}=\textsc{b}=1$, Eq.\ref{eq:OST} is transformed into:
\begin{equation}
\label{eq:kostFinal}
k^{OST}=\textsc{p}_1 \left(\frac{k_{si}}{\lambda_{si}}+\frac{k_{qi}}{\lambda_{qi}}\right).
\end{equation}
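A minimal sketch of the computation of $k^{OST}$ for an edge $qs$, following the two steps above with $\textsc{a}=\textsc{b}=1$, is given below; the projection point $i$ is assumed to be computed elsewhere and all names are illustrative.
\begin{verbatim}
// Sketch: ortho-semi-torsional contribution k^OST for edge qs of tetrahedron q,p,r,s,
// given the projection i of the corner vertex s onto the opposite face.
#include <cmath>

struct Point3 { double x, y, z; };

static double dist(const Point3& a, const Point3& b)
{
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

double kOST(const Point3& q, const Point3& s, const Point3& p,
            const Point3& r, const Point3& i, double p1)
{
    const double dtot   = dist(q,s) + dist(p,s) + dist(r,s);
    const double lambda = dist(q,s) / dtot;     // linear allocation parameter
    const double ksi    = 1.0 / dist(s,i);      // projection spring s-i
    const double kqi    = 1.0 / dist(q,i);      // projection spring q-i
    return p1 * (ksi / lambda + kqi / lambda);  // a = b = 1
}
\end{verbatim}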
\subsection{Connectivity information}
We compute and store the connectivity information, i.e. the identification of the edge-connected nodes, once and for all within a \texttt{std::multimap} during the setup phase of the solver, in order to save memory and computational time, since in the r-adaptive method a node's connectivity does not change.
Multimaps are containers that can associate multiple values with the same key \cite{multimap}. While providing more flexibility and potentially lower memory requirements than corresponding multi-dimensional arrays (with variable row size, as required by our problems), the major drawback of multimaps is that a binary search is needed to access entries, instead of the constant-time access that multi-dimensional arrays would provide.
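A minimal sketch of this one-time connectivity setup is given below; identifiers are illustrative and do not correspond to the actual solver classes.
\begin{verbatim}
// Sketch: each undirected edge (i,j) is stored twice so that all neighbours of a
// node can be retrieved with equal_range on the multimap key.
#include <map>
#include <utility>
#include <vector>

using Connectivity = std::multimap<int, int>;

Connectivity buildConnectivity(const std::vector<std::pair<int,int>>& edges)
{
    Connectivity conn;
    for (const auto& e : edges) {
        conn.insert({e.first,  e.second});
        conn.insert({e.second, e.first});
    }
    return conn;
}

// Usage: iterate over the neighbours j of node i.
// auto range = conn.equal_range(i);
// for (auto it = range.first; it != range.second; ++it) { int j = it->second; /*...*/ }
\end{verbatim}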
\section{Results}
\label{sec:results}
The application of the newly developed physics-based AMR to 2D and 3D cases is presented in this section for the following representative test cases:
\begin{enumerate}
\item Steady Euler 2D flow: Double Wedge channel flow, triangular mesh.
\item Steady viscous thermo-chemical non-equilibrium (TCNEQ) 2D flows:
\begin{itemize}
\item Double Cone, triangular mesh.
\item Hornung Cylinder, quadrilateral mesh.
\end{itemize}
\item Steady Euler 3D flow: Hemisphere, tetrahedral mesh.
\item Magneto Hydro-Dynamics (MHD):
\begin{itemize}
\item Unsteady Rotor, 2D triangular mesh.
\item Steady Solar Wind, 3D tetrahedral mesh.
\end{itemize}
\end{enumerate}
In this section, three tables are presented for each test case, summarizing:
\begin{enumerate}[label=(\alph*)]
\item The flow conditions (e.g. free stream, wall temperature in viscous cases);
\item The mesh characteristics and boundary conditions (BC);\label{pt:2b}
\item the main settings for the r-adaptation algorithm.
\end{enumerate}
Moreover, snapshots of the computational domains are also provided. Herein, the numbers on each boundary surface identify the corresponding BC which is applied, as listed in the tables of point \ref{pt:2b}.
\subsection{Wedge channel flow}
\label{sec:DW}
The 2D supersonic double wedge channel flow test case conditions are presented in Tab.\ref{tab:DWflowchar}, Tab.\ref{tab:DWmeshchar} and Tab.\ref{tab:DWAMR}, while the test case definition and the corresponding unstructured mesh are shown in Fig.\ref{fig:DWgeometry} and Fig.\ref{fig:DWinitmesh} respectively.
\begin{table}[H]
\centering
\caption{Double wedge -- Flow characteristics}
\label{tab:DWflowchar}
\begin{tabular}{|cccccc|}
\hline
\footnotesize{Physical Model} & \footnotesize{M} & \footnotesize{$\rho$ [-]} & \footnotesize{$\rho$u [-]} & \footnotesize{$\rho$v [-]} & \footnotesize{$\rho$E [-] } \\
\footnotesize{Perfect gas} & \footnotesize{2} & \footnotesize{1} & \footnotesize{2.36643} & \footnotesize{0} & \footnotesize{5.3}\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Double wedge -- Mesh characteristics}
\label{tab:DWmeshchar}
\begin{tabular}{|ccccccc|}
\hline
\footnotesize{Dimensions} & \footnotesize{Type} & \footnotesize{\# Elements} & \footnotesize{BC 1} & \footnotesize{BC 2} & \footnotesize{BC 3} & \footnotesize{BC 4} \\
\footnotesize{2D} & \footnotesize{Triangular} & \footnotesize{6871} & \footnotesize{Inlet} & \footnotesize{Outlet} & \footnotesize{Symmetry} & \footnotesize{no-slip wall}\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Double wedge -- r-refinement}
\label{tab:DWAMR}
\begin{tabular}{|cccccc|}
\hline
\footnotesize{
Spring Network} &\footnotesize{Monitor Variable} & \footnotesize{Process Rate} & \footnotesize{Stop AMR Iteration} & \footnotesize{minPer} & \footnotesize{maxPer} \\
\footnotesize{Linear} & \footnotesize{Density} &\footnotesize{20} & \footnotesize{7000} & \footnotesize{0.20} & \footnotesize{0.65} \\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig8.png}}
\caption{2D double wedge geometry}
\label{fig:DWgeometry}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.25]{Fig9.png}}
\caption{Double wedge -- initial mesh}
\label{fig:DWinitmesh}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.25]{Fig10.png}}
\caption{Double wedge -- final mesh}
\label{fig:DWfinalmesh}
\end{figure}
As shown in Fig.\ref{fig:DWfinalmesh}, in the final adapted solution, the oblique shock, the expansion wave and their reflections are perfectly resolved.
\subsection{Double cone}
\label{sec:DC}
The 2D axisymmetric double cone test case conditions are presented in Tab.\ref{tab:DCflowchar}, Tab.\ref{tab:DCmeshchar} and Tab.\ref{tab:DCAMR}, while the test case definition and the corresponding unstructured mesh are shown in Fig.\ref{fig:doublecone}, Fig.\ref{fig:doubleconeComputationalDomain}, Fig.\ref{fig:init1Cone} and Fig.\ref{fig:init2Cone}.
\begin{table}[H]
\centering
\caption{Double cone -- Flow characteristics}
\label{tab:DCflowchar}
\begin{tabular}{|cccccccc|}
\hline
\footnotesize{Physical Model} & \footnotesize{M}& \footnotesize{$y_{{N}_{2}}$} & \footnotesize{$\rho$ [kg/$m^3$]} & \footnotesize{u [m/s]} & \footnotesize{$T$ [K]} & \footnotesize{$T^{v}$ [K] } & \footnotesize{$T^{w}$ [K] } \\
\footnotesize{TCNEQ ($N-N_{2}$)} & \footnotesize{11.5 } & \footnotesize{1} & \footnotesize{0.001468} & \footnotesize{3849.3} & \footnotesize{268.7} & \footnotesize{3160} & \footnotesize{294.7}\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Double cone -- Mesh characteristics}
\label{tab:DCmeshchar}
\begin{tabular}{|ccccccc|}
\hline
\footnotesize{Dimensions} & \footnotesize{Type} & \footnotesize{\# Elements} & \footnotesize{BC 1} & \footnotesize{BC 4} & \footnotesize{BC 2 \& 3} & \footnotesize{BC 5} \\
\footnotesize{2D axisymmetric} & \footnotesize{Triangular} & \footnotesize{65280} & \footnotesize{Symmetry} & \footnotesize{Inlet} & \footnotesize{Iso-thermal wall} & \footnotesize{Outlet} \\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Double cone -- r-refinement}
\label{tab:DCAMR}
\begin{tabular}{|cccccc|}
\hline
\footnotesize{Spring Network} &\footnotesize{Monitor Variable} & \footnotesize{Process Rate} & \footnotesize{Stop AMR Iteration} & \footnotesize{minPer} & \footnotesize{maxPer} \\
\footnotesize{Semi-torsional} & \footnotesize{Density} &\footnotesize{10} & \footnotesize{200} & \footnotesize{0.30} & \footnotesize{0.55} \\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=5.5cm]{Fig11.png}
\caption{DC geometry - units: 'inches'}
\label{fig:doublecone}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=5.5cm]{Fig12.png}
\caption{DC - Computational Domain}
\label{fig:doubleconeComputationalDomain}
\end{minipage}
\end{figure}
The semi-torsional spring analogy is applied to the double cone test case. The parameter $\textsc{p}$ is set equal to ${k}^{L}$ in order to include the physical characteristics in the adaptation.
The global mesh stiffness between two edge-connected nodes $i$ and $j$ is therefore:
\begin{equation}
\label{eq:doubleCone}
k_{ij}^{tot}= k_{ij}^{L}\cdot (1+k_{ij}^{ST}).
\end{equation}
\begin{figure}[H]
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=5.5cm]{Fig13.png}
\caption{Initial mesh--zoom $1^{st}$ cone}
\label{fig:init1Cone}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=5.5cm]{Fig14.png}
\caption{Initial mesh--zoom $2^{nd}$ cone}
\label{fig:init2Cone}
\end{minipage}
\end{figure}
\begin{figure}[H]
\captionsetup{justification=centering}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=5.5cm]{Fig15.png}
\caption{Zoom, $2^{nd}$ cone, as appearing after 200 steps of refinement}
\label{fig:Zoom 2^{nd} cone}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\captionsetup{justification=centering}
\includegraphics[width=5.5cm]{Fig16.png}
\caption{Bow shock as appearing after 200 steps of refinement}
\label{fig:bow}
\end{minipage}
\end{figure}
\begin{figure}[H]
\captionsetup{justification=centering}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=5.5cm]{Fig17.png}
\caption{SWBLI as appearing after 200 steps of refinement}
\label{fig:BL interaction}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=5.5cm,height=5cm]{Fig18.png}
\caption{Schematic of the double cone flow field \cite{phd:lani08}}
\label{fig:coneS}
\end{minipage}
\end{figure}
Fig.\ref{fig:BL interaction} shows the shock wave boundary layer interactions (SWBLI) occurring near the junction between the first and second cones. The shock structure highlighted by the mesh refinement closely resembles the qualitative solution presented in Fig.\ref{fig:coneS}.
\subsection{Hornung}
\label{sec:HC}
The 2D semi-cylinder Hornung test case conditions are presented in Tab.\ref{tab:Hornungflowchar}, Tab.\ref{tab:Hornungmeshchar} and Tab.\ref{tab:HornungAMR}, while the test case definition is shown in Fig.\ref{fig:geomHornung}.
\begin{table}[H]
\centering
\caption{Hornung -- Flow characteristics}
\label{tab:Hornungflowchar}
\begin{tabular}{|ccccccc|}
\hline
\footnotesize{Physical Model} &\footnotesize{M} & $\footnotesize{\rho}_{\footnotesize{{N}}}$ \footnotesize{[kg/$m^3$]} & $\footnotesize{\rho}_{\footnotesize{{{N}_{2}}}}$ \footnotesize{[kg/$m^3$]} & \footnotesize{u [m/s]} & \footnotesize{$T$ [K]} & \footnotesize{$T^{w}$ [K]} \\
\footnotesize{TCNEQ ($N-N_{2}$)} & 6&\footnotesize{0.0001952} & \footnotesize{0.004956} & \footnotesize{5590} & \footnotesize{1833} & \footnotesize{1000}\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Hornung -- Mesh characteristics}
\label{tab:Hornungmeshchar}
\begin{tabular}{|cccccc|}
\hline
\footnotesize{Dimensions} & \footnotesize{Type} & \footnotesize{\# Elements} & \footnotesize{BC 1} & \footnotesize{BC 2 \& 3} & \footnotesize{BC 4} \\
\footnotesize{2D} & \footnotesize{Quadrilateral} & \footnotesize{25000} & \footnotesize{Inlet} & \footnotesize{Outlet} & \footnotesize{Iso-thermal wall} \\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Hornung -- r-refinement}
\label{tab:HornungAMR}
\begin{tabular}{|cccccc|}
\hline
\footnotesize{Spring Network} &\footnotesize{Monitor Variable} & \footnotesize{Process Rate} & \footnotesize{Stop AMR Iteration} & \footnotesize{minPer} & \footnotesize{maxPer} \\
\footnotesize{Linear} & \footnotesize{Flow density} &\footnotesize{10} & \footnotesize{till convergence} & \footnotesize{0.30} & \footnotesize{0.55} \\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering{\includegraphics[scale=0.3]{Fig19.png}}
\caption{Semi-circle geometry}
\label{fig:geomHornung}
\end{figure}
The simulation uses the linear spring analogy.
The mesh refinement result is presented in Fig.\ref{fig:final mesh}, showing that the mesh adapts tightly to the bow shock.
The flow field pressure and density contours, presented in Fig.\ref{Pressure contours: Converged solution} and Fig.\ref{Density contours: Converged solution}, show a symmetrical solution. The refined shock, driven by the flow field density, matches the density contours properly, as shown in Fig.\ref{Final mesh and flow field density}.
\begin{figure}[H]
\captionsetup{justification=centering}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig20.png}
\caption{Hornung -- Final mesh}
\label{fig:final mesh}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm,height=8.2cm]{Fig21.png}
\caption{Final mesh and flow field density}
\label{Final mesh and flow field density}
\end{minipage}
\end{figure}
\begin{figure}[H]
\captionsetup{justification=centering}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig22.png}
\caption{Pressure contours}
\label{Pressure contours: Converged solution}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm,height=6.5cm]{Fig23.png}
\caption{Density contours}
\label{Density contours: Converged solution}
\end{minipage}
\end{figure}
\subsection{Hemisphere}
\label{sec:Hemisphere}
The 3D hemisphere test case conditions are presented in Tab.\ref{tab:HemisphereFC}, Tab.\ref{tab:Hemispheremeshchar} and Tab.\ref{tab:HemisphereAMR}, while the computational domain and a 2D section are shown in Fig.\ref{fig:hemisphereGeom} and Fig.\ref{fig:hemisphereGeom2D} respectively.
\begin{table}[H]
\centering
\caption{Hemisphere -- Flow characteristics}
\label{tab:HemisphereFC}
\begin{tabular}{|cccccccc|}
\hline
\footnotesize{Physical Model} & \footnotesize{M} &\footnotesize{P [Pa]} & \footnotesize{u [m/s]} & \footnotesize{v [m/s]} & \footnotesize{w [m/s]} &\footnotesize{T [K]} & \footnotesize{$\rho$} \footnotesize{[kg/$m^3$]} \\
\footnotesize{Perfect gas} & 10 &\footnotesize{1000} & \footnotesize{3413.8} & \footnotesize{0} & \footnotesize{0} & \footnotesize{290} & \footnotesize{0.0120129} \\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Hemisphere -- Mesh characteristics}
\label{tab:Hemispheremeshchar}
\begin{tabular}{|cccccc|}
\hline
\footnotesize{Dimensions} & \footnotesize{Type} & \footnotesize{\# Elements} & \footnotesize{BC 1 .. 5} & \footnotesize{BC 6} & \footnotesize{BC 7} \\
\footnotesize{3D} & \footnotesize{Tetrahedral} & \footnotesize{190485} & \footnotesize{Inlet} & \footnotesize{Outlet} & \footnotesize{no-slip wall} \\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Hemisphere -- r-refinement}
\label{tab:HemisphereAMR}
\begin{tabular}{|cccccc|}
\hline
\footnotesize{Spring Network} &\footnotesize{Monitor Variable} & \footnotesize{Process Rate} & \footnotesize{Stop AMR Iteration} & \footnotesize{minPer} & \footnotesize{maxPer} \\
\footnotesize{ortho-semi-torsional} & \footnotesize{Pressure} &\footnotesize{20} & \footnotesize{300} & \footnotesize{0.30} & \footnotesize{0.55} \\
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{Fig24.png}
\caption{Hemisphere geometry}
\label{fig:hemisphereGeom}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.45\linewidth}
\includegraphics[width=0.55\linewidth]{Fig25.png}
\caption{2D section}
\label{fig:hemisphereGeom2D}
\end{minipage}
\end{figure}
The ortho-semi-torsional spring analogy, coupled with the linear and semi-torsional spring analogies, is used for this test case. The global mesh stiffness obeys Eq.\ref{eq:TOT}, where the ortho-semi-torsional term of Eq.\ref{eq:kostFinal} is transformed into:
\begin{equation}
k^{OST}_{qs}= \frac{k^{L}_{qs}}{2}\left(\frac{k_{si}}{\lambda_{si}}+\frac{k_{qi}}{\lambda_{qi}}\right),
\end{equation}
while the semi-torsional term of Eq.\ref{eq:3Dksemi} is transformed into:
\begin{equation}
k^{ST}_{qs}=k^{L}_{qs} \frac{d_{ql}^2 d_{sl}^2}{A_{qsl}^2},
\end{equation}
where $l$ has the same geometrical meaning as the point $p$ in Eq.\ref{eq:3Dksemi}.
The mesh is adequately refined around the shock: the mesh node density increases around the zone of pressure variation.
\begin{figure}[H]
\centering{\includegraphics[scale=0.35]{Fig26}}
\caption{Initial mesh: section Y=0 }
\label{fig:bow1}
\end{figure}
\begin{figure}[H]
\centering
\begin{minipage}{.45\linewidth}
\includegraphics[scale=0.35]{Fig27.png}
\caption{Final mesh: section Y= 0}
\label{fig:Mesh000}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.45\linewidth}
\includegraphics[scale=0.35]{Fig28.png}
\caption{Mesh and pressure contours}
\label{fig:pressureMesh}
\end{minipage}
\end{figure}
\begin{figure}[H]
\centering
\captionsetup{justification=centering}
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{Fig29.png}
\caption{Pressure Contours: section Y=0}
\label{fig:pressureContours0}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{Fig30.png}
\caption{Temperature contours: section Y=0}
\label{fig:temperatureContours0}
\end{minipage}
\end{figure}
\subsection{Solar wind/Earth's magnetosphere interaction}
\label{sec:SW}
This test case simulates the solar wind/Earth's magnetosphere interaction that occurred during a magnetic storm on April 6th, 2000. The inlet conditions correspond to real data recorded by NASA's Advanced Composition Explorer (ACE) satellite at the Lagrangian point L1 \cite{solarwindA}. The test case conditions (in adimensional form, as explained in \cite{solarwindA}) are presented in Tab.\ref{tab:SolarWindFC} and Tab.\ref{tab:SolarWindAMR}.
\begin{table}[H]
\centering
\caption{Solar wind -- Flow characteristics}
\label{tab:SolarWindFC}
\begin{tabular}{|ccccc|}
\hline
\footnotesize{Physical Model} & \footnotesize{$\rho$ [-]} & \footnotesize{u [-]} & \footnotesize{v [-]} & \footnotesize{w [-] } \\
\footnotesize{MHD} & \footnotesize{1.26020} & \footnotesize{-10.8434} & \footnotesize{-0.859678} & \footnotesize{0.0146937} \\
\hline
\footnotesize{$B_x$} \footnotesize{[-]} & \footnotesize{$B_y$} \footnotesize{[-]} & \footnotesize{$B_z$} \footnotesize{[-]} & \footnotesize{p [-]}&\\
\footnotesize{0.591792} & \footnotesize{-2.13282} & \footnotesize{-0.602181} & \footnotesize{0.565198 } &\\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Solar wind -- r-refinement}
\label{tab:SolarWindAMR}
\begin{tabular}{|cccccc|}
\hline
\footnotesize{Spring Network} &\footnotesize{Monitor Variable} & \footnotesize{Process Rate} & \footnotesize{Stop AMR Iteration} & \footnotesize{minPer} & \footnotesize{maxPer} \\
\footnotesize{semi-torsional} & \footnotesize{Flow density} &\footnotesize{20} & \footnotesize{1045} & \footnotesize{0.30} & \footnotesize{0.55} \\
\hline
\end{tabular}
\end{table}
The computational domain is a rectangular box containing a sphere (modelling the Earth) centered at the origin, as explicitly defined in \cite{solarwindA} and shown in Fig.\ref{fig:SWgeom}:
\begin{figure}[H]
\centering{\includegraphics[scale=0.3]{Fig32}}
\caption{Computational domain, -200$\le$x$\le$235, -50$\le$y,z$\le$50, radius of the sphere $r=2.5$}
\label{fig:SWgeom}
\end{figure}
The semi-torsional spring analogy is applied to the solar wind test case:
\begin{equation}
k^{ST}_{qs}=k^{L}_{qs} \frac{d_{ql}^2 d_{sl}^2}{A_{qsl}^2}.
\end{equation}
The initial mesh is shown in Fig.\ref{fig:fullView} (full view) and Fig.\ref{fig:zoomArroundEarth} (zoom around the Earth), while the final adapted mesh corresponding to the converged steady state solution is presented in Fig.\ref{fig:FinalMeshView} (full view) and Fig.\ref{fig:FinalMeshZoomView} (zoom around the Earth). The reference solution for this case was computed on a mesh with $2773426$ tetrahedral elements (see Fig.\ref{fig:SWadapted}), while this work shows promising results (at least qualitatively) even for this kind of complex application using only $197060$ tetrahedral elements.
\begin{figure}[H]
\centering
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{Fig33}
\caption{Initial mesh, section Y=0}
\label{fig:fullView}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{Fig34}
\caption{Final mesh, section Y=0}
\label{fig:FinalMeshView}
\end{minipage}
\end{figure}
\begin{figure}[H]
\centering
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{Fig35}
\caption{Initial mesh-zoom, section Y=0}
\label{fig:zoomArroundEarth}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{Fig36}
\caption{Final mesh- zoom, section Y=0}
\label{fig:FinalMeshZoomView}
\end{minipage}
\end{figure}
\begin{figure}[H]
\captionsetup{justification=centering}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig37}
\caption{Mesh and density, section Y=0}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig38}
\caption{Density contours, section Y=0}
\end{minipage}
\end{figure}
The main features of the plasma field in the Earth's magnetosphere are detected by the r-adaptation, as compared to the sketch in Fig.\ref{fig:solarwindt}. In particular, the bow shock and the magnetopause are well resolved, as shown in Fig.\ref{fig:SW3D}.
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig39}}
\caption{General flow features of the solar wind/Earth's magnetosphere interaction \cite{solarwindtheory}}
\label{fig:solarwindt}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.5]{Fig40}}
\caption{Final mesh, section Y=0}
\label{fig:SW3D}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.55]{Fig41}}
\caption{Adapted mesh from \cite{solarwindA}, section Y=0 }
\label{fig:SWadapted}
\end{figure}
\subsection{MHD Rotor}
The test case studies the evolution of strong torsional Alfv\'en waves in ideal MHD. More details about this case can be found in \cite{ALVAREZLAGUNA}.
The ideal 2D MHD Rotor test case conditions are presented in Tab.\ref{tab:RotorFC}, Tab.\ref{tab:RotorMesh} and Tab.\ref{tab:RotorAMR}, while the corresponding unstructured mesh is shown in Fig.\ref{fig:RotorCD}.
\begin{table}[H]
\centering
\caption{Rotor -- Flow characteristics at $t=0$}
\label{tab:RotorFC}
\begin{tabular}{|ccccccc|}
\hline
\footnotesize{Physical Model} & \footnotesize{$\textbf{B}$} & \footnotesize{$\textbf{E}$} & \multicolumn{3}{c|}{\footnotesize{$\rho$}} \\
\footnotesize{MHD} & \footnotesize{$(2.5/ \sqrt{4\pi}, 0, 0)$} & \footnotesize{(0, 0, $B_x$ $u_y$)} & \multicolumn{3}{c|}{\footnotesize{1+9f(r)}} \\
\hline
\footnotesize{$u_x$} & \footnotesize{$u_y$} & \footnotesize{T} & \multicolumn{3}{c|}{\footnotesize{$f(r)$}} \\
\footnotesize{-2$f(r)$y/10; r<10} & \footnotesize{2$f(r)$x/10; r<10} & \footnotesize{0.5/(1+9$f(r)$)} & \multicolumn{3}{c|}{\footnotesize{1; r<10 -- 0; r>11.5 }}\\
\footnotesize{-2$f(r)$y/r; r$\ge$10 } & \footnotesize{2$f(r)$x/r; r$\ge$10} &\footnotesize{} &\multicolumn{3}{c|}{\footnotesize{$\frac{200}{3}(11.5-r)$; 10$\le$ r $\le$ 11.5}} \\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{Rotor -- Mesh characteristics}
\label{tab:RotorMesh}
\begin{tabular}{|cccc|}
\hline
\footnotesize{Dimensions} & \footnotesize{Type} & \footnotesize{\# Elements} & \footnotesize{BC 1 .. 4} \\
\footnotesize{2D} & \footnotesize{Triangle} & \footnotesize{20000} & \footnotesize{Outlet} \\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{ Rotor -- r-refinement}
\label{tab:RotorAMR}
\begin{tabular}{|cccccc|}
\hline
\footnotesize{Spring Network} &\footnotesize{Monitor Variable} & \footnotesize{Process Rate} & \footnotesize{Stop AMR time} & \footnotesize{minPer} & \footnotesize{maxPer} \\
\footnotesize{Linear} & \footnotesize{Flow density} &\footnotesize{1} & \footnotesize{t=0.2962} & \footnotesize{0.30} & \footnotesize{0.55} \\
\hline
\end{tabular}
\end{table}
The initial mesh in Fig.\ref{fig:RotorCD} is unstructured and obtained by splitting a uniform structured mesh. The refined mesh in Fig.\ref{fig:AdaptedRotor} appears to follow closely the main flow features, as highlighted in Fig.\ref{fig:MHDRotorAMRdensity} (density) and Fig.\ref{fig:MHDRotorAMRT} (temperature).
\begin{figure}[H]
\captionsetup{justification=centering}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig42.png}
\caption{Computational Domain -- Rotor}
\label{fig:RotorCD}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig43.png}
\caption{Rotor -- Adapted Mesh}
\label{fig:AdaptedRotor}
\end{minipage}
\end{figure}
\begin{figure}[H]
\captionsetup{justification=centering}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig44.png}
\caption{Rotor -- Density field}
\label{fig:MHDRotorAMRdensity}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig45.png}
\caption{Rotor -- Temperature field }
\label{fig:MHDRotorAMRT}
\end{minipage}
\end{figure}
\section{Mesh Quality Indicator}
\label{sec:MQI}
\subsection{Motivation}
The following section presents a new method to grade an adapted mesh qualitatively. The mesh r-adaptive algorithm re-positions grid nodes according to a chosen monitor flow field variable. For instance, if one monitors the density of the flow field, the nodes migrate and the local mesh node concentration increases at discontinuities. Hence, for an adequate refinement, the cells around a discontinuity turn out to be highly distorted. The Author's key idea is to define a cell distortion criterion and to couple it to the local physical properties of the monitored flow field variable. Let $\mathcal{D}_{init}$ be the measure of cell distortion on the initial, un-modified mesh and $\mathcal{D}_{final}$ the corresponding measure at the end of the refinement, both extrapolated to nodal values.\\
In order to reflect the physics of the problem, the function $f(\mathcal{D}_{init},\mathcal{D}_{final})$ is multiplied by the ratio of the monitored flow state variable. Let $\mathcal{S}_{init}$ be the initial monitored nodal state and $\mathcal{S}_{final}$ its value at the end of the refinement.\\
The proposed mesh quality indicator ($\mathcal{MQI}$) is expressed as:
\begin{equation}
\label{eq:quality}
\mathcal{MQI} = f(\mathcal{D}_{init},\mathcal{D}_{final})~ \frac{\mathcal{S}_{final}}{\mathcal{S}_{init}}.
\end{equation}
\subsection{Analysis of MQI}
\label{point:analysis}
\begin{itemize}
\item For the free-stream flow, the ratio $\frac{\mathcal{S}_{final}}{\mathcal{S}_{init}}$ should be equal to 1. Since the AMR is physics-driven, the mesh nodes within the free stream do not move. Therefore, $f(\mathcal{D}_{init},\mathcal{D}_{final})=\mathcal{C}$, where $\mathcal{C}$ is a constant, yielding $\mathcal{MQI}=\mathcal{C}$.
\item If both the ratio $\frac{\mathcal{S}_{final}}{\mathcal{S}_{init}}$ and the distortion measure $f(\mathcal{D}_{init},\mathcal{D}_{final})$ increase (resp. decrease), then $\mathcal{MQI} \gg \mathcal{C}$ (resp. $\mathcal{MQI} \ll \mathcal{C}$). As a result, the mesh fitting is inadequate.
\item If the ratio $\frac{\mathcal{S}_{final}}{\mathcal{S}_{init}}$ increases, local refinement is needed. Therefore, the function $f(\mathcal{D}_{init},\mathcal{D}_{final})$ must embody the distortion criterion and reflect the increase of the local mesh node density.
\end{itemize}
\subsection{MQI applied to a 2D mesh}
\subsubsection{Triangular mesh}
The cell distortion criterion $\mathcal{D}$ is defined as the radius of the circle inscribed in the triangular mesh element, denoted $\mathcal{R}^{in}$. The in-circle radius gives direct information about the triangle distortion, as Fig.\ref{fig:incircle} and Fig.\ref{fig:incircle2} show.
\begin{figure}[H]
\centering
\begin{minipage}{.44\linewidth}
\includegraphics[width=\linewidth]{Fig46}
\caption{Initial element}
\label{fig:incircle}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.41\linewidth}
\includegraphics[width=\linewidth]{Fig47}
\caption{Distorted element}
\label{fig:incircle2}
\end{minipage}
\end{figure}
Eq.\ref{eq:quality} is transformed into:
\begin{equation}
\label{eq:MQI_radius}
\mathcal{MQI} = \frac{\mathcal{R}_{final}^{in}}{\mathcal{R}_{init}^{in}} ~ \frac{\mathcal{S}_{final}}{\mathcal{S}_{init}}.
\end{equation}\\
\begin{itemize}
\item \textit{Discussion: Choice Of $f(\mathcal{D}_{init},\mathcal{D}_{final})$ }
\end{itemize}
First, the ratio $\frac{\mathcal{R}_{final}}{\mathcal{R}_{init}}$ is further investigated:
$$
\frac{\mathcal{R}_{final}}{\mathcal{R}_{init}} \left\{
\begin{array}{ll}
=1, \mbox { if the cell keeps the same shape; }\\
<1, \mbox { if the cell becomes narrow;}\\
>1, \mbox { if the cell becomes extended.}
\end{array}
\right.
$$
\begin{itemize}
\item \textit{Computation of the in-circle radius}
\end{itemize}
Reference \cite{Rin} expresses, for a triangle $ijk$, the in-radius $\mathcal{R}^{in}$ through Eq.\ref{eq:inRadius}:
\begin{equation}
\label{eq:inRadius}
\mathcal{R}^{in}=\frac{2 A_{ijk}}{d_{ij}+d_{ik}+d_{jk}},
\end{equation}
where $A_{ijk}$ denotes the area of the triangle $ijk$ and $d_{ij}$ denotes the distance between the vertices $i$ and $j$. The extrapolation to a nodal value is done by averaging the in-circle radii of the $N$ triangles attached to the considered vertex $i$.
\begin{equation}
\label{Extrapolation1}
\mathcal{R}^{in}_{i}=\frac{1}{N}\sum_{m=1}^{N}\frac{2 A_{ijk}^{m}}{d_{1}^{m}+d_{2}^{m}+d_{3}^{m}}.
\end{equation}
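A minimal sketch of the computation of Eq.\ref{eq:inRadius} and of the nodal averaging of Eq.\ref{Extrapolation1} is given below (illustrative types and names, not the actual implementation).
\begin{verbatim}
// Sketch: in-circle radius of a triangle and its extrapolation to a nodal value
// by averaging over the triangles attached to a vertex.
#include <cmath>
#include <vector>

struct Pt { double x, y; };

static double length(const Pt& a, const Pt& b)
{
    return std::hypot(b.x - a.x, b.y - a.y);
}

double inRadius(const Pt& Pi, const Pt& Pj, const Pt& Pk)
{
    const double area = 0.5 * std::fabs((Pj.x - Pi.x)*(Pk.y - Pi.y)
                                      - (Pj.y - Pi.y)*(Pk.x - Pi.x));
    return 2.0 * area / (length(Pi,Pj) + length(Pi,Pk) + length(Pj,Pk));
}

// Nodal value: average of the in-radii of the N triangles sharing vertex i.
double nodalInRadius(const std::vector<double>& attachedInRadii)
{
    double sum = 0.0;
    for (double r : attachedInRadii) sum += r;
    return sum / static_cast<double>(attachedInRadii.size());
}
\end{verbatim}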
\begin{itemize}
\item \textit{Results}
\end{itemize}
\underline{2D Wedge}\\
The results of computing the $\mathcal{MQI}$, defined by Eq.\ref{eq:MQI_radius}, are presented in Fig.\ref{fig:MQI_radius}.
\begin{figure}[H]
\centering{\includegraphics[scale=0.5]{Fig48}}
\caption{$\mathcal{MQI}$ applied to the 2D triangular double wedge}
\label{fig:MQI_radius}
\end{figure}
The free stream presents a value of $\mathcal{MQI}$ equal to 1. The increase of the $\mathcal{MQI}$ value after the discontinuities (red zone after the first oblique shock and yellow zone after the first reflection of the oblique shock) is explained by the increase of the ratio $\frac{\mathcal{R}_{final}}{\mathcal{R}_{init}}$. Since the nodes adjacent to a discontinuity contribute to the growth of the local grid resolution, and the r-adaptive technique neither adds nodes nor changes connectivity, the cell size next to a discontinuity increases. Further analyses are presented in Fig.\ref{fig:secWedgeT0.3} and Fig.\ref{fig:secWedgeT0.8}.\\
\begin{figure}[H]
\centering{\includegraphics[scale=0.38]{Fig49.png}}
\caption{$\mathcal{MQI}$ for double wedge triangular test case at a line section $Y=0.3[m]$}
\label{fig:secWedgeT0.3}
\end{figure}
Fig.\ref{fig:secWedgeT0.3} shows a $\mathcal{MQI}$ decrease at discontinuities (i.e $\frac{\rho_{Final}}{\rho_{Init}}$ increases at the oblique shock and its reflections). \\
Let $\mathcal{S}_\infty$ be set of nodes with $ X$ $\in$ [0, 1.18].\\
Let $\mathcal{S}_1$ be set of nodes with $X$ $\in$ [1.40, 2.1].\\
For nodes $\in$ $\mathcal{S}_\infty$, the $\mathcal{MQI}$ deviates slightly from 1. This is due to the mesh relaxation and to the equilibrium node positions reached after the refinement, whose goal is to better refine the main oblique shock. The deviation is small ($\mathcal{MQI}$ $\approx$ 1) and can be accepted, since the oblique shock is better refined and nothing of interest happens in the free stream.\\
The jumps in the density ratio reflect the existence of shocks. At those positions, the $\mathcal{MQI}$ shows a strong peak with respect to the state jump. This is explained by the fact that the cells become smaller and smaller, implying a good mesh refinement. Hence, the $\mathcal{MQI}$ peaks (e.g. peak I, peak II and peak III in Fig.\ref{fig:secWedgeT0.3} at the positions $X=0.75[m]$, $X=2.18[m]$ and $X=2.64[m]$, indicating the position of the $1^{st}$ oblique shock and its reflections) partially reflect the ability of a cell to deform and show the intensity of the aforementioned shocks.
The $\mathcal{MQI}$ overshoots with respect to the density ratio indicate cell enlargement. For example, for nodes $\in$ $\mathcal{S}_1$, the $\mathcal{MQI}$ $\ne$ 1. This overshoot was expected, since the grid nodes in $\mathcal{S}_1$ are pulled to contribute to both the main oblique shock and its first reflection.
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig50.png}}
\caption{$\mathcal{MQI}$ for double wedge triangular test case at a line section $Y=0.8[m]$}
\label{fig:secWedgeT0.8}
\end{figure}
Fig.\ref{fig:secWedgeT0.8} shows the same conclusions as Fig.\ref{fig:secWedgeT0.3} for the free stream flow, main oblique shock and the $\mathcal{MQI}$ overshoot.\\
Let $\mathcal{S}_2$ be set of nodes with $X$ $\in$ [1.6, 2.4].\\
The mesh nodes $\in$ $\mathcal{S}_2$ are subject to the expansion wave and reflection of the oblique shock interaction. Since $\frac{\rho_{Final}}{\rho_{Init}}$ $>$ $\mathcal{MQI}$ $\Rightarrow$
$\frac{R_{Final}}{R_{Init}}$ $< 1$, the refinement is applied consistently.\\
The $\mathcal{MQI}$ value at the outlet section of the double wedge mesh is greater than $\frac{\rho_{Final}}{\rho_{Init}}$. Hence, the cells are enlarged: those cells, not subject to any shock, are pulled to contribute to the refinement of the third reflection of the oblique shock.\\
\underline{Double cone}\\
Fig.\ref{fig:MQIdoublecone} and Fig.\ref{fig:MQIdoubleconezoom} show the mesh quality indicator for the double cone test case, in particular the distribution of the $\mathcal{MQI}$ around the SWBLI. The nodes located within the red zones contribute to the refinement of the adjacent shocks. Therefore, the triangles are enlarged and the radius of the in-circle increases, leading to an increase of the $\mathcal{MQI}$ value.\\
Fig.\ref{fig:MQIdoublecone} presents a light blue-turquoise zone at the inlet of the double cone due to the nodal contribution to the oblique shock as shown in Fig.\ref{fig:InletInitial} and Fig.\ref{fig:InletFinal}.
The simulation of this test case crashes when applying an AMR technique based on the density. Thanks to the $\mathcal{MQI}$, we observe a blue-turquoise zone at the level of the second cone that indicates an enlargement of the cells. Those cells are located within the boundary layer; as they become too large, they create a zone of negative pressure. Hence, in order to converge the double cone test case, one would need to add more points to the original mesh or to monitor another flow field variable.
\begin{figure}[H]
\centering
\captionsetup{justification=centering}
\begin{minipage}{.44\linewidth}
\includegraphics[width=\linewidth]{Fig51}
\caption{$\mathcal{MQI}$ applied to 2D double cone test case}
\label{fig:MQIdoublecone}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.44\linewidth}
\includegraphics[width=\linewidth]{Fig52}
\caption{$\mathcal{MQI}$ applied to 2D double cone test case--zoom}
\label{fig:MQIdoubleconezoom}
\end{minipage}
\end{figure}
\begin{figure}[H]
\centering
\begin{minipage}{.44\linewidth}
\includegraphics[width=\linewidth]{Fig53}
\caption{Initial mesh-zoom inlet}
\label{fig:InletInitial}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.44\linewidth}
\includegraphics[width=\linewidth]{Fig54}
\caption{Final mesh-zoom inlet}
\label{fig:InletFinal}
\end{minipage}
\end{figure}
\subsubsection{Quadrilateral mesh}
The definition of the cell distortion criterion $\mathcal{D}$ for 2D quadrilateral meshes is more complex than for triangular meshes. Depending on the test case, $\mathcal{D}$ is based on the aspect ratio or on the skewness of the quadrilateral element. Hence, for the Hornung test case in Sec.\ref{sec:HC}, the aspect ratio $\mathcal{AR}$ is used as distortion criterion, whereas for the quadrilateral double wedge in Sec.\ref{sec:DW} the skewness $\Theta$ of the element is adopted.
\begin{figure}[H]
\centering
\begin{minipage}{.40\linewidth}
\includegraphics[width=\linewidth]{Fig55}
\caption{Initial cell}
\label{fig:AR}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.47\linewidth}
\includegraphics[width=\linewidth]{Fig56}
\caption{Distorted cell}
\label{fig:AR2}
\end{minipage}
\end{figure}
Eq.\ref{eq:quality} is transformed into:
\begin{equation}
\label{eq:MQI_AR}
\mathcal{MQI} = \frac{\mathcal{AR}_{initial}}{\mathcal{AR}_{final}} ~ \frac{\mathcal{S}_{final}}{\mathcal{S}_{init}}.
\end{equation}
First, the ratio $\frac{\mathcal{AR}_{init}}{\mathcal{AR}_{final}}$ will be further investigated.
$$
\frac{\mathcal{AR}_{init}}{\mathcal{AR}_{final}} \left\{
\begin{array}{ll}
=1, \mbox { if the cell keeps the same shape; }\\
<1, \mbox { if the cell becomes narrow;}\\
>1, \mbox { if the cell becomes extended.}
\end{array}
\right.
$$\\
For a quadrilateral $ABDC$, the aspect ratio $\mathcal{AR}$ is determined through the following relation:
\begin{equation}
\label{eq:AR}
\mathcal{AR}=\frac{d_{AB}}{d_{AC}},
\end{equation}
where $d_{AC}$ denotes the distance between the nodes $A$ and $C$.
The extrapolation to a nodal value is done by averaging all the aspect ratio of the $N$ elements attached to the considered vertex $i$, according to:
\begin{equation}
\label{Extrapolation}
\mathcal{AR}_{i}=\frac{1}{N}\sum_{m=1}^{N}\frac{d_{AB}^{m}}{d_{AC}^{m}}.
\end{equation}
The results of $\mathcal{MQI}$ are shown in Fig.\ref{fig:AR_start}.
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig57}}
\caption{$\mathcal{MQI}$ applied to the Hornung test case-- based on the Aspect Ratio}
\label{fig:AR_start}
\end{figure}
The free-stream flow presents, as expected, a value of $\mathcal{MQI}$ $\approx$ 1. One can observe that the value of $\mathcal{MQI}$ at the two tip ends of the bow shock is very high and presents a maximum (red spots). In that region, the mesh elements present a decrease in $\mathcal{AR}$, implying an increase of the ratio $\frac{\mathcal{AR}_{init}}{\mathcal{AR}_{final}}$. Fig.\ref{fig:AR_003} shows an increase in $\mathcal{MQI}$ both at the first jump of the density and close to the outlet. The choice of the distortion criterion at section $Y=0.03[m]$ does not satisfy our first hypothesis (see the analysis of the MQI in Sec.\ref{point:analysis}); therefore, the quality assessment of the refinement is performed in two steps:
\begin{enumerate}
\item a mesh quality indicator based on the aspect ratio is adopted to estimate the mesh quality close to the stagnation line where the quadrilateral mesh elements are compressed or enlarged due to the AMR process;
\item a mesh quality indicator based on the skewness distortion criteria $\Theta$ is adopted, since, the quadrilateral mesh element at the bow shock's tips are skewed.
\end{enumerate}
Fig.\ref{fig:AR_0} shows the expected behaviour, whereas Fig.\ref{fig:AR_003} presents an increase of the $\mathcal{MQI}$ values close to the outlet instead of a steady behaviour.
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig58}}
\caption{$\mathcal{MQI}$ applied to the Hornung test case -- stagnation line}
\label{fig:AR_0}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig59}}
\caption{$\mathcal{MQI}$ applied to the Hornung test case -- section Y=0.03[m]}
\label{fig:AR_003}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig60}}
\caption{Left: Initial cell-- Right: Distorted cell}
\label{fig:skew}
\end{figure}
Eq.\ref{eq:quality} is transformed into
\begin{equation}
\mathcal{MQI}= \Delta \Theta ~ \frac{\mathcal{S}_{final}}{\mathcal{S}_{init}}, \quad \text{where}
\end{equation}
$$
\Delta \Theta \left\{
\begin{array}{ll}
=0, \mbox { ~~ if the cell keeps the same shape;}\\
\ne 0, \mbox { ~~ if the cell becomes skewed.}\\
\end{array}
\right.
$$\\
For a quadrilateral element $ABDC$, the skewness of an element is computed through the following formula \cite{skew}:
\begin{equation}
\Theta = \max\left[\frac{\alpha_{max}-\alpha_{ref}}{180^\circ-\alpha_{ref}},\frac{\alpha_{ref}-\alpha_{min}}{\alpha_{ref}} \right],
\end{equation}
where $\alpha_{ref}$ =90$^\circ$ for a quadrilateral element, $\alpha_{max}$ and $\alpha_{min}$ are respectively the maximum and minimum angle in the quadrilateral element.
The extrapolation to a nodal value is done by averaging all the element's skewness of the $N$ elements attached to the considered vertex $i$:
\begin{equation}
\Theta_i = \frac{1}{N}\sum_{m=1}^{N} \Theta^m.
\end{equation}
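A minimal sketch of the element skewness computation, given the maximum and minimum interior angles of the quadrilateral in degrees, is shown below (illustrative names, not the actual implementation).
\begin{verbatim}
// Sketch: equiangle skewness of a quadrilateral element (alpha_ref = 90 deg).
#include <algorithm>

double skewness(double alphaMaxDeg, double alphaMinDeg)
{
    const double alphaRef = 90.0;
    return std::max((alphaMaxDeg - alphaRef) / (180.0 - alphaRef),
                    (alphaRef - alphaMinDeg) / alphaRef);
}
\end{verbatim}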
\underline{Hornung}
\begin{figure}[H]
\centering{\includegraphics[scale=0.35]{Fig61}}
\caption{$\mathcal{MQI}$ applied to the Hornung test case--Skewness based}
\label{fig:skew_Hornung}
\end{figure}
Fig.\ref{fig:skew_Hornung_sec} shows the $\mathcal{MQI}$ based on $\Theta$ for the Hornung test case. In order to obtain the same constant $\mathcal{C}$ in the free stream for all the cases, an improved $\mathcal{MQI}$ based on the skewness of the quadrilateral element is proposed:
\begin{equation}
\mathcal{MQI}= 1+ \Delta \Theta ~ \frac{\mathcal{S}_{final}}{\mathcal{S}_{init}}.
\end{equation}
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig62}}
\caption{$\mathcal{MQI}$ applied to the Hornung test case $Y=0.03[m]$ }
\label{fig:skew_Hornung_sec}
\end{figure}
Fig.\ref{fig:skew_Hornung} presents a more accurate interpretation of the $\mathcal{MQI}$ with respect to the curves in Fig.\ref{fig:AR_003}. In fact, the $\mathcal{MQI}$ shows a peak at the first density jump and skewed elements in the post-shock region, indicating the alignment of the cells with the density field.\\
\underline{Double wedge quadrilateral mesh}\\
Fig.\ref{fig:wedgeQmesh} shows the final mesh refinement results based on the linear spring analogy, where the flow conditions are those of Tab.\ref{tab:DWflowchar}. The mesh quality indicator based on the element skewness is discussed below:
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig63}}
\caption{Final mesh- double wedge quadrilateral mesh}
\label{fig:wedgeQmesh}
\end{figure}
The free stream flow provides $\mathcal{MQI}$=$0$. Fig.\ref{fig:wedgeQ03} and Fig.\ref{fig:wedgeQ08} show peaks at the density discontinuities. The increase of $\mathcal{MQI}$ for nodes with $X \in$ [2, 3] at the section $Y=0.8[m]$ can be explained by the fact that these nodes contribute to the refinement of the expansion wave and of the second oblique shock reflection near the outlet boundary.
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig64}}
\caption{$\mathcal{MQI}$ applied to the double wedge test case}
\label{fig:wedgeQMQI}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.43]{Fig65}}
\caption{$\mathcal{MQI}$ applied to the double wedge test case-- $Y=0.3[m]$}
\label{fig:wedgeQ03}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.5]{Fig66}}
\caption{$\mathcal{MQI}$ applied to the double wedge test case-- $Y=0.8[m]$}
\label{fig:wedgeQ08}
\end{figure}
\subsection{MQI applied to 3D meshes}
\subsubsection{3D tetrahedral}
The concept of the circle inscribed in a triangle is extended to the 3D tetrahedral element. The cell distortion criterion is defined as the radius of the sphere inscribed in the tetrahedron, denoted $\mathcal{R}^{S}$.\\
Eq.\ref{eq:quality} is transformed into:
\begin{equation}
\mathcal{MQI} = \frac{\mathcal{R}^{S}_{final}}{\mathcal{R}^{S}_{init}} ~ \frac{\mathcal{S}_{final}}{\mathcal{S}_{init}}.
\end{equation}
The ratio $\frac{\mathcal{R}^{S}_{final}}{\mathcal{R}^{S}_{init}}$ is investigated in the following:
$$
\frac{\mathcal{R}^{S}_{final}}{\mathcal{R}^{S}_{init}} \left\{
\begin{array}{ll}
=1, \mbox { if the cell keeps the same shape; }\\
<1, \mbox { if the cell becomes smaller;}\\
>1, \mbox { if the cell is enlarged.}
\end{array}
\right.
$$\\
For a tetrahedral element $sijk$, the in-radius $\mathcal{R}^{S}$ is determined through the following Eq.\ref{eq:SRadius}, as expressed in \cite{Rin}:
\begin{equation}
\label{eq:SRadius}
\mathcal{R}^{S}=\frac{3 V_{sijk}}{A_{ijk}+A_{sij}+A_{sik}+A_{sjk}},
\end{equation}
where $V_{sijk}$ denotes the volume of the tetrahedron $sijk$ and $A_{ijk}$ denotes the area of the triangular face $ijk$. The extrapolation to a nodal value is done by averaging the in-sphere radii of the $N$ tetrahedra attached to the considered vertex $i$:
\begin{equation}
\label{Extrapolation2}
\mathcal{R}^{S}_{i}=\frac{1}{N}\sum_{m=1}^{N}\frac{3 V_{sijk}}{A_{ijk}+A_{sij}+A_{sik}+A_{sjk}}.
\end{equation}
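The in-sphere radius of Eq.\ref{eq:SRadius} and its nodal average can be sketched as follows (an illustration only; the vertex naming $s,i,j,k$ follows the text, the remaining names and the test geometry are assumptions of the example).
\begin{verbatim}
import numpy as np

def tri_area(p, q, r):
    """Area of the triangle pqr in 3D."""
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def tet_inradius(s, i, j, k):
    """In-sphere radius R^S = 3 V_sijk / (A_ijk + A_sij + A_sik + A_sjk)."""
    volume = abs(np.dot(np.cross(i - s, j - s), k - s)) / 6.0
    faces = (tri_area(i, j, k) + tri_area(s, i, j)
             + tri_area(s, i, k) + tri_area(s, j, k))
    return 3.0 * volume / faces

def nodal_inradius(tets_at_vertex):
    """Average the in-radius of the N tetrahedra attached to a vertex."""
    return np.mean([tet_inradius(*t) for t in tets_at_vertex])

# Regular tetrahedron with edge a: R^S = a / (2 sqrt(6)) ~ 0.204 a.
a = 1.0
s = np.array([0.0, 0.0, 0.0])
i = np.array([a, 0.0, 0.0])
j = np.array([a / 2, a * np.sqrt(3) / 2, 0.0])
k = np.array([a / 2, a * np.sqrt(3) / 6, a * np.sqrt(6) / 3])
print(tet_inradius(s, i, j, k), a / (2 * np.sqrt(6)))
\end{verbatim}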
\underline{Hemisphere}\\
The $\mathcal{MQI}$ values are shown in Fig.\ref{fig:MQI000}:
\begin{figure}[H]
\centering{\includegraphics[scale=0.45]{Fig67}}
\caption{$\mathcal{MQI}$ applied to Hemisphere test case section $Y=0$}
\label{fig:MQI000}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.45]{Fig68}}
\caption{Line section $Y=0, Z=0$}
\label{fig:MQI_Hemisp}
\end{figure}
\begin{figure}[H]
\centering{\includegraphics[scale=0.4]{Fig69}}
\caption{Line section $Y=0, Z=0$ - zoom}
\label{fig:MQI_Hemip_zoom}
\end{figure}
The $\mathcal{MQI}$ free stream value is $\approx 1$ (see Fig.\ref{fig:MQI_Hemisp}). In fact, the cells in the free stream are enlarged to contribute to the refinement of the bow shock. From Fig.\ref{fig:MQI_Hemisp}, one can observe that the cells close to discontinuities become smaller to increase the local mesh node density.
\subsection{Advantages and Drawbacks of MQI}
The $\mathcal{MQI}$ presents several advantages, to name only a few:
\begin{itemize}
\item Combines both local physical and geometrical properties;
\item Applicable to 2D and 3D;
\item $\mathcal{MQI}$=1 at free stream flow;
\item Allows for grading an adapted mesh and can be used to capture shocks;
\item The $\mathcal{MQI}$ peaks indicate the shock intensity and reflect, in general, the mesh cell distortion.
\end{itemize}
Also, the $\mathcal{MQI}$ presents some drawbacks:
\begin{itemize}
\item Mesh-type dependent;
\item While the solution states and distortion criteria are cell-based for the Finite Volume solver, the $\mathcal{MQI}$ is node-based and the extrapolation can introduce some errors.
\end{itemize}
\section{Refinement Stop Indicator}
\label{sec:RSI}
This section presents a new method to help decide, qualitatively, whether or not to terminate the mesh refinement process.
In r-adaptive steady-state simulations, the residuals (i.e. the $L_{2}$ norms of some monitored quantities used to determine the iterative convergence of the flow solver) are affected by fluctuations due to nodal re-positioning, implying an increase in computational time and memory cost. At each mesh fitting step, the flow solver sees a new mesh, and therefore the residuals increase when the mesh fitting is applied.
The key idea is to define a user-defined tolerance on the mesh movement, denoted $\epsilon$, and a measure of the relative mesh movement, denoted $\delta$. The Refinement Stop Indicator, denoted $\mathcal{RSI}$, is then a function of $\delta$.
The $\epsilon$ and $\delta$ definitions yield:
$$
\mathcal{RSI}= f(\delta) \left\{
\begin{array}{ll}
> \epsilon, \mbox { Continue the mesh refinement $\Rightarrow$ the mesh is not stable;}\\
\le \epsilon, \mbox { The mesh can be considered stable $\Rightarrow$ Stop the mesh refinement.}\\
\end{array}
\right.
$$\\
Many challenges arise when defining the $\mathcal{RSI}$ and $\delta$ due to the complex nature of the problem: number of moving nodes, relative displacement magnitude, etc.\\
The following empirical formula is proposed:
\begin{equation}
\label{RSI complex}
\mathcal{RSI}=\frac{1}{N} \bigg(\sum_{i=1}^{m_{up}} A_i^{x_i} \sqrt{\delta_i^{up}} + \sum_{i=1}^{m_{down}} B_i \delta_i^{down}\bigg),
\end{equation}
where:
\begin{itemize}
\item $N$ is the number of mesh nodes,
\item $\delta_i^{up}$ the relative displacement of nodes > $\epsilon$,
\item $\delta_i^{down}$ the relative displacement of nodes $\le \epsilon$,
\item $A_i$ the number of nodes having a relative displacement $\delta_i^{up}$,
\item $m_{up}$ (resp. $m_{down}$) the number of nodes having a relative displacement >$\epsilon $ (resp. $\le$ $\epsilon $),
\item $B_i$ the number of nodes having a relative displacement $\delta_i^{down}$,
\item $x_i$: the numerical contribution of the value $A_i$ will be affected by this exponent:
\begin{enumerate}
\item $x_i$ needs to increase when $A_i$ increases and $\delta_i^{up}$ decreases yet still > $\epsilon$. \label{1}
\item $x_i$ needs to further increase with respect to point (\ref{1}) when $A_i$ is small and $\delta_i^{up}$ is high.
\item $x_i$ needs to be small (resp. high) when $A_i$ and $\delta_i^{up}$ are small (resp. high).
\end{enumerate}
\end{itemize}
Hence, the exponent $x_i$ becomes:
\begin{equation}
x_i= \sqrt{A_i}\, \delta_i^{up}.
\end{equation}
To simplify Eq.\ref{RSI complex}, one can define the following parameters:
\begin{itemize}
\item $A$ the number of all nodes having a relative displacement > $\epsilon$,
\item $B$ the number of all nodes having a relative displacement $\le \epsilon$,
\item $\delta^{up}$ the average of $\delta_{i}^{up}$,
\item $\delta^{down}$ the average of $\delta_{i}^{down}$,
\item $x= \sqrt{A}\, \delta^{up}$.
\end{itemize}
Therefore, Eq.\ref{RSI complex} becomes:
\begin{equation}
\label{eq:RSI average}
\mathcal{RSI}=\frac{1}{N} \big(A^{x} \sqrt{\delta^{up}} +B \delta^{down}\big).
\end{equation}
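The following sketch evaluates Eq.\ref{eq:RSI average} from an array of per-node relative displacements $\delta_i$ (obtained from whatever distortion criterion fits the mesh type) and the tolerance $\epsilon$; the displacement magnitudes used in the example are arbitrary assumptions.
\begin{verbatim}
import numpy as np

def rsi(delta, eps):
    """Averaged Refinement Stop Indicator."""
    delta = np.asarray(delta, dtype=float)
    up, down = delta[delta > eps], delta[delta <= eps]
    a, b = up.size, down.size             # A and B: node counts above/below eps
    d_up = up.mean() if a else 0.0        # average delta^up
    d_down = down.mean() if b else 0.0    # average delta^down
    x = np.sqrt(a) * d_up                 # exponent x = sqrt(A) * delta^up
    return (a**x * np.sqrt(d_up) + b * d_down) / delta.size

rng = np.random.default_rng(0)
eps = 1e-4                                      # 0.01 %
moving = rng.uniform(0.0, 5e-2, size=1000)      # mesh still adapting
settled = rng.uniform(0.0, 5e-5, size=1000)     # displacements below eps
print(rsi(moving, eps) > eps)     # True: continue the refinement
print(rsi(settled, eps) <= eps)   # True: the mesh can be considered stable
\end{verbatim}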
To make this approach more robust, the user also defines the iteration (a.k.a. Trigger $\mathcal{RSI}$) at which the $\mathcal{RSI}$ computation starts. The purpose of such a value is to distinguish between the case of a stable mesh and a case where shocks are slowly developing and detaching. This value depends on how fast the simulation is developing.\\
\subsection{RSI applied to 2D mesh}
\subsubsection{2D Quadrilateral mesh}
The relative displacement is based on a cell distortion criterion specific to the quadrilateral element. The $\mathcal{AR}$ is chosen to compute the $\mathcal{RSI}$, since one is only interested in the nodal displacement and not in the distortion of a cell. In addition, the $\mathcal{AR}$ is extrapolated to a nodal value $i$ at time steps $n$ and $n+m$, where the mesh is updated every $m$ flow field iterations:
\begin{equation}
\delta_i = \frac{|\mathcal{AR}_i^{n+m}-\mathcal{AR}_i^{n}|}{\mathcal{AR}_i^{n}}.
\end{equation}\\
\underline{2D double wedge quadrilateral test case}\\
The user-defined tolerance is chosen to be equal to $\epsilon$=0.01\%.
The mesh fitting process continues until iteration 2981, as Fig.\ref{RSIquads} and Fig.\ref{zoomRSIquads} show. When $\mathcal{RSI}<\epsilon$, the mesh fitting process stops, enabling a fast convergence. In the convergence history, one can observe that the oscillations disappear once the refinement is stopped.
\begin{figure}[H]
\centering
\captionsetup{justification=centering}
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{Fig70}
\caption{$\mathcal{RSI}$ in function of the number of iterations}
\label{RSIquads}
\end{minipage}
\hspace{.05\linewidth}
\begin{minipage}{.45\linewidth}
\includegraphics[width=\linewidth]{Fig71}
\caption{$\mathcal{RSI}$ in function of the number of iterations- zoom}
\label{zoomRSIquads}
\end{minipage}
\end{figure}
\begin{figure}[H]
\captionsetup{justification=centering}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig72.png}
\caption{Convergence history with $\mathcal{RSI}$}
\label{fig:convQuads}
\end{minipage}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig73.png}
\caption{Convergence history with a classical stop condition iter=16000}
\label{fig:conQuads16000}
\end{minipage}
\end{figure}
Fig.\ref{fig:convQuads} shows the gain in convergence when using the $\mathcal{RSI}$ compared to the same simulation using a classical stop condition (see Fig.\ref{fig:conQuads16000}). The implementation of the refinement stop indicator clearly improves the convergence rate.
\subsubsection{2D triangular mesh}
\underline{2D double wedge triangular test case }\\
The relative displacement is based on the in-circle radius of the triangular element extrapolated to the nodal value $i$ at mesh fitting time steps $n$ and $n+m$.
\begin{equation}
\label{eq:delta_i_RSI}
\delta_i = \frac{|\mathcal{R}_i^{n+m}-\mathcal{R}_i^{n}|}{\mathcal{R}_i^{n}}.
\end{equation}
\begin{figure}[H]
\captionsetup{justification=centering}
\begin{minipage}[t]{6cm}
\centering
\includegraphics[width=6cm]{Fig74.png}
\caption{Convergence history with $\mathcal{RSI}$}
\label{fig:convWithRSI}
\end{minipage}
\begin{minipage}[t]{6cm}
\captionsetup{justification=centering}
\centering
\includegraphics[width=6cm]{Fig75.png}
\caption{Convergence history with a classical stop condition iter=7000}
\label{fig:StopTriangular7000}
\end{minipage}
\end{figure}
Fig.\ref{fig:convWithRSI} shows the advantage of using the $\mathcal{RSI}$ as a stop condition for the mesh fitting process. The convergence is reached twice as fast when the $\mathcal{RSI}$ is applied (see Fig.\ref{fig:StopTriangular7000}), also thanks to the disappearance of the fluctuations caused by the small nodal displacements.
\subsubsection{Advantages and Drawbacks of RSI}
The $\mathcal{RSI}$ presents several advantages, for instance:
\begin{itemize}
\item Provides only one value, therefore being easy to monitor;
\item Allows for reducing the run-time cost while automating the refinement process until convergence;
\item Accelerates the convergence to steady state and limits the fluctuations of the convergence history;
\item Can be finely tuned by a user-defined tolerance.
\end{itemize}
Also, the $\mathcal{RSI}$ presents some drawbacks:
\begin{itemize}
\item Mesh-type dependent;
\item Relies on an empirical formula that may need further adjustment of the coefficients depending on the case.
\end{itemize}
\section{Conclusion}
\label{sec:conclusion}
A novel physics-based r-refinement method has been developed and, in combination with an existing Finite Volume CFD solver, successfully applied to several high-speed and space plasma test cases, using multiple spring concepts (mainly the linear, semi-torsional and orth-semi-torsional spring analogies) for two- and three-dimensional flows. Our AMR solver showed its ability to resolve different flow features depending on the user-defined monitored variable. This work also introduced and showed the potential of a newly defined mesh quality indicator to grade an adapted mesh qualitatively. Finally, computational improvements and simulation speed-up have been demonstrated through the use of the proposed refinement stop indicator.
\input{creferences.bbl}
\end{document}
|
1,108,101,564,288 | arxiv | \section{Introduction}
Compactness and nuclearity conditions, which characterize phase space properties, proved useful in the study of many aspects of Quantum Field Theory \cite{HS, BW, BJ, Scaling, Bos1, Bos2, Gan1}. Verification of phase space conditions in models \cite{BW,BJ1,universal,BP, Bos1, Dyb} is an integral part of these investigations, since it demonstrates consistency of these criteria with the basic postulates of local, relativistic quantum physics \cite{Haag}. In \cite{Dyb} a sharpened nuclearity condition has been proposed. It restricts correlations between different phase space regions and implies several physically desirable features. Among them are a certain form of additivity of energy over isolated subsystems and the uniqueness of vacuum states which can be prepared with a finite amount of energy. These vacuum states appear, in particular, as limits of physical states under large timelike translations in Lorentz covariant theories and are approximated by states of increasingly sharp energy-momentum values, in accordance with the uncertainty principle. This novel nuclearity condition seems also relevant to the study of particle aspects of a theory \cite{BS}. It is the aim of the present Letter to verify this criterion in massless free field theory. In comparison with the
massive case studied in \cite{Dyb}, the present investigation requires substantial technical improvements which we discuss below. As will be shown in a future publication, these advances enable a detailed harmonic analysis of translation automorphisms in massless theories.
Before we formulate the sharpened nuclearity condition, we recall briefly the mathematical framework: Let $V$, $W$ be Banach spaces
and $|\!|\!|\cdot |\!|\!|$ be a norm on the space $\mathcal{L}(V,W)$ of linear maps from $V$ to $W$. We say that a map $\Pi: V\to W$ is p-nuclear w.r.t. the norm $|\!|\!|\cdot |\!|\!|$ if there exists a decomposition $\Pi(v)=\sum_n\Pi_n(v)$ into rank-one maps, convergent for any $v\in V$ in the norm topology in $W$, s.t. $\nu:=(\sum_n|\!|\!|\Pi_n|\!|\!|^p)^{\fr{1}{p}}<\infty$. The $p$-norm $|\!|\!|\Pi|\!|\!|_{p}$ of this map is the smallest such $\nu$ over the set of all admissible decompositions. To construct
the norms which are suitable for our purposes, suppose that there acts a group of automorphisms $\mathbb{R}^{s+1}\ni x\to\beta_x$ on $V$. Then, for any $N\in\mathbb{N}$ and $x_1\ldots x_N\in\mathbb{R}^{s+1}$, we set
\begin{equation}
\|\Pi\|_{x_1\ldots x_N}=\sup_{v\in V_1}\bigg(\sum_{k=1}^N\|\Pi(\beta_{x_k}v)\|^2\bigg)^{\fr{1}{2}},\quad \Pi\in \mathcal{L}(V,W), \label{Nnorm}
\end{equation}
where $V_1$ is the unit ball in $V$, and denote the corresponding $p$-norm by $\|\cdot\|_{p,x_1\ldots x_N}$.
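In a finite-dimensional toy model this norm has a simple linear-algebra meaning: if $V$ and $W$ carry Euclidean norms and the maps $\Pi\circ\beta_{x_k}$ are matrices, the supremum in~(\ref{Nnorm}) is the largest singular value of the vertically stacked matrix. The following sketch illustrates only this caricature; the spaces, maps and dimensions are invented for the illustration and are not the functionals and observables considered below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
dim_v, dim_w, n_points = 6, 4, 3

pi_map = rng.normal(size=(dim_w, dim_v))          # toy stand-in for Pi: V -> W
betas = [np.linalg.qr(rng.normal(size=(dim_v, dim_v)))[0]
         for _ in range(n_points)]                # toy stand-ins for beta_{x_k}

# sup_{|v|<=1} ( sum_k |Pi(beta_{x_k} v)|^2 )^{1/2}
# equals the top singular value of the stacked matrix (Euclidean norms).
stacked = np.vstack([pi_map @ b for b in betas])
norm_config = np.linalg.norm(stacked, 2)

vs = rng.normal(size=(dim_v, 20000))              # brute-force check on unit vectors
vs /= np.linalg.norm(vs, axis=0)
brute = np.sqrt(((stacked @ vs) ** 2).sum(axis=0)).max()
print(norm_config, brute)                         # brute force approaches the sup
\end{verbatim}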
Next, we identify the spaces $V$, $W$, automorphisms $\beta_x$ and maps $\Pi$ in the framework of Quantum Field Theory.
Let $\mathcal{H}$ be the Hilbert space,
$\omega_0$ the normal vacuum state, $\mathbb{R}^{s+1}\ni x\to\alpha_x\in \textrm{Aut}(B(\mathcal{H}))$ the translation automorphisms and $H$ the Hamiltonian. We set $\trace_E=P_E B(\mathcal{H})_*P_E$, where $P_E$ is the spectral projection of $H$ on the subspace
spanned by vectors of energy lower than $E$ and choose $V=\mathring{\trace}_{E}:=\{\vp-\vp(I)\omega_0 \ | \ \vp\in\trace_E\}$.
This space is clearly invariant under the dual action of translations $\beta_x=\alpha^*_x$. Finally, we set
$W=\mathfrak{A}(\mathcal{O})^*$, where $\mathfrak{A}(\mathcal{O})\subset B(\mathcal{H})$ is the local algebra of observables attached to a double cone $\mathcal{O}\subset \mathbb{R}^{s+1}$, and define the family of maps $\Pi_E: \mathring{\trace}_{E}\to \mathfrak{A}(\mathcal{O})^*$ given by
\begin{equation}
\Pi_E(\vp)=\vp|_{\mathfrak{A}(\mathcal{O})},\quad \vp\in\mathring{\trace}_{E}.
\end{equation}
The strengthened nuclearity condition, proposed in \cite{Dyb}, has the following form.
\begin{enumerate}
\item[] \bf Condition \rm $N_{\bnatural}$. The maps $\Pi_E$ are $p$-nuclear w.r.t. the norms $\|\cdot\|_{x_1\ldots x_N}$
for any $N\in\mathbb{N}$, $x_1\ldots x_N\in\mathbb{R}^{s+1}$, $0<p\leq 1$, $E\geq 0$,
and double cone $\mathcal{O}\subset\mathbb{R}^{s+1}$. Moreover, there holds for their nuclear $p$-norms
\begin{equation}
\limsup\|\Pi_E\|_{p,x_1\ldots x_N}\leq c_p, \label{strengthening}
\end{equation}
where $c_p$ is independent of $N$ and the limit is taken for configurations $x_1\ldots x_N$, where all $x_i-x_j$, $i\neq j$,
tend to spacelike infinity.
\end{enumerate}
We note that the first, qualitative part of this criterion is equivalent to Condition~$N_{\bsharp}$ formulated in \cite{BP}
and the essential additional information is contained in the bound~(\ref{strengthening}).
This refinement is motivated by the observation that a measurement is always accompanied
by an energy transfer from the physical state to the observable. Additivity of energy over isolated
subregions should then imply that for any $\vp\in\mathring{\trace}_{E}$ the restricted functionals $\alpha_{\vec{x}}^*\vp|_{\mathfrak{A}(\mathcal{O})}$ are arbitrarily close to zero apart from translations varying in some compact subset of $\mathbb{R}^{s}$, depending on $\vp$. This picture is particularly plausible in a massive theory, where a state of bounded energy contains only a finite number of particles which are well localized in space. Making use of this simplification,
Condition~$N_{\bnatural}$ was verified in \cite{Dyb} in a theory of non-interacting massive particles.
In the present Letter we demonstrate that this criterion is valid also in the massless case for
$s\geq 3$. There the status of Condition $N_{\bnatural}$ is less obvious, since one has to handle the "infrared cloud"- states of bounded energy containing arbitrarily large numbers of massless particles whose localization properties are poor. The proof is accomplished by combining the underlying physical idea of additivity of energy over isolated subregions (Lemma~\ref{harmonic}) with
the quadratic decay of vacuum correlations between spatially separated observables in a massless theory
(Lemma~\ref{Cook}). As an interesting application of our methods, we briefly discuss in the Conclusions the momentum transfer of local operators in the model under study.
\section{Massless Scalar Free Field Theory}
In the model at hand the Hilbert space $\mathcal{H}$ is the symmetric Fock space
over $L^2(\mathbb{R}^s, d^sp)$. On this latter space there acts the unitary representation
of translations
\begin{equation}
(U_1(x)f)(\vec{p})=e^{i(\omega(\vec{p})x^0-\vec{p} \vec{x})}f(\vec{p}),\quad f\in L^2(\mathbb{R}^s, d^sp),
\end{equation}
where $\omega(\vec{p})=|\vec{p}|$. We denote by $U(x)$ its second quantization acting on $\mathcal{H}$,
introduce the corresponding family of automorphisms of $B(\mathcal{H})$
\begin{equation}
\alpha_x(\cdot)=U(x)\cdot U(x)^*
\end{equation}
and adopt the notation $A(x):=\alpha_x(A)$ for translated operators $A\in B(\mathcal{H})$.
Next, we construct the local algebra $\mathfrak{A}(\mathcal{O})$ attached to the double cone $\mathcal{O}$,
whose base is the $s$-dimensional ball $\mathcal{O}_{r}$ of radius $r$ centered at the origin in
configuration space: We introduce the closed subspaces $\mathcal{L}^{\pm}:=[\omega^{\mp\fr{1}{2}}\widetilde{D}(\mathcal{O}_{r})]$,
where tilde denotes the Fourier transform, represent the respective projections by the same symbol
and consider the real linear subspace of $L^2(\mathbb{R}^s, d^sp)$
\begin{equation}
\mathcal{L}=(1+J)\mathcal{L}^{+}+(1-J)\mathcal{L}^{-},
\end{equation}
where $J$ is the complex conjugation in configuration space.
Then the local algebra is given by
\begin{eqnarray}
\mathfrak{A}(\mathcal{O})=\{ \ W(f) \ | \ f\in\mathcal{L} \ \}^{\prime\prime},
\end{eqnarray}
where $W(f)=e^{i(a^*(f)+a(f))}$ and $a^*(f)$, $a(f)$ are the creation and annihilation operators.
The rest of this section, which serves mostly to establish our notation, is devoted to the proof
of the well known fact \cite{BP,Bos3} that the maps $\Pi_E$ in this model are $p$-nuclear w.r.t.
the standard norm on $\mathcal{L}(\mathring{\trace}_{E},\mathfrak{A}(\mathcal{O})^*)$. In the massive case the argument
was outlined in \cite{Dyb}, Appendix B, so it suffices here to give a brief sketch which stresses the modifications:
First, our present construction of the trace-class operator $T$ differs from the choices made in
the existing literature \cite{BP,Bos3,Dyb}: Let $Q_E$ be the projection on states of energy lower than $E$ in the single-particle space,
let $h\in D(\mathcal{O}_{r})$ be real and s.t. $\widetilde{h}>0$. We choose $\fr{1}{2}\leq\gamma<\fr{s-1}{2}$
and define operators $T_{E,\pm}=\omega^{-\h} Q_E\mathcal{L}^{\pm}$, $T_{h,\pm}=\omega^{-\gamma} \widetilde{h}^{1/2}\mathcal{L}^{\pm}$, where $\widetilde{h}$ is the corresponding multiplication operator in momentum space.
By a slight modification of Lemma~3.5 from \cite{BP} one obtains that for $s\geq 3$ these operators satisfy
$\||T_{E,\pm}|^p\|_1<\infty$, $\||T_{h,\pm}|^p\|_1<\infty$ for any $p>0$, where $\|\cdot\|_1$ denotes
the trace norm. We define the operator $T$ as follows
\begin{equation}
T=(|T_{E,+}|^2+|T_{E,-}|^2+|T_{h,+}|^2+|T_{h,-}|^2)^\fr{1}{2}.
\end{equation}
Making use of the fact \cite{Kos} that for any $0<p\leq1$ and any pair of positive operators $A$, $B$, s.t.
$A^p$, $B^p$ are trace-class, there holds $\|(A+B)^p\|_1\leq \|A^p\|_1+\|B^p\|_1$, we get
\begin{equation}
\|T^p\|_1\leq\||T_{E,+}|^p\|_1+\||T_{E,-}|^p\|_1 +\||T_{h,+}|^p\|_1+\||T_{h,-}|^p\|_1 \textrm{ for } 0<p\leq 1. \label{lub3}
\end{equation}
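The quasi-norm inequality from \cite{Kos} underlying (\ref{lub3}) can be checked numerically in finite dimensions; the following sketch (a toy consistency check, not part of the argument) verifies $\mathrm{tr}\,(A+B)^p\leq \mathrm{tr}\,A^p+\mathrm{tr}\,B^p$ for random positive matrices and $0<p\leq 1$.
\begin{verbatim}
import numpy as np

def trace_power(mat, p):
    """tr(M^p) for a positive semi-definite matrix M."""
    eigs = np.clip(np.linalg.eigvalsh(mat), 0.0, None)
    return np.sum(eigs ** p)

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=(8, 8)); a = x @ x.T      # random positive matrices
    y = rng.normal(size=(8, 8)); b = y @ y.T
    p = rng.uniform(0.05, 1.0)
    assert trace_power(a + b, p) <= trace_power(a, p) + trace_power(b, p) + 1e-9
print("||(A+B)^p||_1 <= ||A^p||_1 + ||B^p||_1 held in all samples")
\end{verbatim}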
Since $T$ commutes with $J$, it has a $J$-invariant orthonormal basis of eigenvectors $\{e_j\}_1^\infty$
and we denote the corresponding eigenvalues by $\{t_j\}_1^\infty$.
In order to construct an expansion of the map $\Pi_E$ into rank-one mappings, we evaluate a Weyl operator on some functional $\vp\in\mathring{\trace}_{E}$, rewrite it in a normal ordered form and expand it into a power series
\begin{eqnarray}
&\vp&\!\!\!\!\!(W(f))\nonumber\\
&=&e^{-\fr{1}{2}\|f\|^2}
\sum_{m^{\pm},n^{\pm}\in\mathbb{N}_{0}}\fr{i^{m^++n^++2m^-}}{m^+!m^-!n^+!n^-!}
\vp(a^*(f^+)^{m^+}a^*(f^-)^{m^-}a(f^+)^{n^+}a(f^-)^{n^-}),\qquad \label{powerseries}
\end{eqnarray}
where $f=f^++if^-$ and $f^{\pm}\in\mathcal{L}^{\pm}$ are real in configuration space.
Subsequently, we expand each function $f^\pm$ in the orthonormal basis $\{e_j\}_1^{\infty}$ of $J$-invariant eigenvectors of the operator $T$: $f^\pm=\sum_{j=1}^{\infty}e_j\langle e_j|f^\pm\rangle$. Then, making use of the multinomial formula, we obtain
\begin{equation}
a^{(*)}(f^\pm)^{m^\pm}=\sum_{\mu^\pm,|\mu^\pm|=m^\pm}\fr{m^\pm!}{\mu^\pm!}\langle e|f^{\pm}\rangle^{\mu^\pm} a^{(*)}(\mathcal{L}^{\pm} e)^{\mu^\pm},
\label{multinomial}
\end{equation}
where $\mu^+$, $\mu^-$ are multiindices, and substitute these expansions to (\ref{powerseries}). In order to simplify
the resulting expression, we define for any two pairs of multiindices $\overline{\mu}=(\mu^+,\mu^-)$, $\overline{\nu}=(\nu^+,\nu^-)$ functionals $S_{\mub,\nub}\in\mathring{\trace}_{E}^*$ given by
\begin{equation}
S_{\mub,\nub}(\vp)=\vp(a^*(\mathcal{L} e)^{\overline{\mu}}a(\mathcal{L} e)^{\overline{\nu}} ),
\end{equation}
where $a^{(*)}(\mathcal{L} e)^{\overline{\mu}}=a^{(*)}(\mathcal{L}^{+} e)^{\mu^+}a^{(*)}(\mathcal{L}^{-} e)^{\mu^-}$. Moreover, with the help of the formula
\begin{eqnarray}
(\Omega|[a(e_1),[\ldots,[a(e_k),[a^*(e_{k+1}),[\ldots, [a^*(e_l),W(f)],\ldots]\Omega)\nonumber\\
=e^{-\fr{1}{2}\|f\|^2}\prod_{n_1=1}^k\langle e_{n_1}| if\rangle \prod_{n_2=k+1}^l\langle if| e_{n_2}\rangle,
\end{eqnarray}
one can express the factors $\langle e|f^{\pm}\rangle^{\mu^\pm}$, appearing in (\ref{multinomial}), in terms of
normal functionals $\tau_{\overline{\mu},\overline{\nu}}\in \mathfrak{A}(\mathcal{O})^*$ defined
as in \cite{Dyb}, Appendix~B, (using methods from \cite{Bos3}). Then expression~(\ref{powerseries})
takes the form
\begin{eqnarray}
\vp(W(f))&=&\sum_{\overline{\mu},\overline{\nu}}\tau_{\overline{\mu},\overline{\nu}}(W(f)) S_{\mub,\nub}(\vp). \label{ffexpansion}
\end{eqnarray}
In order to extend this formula to all $A\in \mathfrak{A}(\mathcal{O})$, we study its convergence properties:
In the present case the norms of the functionals $\tau_{\overline{\mu},\overline{\nu}}$
are not uniformly bounded in $\overline{\mu}$, $\overline{\nu}$. Instead, one obtains as in formula~(B.7) of \cite{Dyb}
\begin{equation}
\|\tau_{\overline{\mu},\overline{\nu}}\|\leq \fr{4^{|\overline{\mu}|+|\overline{\nu}|} }{(\overline{\mu}!\overline{\nu}!)^\fr{1}{2}} \bigg(\fr{(\overline{\mu}+\overline{\nu})!}{\overline{\mu}!\overline{\nu}!}\bigg)^\fr{1}{2}\leq
\fr{2^{\fr{5}{2}(|\overline{\mu}|+|\overline{\nu}|)}}{(\overline{\mu}!\overline{\nu}!)^\fr{1}{2}},\label{tauestimate}
\end{equation}
where $|\overline{\mu}|=|\mu^+|+|\mu^-|$ and $\overline{\mu}!=\mu^+!\mu^-!$.
Making use of the fact that for any $f_1,\ldots,f_n\in L^2(\mathbb{R}^s, d^sp)$ in the domain of $\omega^{\h}$ there hold the so called energy bounds~\cite{BP}
\begin{equation}
\|a(\omega^{\h} f_1)\ldots a(\omega^{\h} f_n)P_E\|
\leq (E)^{\fr{n}{2}}\|f_1\|\ldots \|f_n\|,
\end{equation}
we obtain the estimate
\begin{equation}
\|S_{\mub,\nub}\|\leq E^{\fr{|\overline{\mu}|+|\overline{\nu}|}{2}}\|\ommQ_E \mathcal{L} e\|^{\overline{\mu}}\,\|\ommQ_E\mathcal{L} e\|^{\overline{\nu}} \leq E^{\fr{|\overline{\mu}|+|\overline{\nu}|}{2}}t^{\overline{\mu}} t^{\overline{\nu}}. \label{Sestimate}
\end{equation}
With the help of the bounds (\ref{tauestimate}) and (\ref{Sestimate}) one verifies that for
any $0<p\leq 1$
\begin{eqnarray}
\sum_{\overline{\mu},\overline{\nu} }\|\tau_{\overline{\mu},\overline{\nu}}\|^p \, \|S_{\mub,\nub}\|^p \leq \sum_{\overline{\mu},\overline{\nu}}
\fr{(2^5E)^{\fr{1}{2} p (|\overline{\mu}|+|\overline{\nu}|)} }{(\overline{\mu}!)^{\fr{1}{2} p}(\overline{\nu}!)^{\fr{1}{2} p}} t^{p\overline{\mu}}t^{p\overline{\nu}}
&=& \bigg(\sum_{\mu^+}\fr{(2^5E)^{\fr{1}{2} p |\mu^+|} }{(\mu^+!)^{\fr{1}{2} p}} t^{p\mu^+}\bigg)^4\nonumber\\
&\leq& \bigg(\sum_{k=0}^\infty \fr{(2^5E)^{\fr{1}{2} pk}\|T^p\|_1^k}{(k!)^{\fr{1}{2} p}} \bigg)^4\!\!\!,\label{traces}
\end{eqnarray}
where in the last step we set $k=|\mu^+|$ and made use of the multinomial formula.
This bound allows us to restate expression (\ref{ffexpansion}) as follows
\begin{eqnarray}
\Pi_E(\vp)&=&\sum_{\overline{\mu},\overline{\nu}}\tau_{\overline{\mu},\overline{\nu}}S_{\mub,\nub}(\vp), \textrm{ for } \vp\in\mathring{\trace}_{E}, \label{exp1}
\end{eqnarray}
where the sum converges in the norm topology in $\mathfrak{A}(\mathcal{O})^*$ and there holds, in addition,
$\|\Pi_E\|_{p}\leq (\sum_{\overline{\mu},\overline{\nu} }\|\tau_{\overline{\mu},\overline{\nu}}\|^p \, \|S_{\mub,\nub}\|^p)^{1/p}<\infty$ for $0<p\leq 1$. This concludes the proof
of the known fact that Condition $N_{\bsharp}$ holds in massless free field theory \cite{BP,Bos3}.
In the next section we will use the same expansion (\ref{exp1}) to verify Condition~$N_{\bnatural}$.
\section{Verification of Condition $N_{\bnatural}$}
By definition of the nuclear $p$-norms and formula (\ref{exp1}) there holds the bound
\begin{equation}
\|\Pi_E\|_{p,x_1\ldots x_N}\leq\bigg(\sum_{\overline{\mu},\overline{\nu}}\|\tau_{\overline{\mu},\overline{\nu}}\|^p\|S_{\mub,\nub}\|^p_{x_1\ldots x_N}\bigg)^{\fr{1}{p}}.\label{start}
\end{equation}
To verify Condition $N_{\bnatural}$ we have to find estimates on
the norms $\|S_{\mub,\nub}\|_{x_1\ldots x_N}$ whose growth with $N$ can be controlled at large spacelike distances $x_i-x_j$
for $i\neq j$.
The first step in this direction is taken in the following lemma which is inspired
by Lemma 2.2 from \cite{Buch3}. In contrast to the bound from \cite{Dyb}, Lemma~4.1,
the present estimate is uniform in the particle number and depends only on the energy of
the state in question. This result substantiates the underlying physical idea of additivity
of energy over isolated subregions.
\begin{lemma}\label{harmonic}
Suppose that $g \in L^2(\mathbb{R}^s, d^sp)$ and $\widetilde{h} g$ is in the domain of $\omega^{-\h}$, where $\widetilde{h}\in \widetilde{D}(\mathcal{O}_{r})$
appeared in the definition of the operator $T$ above.
Then, for any $x_1\ldots x_N\in\mathbb{R}^{s+1}$, there holds the bound
\begin{eqnarray}
\|P_E\sum_{k=1}^N(a^*(g)a(g))(x_k)P_E\|\leq E\sup_{|\vec{p}|\leq E}|\widetilde{h}(\vec{p})|^{-2}
\big\{ \| \omega^{-\h}\widetilde{h} g \|^2 \nonumber\\
\phantom{4444444444}+(N-1)\sup_{i\neq j}|\langle \omega^{-\h} \widetilde{h} g|U(x_{i}-x_{j})\omega^{-\h}\widetilde{h} g\rangle| \big\}.
\label{harmonice}
\end{eqnarray}
\end{lemma}
\noindent{\bf Proof. }
We pick single-particle vectors $\Psi_1, g_1\in L^2(\mathbb{R}^s,d^sp)$ and define $Q=\sum_{k=1}^N(a^*(g_1)a(g_1))(x_k)$. Then there holds
\begin{eqnarray}
(\Psi_1|QQ\Psi_1)\leq\sum_{l=1}^N(\Psi_1|(a^*(g_1)a(g_1))(x_l)\Psi_1)\sum_{k=1}^N|\langle U(x_k)g_1|U(x_l)g_1\rangle|
\phantom{4}& &\nonumber\\
\leq (\Psi_1|Q\Psi_1)\big\{\|g_1\|^2+(N-1)\sup_{i\neq j}|\langle U(x_{j})g_1|U(x_{i})g_1\rangle|\big\},& &
\end{eqnarray}
where we made use of the fact that $a(U(x_k)g_1)a(U(x_l)g_1)\Psi_1=0$ and of the Cauchy-Schwarz inequality.
Since $(\Psi_1|Q\Psi_1)^2\leq(\Psi_1| QQ \Psi_1)\|\Psi_1\|^2$, we obtain
\begin{eqnarray}
& &\sum_{k=1}^N(\Psi_1|(a^*(g_1)a(g_1))(x_k)\Psi_1)\nonumber\\
& &\phantom{4444444}\leq \|\Psi_1\|^2\big\{\|g_1\|^2+(N-1)\sup_{i\neq j}|\langle U(x_{j})g_1|U(x_{i})g_1\rangle|\big\}. \label{single}
\end{eqnarray}
Next, let $n\geq 1$ and $\Psi_n\in P_E\mathcal{H}$ be an $n$-particle vector s.t. the corresponding symmetric wave-function
$\Psi_n(\vec{p}_1\ldots \vec{p}_n)$ belongs to $S(\mathbb{R}^{s\times n})$. We also introduce a single-particle wave-function associated with
$\Psi_n$ given by $\Psi_1(\vec{p}_1)_{\vec{p}_2,\ldots,\vec{p}_n}=|\vec{p}_1|^\fr{1}{2} \widetilde{h}(\vec{p}_1)^{-1}\Psi_n(\vec{p}_1,\ldots \vec{p}_n)$, where we treat $\vec{p}_2,\ldots,\vec{p}_n$ as parameters. With the help of (\ref{single}) we get
\begin{eqnarray}
\sum_{k=1}^N(\Psi_n|(a^*(g)a(g))(x_k)\Psi_n)\phantom{444444444444444444444444444444444444444444}& &\nonumber\\
=n\int d^sp_2\ldots d^sp_n \sum_{k=1}^N(\Psi_{1,\vec{p}_2,\ldots,\vec{p}_n}|( a^*(\omega^{-\h}\widetilde{h} g)a(\omega^{-\h}\widetilde{h} g) )(x_k)\Psi_{1,\vec{p}_2,\ldots,\vec{p}_n})\phantom{.}& &\nonumber\\
\leq n\int d^sp_1\ldots d^sp_n |\widetilde{h}(\vec{p}_1)|^{-2} |\vec{p}_1||\Psi_n(\vec{p}_1,\ldots \vec{p}_n)|^2\phantom{444444444444444444444.}& &\nonumber\\
\cdot\big\{\| \omega^{-\h}\widetilde{h} g \|^2+
(N-1)\sup_{i\neq j}|\langle \omega^{-\h}\widetilde{h} g|U(x_{i}-x_{j}) \omega^{-\h}\widetilde{h} g \rangle|\big\}.& &
\end{eqnarray}
Finally, we note that
\begin{eqnarray}
& &n\int d^sp_1\ldots d^sp_n |\widetilde{h}(\vec{p}_1)|^{-2} |\vec{p}_1||\Psi_n(\vec{p}_1,\ldots \vec{p}_n)|^2\nonumber\\
& &\phantom{4444}\leq \sup_{|\vec{p}|\leq E}|\widetilde{h}(\vec{p})|^{-2}
\int d^sp_1\ldots d^sp_n (|\vec{p}_1|+\cdots+|\vec{p}_n|)|\Psi_n(\vec{p}_1,\ldots \vec{p}_n)|^2\nonumber\\
& &\phantom{4444}\leq \sup_{|\vec{p}|\leq E}|\widetilde{h}(\vec{p})|^{-2} E\|\Psi_n\|^2\!,
\end{eqnarray}
where we made use of the fact that the wave-function is symmetric. Since the operators $(a^*(g)a(g))(x_k)$
conserve the particle number and vectors of the form $\Psi=c\Omega+\sum_{n=1}^{\infty} \Psi_n$, where $\|\Psi\|^2=|c|^2+\sum_{n=1}^{\infty}\|\Psi_n\|^2<\infty$,
are dense in $P_E\mathcal{H}$, we easily obtain the bound in the statement of the lemma. $\Box$\medskip\\
Our next task is to control the expressions appearing on the right-hand side of estimate~(\ref{harmonice}).
Lemma \ref{F} below, which holds in particular for $\widetilde{F}(\vec{p})=|\vec{p}|^{-2}$, will be crucial in this respect.
We start with some definitions:
for any $\rho>0$ and some fixed $\teps>0$ we choose a function $\chi(\mathcal{O}_{\rho})\inC_0^{\infty}(\mathbb{R}^s)$ s.t. $\chi(\mathcal{O}_{\rho})(\vec{x})=1$
for $\vec{x}\in\mathcal{O}_{\rho}$ and $\chi(\mathcal{O}_{\rho})(\vec{x})=0$ for $\vec{x}\notin\mathcal{O}_{\rho+\teps}$.
We denote the operator of multiplication by $\chi(\mathcal{O}_{\rho})$ in configuration space by the same
symbol.
\begin{lemma}\label{F} Suppose that $F\in S^\prime(\mathbb{R}^s)$ coincides with a bounded, measurable function in the
region $\{\, \vec{y}\in\mathbb{R}^s \,|\, |\vec{y}|\geq \rho\,\}$ and its Fourier transform
$\widetilde{F}$ is a positive, measurable function s.t. $\widetilde{F}^{1/2}\in L^2(\mathbb{R}^s,d^sp)+L^{\infty}(\mathbb{R}^s,d^sp)$.
Then $\widetilde{F}^{1/2}\chi(\mathcal{O}_{\tr})$ is a bounded operator and there holds
\begin{equation}
\|\chi(\mathcal{O}_{\tr}) \widetilde{F}\chi_{\vec{x}}(\mathcal{O}_{\tr})\|\leq c_{s,\rho,\teps}\sup_{|\vec{z}|\leq 2\rho+3\teps} |F(\vec{z}-\vec{x})| \ \textrm{ for }
|\vec{x}|\geq 3(\rho+\teps),\label{Fbound}
\end{equation}
where $\chi_{\vec{x}}(\mathcal{O}_{\tr})(\vec{y})=\chi(\mathcal{O}_{\tr})(\vec{y}-\vec{x})$, the constant $c_{s,\rho,\teps}$ is independent of $\vec{x}$ and we denote the
operator of multiplication by $\widetilde{F}$ in momentum space by the same symbol.
\end{lemma}
\noindent{\bf Proof. } In order to prove the first statement we make a decomposition $\widetilde{F}^{1/2}=\widetilde{F}^{1/2}_2+\widetilde{F}^{1/2}_{\infty}$,
where $\widetilde{F}^{1/2}_2\in L^2(\mathbb{R}^s,d^sp)$, $\widetilde{F}^{1/2}_{\infty}\in L^{\infty}(\mathbb{R}^s,d^sp)$. Since
$\widetilde{F}^{1/2}_{\infty}$ is a bounded operator, it suffices to consider $\widetilde{F}^{1/2}_2\chi(\mathcal{O}_{\tr})$. We pick
$f_1,f_2\in S(\mathbb{R}^s)$ and estimate
\begin{eqnarray}
|\langle f_1|\widetilde{F}^{1/2}_2 \chi(\mathcal{O}_{\tr}) f_2\rangle|=(2\pi)^{-\fr{s}{2}}\big|\int d^spd^sq \ \bar{f}_1(\vec{p}) \widetilde{F}^{1/2}_2(\vec{p})\widetilde{\chi}(\mathcal{O}_{\rho})(\vec{p}-\vec{q})f_2(\vec{q})\big|\phantom{4}& &\nonumber\\
\leq c\|\bar{f}_1\widetilde{F}^{1/2}_2\|_1\|\widetilde{\chi}(\mathcal{O}_{\rho}) \|_2\|f_2\|_2\leq c\|f_1\|_2
\|\widetilde{F}^{1/2}_2\|_2\|\widetilde{\chi}(\mathcal{O}_{\rho}) \|_2\|f_2\|_2,& &
\end{eqnarray}
where in the second step we made use of the Young inequality\footnote
{The Young inequality states that
for any positive functions $f\in L^{r_1}(\mathbb{R}^s,d^sp)$, $g\in L^{r_2}(\mathbb{R}^s,d^sp)$, $h\in L^{r_3}(\mathbb{R}^s,d^sp)$,
where
$1\leq r_1,r_2,r_3\leq\infty$ s.t. $\fr{1}{r_1}+\fr{1}{r_2}+\fr{1}{r_3}=2$, there holds the
bound
\begin{eqnarray*}
\int d^spd^sq \ f(\vec{p})g(\vec{p}-\vec{q})h(\vec{q})\leq c_{r_1,r_2,r_3}\|f\|_{r_1}\|g\|_{r_2}\|h\|_{r_3}
\end{eqnarray*}} \cite{RS2} and in the last estimate we applied H\"older's inequality.
Next, we verify relation (\ref{Fbound}). If $|\vec{x}|\geq 3(\rho+\teps) $,
then $|\vec{y}+\vec{x}|\leq 2\rho+3\teps$ implies $|\vec{y}|\geq \rho$ and the expression
\begin{equation}
\widetilde{F}_{\vec{x}}(\vec{p}):=(2\pi)^{-\fr{s}{2}}\int d^sy \, e^{-i\vec{p}\vec{y}}F(\vec{y})\chi_{-\vec{x}}(\mathcal{O}_{2(\rho+\teps)})(\vec{y}) \label{Fx}
\end{equation}
defines a bounded, continuous function.
The operator of multiplication by $\widetilde{F}_{\vec{x}}$ in momentum space, denoted by the same symbol,
satisfies the equality
\begin{equation}
\chi(\mathcal{O}_{\tr}) \widetilde{F}_{\vec{x}} \chi_{\vec{x}}(\mathcal{O}_{\tr}) =\chi(\mathcal{O}_{\tr}) \widetilde{F}\chi_{\vec{x}}(\mathcal{O}_{\tr}) \label{Fequality}
\end{equation}
which can be verified by computing the matrix elements of both bounded operators
between vectors from $S(\mathbb{R}^s)$, proceeding to configuration space and noting
that the distributions $F$ and $\chi_{-\vec{x}}(\mathcal{O}_{2(\rho+\teps)})F$ coincide on the
resulting set of smearing functions. Moreover, we obtain from (\ref{Fx})
\begin{eqnarray}
|\widetilde{F}_{\vec{x}}(\vec{p})|
\leq(2\pi)^{-\fr{s}{2}}\int d^sy\, |\chi(\mathcal{O}_{2(\rho+\teps)})(\vec{y})| \sup_{|\vec{z}|\leq 2\rho+3\teps} |F(\vec{z}-\vec{x})|\phantom{,}& &\nonumber\\
=c_{s,\rho,\teps}\sup_{|\vec{z}|\leq 2\rho+3\teps} |F(\vec{z}-\vec{x})|,& &
\end{eqnarray}
which concludes the proof of the lemma. $\Box$\medskip\\
After this preparation we set $g=\mathcal{L}^{\pm} e$ in Lemma \ref{harmonic} and undertake the study of the functions
\begin{equation}
\mathbb{R}^{s+1}\ni x\to\langle \omega^{-\h}\widetilde{h} \mathcal{L}^{\pm} e|U(x)\omega^{-\h} \widetilde{h} \mathcal{L}^{\pm} e\rangle \label{function}
\end{equation}
appearing on the right-hand side of estimate (\ref{harmonice}). We recall from our discussion in Section~2
that $\omega^{-\h}\widetilde{h}^{1/2}\mathcal{L}^{\pm}$ are trace-class operators, so $\widetilde{h} g$ are in the domain of $\omega^{-\h}$ as
required in Lemma \ref{harmonic}. A link with Lemma \ref{F} is provided by the following identities
\begin{equation}
\mathcal{L}^{\pm}=\omega^{\mp\fr{1}{2}}\chi(\mathcal{O}_{\rz})\omega^{\pm\fr{1}{2}}\mathcal{L}^{\pm}, \label{chil}
\end{equation}
where $r$ is the radius of the ball entering into the definition of the
subspaces $\mathcal{L}^{\pm}$. The following result covers the case of translations in space.
\begin{lemma} \label{mbounds} Assume that $s\geq 3$ and let $e$ be a normalized eigenvector of the operator $T$ corresponding to the eigenvalue $t$. Then there holds
\begin{enumerate}
\item[(a)] $\langle \omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{-} e | U(\vec{x})\omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{-} e\rangle=0$ for $|\vec{x}|>4r$,
\item[(b)] $|\langle \omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{\pm} e | U(\vec{x})\omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{\pm} e\rangle|
\leq \fr{c_{s,r}t^2}{(|\vec{x}|+1)^{s-2} }$,
\end{enumerate}
where the constant $c_{s,r}$ is independent of $\vec{x}$ and $e$.
\end{lemma}
\noindent{\bf Proof. }
To prove part (a) we set again $\chi_{\vec{x}}(\mathcal{O}_{r})(\vec{y})=\chi(\mathcal{O}_{r})(\vec{y}-\vec{x})$ and note that
\begin{eqnarray}
& &\langle \omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{-} e |U(\vec{x})\omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{-} e\rangle\nonumber\\
& &\phantom{44444444444}=\langle\omega^{-\fr{1}{2}}\widetilde{h} \mathcal{L}^{-} e |\chi(\mathcal{O}_{2r})\chi_{\vec{x}}(\mathcal{O}_{2r}) U(\vec{x})
\omega^{-\fr{1}{2}}\widetilde{h} \mathcal{L}^{-} e\rangle=0,
\end{eqnarray}
for $|\vec{x}|>4r$, since $h\in D(\mathcal{O}_{r})$ and hence $\omega^{-\h}\widetilde{h}\mathcal{L}^{-} e\in [\widetilde{D}(\mathcal{O}_{2r})]$.
Due to the uniform bound
\begin{equation}
|\langle \omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{\pm} e | U(\vec{x})\omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{\pm} e\rangle|\leq
\|\omega^{\gamma-\fr{1}{2}}\widetilde{h}^{1/2}\|^2_{\infty} \langle e |T_{h,\pm}^2 e\rangle\leq \|\omega^{2\gamma-1}\widetilde{h}\|_{\infty} t^2,
\label{uniformbound}
\end{equation}
which involves the parameter $\gamma\in [\fr{1}{2},\fr{s-1}{2}[$ from the
definition of the operator $T$, there also follows the ($-$) part of (b). To prove the (+) part we estimate
\begin{eqnarray}
|\langle\omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{+} e | U(\vec{x})\omega^{-\fr{1}{2}}\widetilde{h}\mathcal{L}^{+} e\rangle |
&=&|\langle\widetilde{h}\omega^\fr{1}{2} \mathcal{L}^{+} e|\chi(\mathcal{O}_{2r})\omega^{-2}\chi_{\vec{x}}(\mathcal{O}_{2r})\widetilde{h}\omega^{\h} U(\vec{x})\mathcal{L}^{+} e\rangle|\nonumber\\
&\leq&t^2\| \omega^{2\gamma+1} \widetilde{h}\|_{\infty}\, \|\chi(\mathcal{O}_{2r})\omega^{-2}\chi_{\vec{x}}(\mathcal{O}_{2r})\|.
\end{eqnarray}
Now we are in position to apply Lemma \ref{F}: We set $\widetilde{F}(\vec{p})=|\vec{p}|^{-2}$. Then
\begin{equation}
\widetilde{F}(\vec{p})^{1/2}=|\vec{p}|^{-1}\theta(-|\vec{p}|+1)+|\vec{p}|^{-1}\theta(|\vec{p}|-1)\in
L^2(\mathbb{R}^s,d^sp)+L^{\infty}(\mathbb{R}^s,d^sp)
\end{equation}
and $F(\vec{x})=c_s|\vec{x}|^{-(s-2)}$,
where $c_s= 2^{\fr{s}{2}-2}\Gamma(\fr{s}{2}-1)$. We obtain for $|\vec{x}|\geq 6r+3\teps$
\begin{equation}
\|\chi(\mathcal{O}_{2r})\omega^{-2}\chi_{\vec{x}}(\mathcal{O}_{2r})\|\leq \fr{c_{s,r}}{(|\vec{x}|-4r-3\teps)^{s-2}}.
\end{equation}
Making use of the uniform bound (\ref{uniformbound}),
we get the estimate from the statement of the lemma for a suitable constant $c_{s,r}$. $\Box$\medskip\\
In order to obtain estimates on functions (\ref{function}) valid for arbitrary spacelike translations
$x$ we recall, in a slightly generalized form, the following result from \cite{universal}.
\begin{lemma}\label{damping} Let $\delta>0$. Then there exists some continuous function $f(\omega)$ which decreases almost
exponentially, i.e. $\sup_{\omega}|f(\omega)|e^{|\omega|^\kappa}<\infty \textrm{ for any } 0<\kappa<1$,
and which has the property that for any pair of operators
$A$, $B$ such that $\Omega$ belongs to their domains and to the domains of their adjoints,
satisfying
\begin{equation}
(\Omega| \, [A, e^{itH}Be^{-itH}] \, \Omega)=0 \textrm{ for } |t|<\delta,
\end{equation}
there holds the identity $(\Omega|AB\Omega)=(\Omega|Af(\delta H)B\Omega)+(\Omega|Bf(\delta H)A\Omega)$.
\end{lemma}
With the help of the above lemma we prove the desired bounds.
\begin{lemma}\label{Cook} Assume that $s\geq 3$. Let $e\in L^2(\mathbb{R}^s, d^sp)_1$ satisfy $Te=te$ and $Je=e$.
Then, for any $\varepsilon>0$ and $x\in\mathbb{R}^{s+1}$ s.t. $|\vec{x}|\geq |x^0|$, there hold the estimates
\begin{equation}
|\langle\widetilde{h}\omega^{-\h}\mathcal{L}^{\pm} e|U(x)\widetilde{h}\omega^{-\h}\mathcal{L}^{\pm} e\rangle |\leq \fr{c_{s,r,\varepsilon}t^2}{(|\vec{x}|-|x^0|+1)^{s-2-\varepsilon}},
\end{equation}
where the constant $c_{s,r,\varepsilon}$ is independent of $x$ and $e$.
\end{lemma}
\noindent{\bf Proof. }
First, we define the operators $\phi_{+}(e)=a^*(\widetilde{h}\mathcal{L}^{+} e)+a(\widetilde{h}\mathcal{L}^{+} e)$, $\phi_{-}(e)=a^*(i\widetilde{h}\mathcal{L}^{-} e)
+a(i\widetilde{h}\mathcal{L}^{-} e)$ and their
translates $\phi_{\pm}(e)(x)=U(x)\phi_{\pm}(e)U(x)^{-1}$. Since the projections $\mathcal{L}^{\pm}$ and the multiplication
operators $\widetilde{h}$ commute with
$J$ and $Je=e$, the operators $\phi_{\pm}(e)$ are just canonical
fields and momenta of the free field theory localized in the double cone of radius $2r$ centered at zero.
We assume without loss of generality that $x^0>0$, introduce functions
$F^{\pm}(\tau)=\langle\widetilde{h}\mathcal{L}^{\pm} e|\omega^{-1}U(\vec{x}+\tau\hat{e}_0)\widetilde{h}\mathcal{L}^{\pm} e\rangle$ for $0\leq \tau\leq x^0$, where $\hat{e}_0$ is the unit vector in the time direction, and consider the derivative
\begin{equation}
\bigg|\fr{dF^{\pm}(\tau)}{d\tau}\bigg|=|(\Omega|\phi_{\pm}(e)\phi_{\pm}(e)(\vec{x}+\tau\hat{e}_0)\Omega)|.
\end{equation}
We define $\delta_{\tau}=|\vec{x}|-\tau-4r$ and assume that $\delta_{\tau}>0$ for $0\leq \tau\leq x^0$, i.e. $|\vec{x}|-x^0>4r$. Then, by locality, $\phi_{\pm}(e)$ and
$\phi_{\pm}(e)(\vec{x}+\tau \hat{e}_0)$ satisfy the assumptions of Lemma~\ref{damping} with $\delta=\delta_{\tau}$. Making
use of this result, we obtain
\begin{eqnarray}
\bigg|\fr{dF^{\pm}(\tau)}{d\tau}\bigg|&=&|\langle\omega^{-\gamma}\widetilde{h}\mathcal{L}^{\pm} e|\omega^{2\gamma}f(\delta_\tau\omega)U(\vec{x}+\tau \hat{e}_0)\omega^{-\gamma}\widetilde{h}\mathcal{L}^{\pm} e\rangle\nonumber\\
&+&\langle\omega^{-\gamma}\widetilde{h}\mathcal{L}^{\pm} e|\omega^{2\gamma}f(\delta_\tau\omega) U(-\vec{x}-\tau \hat{e}_0)\omega^{-\gamma}\widetilde{h}\mathcal{L}^{\pm} e\rangle|\nonumber\\
&\leq& \fr{2}{\delta_\tau^{2\gamma}} t^2 \| \widetilde{h}\|_{\infty}\, \sup_{\omega\geq 0}|\omega^{2\gamma}f(\omega)|. \label{derivative}
\end{eqnarray}
Next, we set $\gamma=\fr{s-1-\varepsilon}{2}$ for $0<\varepsilon<1$ and arrive at the following estimate
\begin{eqnarray}
|\langle \omega^{-\h}\widetilde{h}\mathcal{L}^{\pm} e|U(x)\omega^{-\h}\widetilde{h}\mathcal{L}^{\pm} e\rangle |=|F^{\pm}(x^0)|&\leq& |F^{\pm}(0)|+\int_0^{x^0}d\tau \bigg|\fr{dF^{\pm}(\tau)}{d\tau}\bigg|\nonumber\\
&\leq& \fr{c_{s,r,\varepsilon}t^2}{(|\vec{x}|-x^0-4r)^{s-2-\varepsilon} },\label{Cookmethod}
\end{eqnarray}
where in the last step we applied Lemma \ref{mbounds} and estimate (\ref{derivative}).
Since the left-hand side of relation (\ref{Cookmethod}) satisfies a uniform bound analogous to (\ref{uniformbound}),
we obtain the estimate in the statement of the lemma. $\Box$\medskip\\
Now we are ready to prove the required bounds on the norms of the functionals~$S_{\mub,\nub}$.
\begin{proposition}\label{semibound} Given a family of points $x_1\ldots x_N\in\mathbb{R}^{s+1}$ we define
$\delta(\underline{x})=\inf_{i\neq j}(|\vec{x}_i-\vec{x}_j|-|x_i^0-x_j^0|)$. For $s\geq 3$, $\delta(\underline{x})\geq 0$ and any $\varepsilon>0$ the functionals
$S_{\mub,\nub}$ satisfy the bound
\begin{eqnarray}
\|S_{\mub,\nub}\|_{x_1\ldots x_N}^2
&\leq& 16 c_{s,r,\varepsilon}\sup_{|\vec{p}|\leq E}|\widetilde{h}(\vec{p})|^{-2} E^{|\overline{\mu}|+|\overline{\nu}|}t^{2(\overline{\mu}+\overline{\nu})}\bigg\{1+\fr{N-1}{(\delta(\underline{x})+1)^{s-2-\varepsilon}}\bigg\},\,\,
\end{eqnarray}
where the constant $c_{s,r,\varepsilon}$ appeared in Lemma~\ref{Cook}.
\end{proposition}
\noindent{\bf Proof. } Making use of the fact that $S_{0,0}=0$, we can assume without loss of generality
that $\overline{\nu}\neq 0$ and decompose it into two pairs of multiindices $\overline{\nu}=\nub_a+\nub_b$ in such a
way that $|\nub_b|=1$. Proceeding as in the proof of Proposition 4.4 in \cite{Dyb} (formulas
(4.12) and (4.13)) we obtain the bound
\begin{equation}
\|S_{\mub,\nub}\|^2_{x_1\ldots x_N}\leq 16E^{|\overline{\mu}|+|\nub_a|}t^{2(\overline{\mu}+\nub_a)}\|P_E\sum_{k=1}^N\big(a^*(\mathcal{L} e)^{\nub_b}a(\mathcal{L} e)^{\nub_b}\big)(x_k)P_E\|.\label{aux2}
\end{equation}
From Lemmas \ref{harmonic} and \ref{Cook} we get
\begin{eqnarray}
& &\|P_E\sum_{k=1}^N\big(a^*(\mathcal{L} e)^{\nub_b}a(\mathcal{L} e)^{\nub_b}\big)(x_k)P_E\|
\leq
E\sup_{|\vec{p}|\leq E}|\widetilde{h}(\vec{p})|^{-2}\big\{\|\widetilde{h}\omega^{-\h}(\mathcal{L} e)^{\nub_b}\|^2 \nonumber\\
& &\phantom{4444444444444}+(N-1)\sup_{i\neq j}|\langle\widetilde{h}\omega^{-\h}(\mathcal{L} e)^{\nub_b}| U(x_i-x_j)\widetilde{h}\omega^{-\h}(\mathcal{L} e)^{\nub_b}\rangle| \big\}\nonumber\\
& &\phantom{4444444444444}\leq c_{s,r,\varepsilon} \sup_{|\vec{p}|\leq E}|\widetilde{h}(\vec{p})|^{-2} E t^{2\nub_b}\bigg\{1+\fr{N-1}{(\delta(\underline{x})+1)^{s-2-\varepsilon}}\bigg\}.
\label{collect}
\end{eqnarray}
Substituting inequality
(\ref{collect}) into formula (\ref{aux2}),
we obtain the estimate in the statement of the proposition. $\Box$\medskip\\
We note that the bound from Proposition \ref{semibound} has a similar structure to estimate~(\ref{Sestimate})
for the ordinary norms of $S_{\mub,\nub}$. Therefore, making use of formulas (\ref{start}) and (\ref{traces}), we obtain
\begin{eqnarray}
& &\|\Pi_E\|_{p,x_1\ldots x_N}\nonumber\\
& &\phantom{44}\leq 4c_{s,r,\varepsilon}^{1/2} \sup_{|\vec{p}|\leq E}|\widetilde{h}(\vec{p})|^{-1} \bigg(\sum_{k=0}^\infty \fr{(2^5E)^{\fr{1}{2} pk}\|T^p\|_1^k}{(k!)^{\fr{1}{2} p}} \bigg)^{\fr{4}{p}}
\bigg\{1+\fr{N-1}{(\delta(\underline{x})+1)^{s-2-\varepsilon}}\bigg\}^\fr{1}{2}.\,\,\,\,\,\,\,\,
\end{eqnarray}
It follows that $\limsup_{\delta(\underline{x})\to\infty}\|\Pi_E\|_{p,x_1\ldots x_N}$ satisfies a bound which is
independent of $N$. Consequently, we get
\begin{theoreme} Condition $N_{\bnatural}$ holds in massless scalar free field theory in $s\geq 3$ dimensional space.
\end{theoreme}
\section{Conclusions}
In this work we verified the sharpened nuclearity condition $N_{\bnatural}$ in massless free field theory
in spacetime of physical or higher dimension. This criterion guarantees the uniqueness of the vacuum
state in the energy-connected component of
the state space, in agreement with physical observations \cite{Dyb}. Nevertheless,
it turns out to be consistent with a degenerate vacuum structure:
Recall that massless free field theory has a spontaneously broken gauge symmetry $\mathbb{R}\ni\lambda\to\beta_{\lambda}$,
corresponding to a shift of the pointlike localized field by a constant, which is defined on Weyl operators~by
\begin{equation}
\beta_{\lambda}(W(f))=e^{i\lambda(\widetilde{\omega^{1/2} f})(0)}W(f). \label{gauge}
\end{equation}
This group of transformations gives rise to a family of pure, regular vacuum states
\begin{equation}
\omega_0^{(\lambda)}(W(f))=e^{i\lambda(\widetilde{\omega^{1/2} f})(0)}\omega_0(W(f))
\end{equation}
whose energy-connected components are, in fact, disjoint subsets of the state
space for $s\geq 3$ \cite{BWa}. This is no longer true for $s=2$ in which case Condition $N_{\bnatural}$,
as well as the weaker Condition $N_{\bsharp}$, does not hold due to singular infrared properties of this theory \cite{BP}.
The methods developed in the present Letter are relevant to harmonic analysis
of local operators $A\in\mathfrak{A}(\mathcal{O})$. We recall that in any relativistic quantum
field theory there holds the bound \cite{Buch3}
\begin{equation}
\sup_{\vp\in\trace_{E,1}}\int d^sp|\vec{p}|^{s+1+\varepsilon}|\vp(\widetilde{A}(\vec{p}))|^2<\infty, \label{ha}
\end{equation}
for any $\varepsilon>0$, where $\widetilde{A}(\vec{p})$ is the Fourier transform of $A(\vec{x})$. Since the mollifier
$|\vec{p}|^{s+1+\varepsilon}$ suppresses the contributions to $\vp(\widetilde{A}(\vec{p}))$ with small momentum transfer,
which become relevant at asymptotic times \cite{AH,Porr1,Porr2},
we are interested in the minimal power of $|\vec{p}|$ for which the bound (\ref{ha}) is still
valid. Making use of an improved variant of Lemma \ref{harmonic},
one can show that for $s\geq 3$ there holds in massless free field theory
\begin{equation}
\sup_{\vp\in\trace_{E,1} }\int d^sp|\vec{p}|^{2}|\vp(\widetilde{A}(\vec{p}))|^2<\infty. \label{ha1}
\end{equation}
With the help of a suitable sequence of functionals $\vp_n\in\trace_{E,1}$, involving arbitrarily
large number of particles, it can be verified that the power of the mollifier $|\vec{p}|^2$ cannot be
further reduced on the whole local algebra $\mathfrak{A}(\mathcal{O})$ in this model.
However, making use of the more refined expansion of the map $\Pi_E$ into rank-one mappings, developed in \cite{Bos3},
one can construct a subspace \emph{of finite co-dimension} in $\mathfrak{A}(\mathcal{O})$ on which there holds the bound
\begin{equation}
\sup_{\vp\in\trace_{E,1}}\int d^sp|\vp(\widetilde{A}(\vec{p}))|^2<\infty, \label{ha2}
\end{equation}
familiar from massive free field theory \cite{Dyb}. This subspace contains, in particular, the elements
of the fixed-point subalgebra of $\lambda\to\beta_{\lambda}$ whose vacuum expectation
values vanish. These results, whose detailed proofs will be presented elsewhere, demonstrate the
utility of the phase space methods in the development of a more detailed harmonic analysis of automorphism groups
\cite{Arveson}.
\bigskip
\noindent{\bf Acknowledgements:}
I would like to thank Prof. D. Buchholz for his continuing advice and encouragement
in the course of this work. Financial support from Deutsche Forschungsgemeinschaft is
gratefully acknowledged.
|
1,108,101,564,289 | arxiv | \section*{Acknowledgement}
We thank S.H. Shao, Y. D. Huang and C. I. Chiang for interesting and inspiring discussions. This research is supported by Taiwan National Science Council under
Project No. NSC 97-2112-M-002-026-MY3 and by US Department of Energy under Contract No. DE-AC03- 76SF00515. We also thank the support of the National Center for Theoretical Sciences of Taiwan.
|
1,108,101,564,290 | arxiv | \section{Introduction}
\label{sec:Intro}
Consider the stochastic heat equation
\bel{eq000}
\frac{\partial u(t,x)}{\partial t} =
\frac{\partial^2 u(t,x)}{\partial x^2} +
u(t,x) \dot{W},
\ee
where $\dot{W}$ is a Gaussian white noise.
Motivated by various applications in physics, equation
\eqref{eq000} is often called parabolic Anderson model with
continuous time and space parameters.
If $W=W(t)$ is a Brownian motion in time, then,
with an It\^{o} interpretation,
a change of variables $u(t,x)=v(t,x)\exp(W(t)-(t/2))$ reduces
\eqref{eq000} to the usual heat equation $v_t=v_{xx}$.
If $W=W(t,x)$ is a two-parameter Brownian motion, or Brownian sheet,
then equation \eqref{eq000} has been studied in detail, from one of the original
references \cite[Chapter 3]{Walsh} to a more recent book \cite{DK14}.
In particular, the It\^{o} interpretation is the only option; cf. \cite{HP-WZ-SHE}.
If $W=W(x)$ is a Brownian motion in space, then equation \eqref{eq000}
has two different interpretations:
\begin{enumerate}
\item Wick-It\^{o}-Skorokhod interpretation
\bel{eq000-sp1}
\frac{\partial u(t,x)}{\partial t} =
\frac{\partial^2 u(t,x)}{\partial x^2} +
u(t,x)\diamond \dot{W}(x),
\ee
where $\diamond$ is the Wick product;
\item Stratonovich interpretation
\bel{eq000-sp2}
\frac{\partial u(t,x)}{\partial t} =
\frac{\partial^2 u(t,x)}{\partial x^2} +
u(t,x)\cdot\dot{W}(x),
\ee
where $u(t,x)\cdot\dot{W}(x)$ is understood in the point-wise, or path-wise,
sense.
\end{enumerate}
In \cite{Hu02, Hu15}, equation
\eqref{eq000-sp1} is studied on the whole line as a part of a more
general class of equations. Two works dealing specifically with
\eqref{eq000-sp1} are \cite{UH96}, where the equation is considered on the whole line, and \cite{VStan-WN}, where the Dirichlet boundary value problem
is considered with a slightly more general random potential.
According to \cite[Theorem 4.1]{UH96}, the solution of \eqref{eq000-sp1}
is almost H\"{o}lder(1/2) in time and space.
By comparison, the solution of \eqref{eq000-sp2}
is almost H\"{o}lder(3/4) in time and almost H\"{o}lder(1/2)
in space \cite[Theorem 4.12]{Hu15}, whereas for the equation with
additive noise
$$
\frac{\partial u(t,x)}{\partial t} =
\frac{\partial^2 u(t,x)}{\partial x^2} +
\dot{W}(x), \ t>0,\ x\in \bR, \ u(0,x)=0,
$$
the solution is almost H\"{o}lder(3/4) in time and is almost
H\"{o}lder(3/2) in space, which follows by applying the
Kolmogorov continuity criterion to
$$
u(t,x)=\int_0^t\int_{\bR} \frac{1}{\sqrt{4\pi s}}\,e^{-(x-y)^2/(4s)}dW(y)ds.
$$
The objective of this paper is to establish optimal space-time
regularity of the solution of
\bel{eq:main}
\begin{split}
\frac{\partial u(t,x)}{\partial t} &=
\frac{\partial^2 u(t,x)}{\partial x^2} +
u(t,x)\diamond \dot{W}(x),\ t>0, \ 0<x<\pi,\\
u_x(t,0)&=u_x(t,\pi)=0, \ u(0,x)=u_0(x),
\end{split}
\ee
and to define and investigate the corresponding
fundamental solution.
We show that the solution of \eqref{eq:main} is almost
H\"{o}lder(3/4) in time and is almost
H\"{o}lder(3/2) in space.
As a result, similar to the case of space-time white
noise, solutions of equations driven by either
additive or multiplicative Gaussian white noise
in space have the same regularity, justifying the optimality
claim in connection with \eqref{eq:main}.
Our analysis relies on the chaos expansion of the
solution and the Kolmogorov continuity criterion.
Section \ref{sec:CSp} provides the necessary background about
chaos expansion and the Wick product. Section \ref{sec:CSol}
introduces the chaos solution of \eqref{eq:main}. Section
\ref{sec:RCS} establishes basic regularity of the chaos solution as a
random variable and introduces the main tools necessary for the
proof of the main result. Section \ref{sec:AdN} establishes the
benchmark regularity result for the additive-noise version of
\eqref{eq:main}. The main results, namely, H\"{o}lder
continuity of the chaos solution of \eqref{eq:main}
in time and space, are in Sections
\ref{sec:Time} and \ref{sec:Space}, respectively. Section \ref{sec:FS}
is about the fundamental chaos solution of \eqref{eq:main}.
Section \ref{sec:FD} discusses various generalizations
of \eqref{eq:main}, including other types of boundary
conditions.
We use the following notations:
$$
f_t(t,x)=\frac{\partial f(t,x)}{\partial t}, \ \
f_x(t,x)=\frac{\partial f(t,x)}{\partial x}, \ \
f_{xx}(t,x)=\frac{\partial^2 f(t,x)}{\partial x^2};
$$
$$
\bT^n_{s,t}=\left\{(s_1,\ldots,s_n)\in \bR^n:\
s < s_1 < s_{2} < \cdots
< s_n < t\right\}\!,
$$
$0\leq s<t,\ n=1,2,\ldots$;
$$
(g,h)_0=\int_0^{\pi} g(x)h(x)dx,\ \ \|g\|_0=\sqrt{(g,g)_0},\ \
g_k=(g,\mfk{m}_k)_0,
$$
where $\{\mfk{m}_k,\ k\geq 1\}$ is an orthonormal basis in $L_2((0,\pi))$;
$$
dx^n=dx_1dx_2\cdots dx_n.
$$
\section{The Chaos Spaces}
\label{sec:CSp}
Let $(\Omega, \cF, \mathbb{P})$ be a probability space.
A {\tt Gaussian white noise} $\dot{W}$
on $L_2((0,\pi))$ is a collection of Gaussian random
variables $\dot{W}(h),\ h\in L_2((0,\pi)),$ such that
\begin{equation}
\label{dW0}
\bE \dot{W}(g)=0,\
\bE \Big( \dot{W}(g)\dot{W}(h)\Big)=(g,h)_0.
\end{equation}
For a Banach space $X$,
denote by $L_p(W;X)$, $1\leq p<\infty$, the collection of
random elements $\eta$ that are measurable with respect to the
sigma-algebra generated by $\dot{W}(h),\ h\in L_2((0,\pi)),$ and
such that $\bE\|\eta\|_X^p<\infty$.
In what follows, we fix the Fourier cosine
basis $\{\mfk{m}_k,\ k\geq 1\}$ in $L_2((0,\pi))$:
\begin{equation}
\label{cos-basis}
\mfk{m}_1(x)=\frac{1}{\sqrt{\pi}},\ \
\mfk{m}_k(x)=\sqrt{\frac{2}{\pi}}\cos\big((k-1)x\big), \quad k\geq 2,
\end{equation}
and define
\bel{xik}
\xi_k=\dot{W}(\mfk{m}_k).
\ee
By \eqref{dW0}, $\xi_k,\ k\geq 1,$ are iid standard Gaussian random variables,
and
$$
\dot{W}(h)=\sum_{k\geq 1} (\mfk{m}_k, h)_0\, \xi_k.
$$
As a result,
\bel{WN}
\dot{W}(x)=\sum_{k\geq 1} \mfk{m}_k(x) \xi_k
\ee
becomes an alternative notation for $\dot{W}$;
of course, the series in \eqref{WN} diverges in
the traditional sense.
It follows from \eqref{dW0} that
$W(x)=\dot{W}(\chi_{[0,x]}) $ is a standard
Brownian motion on $[0,\pi]$, where
$\chi_{[0,x]}$ is the indicator function of the interval $[0,x]$.
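As a quick numerical illustration, the expansion of $W(x)=\dot{W}(\chi_{[0,x]})$ in the cosine basis (the constant mode together with the modes $\cos(nx)$, $n\geq 1$) reproduces the Brownian variance $\bE W(x)^2=x$; the truncation level and sample size below are assumptions chosen only for the example.
\begin{verbatim}
import numpy as np

def w_at(x, xi):
    """Truncated W(x) = sum_k xi_k (m_k, chi_[0,x]);  xi has shape (samples, K)."""
    k_max = xi.shape[1]
    n = np.arange(1, k_max)                    # modes sqrt(2/pi) cos(n y), n >= 1
    coeff = np.concatenate(([x / np.sqrt(np.pi)],
                            np.sqrt(2.0 / np.pi) * np.sin(n * x) / n))
    return xi @ coeff

rng = np.random.default_rng(0)
x = np.pi / 3
w = w_at(x, rng.standard_normal((4000, 2000)))
print(w.var(), x)                              # sample variance approaches x
\end{verbatim}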
Denote by ${\mathcal{J}}$ the collection of multi-indices $\ba$
with $\ba=(\alpha_{1},\alpha_{2},\ldots)$
so that each $\alpha_{k}$ is a non-negative integer and
$|\ba|:=\sum_{k\geq1}\alpha_{k}<\infty$. For
$\ba,\bbt\in{\mathcal{J}}$, we define
\[
\ba+\bbt=(\alpha_{1}+\beta_{1},\alpha_{2}+\beta_{2},\ldots),\quad
\ba!=\prod_{k\geq1}\alpha_{k}!.
\]
Also,
\begin{itemize}
\item $\zm$ is the multi-index with all zeroes;
\item $\bep(i)$ is the multi-index $\ba$ with $\alpha_{i}=1$
and $\alpha_{j}=0$ for $j\not=i$;
\item
$\ba-\bbt=(\max(\alpha_1-\beta_1,0),
\max(\alpha_2-\beta_2,0),\dots)$;
\item $\ba^-(i)=\ba-\bep(i)$.
\end{itemize}
An alternative way to describe a multi-index $\ba\in \cJ$
with $|\ba|=n>0$ is by its
{\tt characteristic set} $K_{\ba}$, that is, an ordered
$n$-tuple $K_{\ba}=\{k_{1},\ldots,k_{n}\}$,
where $k_{1}\leq k_{2}\leq\ldots\leq k_{n}$
indicate the locations and the values of the
non-zero elements of $\ba$:
each index $k$ with $\alpha_{k}>0$ appears in $K_{\ba}$
exactly $\alpha_{k}$ times, and the entries are listed
in non-decreasing order.
For example, if $n=7$ and $\ba=(1,0,2,0,0,1,0,3,0,\ldots)$,
then the non-zero elements of
$\ba$ are $\alpha_{1}=1$,
$\alpha_{3}=2$, $\alpha_{6}=1$, $\alpha_{8}=3$, so
that
$K_{\ba}=\{1,3,3,6,8,8,8\}$:
$k_{1}=1,\,k_{2}=k_{3}=3,\,k_{4}=6,
k_{5}=k_{6}=k_{7}=8$.
Define the collection of random variables
$\Xi=\{\xi_{\ba}, \ \ba \in{\mathcal{J}}\}$ by
\begin{equation*}
\label{eq:basis}
\xi_{\ba} = \prod_{k}
\left(
\frac{\Hep_{\alpha_{k}}(\xi_{k})}{\sqrt{\alpha_{k}!}}
\right),
\end{equation*}
where $\xi_k$ is from \eqref{xik} and
\begin{equation}
\label{eq:hermite}
\Hep_{n}(x) = (-1)^{n} e^{x^{2}/2}\frac{d^{n}}{dx^{n}}\,
e^{-x^{2}/2}
\end{equation}
is the Hermite polynomial of order $n$.
By a theorem of Cameron and Martin \cite{CM},
$\Xi$ is an orthonormal basis in
$L_2(W;X)$ as long as $X$ is a Hilbert space.
Accordingly, in what follows, we always assume that $X$ is a
Hilbert space.
For $\eta\in L_2(W;X)$, define
$\eta_{\ba}=\bE\big(\eta\xi_{\ba}\big)\in X$. Then
$$
\eta=\sum_{\ba\in\cJ} \eta_{\ba}\xi_{\ba},\
\bE\|\eta\|_X^2=\sum_{\ba\in\cJ}\|\eta_{\ba}\|_X^2.
$$
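As a simple illustration, if $\eta=\dot{W}(h)$ for some
$h\in L_2((0,\pi))$, then, since $\xi_{\bep(k)}=\Hep_1(\xi_k)=\xi_k$,
the only non-zero coefficients are of first order:
$$
\big(\dot{W}(h)\big)_{\bep(k)}=(\mfk{m}_k,h)_0,\ k\geq 1,\ \
{\rm and}\ \
\bE|\dot{W}(h)|^2=\sum_{k\geq 1}(\mfk{m}_k,h)_0^2=\|h\|_0^2,
$$
in agreement with \eqref{dW0}.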
We will often need spaces other than $L_2(W;X)$:
\begin{itemize}
\item The space
$$
\mathbb{D}^{n}_2(W;X)=\Big\{ \eta=\sum_{\ba\in\cJ}\eta_{\ba}\xi_{\ba}\in
L_2(W;X):
\sum_{\ba\in\cJ}|\ba|^n\;\|\eta_{\ba}\|_X^2
<\infty\Big\},\ n>0;
$$
\item The space
$$
L_{2,q}(W;X)
=\Big\{ \eta=\sum_{\ba\in\cJ}\eta_{\ba}\xi_{\ba}\in
L_2(W;X): \sum_{\ba\in\cJ}q^{|\ba|}
\|\eta_{\ba}\|_X^2<\infty\Big\},\ q>1;
$$
\item The space $L_{2,q}(W;X)$, $0<q<1$, which is the closure of
$L_2(W;X)$ with respect to the norm
$$
\|\eta\|_{L_{2,q}(X)}=\left(
\sum_{\ba\in\cJ}q^{|\ba|} \|\eta_{\ba}\|_X^2
\right)^{1/2}.
$$
\end{itemize}
It follows that
$$
L_{2,q_1}(W;X)\subset L_{2,q_2}(W;X),\ q_1>q_2,
$$
and, for every $q>1$,
$$
L_{2,q}(W;X)\subset \bigcap_{n>0}\mathbb{D}^{n}_2(W;X).
$$
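Both inclusions follow directly from the definitions: if $q_1>q_2$,
then $q_2^{|\ba|}\leq q_1^{|\ba|}$, and, for every $q>1$ and $n>0$,
$$
|\ba|^n\leq C(n,q)\, q^{|\ba|},\ \ \
C(n,q)=\sup_{m\geq 0}\ m^n q^{-m}<\infty.
$$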
It is also known \cite[Section 1.2]{Nualart} that, for
$n=1,2,\ldots,$ the space
$\mathbb{D}^{n}_2(W;X)$ is the domain of $\mathbf{D}^n$,
the $n$-th power of the Malliavin derivative.
Here is another useful property of the spaces $L_{2,q}(W;X)$.
\begin{proposition}
\label{prop:LcInLp}
If $1<p<\infty$, and $q>p-1$, then
$$
L_{2,q}(W;X)\subset L_p(W;X).
$$
\end{proposition}
\begin{proof}
Let $\eta\in L_{2,q}(W;X)$.
The hypercontractivity property of the Ornstein-Uhlenbeck
operator \cite[Theorem 1.4.1]{Nualart} implies\footnote{In fact,
a better reference is the un-numbered equation at the bottom of
page 62 in \cite{Nualart}.}
$$
\left(
\bE \left\|
\sum_{|\ba|=n} \eta_{\ba}\xi_{\ba} \right\|_X^p
\right)^{1/p} \leq
(p-1)^{n/2}\left(
\sum_{|\ba|=n}\|\eta_{\ba}\|_X^2
\right)^{1/2}.
$$
It remains to apply the triangle inequality, followed
by the Cauchy-Schwarz inequality:
$$
\Big(\bE\|\eta\|_X^p\Big)^{1/p}
\leq
\sum_{n=0}^{\infty}
(p-1)^{n/2}\left(\sum_{|\ba|=n}\|\eta_{\ba}\|_X^2\right)^{1/2}
\leq \left(
\sum_{n=0}^{\infty}
\left(\frac{p-1}{q}\right)^n
\right)^{1/2}
\|\eta\|_{L_{2,q}(W;X)}.
$$
\end{proof}
\begin{definition}
\label{def:WP}
For $\eta \in L_2(W;X)$ and $\zeta\in L_2(W;\bR)$, the
{\tt Wick product} $\eta\diamond\zeta$ is defined by
\bel{eq:def-WP}
\big(\eta\diamond\zeta\big)_{\ba}
=\sum_{\bbt,\bg\in \cJ:\,\bbt+\bg=\ba}
\ \ \ \left(\frac{\ba!}{\bbt!\bg!}\right)^{1/2}
\eta_{\bbt}\,\zeta_{\bg}.
\ee
\end{definition}
To make sense of $\eta_{\bbt}\,\zeta_{\bg}$,
the definition requires at least one of $\eta,\zeta$ to be real-valued.
The normalization in \eqref{eq:def-WP}
ensures that, for every $n,m,k$,
$$
\Hep_n(\xi_k)\diamond \Hep_m(\xi_k)=\Hep_{n+m}(\xi_k),
$$
where $\xi_k$ is one of the
standard Gaussian random variables \eqref{xik} and
$\Hep_n$ is the Hermite polynomial \eqref{eq:hermite}.
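For example, with $n=m=1$,
$$
\xi_k\diamond \xi_k=\Hep_2(\xi_k)=\xi_k^2-1=\xi_k^2-\bE\xi_k^2,
$$
so that the Wick square of $\xi_k$ is the usual square with the mean
removed.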
\begin{remark}
If $\eta\in L_2\big(W;L_2((0,\pi))\big)$ and
$\eta$ is adapted, that is,
for every $x\in [0,\pi]$, the random
variable $\eta(x)$ is measurable with respect to the
sigma-algebra generated by
$\dot{W}(\chi_{[0,y]}), \ 0\leq y\leq x$, then,
by \cite[Proposition 2.5.4 and Theorem 2.5.9]{HOUZ-2},
$$
\int_0^x \eta(y)\diamond \dot{W}(y)dy =
\int_0^x \eta(y)dW(y),
$$
where the right-hand side is
the It\^{o} integral with respect to the standard Brownian
motion $W(x)=\dot{W}(\chi_{[0,x]})$.
This connection with the It\^{o} integral does not help
when it comes to equation \eqref{eq:main}$:$
the structure of the heat kernel implies that, for every
$x\in (0,\pi)$, the solution $u=u(t,x)$ of \eqref{eq:main}
depends on all of the trajectory
of $W(x),\ x\in (0,\pi),$ and therefore is not adapted as a function
of $x$.
\end{remark}
Given a fixed $\ba\in \cJ$, the sum in \eqref{eq:def-WP}
contains finitely many terms, but, in general,
$\sum_{\ba\in \cJ}
\Big\Vert\big(\eta\diamond\zeta\big)_{\ba}\Big\Vert_X^2
= \infty$ so that $\eta\diamond\zeta$ is not square-integrable.
Here is a sufficient condition for the Wick
product to be square-integrable.
\begin{proposition}
\label{prop:WP0}
If
\bel{eq:ex-zt}
\zeta=\sum_k b_k\xi_k,\ b_k\in \bR,
\ee
and
$\sum_k b^2_k<\infty$, then
$\eta \ \ \mapsto \ \ \eta\diamond \zeta$
is a bounded linear operator from $\mathbb{D}^1_2(W;X)$
to $L_2(W;X)$.
\end{proposition}
\begin{proof}
By \eqref{eq:def-WP},
$$
\bE\|\eta\diamond \zeta\|_X^2 =
\sum_{\ba\in\cJ}
\left\Vert \sum_k \sqrt{\alpha_k}\;
b_k \eta_{\ba^-(k)}\right\Vert^2_X.
$$
By the Cauchy-Schwarz inequality,
$$
\left\Vert \sum_k \sqrt{\alpha_k}\,
b_k \eta_{\ba^-(k)}\right\Vert^2_X
\leq
|\ba|\sum_k b_k^2 \|\eta_{\ba^-(k)}\|_X^2.
$$
After summing over all $\ba$ and shifting the summation index,
$$
\bE\|\eta\diamond \zeta\|_X^2 \leq
\Big(\sum_k b_k^2 \Big)
\sum_{\ba\in \cJ}\big(|\ba|+1\big)\|\eta_{\ba}\|_X^2,
$$
concluding the proof.
\end{proof}
Note that, while $\dot{W}(x)$ is of the
form \eqref{eq:ex-zt} (cf. \eqref{WN}),
Proposition \ref{prop:WP0} does not apply: for a typical
value of $x\in [0,\pi]$, $\sum_k |\mfk{m}_k(x)|^2=+\infty$.
Thus, without either adaptedness of $\eta$ or square-integrability
of $\dot{W}$, an investigation of the Wick
product $\eta\diamond \dot{W}(x)$ requires additional constructions.
One approach (cf. \cite{LR-spn}) is to note that if \eqref{eq:ex-zt}
is a linear combination of $\xi_k$,
then, by \eqref{eq:def-WP}, the number
$$
(\eta\diamond \zeta)_{\ba}=
\sum_k \sqrt{\alpha_k}\,b_k\eta_{\ba^-(k)}
$$
is well-defined for every $\ba\in \cJ$ regardless of
whether the series $\sum_k b_k^2$ converges or diverges.
This observation allows an extension of
the operation $\diamond$ to
spaces much bigger than $L_2(W;X)$ and $L_2(W;\bR)$;
see \cite[Proposition 2.7]{LR-spn}. In particular, both $\dot{W}$ and
$\eta\diamond\dot{W}$, with
\bel{wp-a}
\Big(\eta\diamond\dot{W}\Big)_{\ba}=
\sum_k \sqrt{\alpha_k}\,\mfk{m}_k\eta_{\ba^-(k)},
\ee
become generalized random elements with
values in $L_2((0,\pi))$.
An alternative approach, which we will pursue in this paper,
is to consider $\dot{W}$ and
$\eta\diamond\dot{W}$ as usual (square integrable) random elements
with values in a space of generalized functions.
For $\gamma\in \bR$, define the operator
\bel{LambdaOp}
\Lambda^{\gamma}
=\left(I-\frac{\partial^2}{\partial x^2}\right)^{\gamma/2}
\ee
on $L_2((0,\pi))$ by
\bel{Ldg}
\big(\Lambda^{\gamma} f\big)(x)=
\sum_{k=1}^{\infty} \big(1+(k-1)^{2}\big)^{\gamma/2}
f_k\mfk{m}_k(x),
\ee
where, for a smooth $f$ with compact support in $(0,\pi)$,
$$
f_k=\int_0^{\pi} f(x)\mfk{m}_k(x)dx;
$$
recall that $\{\mfk{m}_k,\ k\geq 1\}$ is the Fourier cosine
basis \eqref{cos-basis} in $L_2((0,\pi))$ so that
$$
\Lambda^2 \mfk{m}_k(x)=\mfk{m}_k(x)-\mfk{m}''_k(x)
=\big(1+(k-1)^2\big)\mfk{m}_k(x).
$$
If $\gamma>1/2$, then, by \eqref{Ldg},
\bel{Ker-gm}
\big( \Lambda^{-\gamma}f\big)(x)=
\int_0^{\pi} R_{\gamma}(x,y)f(y)dy,
\ee
where
\bel{Ker-gmG}
R_{\gamma}(x,y)=\sum_{k\geq 1}\big(1+(k-1)^2\big)^{-\gamma/2}
\mfk{m}_k(x)\mfk{m}_k(y).
\ee
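The restriction $\gamma>1/2$ guarantees that $R_{\gamma}$ is
square-integrable: by orthonormality of $\{\mfk{m}_k,\ k\geq 1\}$,
$$
\int_0^{\pi}\!\!\int_0^{\pi}R^2_{\gamma}(x,y)\,dxdy
=\sum_{k\geq 1}\big(1+(k-1)^2\big)^{-\gamma}<\infty,
$$
so that the series in \eqref{Ker-gmG} converges in
$L_2\big((0,\pi)^2\big)$ and \eqref{Ker-gm} defines a
Hilbert-Schmidt operator on $L_2((0,\pi))$.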
\begin{definition}
\label{def:SobSp}
The Sobolev space $H^{\gamma}_2((0,\pi))$ is
$\Lambda^{-\gamma}\Big(L_2((0,\pi))\Big)$.
The norm $\|f\|_{\gamma}$ in the space is defined by
$$
\|f\|_{\gamma}=\|\Lambda^{\gamma}f\|_{0}.
$$
\end{definition}
The next result is a variation on the theme of
Proposition \ref{prop:WP0}.
\begin{theorem}
\label{th:WP-main}
If $\gamma>1/2,$
then $\eta\mapsto \eta\diamond \dot{W}$
is a bounded linear operator from
$\mathbb{D}^1_2\big(W;L_2((0,\pi))\big)$ to
$L_2\big(W;H^{-\gamma}_2((0,\pi))\big)$.
\end{theorem}
\begin{proof}
By \eqref{wp-a}, if
$$
\eta=\sum_{\ba\in\cJ}\eta_{\ba}\xi_{\ba}
$$
with $\eta_{\ba}\in L_2((0,\pi))$, then
$$
\big(\eta\diamond \dot{W}\big)(x)=
\sum_{\ba\in\cJ}\left(\sum_k
\sqrt{\alpha_k} \mfk{m}_k(x)\eta_{\ba^-(k)}(x)\right)
\xi_{\ba},
$$
so that
$$
\bE\|\eta\diamond \dot{W}\|_{-\gamma}^2
=
\sum_{\ba\in\cJ}\left\Vert
\sum_k \sqrt{\alpha_k}
\Lambda^{-\gamma}\big(\mfk{m}_k\eta_{\ba^-(k)}\big)
\right\Vert_{0}^2.
$$
By the Cauchy-Schwarz inequality,
$$
\left\Vert
\sum_k \sqrt{\alpha_k}
\Lambda^{-\gamma}\big(\mfk{m}_k\eta_{\ba^-(k)}\big)
\right\Vert_{0}^2
\leq
|\ba|\sum_{k}\int_0^{\pi}
\Big(\Lambda^{-\gamma}\big(\mfk{m}_k\eta_{\ba^-(k)}\big)
\Big)^2(x)dx.
$$
After summing over all $\ba$ and shifting the summation
index,
$$
\bE\|\eta\diamond \dot{W}\|_{-\gamma}^2
\leq
\sum_{\ba\in\cJ} \big( |\ba|+1 \big)
\sum_{k}\int_0^{\pi}
\Big(\Lambda^{-\gamma}\big(\mfk{m}_k\eta_{\ba}\big)
\Big)^2(x)dx.
$$
By \eqref{Ker-gm} and Parseval's equality,
$$
\sum_{k}\int_0^{\pi}
\Big(\Lambda^{-\gamma}\big(\mfk{m}_k\eta_{\ba}\big)
\Big)^2(x)dx=
\int_0^{\pi}\int_0^{\pi}R_{\gamma}^2(x,y)\eta_{\ba}^2(y)dydx,
$$
and then \eqref{Ker-gmG} implies
$$
\int_0^{\pi}R_{\gamma}^2(x,y)dx = \sum_{k\geq 1}
\big(1+ (k-1)^{2}\big)^{-\gamma} \mfk{m}_k^2(y)
\leq \frac{2}{\pi}\sum_{k\geq 0}\frac{1}{(1+k^2)^{\gamma}},
$$
that is,
$$
\int_0^{\pi}\int_0^{\pi}R_{\gamma}^2(x,y)\eta_{\ba}^2(y)dydx
\leq C_{\gamma} \|\eta_{\ba}\|^2_0,\ \ \
C_{\gamma}=\frac{2}{\pi} \sum_{k\geq 0}\frac{1}{(1+k^2)^{\gamma}}.
$$
As a result,
$$
\bE\|\eta\diamond \dot{W}\|_{-\gamma}^2
\leq C_{\gamma}\sum_{\ba\in \cJ}\big(|\ba|+1\big)
\|\eta_{\ba}\|^2_{0},
$$
concluding the proof of Theorem \ref{th:WP-main}.
\end{proof}
\section{The Chaos Solution}
\label{sec:CSol}
Let $(V,H,V')$ be a normal triple of Hilbert spaces,
that is
\begin{itemize}
\item $V\subset H\subset V^{\prime}$ and the embeddings $V\subset H$ and
$H\subset V^{\prime}$ are dense and continuous;
\item The space $V^{\prime}$ is dual to $V$ relative to the inner product
in $H;$
\item There exists a constant $C_H>0$ such that
$\left\vert (u,v)_{H}\right\vert
\leq C_H\left\Vert u\right\Vert _{V}\left\Vert v\right\Vert
_{V^{\prime}}$ for all $u\in V$ and $v\in H.$
\end{itemize}
An abstract homogeneous Wick-It\^{o}-Skorohod
evolution equation in $(V,H,V')$,
driven by the collection $\{\xi_k,\ k\geq 1\}$ of iid standard
Gaussian random variables, is
\bel{eq:gen}
\dot{u}(t)=\opr{A}u(t)+\sum_k \opr{M}_k u(t) \diamond \xi_k,\
t>0,
\ee
where $\opr{A}$ and $\opr{M}_k$ are bounded linear
operators from $V$ to $V'$. Except for Section \ref{sec:FS},
everywhere else in the paper,
the initial condition $u(0)\in H$ is non-random.
\begin{definition}
\label{def:CS}
The {\tt chaos solution} of
\eqref{eq:gen} is the collection of functions
$\{u_{\ba}=u_{\ba}(t),\ t>0, \ \ba\in \cJ\}$
satisfying the {\tt propagator}
\begin{equation*}
\begin{split}
\dot{u}_{\zm}(t)&=\opr{A}u_{\zm},\
u_{\zm}(0)=u(0),\\
\dot{u}_{\ba}&=
\opr{A}u_{\ba}+\sum_k \sqrt{\alpha_k}
\opr{M}_ku_{\ba^-(k)},\ u_{\ba}(0)=0,\ |\ba|>0.
\end{split}
\end{equation*}
\end{definition}
It is known \cite[Theorem 3.10]{LR-spn} that if
the deterministic equation $\dot{v}=\opr{A}v$ is
well-posed in $(V,H,V')$, then \eqref{eq:gen} has a unique
chaos solution
\begin{equation}
\label{eq:ua-gen}
\begin{split}
u_{\ba}\left( t\right) =\frac{1}{\sqrt{\ba!}}
&\sum_{\sigma
\in{\mathcal{P}}_{n}}\int_{0}^{t}
\int_{0}^{s_{n}}\ldots\int_{0}^{s_{2}}\\
& \Phi_{t-s_{n}}{\opr{M}}_{k_{\sigma(n)}}
\cdots\Phi_{s_{2}-s_{1}}{\opr{M}}_{k_{\sigma(1)}}
\Phi_{s_1}u_0 \, ds_{1}\ldots ds_{n},
\end{split}
\end{equation}
where
\begin{itemize}
\item ${\mathcal{P}}_{n}$ is
the permutation group of the set $\{1,\ldots, n\}$;
\item $K_{\ba}=\{k_{1},\ldots,k_{n}\}$ is the
characteristic set of $\ba$;
\item $\Phi_{t}$ is the semigroup generated by ${\opr{A}}$:
$u_{\zm}(t)=\Phi_{t}u_{0}$.
\end{itemize}
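For example, if $\ba=\bep(k)$, so that $n=1$ and $K_{\ba}=\{k\}$, then
\eqref{eq:ua-gen} reduces to the familiar Duhamel-type formula
$$
u_{\bep(k)}(t)=\int_0^t \Phi_{t-s}\,\opr{M}_k\,\Phi_{s}u_0\,ds,
$$
which also follows directly from the propagator in
Definition \ref{def:CS} by variation of parameters.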
Once constructed, the chaos solution does not depend on the
particular choice of the basis in $L_2(W;H)$
\cite[Theorem 3.5]{LR-spn}. In general, though,
$$
\sum_{\ba\in\cJ} \|u_{\ba}(t)\|_H^2 =\infty,
$$
that is, the chaos solution belongs to a space that is bigger than
$L_2(W;H)$; cf. \cite[Remark 3.14]{LR-spn}.
On the one hand, equation \eqref{eq:main}
is a particular case of \eqref{eq:gen}:
$\opr{A}f(x)=f''(x)$ with zero Neumann boundary conditions,
$\opr{M}_kf(x)=\mfk{m}_k(x)f(x)$,
$H=L_2((0,\pi)),$ $V=H^{1}((0,\pi))$, $V'=H^{-1}((0,\pi))$.
The corresponding propagator becomes
\bel{eq:ppgA}
\begin{split}
\frac{\partial {u}_{\zm}(t,x)}{\partial t}&=
\frac{\partial^2{u}_{\zm}(t,x)}{\partial x^2} ,\
u_{\zm}(0,x)=u_0(x),\\
\frac{\partial{u}_{\ba}(t,x)}{\partial t}&=
\frac{\partial^2 u_{\ba}(t,x)}{\partial x^2}
+\sum_k \sqrt{\alpha_k}
\mfk{m}_k(x)u_{\ba^-(k)}(t,x),\ u_{\ba}(0,x)=0,\ |\ba|>0.
\end{split}
\ee
Then existence and uniqueness of the chaos solution
of \eqref{eq:main} are immediate:
\begin{proposition}
\label{prop:CS-gen}
If $u_0\in L_2((0,\pi))$, then equation \eqref{eq:main},
considered in the normal triple
$\Big(H^1((0,\pi)), L_2((0,\pi)), H^{-1}((0,\pi))\Big),$
has a unique chaos solution.
\end{proposition}
\begin{proof} This follows from
\cite[Theorem 3.10]{LR-spn}.
\end{proof}
On the other hand, equation \eqref{eq:main} has two important
features that are, in general, not present in \eqref{eq:gen}:
\begin{itemize}
\item The semigroup $\Phi_t$ has a kernel $\hker(t,x,y)$:
\bel{eq:kernel}
\Phi_tf(x)=\int_0^{\pi}\hker(t,x,y)f(y)dy,\ t>0,
\ee
where
\bel{eq:kernel1}
\hker(t,x,y)=\sum_{k\geq 1} e^{-(k-1)^2t}\mfk{m}_k(x)\mfk{m}_k(y)=
\frac{1}{\pi}+\frac{2}{\pi}
\sum_{k=1}^{\infty}e^{-k^2t}\cos(kx)\cos(ky).
\ee
\item By Parseval's equality,
\bel{eq:Parseval}
\sum_k \left(\int_0^{\pi} f(x)\mfk{m}_k(x)dx\right)^2=
\int_0^{\pi} f^2(x)dx.
\ee
\end{itemize}
In fact, the properties of
the chaos solution of \eqref{eq:main} are closely connected with the properties of
the function $\hker(t,x,y)$ from \eqref{eq:kernel1}. Below are some of the properties we will need.
\begin{proposition}
\label{prop:hker}
For $t>0$ and $x,y\in [0,\pi]$,
\begin{align}
\label{eq:hker0}
&0\leq \hker(t,x,y)\leq \frac{\sqrt{t}+1}{\sqrt{t}},\\
\notag
&|\hker_x(t,x,y)|\leq \frac{4}{t},\ \
|\hker_{xx}(t,x,y)|\leq \frac{27}{t^{3/2}},\ \
|\hker_t(t,x,y)|\leq \frac{27}{t^{3/2}}.
\end{align}
\end{proposition}
\begin{proof}
The maximum principle implies $0\leq \hker(t,x,y)$.
To derive other inequalities, note that, by
integral comparison,
$$
\sum_{k\geq 1} e^{-k^2t} \leq
\int_0^{\infty} e^{-x^2t}dx = \frac{\sqrt{\pi}}{2\sqrt{t}},\ \ t>0,
$$
and more generally, for $t>0,\, r\geq 1$,
\bel{IntComp-g}
\sum_{k\geq 1} k^{r}e^{-k^2t}\leq
\left(\frac{r}{2t}\right)^{(r+1)/2}+
\int_0^{\infty} x^{r} e^{-x^2t} dx
\leq \frac{(r+1)^{(r+1)}}{t^{(r+1)/2}}.
\ee
To complete the proof, we use
\begin{align*}
&|\hker(t,x,y)|\leq \frac{1}{2}+\frac{2}{\pi}\sum_{k\geq 1} e^{-k^2t},\
|\hker_x(t,x,y)|\leq \sum_{k\geq 1}k e^{-k^2t},\\
&|\hker_{xx}(t,x,y)|\leq \sum_{k\geq 1}k^2 e^{-k^2t},\
|\hker_t(t,x,y)|\leq \sum_{k\geq 1}k^2 e^{-k^2t}.\\
\end{align*}
\end{proof}
The main consequence of \eqref{eq:kernel} and
\eqref{eq:Parseval} is
\begin{proposition}
\label{prop-WCNormA}
(1) For $|\ba|=0$,
\bel{u0-L2}
\|u_{\zm}(t,\cdot)\|_0\leq \|u_0\|_0,\ \ t>0,
\ee
and
\bel{RF-2}
|u_{\zm}(s,y)|\leq
C(p,s,t)\|u_0\|_{L_p((0,\pi))},\ 0<s\leq t,\ 0\leq y\leq \pi,
\ee
with
$$
C(p,s,t)=
\begin{cases}
(1+\sqrt{t}\,)s^{-1/2},& {\rm \ if\ } p=1,\\
\pi^{1/p'} (1+\sqrt{t}\,)s^{-1/2},
& {\rm \ if\ } 1<p<+\infty,\ p'=\frac{p}{p-1},\\
1,\ & {\rm \ if\ } p=+\infty.
\end{cases}
$$
In particular,
\bel{Cpst}
C(p,s,t) \leq \pi (1+\sqrt{t})s^{-1/2}
\ee
for all $0<s\leq t$ and $1\leq p\leq +\infty$.
(2) For $|\ba|=n\geq 1,$
\bel{mod-a-gen-tx}
\begin{split}
&\sum_{|\boldsymbol{\alpha}|=n} |u_{\boldsymbol{\alpha}}(t,x)|^2\\
&\leq n!\int_{(0,\pi)^n}\left(\int_{\bT^n_{0,t}}
\hker(t-s_n,x,y_n)\cdots \hker(s_{2}-s_1,y_{2},y_1)u_{\zm}(s_1,y_1)
ds^n\right)^2 dy^n.
\end{split}
\end{equation}
\end{proposition}
\begin{proof} (1) For $|\ba|=0$,
$$
u_{\zm}(s,y) = \int_0^{\pi} \hker(s,y,z)u_0(z)dz,
$$
with $\hker$ from \eqref{eq:kernel1}.
Then
$$
\|u_{\zm}(s,\cdot)\|^2_0=\sum_{k\geq 1} e^{-2(k-1)^2s} \ u_{0,k}^2,
$$
from which \eqref{u0-L2} follows.
To derive \eqref{RF-2} when $p<\infty$,
we use the H\"{o}lder inequality and \eqref{eq:hker0};
if $p=+\infty$, then we use $\int_0^{\pi} \hker(s,y,z) dz=1$ instead of the
upper bound in \eqref{eq:hker0}.
(2) It follows from \eqref{eq:ua-gen} that, for $|\ba|\geq 1$,
\begin{equation}
\label{eq:ua-gen-1}
\begin{split}
u_{\ba}(t,x)=&\frac{1}{\sqrt{\ba!}}\sum_{\sigma\in \mathcal{P}_n}
\int_{(0,\pi)^n}\int_{\bT^n_{0,t}}
\hker(t-s_n,x,y_n)\mfk{m}_{k_{\sigma(n)}}(y_n)\\
&\cdots \hker(s_{2}-s_1,y_2,y_1) \mfk{m}_{k_{\sigma(1)}}(y_1)
u_{\zm}(s_1,y_1)\, ds^n\, dy^n.
\end{split}
\end{equation}
Using \eqref{eq:kernel} and notations
\begin{align}
\notag
\mathfrak{e}_{\ba}(y_1,\ldots,y_n)&=\frac{1}{\sqrt{n!\,\ba!}}
\sum_{\sigma\in \mathcal{P}_n}
\mfk{m}_{k_{\sigma(n)}}(y_n)\cdots \mfk{m}_{k_{\sigma(1)}}(y_1),\\
\label{Fn}
F_n(t,x;y_1,\ldots, y_n)&=
\int_{\bT^n_{0,t}}\hker(t-s_n,x,y_n)
\cdots \hker(s_{2}-s_1,y_2,y_1) u_{\zm}(s_1,y_1)\, ds^n,
\end{align}
we re-write \eqref{eq:ua-gen-1} as
\bel{FCAlfa}
u_{\ba}(t,x)=\sqrt{n!}\int_{(0,\pi)^n}
F_n(t,x,y_1,\ldots, y_n)\mathfrak{e}_{\ba}(y_1,\ldots,y_n) dy^n.
\ee
The collection $\{\mathfrak{e}_{\ba}, \ |\ba|=n\}$ is an
orthonormal basis in the symmetric part of the space
$L_2\big((0,\pi)^n\big)$, so that
$u_{\ba}$ becomes the
corresponding Fourier coefficient of the function $F_n$,
and \eqref{mod-a-gen-tx} becomes Bessel's inequality.
\end{proof}
\begin{remark}
\label{rem:sym}
It follows from \eqref{FCAlfa} that
$$
\sum_{|\boldsymbol{\alpha}|=n} |u_{\boldsymbol{\alpha}}(t,x)|^2
=n!\int_{(0,\pi)^n}\widetilde{F}_n^2(t,x;y_1,\ldots,y_n)dy^n,
$$
where
$$
\widetilde{F}_n(t,x;y_1,\ldots,y_n)=
\frac{1}{n!}\sum_{\sigma\in \mathcal{P}_n}
F_n(t,x;y_{\sigma(1)},\ldots,y_{\sigma(n)})
$$
is the symmetrization of $F_n$ from \eqref{Fn}. By the Cauchy-Schwarz
inequality,
$$
\|\widetilde{F}_n\|_{L_2((0,\pi)^n)}\leq
\|F_n\|_{L_2((0,\pi)^n)},
$$
and a separate analysis is necessary to establish a more precise connection
between $\|\widetilde{F}_n\|_{L_2((0,\pi)^n)}$ and
$\|F_n\|_{L_2((0,\pi)^n)}$.
The upper bound
\eqref{mod-a-gen-tx} is enough for the purposes of this paper.
\end{remark}
\section{Basic Regularity of the Chaos Solution}
\label{sec:RCS}
The objective of this section is to
show that, for each $t>0$, the chaos solution of \eqref{eq:main} is
a regular, as opposed to generalized, random variable, and to introduce the main
techniques necessary to establish better regularity of the solution.
\begin{theorem}
\label{th:Lp}
If $u_0\in L_2((0,\pi))$, then, for every $t>0$, the solution
of \eqref{eq:main} satisfies
\bel{eq:Lq-int}
u(t,\cdot)\in \bigcap_{q>1} L_{2,q}\big(W;L_2((0,\pi))\big).
\ee
\end{theorem}
\begin{proof}
It follows from \eqref{mod-a-gen-tx} that
\bel{mod-a-gen}
\begin{split}
&\sum_{|\boldsymbol{\alpha}|=n} |u_{\boldsymbol{\alpha}}(t,x)|^2\\
&\leq n!\int_{(0,\pi)^n}\int_{\bT^n_{0,t}}\int_{\bT^n_{0,t}}
\Big(
\hker(t-s_n,x,y_n)\cdots \hker(s_{2}-s_1,y_{2},y_1)u_{\zm}(s_1,y_1)
\\
&
\phantom{n!\int_{(0,\pi)^n}\int_{\bT^n_{0,t}}\int_{\bT^n_{0,t}}}
\times
\hker(t-r_n,x,y_n)\cdots \hker(r_{2}-r_1,y_{2},y_1)u_{\zm}(r_1,y_1)
\Big)\,
ds^n\, dr^n\ dy^n.
\end{split}
\ee
We now integrate both sides of
\eqref{mod-a-gen} with respect to $x$ and use the semigroup property
\bel{semigr}
\int_0^{\pi} \hker(t,x,y)\,\hker(s,y,z)\,dy=
\hker(t+s,x,z)
\ee
together with \eqref{eq:hker0} to evaluate the
integrals over $(0,\pi)$ on the right-hand side,
starting from the outer-most integral.
We also use \eqref{u0-L2}. The result is
\bel{aux-rf1}
\begin{split}
\sum_{|\boldsymbol{\alpha}|=n} \|u_{\boldsymbol{\alpha}}(t,\cdot)\|_0^2
& \leq n!\,\|u_0\|_{0}^2\, \big(1+\sqrt{t}\big)^{2n}
\int_{\bT^n_{0,t}}\int_{\bT^n_{0,t}}
(2t-s_n-r_n)^{-1/2}\\
&(s_n+r_n-s_{n-1}-r_{n-1})^{-1/2}
\cdots (s_2+r_2-s_1-r_1)^{-1/2}
ds^n\, dr^n.
\end{split}
\ee
Next, we use the inequality $4pq\leq (p+q)^2$, $p,q > 0$, to find
\begin{equation}
\label{ineq_1}
(p+q)^{-1/2}\leq p^{-1/4}q^{-1/4},
\end{equation}
so that
\bel{aux-time}
\begin{split}
&\int\limits_{\bT^n_{0,t}}\int\limits_{\bT^n_{0,t}}
(2t-s_n-r_n)^{-1/2}(s_n+r_n-s_{n-1}-r_{n-1})^{-1/2}
\cdots (s_2+r_2-s_1-r_1)^{-1/2}ds^ndr^n\\
& \leq \left(\int\limits_{\bT^n_{0,t}}(t-s_n)^{-1/4}(s_n-s_{n-1})^{-1/4}
\cdots (s_2-s_1)^{-1/4}ds^n\right)^2
\!\!\!\!=
\left(\frac{\big(\Gamma(3/4)\big)^n}{\Gamma((3/4)n+1)}\right)^2
t^{3n/2},
\end{split}
\ee
where $\Gamma$ is the Gamma function
$$
\Gamma(y)=\int_0^{\infty} t^{y-1}e^{-t}dt.
$$
The last equality in \eqref{aux-time} follows by induction using
\bel{beta}
\int_0^t s^p (t-s)^qds=t^{p+q+1}\,\frac{\Gamma(1+p)\Gamma(1+q)}
{\Gamma(2+p+q)},\ \ \ p,q>-1.
\ee
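For the reader's convenience, here is the induction step: writing
$J_n(t)$ for the inner integral in \eqref{aux-time} (before squaring),
we have $J_n(t)=\int_0^t(t-s_n)^{-1/4}J_{n-1}(s_n)\,ds_n$, and if
$J_{n-1}(s)=\big(\Gamma(3/4)\big)^{n-1}s^{3(n-1)/4}\big/
\Gamma\big((3/4)(n-1)+1\big)$, then \eqref{beta} with $p=3(n-1)/4$ and
$q=-1/4$ gives
$$
J_n(t)=\frac{\big(\Gamma(3/4)\big)^{n}}{\Gamma\big((3/4)n+1\big)}\,
t^{3n/4}.
$$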
Combining \eqref{mod-a-gen}, \eqref{aux-rf1}, and \eqref{aux-time},
$$
\sum_{|\boldsymbol{\alpha}|=n} \|u_{\boldsymbol{\alpha}}(t,\cdot)\|_0^2
\leq n!
\left(\frac{\big(\Gamma(3/4)\big)^n}{\Gamma((3/4)n+1)}\right)^2
\,\big(1+\sqrt{t}\big)^{2n}\,t^{3n/2}\,\|u_0\|_{0}^2.
$$
As a consequence of the Stirling formula,
$$
\Gamma(1+p)\geq \sqrt{2\pi p} \, p^p e^{-p}\ \ {\rm and } \ \
n!\leq 2\sqrt{\pi n}\, n^ne^{-n},
$$
meaning that
\bel{mod-a-gen-1-int}
\sum_{|\boldsymbol{\alpha}|=n}
\|u_{\boldsymbol{\alpha}}(t,\cdot)\|_0^2
\leq C^n(t) {n^{-n/2}}\, \|u_0\|_0^2,\ t>0,
\ee
with
$$
C(t)= \big(4/3\big)^{3/2}\,e^{1/2}\,\Gamma^2(3/4)\,
\big(1+\sqrt{t}\big)^{2}\,t^{3/2}.
$$
Since
$$
\|u(t,\cdot)\|^2_{L_{2,q}(W;L_2((0,\pi)))}=\sum_{n=0}^{\infty}
q^n \sum_{\ba\in \cJ: |\ba|=n} \|u_{\ba}(t,\cdot)\|^2_{0},
$$
and the series
$$
\sum_{n\geq 1} \frac{C^n}{n^{n/2}}=
\sum_{n\geq 1}\left(\frac{C}{\sqrt{n}}\right)^n
$$
converges for every $C>0$, we get \eqref{eq:Lq-int} and
conclude the proof of Theorem \ref{th:Lp}.
\end{proof}
\begin{corollary}
If $u_0\in L_2((0,\pi))$, then the chaos solution is an
$L_2((0,\pi))$-valued random process and, for all $t\geq 0$,
$$
\bE\|u(t,\cdot)\|_{0}^p
<\infty, \ \ 1\leq p<\infty.
$$
\end{corollary}
\begin{proof}
This follows from \eqref{eq:Lq-int} and Proposition \ref{prop:LcInLp}.
\end{proof}
We will need a slightly more general family of integrals than the one appearing
on the right-hand side of \eqref{aux-time}:
\bel{ab-Int}
\begin{split}
I_1(t;\alpha,\beta)&=\int_0^t (t-s)^{-\alpha}s^{-\beta}ds,\\
I_n(t;\alpha,\beta)&=\int_{\bT_{0,t}^n}(t-s_n)^{-\alpha}\prod_{k=2}^n (s_k-s_{k-1})^{-1/4}
s_1^{-\beta}ds^n, \ \ n=2,3,\ldots,
\end{split}
\ee
for $\alpha\in (0,1),\ \beta\in [0,1)$.
Note that
$$
I_1(t;\alpha,\beta)=\int_0^t (t-s)^{-\alpha}s^{-\beta}ds=
\frac{\Gamma(1-\alpha)\Gamma(1-\beta)}{\Gamma(2-\alpha-\beta)}\, t^{1-\alpha-\beta}
$$
and
$$
I_n(t;\alpha,\beta)=\int_0^t (t-s_n)^{-\alpha}I_{n-1}(s_n;1/4,\beta)ds_n,\ n\geq 2.
$$
By induction and \eqref{beta},
$$
I_n(t;\alpha,\beta)=
\frac{\big(\Gamma(3/4)\big)^{n-1}\Gamma(1-\alpha)\Gamma(1-\beta)}
{\Gamma\big((3n+5-4\alpha-4\beta)/4\big)}\,t^{(3n+1-4\alpha-4\beta)/4},
$$
and then
\bel{ab-Int-1}
n!\,I_n^2(t;\alpha,\beta) \leq C^n(\alpha,\beta,t)n^{-n/2};
\ee
cf. \eqref{mod-a-gen-1-int}.
Next, we show that the chaos solution of \eqref{eq:main} is,
in fact, a {\tt random field solution}, that is,
$u(t,x)$ is well-defined as a
random variable for every $t>0$, $x\in [0,\pi]$.
\begin{theorem}
\label{th:rf}
If $u_0\in L_{p}((0,\pi))$ for some $1\leq p\leq \infty$, then,
for every $t>0$ and $x\in [0,\pi]$,
\bel{eq:Lq}
u(t,x)\in \bigcap_{q>1} L_{2,q}(W;\bR).
\ee
\end{theorem}
\begin{proof}
By Proposition \ref{prop-WCNormA},
inequality \eqref{mod-a-gen} becomes
\bel{mod-a-gen-prf}
\begin{split}
&\sum_{|\boldsymbol{\alpha}|=n} |u_{\boldsymbol{\alpha}}(t,x)|^2
\leq n!\pi^{2}\big(1+\sqrt{t}\big)^2\|u_0\|^2_{L_p((0,\pi))}\\
&\int\limits_{(0,\pi)^n}\ \ \ \iint\limits_{\bT^n_{0,t}\times \bT^n_{0,t}}
\Big(
\hker(t-s_n,x,y_n)\cdots \hker(s_{2}-s_1,y_{2},y_1)s_1^{-1/2}
\\
&
\phantom{n!\int_{(0,\pi)^n}\int_{\bT^n_{0,t}}\int_{\bT^n_{0,t}}}
\times
\hker(t-r_n,x,y_n)\cdots \hker(r_{2}-r_1,y_{2},y_1)r_1^{-1/2}
\Big)\,
ds^n\, dr^n\ dy^n.
\end{split}
\ee
We now use the semigroup property \eqref{semigr}
together with \eqref{eq:hker0} to evaluate the
integrals over $(0,\pi)$ on the right-hand side of \eqref{mod-a-gen-prf}
starting from the inner-most integral with respect to $y_1$.
The result is
\bel{aux-prf1}
\begin{split}
&\sum_{|\boldsymbol{\alpha}|=n} |u_{\boldsymbol{\alpha}}(t,x)|^2
\leq n!\, \pi^{2} \big(1+\sqrt{t}\big)^{2(n+1)}\,
\|u_0\|_{L_p((0,\pi))}^2
\iint\limits_{\bT^n_{0,t}\times \bT^n_{0,t}}
(2t-s_n-r_n)^{-1/2}\\
&(s_n+r_n-s_{n-1}-r_{n-1})^{-1/2}
\cdots (s_2+r_2-s_1-r_1)^{-1/2}s_1^{-1/2}r_1^{-1/2}
ds^n\, dr^n.
\end{split}
\ee
Next, similar to \eqref{aux-time}, we use \eqref{ineq_1}
and \eqref{ab-Int} to compute
\bel{aux-time-prf}
\begin{split}
\iint\limits_{\bT^n_{0,t}\times \bT^n_{0,t}}&
(2t-s_n-r_n)^{-1/2}(s_n+r_n-s_{n-1}-r_{n-1})^{-1/2}\\
&\cdots (s_2+r_2-s_1-r_1)^{-1/2}s_1^{-1/2}r_1^{-1/2}ds^ndr^n\\
& \leq \left(\int\limits_{\bT^n_{0,t}}(t-s_n)^{-1/4}(s_n-s_{n-1})^{-1/4}
\cdots (s_2-s_1)^{-1/4}s_1^{-1/2}ds^n\right)^2\\
&=
I_n^2(t;1/4,1/2).
\end{split}
\ee
Combining \eqref{aux-prf1}
with \eqref{aux-time-prf} and \eqref{ab-Int-1},
\bel{final-prf}
\sum_{|\boldsymbol{\alpha}|=n}
|u_{\boldsymbol{\alpha}}(t,x)|^2
\leq C^n(t) {n^{-n/2}}\, \|u_0\|_{L_p((0,\pi))}^2,
\ee
for a suitable $C(t)$.
Then \eqref{final-prf} leads to \eqref{eq:Lq} in the same way as
\eqref{mod-a-gen-1-int} lead to \eqref{eq:Lq-int}, completing the
proof of Theorem \ref{th:rf}.
\end{proof}
\begin{corollary}
For every $t>0$, $x\in [0,\pi]$, and $1\leq p<\infty$,
$$
\bE|u(t,x)|^p<\infty.
$$
\end{corollary}
\begin{proof}
This follows from \eqref{eq:Lq} and
Proposition \ref{prop:LcInLp}.
\end{proof}
Finally, we establish a version of the maximum principle for the chaos solution.
\begin{theorem}
\label{th:positivity}
If $u_0(x)\geq 0$ for all $x\in [0,\pi],$
and $u=u(t,x)$ is a random field solution
of \eqref{eq:main} such that
$$
u\in L_2\big(\Omega\times [0,T], L_p((0,\pi))\big),
$$
then, with probability one,
$u(t,x)\geq 0$ for all $t\in [0,T]$ and $x\in [0,\pi]$.
\end{theorem}
\begin{proof}
Let $h=h(x)$ be a smooth function with compact support
in $(0,\pi)$ and define
$$
V(t,x;h)=\bE \left(u(t,x)
\exp\left(\dot{W}(h)-\frac{1}{2}\|h\|^2_{L_2(0,\pi)}\right)
\right).
$$
Writing
$h(x)=\sum_{k=1}^{\infty} h_k\mfk{m}_k(x)$
and
$h^{\ba}=\prod_k h_k^{\alpha_k},$
we find
$$
V(t,x;h)=\sum_{\ba\in \cJ}
\frac{h^{\ba}u_{\ba}(t,x)}{\sqrt{\ba!}}.
$$
By \eqref{eq:ppgA}, the function
$V=V(t,x;h)$ satisfies
$$
\frac{\partial V(t,x;h)}{\partial t} =
\frac{\partial^2 V(t,x;h)}{\partial x^2} +
h(x)V(t,x;h),\ 0<t\leq T, \ x\in (0,\pi),
$$
with $V(0,x;h)=u_0(x)$ and $V_x(t,0;h)=V_x(t,\pi;h)=0$,
and then the
maximum principle implies
$V(t,x;h)\geq 0$ for all $t\in [0,T], \ x\in [0,\pi]$.
The conclusion of the theorem now follows, because the
collection of the random variables
$$
\left\{\exp\left(\dot{W}(h)-\frac{1}{2}\|h\|^2_{L_2((0,\pi))}\right),
\ \ h
\ {\rm smooth\ with \ compact\ support \ in } \ (0,\pi)\right\}
$$
is dense in $L_2(W;\bR)$; cf. \cite[Lemma 4.3.2]{Oksendal}.
\end{proof}
\begin{remark}
If $u=u(t,x)$ is continuous in $(t,x)$, then
there exists a single probability-one subset $\Omega'$ of
$\Omega$ such that $u=u(t,x,\omega)\geq 0$ for all
$t\in [0,T]$, $x\in [0,\pi]$, and $\omega\in \Omega'$.
\end{remark}
\section{Equation With Additive Noise}
\label{sec:AdN}
The objective of this section is to establish the benchmark
space-time regularity result for \eqref{eq:main} by considering the
corresponding equation with additive noise:
\bel{eq:add-n}
\begin{split}
U_t&=U_{xx}+\dot{W}(x),\ t>0,\ x\in (0,\pi),\\
U(0,x)&=0,\ U_x(t,0)=U_x(t,\pi)=0.
\end{split}
\ee
By the variation of parameters formula, the solution of
\eqref{eq:add-n} is
$$
U(t,x)=\int_0^t\int_0^{\pi} \hker(s,x,y)dW(y)ds.
$$
Using \eqref{eq:kernel1},
\begin{align}
\label{eq:add-n-sol}
U(t,x)&=\frac{t}{\pi}\zeta_0+\frac{2}{\pi}
\sum_{k\geq 1} k^{-2}\big(1-e^{-k^2t})\cos(kx)\zeta_k,\\
\label{AddN-Ux}
U_x(t,x)&=-\frac{2}{\pi}
\sum_{k\geq 1} k^{-1}\big(1-e^{-k^2t}\big)\sin(kx)\zeta_k,
\end{align}
where
$$
\zeta_0=W(\pi),\ \ \ \zeta_k=\int_0^{\pi} \cos(kx)dW(x),\ k\geq 1,
$$
are independent Gaussian random variables with zero mean.
In particular, since the $\zeta_k$ are independent, have mean zero, and
the corresponding weighted variances are summable, the series on the
right-hand sides of \eqref{eq:add-n-sol} and \eqref{AddN-Ux} converge
with probability one for every $t>0$ and $x\in [0,\pi]$.
Let us now recall the necessary definitions of the H\"older spaces.
For a function $f=f(x), \ x\in (x_1,x_2)$,
$-\infty<x_1<x_2<+\infty$, we write
$$
f\in \mathcal{C}^{\alpha}((x_1,x_2)), \ 0<\alpha\leq 1,
$$
or, equivalently, $f$ is H\"older$(\alpha)$, if
$$
\sup_{x,y\in (x_1,x_2),x\not=y} \frac{|f(x)-f(y)|}{|x-y|^{\alpha}}<\infty.
$$
Similarly,
$$
f\in \mathcal{C}^{1+\alpha}((x_1,x_2))
$$
if $f$ is continuously differentiable on $[x_1,x_2]$ and
$$
\sup_{x,y\in (x_1,x_2),x\not=y} \frac{|f'(x)-f'(y)|}{|x-y|^{\alpha}}<\infty.
$$
We also write $f\in \mathcal{C}^{\beta-}((x_1,x_2))$, or
$f$ is almost H\"older$(\beta)$, if
$f\in \mathcal{C}^{\beta-\varepsilon}((x_1,x_2))$ for every
$\varepsilon\in (0,\beta)$.
The main tool for establishing H\"older regularity of
random processes is the Kolmogorov continuity criterion:
\begin{theorem}
\label{th:KCC}
Let $T$ be a positive real number and $X=X(t)$, a real-valued random process
on $[0,T]$. If there exist
numbers $C>0$, $p>1$, and $q\geq p$ such that, for all $t,s\in [0,T]$,
\begin{equation*}
\label{mean-cont0}
\bE |X(t)-X(s)|^q\leq C|t-s|^p,
\end{equation*}
then there exists a modification of $X$ with
sample trajectories that are almost H\"{o}lder$\big((p-1)/q\big)$.
\end{theorem}
\begin{proof}
See, for example Karatzas and Shreve \cite[Theorem 2.2.8]{KarShr}.
\end{proof}
We now apply Theorem \ref{th:KCC} to the solution of equation
\eqref{eq:add-n}.
\begin{theorem}
\label{th:AdN-reg}
The random field $U=U(t,x)$ defined in \eqref{eq:add-n-sol}
satisfies
\begin{align}
\label{AddN-time}
U(\cdot,x)&\in \mathcal{C}^{3/4-}((0,T)),\ x\in [0,\pi],\ T>0;\\
\label{AddN-tx}
U_x(\cdot,x)& \in \mathcal{C}^{1/4-}((0,T)),\ x\in [0,\pi],\ T>0;\\
\label{AddN-x}
U(t,\cdot)& \in \mathcal{C}^{3/2-}((0,\pi)),\ t>0.
\end{align}
\end{theorem}
\begin{proof}
For every $t>0$ and $x,y\in [0,\pi]$, the random variables
$\tilde{U}(t,x)=U(t,x)-\zeta_0t/\pi$ and $U_x(t,x)$
are Gaussian, so that, by Theorem \ref{th:KCC}, statements
\eqref{AddN-time}, \eqref{AddN-tx}, and \eqref{AddN-x} will
follow from
\begin{align}
\label{AddN-timeE}
\bE|\tilde{U}(t+h,x)-\tilde{U}(t,x)|^2\leq &C(\varepsilon) h^{3/2-\varepsilon},\
\varepsilon\in (0,3/2),\\
\label{AddN-txE}
\bE|U_x(t+h,x)-U_x(t,x)|^2\leq &C(\varepsilon) h^{1/2-\varepsilon},\
\varepsilon\in (0,1/2),\\
\label{AddN-xE}
\bE|U_x(t,x+h)-U_x(t,x)|^2\leq &C(\varepsilon) h^{1-\varepsilon},\
\varepsilon\in (0,1),
\end{align}
respectively. Indeed, the increments in question are Gaussian, so that,
for every $m\geq 1$, $\bE|X|^{2m}\leq C_m\big(\bE|X|^2\big)^{m}$; a
bound of order $h^{\delta}$ on the second moment therefore verifies the
hypothesis of Theorem \ref{th:KCC} with $q=2m$ and $p=m\delta$, and
letting $m\to\infty$ yields H\"older exponents arbitrarily close to
$\delta/2$.
Using \eqref{eq:add-n-sol} and \eqref{AddN-Ux}, and keeping in
mind that $\zeta_k,\ k\geq 1,$ are iid Gaussian with mean zero
and variance $\pi/2$,
\begin{align}
\label{AddN-timeE1}
\bE|\tilde{U}(t+h,x)-\tilde{U}(t,x)|^2=&\frac{2}{\pi}
\sum_{k\geq 1} k^{-4}e^{-2k^2t}(1-e^{-k^2h})^2\cos^2(kx)
;\\
\label{AddN-txE1}
\bE|U_x(t+h,x)-U_x(t,x)|^2=&\frac{2}{\pi}
\sum_{k\geq 1} k^{-2}e^{-2k^2t}(1-e^{-k^2h})^2\cos^2(kx);\\
\label{AddN-xE1}
\bE|U_x(t,x+h)-U_x(t,x)|^2=&\frac{2}{\pi}
\sum_{k\geq 1} k^{-2}(1-e^{-k^2t})^2\big(\sin(k(x+h))-\sin(kx)\big)^2.
\end{align}
We also use
\begin{align}
\label{ineq-exp}
1-e^{-\theta}\leq \theta^{\alpha}, \ 0<\alpha\leq 1,\ \theta>0,\\
\label{ineq-trig}
\ \sin \theta\leq \theta^{\alpha},\ 0<\alpha\leq 1,\ \theta>0.
\end{align}
Then
\begin{itemize}
\item Inequality \eqref{AddN-timeE} follows from
\eqref{IntComp-g}, \eqref{AddN-timeE1},
and \eqref{ineq-exp} by taking $\alpha<3/4$;
\item Inequality \eqref{AddN-txE} follows from
\eqref{IntComp-g}, \eqref{AddN-txE1},
and \eqref{ineq-exp} with $\alpha<1/4$;
\item Inequality \eqref{AddN-xE} follows from
\eqref{IntComp-g}, \eqref{AddN-xE1},
and \eqref{ineq-trig} with $\alpha<1/2$.
\end{itemize}
\end{proof}
\begin{remark}
\label{rem:compensate}
Similar to \cite[Theorem 3.3]{DK14} in the case of space-time
white noise, equalities \eqref{eq:add-n-sol} and \eqref{AddN-Ux} imply
that, for every $t>0$, the random field $U(t,x)$ is infinitely differentiable
in $t$, and the random field $U_x(t,x)+B(x)$ is infinitely differentiable in
$x$, where
$$
B(x)=\frac{2}{\pi} \sum_{k\geq 1} k^{-1}\zeta_k\sin(kx)
$$
is a Brownian bridge on $[0,\pi]$.
\end{remark}
\section{Time Regularity of the Chaos Solution}
\label{sec:Time}
The objective of this section is to show that the chaos solution of
\eqref{eq:main} has a modification that is almost H\"older$(3/4)$
in time. To simplify the presentation,
we will not distinguish different modifications of the solution.
\begin{theorem}
\label{th:TimeReg}
If $u_0\in \mathcal{C}^{3/2}((0,\pi))$, then the
chaos solution of \eqref{eq:main} satisfies
$$
u(\cdot,x)\in \mathcal{C}^{3/4-}\left((0,T)\right)$$
for every $T>0$ and $x\in [0,\pi]$.
\end{theorem}
\begin{proof} We need to show that, for every $x\in [0,\pi]$,
$h\in(0,1)$, $\varepsilon\in (0,3/4)$, $t\in (0,T)$, and
$p\in (1,+\infty)$,
$$
\Big(\bE|u(t+h,x)-u(t,x)|^p\Big)^{1/p} \leq C(p,T,\varepsilon) h^{3/4-\varepsilon}.
$$
Then the statement of the theorem will follow from Theorem \ref{th:KCC}.
Recall that $u_{(\bold{0})}(t,x)$ is the solution of
$$
\frac{\partial u_{(\bold{0})}(t,x)}{\partial t}
=\frac{\partial^2u_{(\bold{0})}(t,x)}{\partial x^2},\ u_{(\bold{0})}(0,x)=u_0(x),
$$
with boundary conditions
$$
\frac{\partial u_{(\bold{0})}(t,0)}{\partial x}=
\frac{\partial u_{(\bold{0})}(t,\pi)}{\partial x}=0,
$$
that is,
$$
\frac{\partial u_{(\bold{0})}(t,x)}{\partial t}=(1-\Lambda^2)u_{(\bold{0})}(t,x);
$$
the operator $\Lambda$ is defined in \eqref{LambdaOp}.
Applying \cite[Theorem 5.3]{LSU} to equation
$$
U_t(t,x)=(1-\Lambda^2)U(t,x),\ U(0,x)=\Lambda^{-1}u_0(x),
$$
we conclude that, for each $x\in [0,\pi]$,
\bel{det-time-reg}
u_{(\bold{0})}(\cdot,x)\in \mathcal{C}^{3/4}((0,T)).
\ee
For $n\geq 1$ and $h\in (0,1)$, similar to \eqref{mod-a-gen-tx},
\begin{equation}
\label{eq:Timedifference}
\begin{split}
&\sum_{|\boldsymbol{\alpha}|=n}
\left|u_{\boldsymbol{\alpha}}(t+h,x)-u_{\boldsymbol{\alpha}}(t,x)\right|^2\\
\leq &
n!\int_{(0,\pi)^n}
\Bigg(\int_{\mathbb{T}_{0,t+h}^n}
\hker(t+h-s_n,x,y_n)\cdots \hker(s_{2}-s_1,y_{2},y_1)u_{(\bold{0})}(s_1,y_1)
ds^n\\
-&
\int_{\mathbb{T}_{0,t}^n}
\hker(t-s_n,x,y_n)\cdots \hker(s_{2}-s_1,y_{2},y_1)u_{(\bold{0})}(s_1,y_1)
ds^n\Bigg)^2 dy^n.
\end{split}
\end{equation}
We add and subtract
$$
\int_{\mathbb{T}_{0,t}^n} \hker(t+h-s_n,x,y_n)
\cdots \hker(s_2-s_1,y_2,y_1)u_{(\bold{0})}(s_1,y_1)ds^n
$$
inside the square on the right-hand side of \eqref{eq:Timedifference},
and then use $(p+q)^2 \leq 2p^2+2q^2$ to re-write
\eqref{eq:Timedifference} as
\begin{equation}
\label{eq:Timedifferenceineq}
\begin{split}
&\sum_{|\boldsymbol{\alpha}|=n}
\left|u_{\boldsymbol{\alpha}}(t+h,x)-u_{\boldsymbol{\alpha}}(t,x)\right|^2\\
\leq& 2n!\int_{(0,\pi)^n}
\Bigg(\int_t^{t+h}\int_{\mathbb{T}_{0,s_n}^{n-1}}
\hker(t+h-s_n,x,y_n)\cdots \hker(s_{2}-s_1,y_{2},y_1)
u_{(\bold{0})}(s_1,y_1)
ds^n\Bigg)^2dy^n\\
+&
2n!\int_{(0,\pi)^n}\Bigg(\int_{\mathbb{T}_{0,t}^n}
\Big[\hker(t+h-s_n,x,y_n)-\hker(t-s_n,x,y_n)\Big]
\hker(s_n-s_{n-1},y_n,y_{n-1})\\
&
\cdots \hker(s_{2}-s_1,y_{2},y_1)u_{(\bold{0})}(s_1,y_1)
ds^n\Bigg)^2 dy^n.
\end{split}
\end{equation}
To estimate the first term on the right-hand side of \eqref{eq:Timedifferenceineq},
we follow computations
similar to \eqref{aux-time} and \eqref{aux-prf1}, and
use
\bel{time-reg-aux000}
\hker(t,x,y)\geq 0,\ \
\|u_{\mathbf{(0)}}(s,\cdot)\|_ {L_{\infty}((0,\pi))} \leq \|u_0\|_{L_{\infty}((0,\pi))},\ \
s\geq 0,
\ee
as well as \eqref{ab-Int-1}:
\begin{equation}
\label{Time-FirstTerm}
\begin{split}
&2n!\int\limits_{(0,\pi)^n}
\Bigg(\int\limits_t^{t+h}\int\limits_{\mathbb{T}_{0,s_n}^{n-1}}
\hker(t+h-s_n,x,y_n)\cdots \hker(s_{2}-s_1,y_{2},y_1)
u_{(\bold{0})}(s_1,y_1)
ds^n\Bigg)^2dy^n\\
\leq& 2n!\|u_{0}\|_{L_{\infty}((0,\pi))}^2
\Bigg(\int\limits_t^{t+h}\!\int\limits_{\mathbb{T}_{0,s_n}^{n-1}}\!\!
(t+h-s_n)^{-1/4} (s_n-s_{n-1})^{-1/4}\cdots (s_2-s_1)^{-1/4}
ds^n\Bigg)^2\\
\leq& 2n!\|u_{0}\|_{L_{\infty}((0,\pi))}^2
\Bigg(
\int\limits_t^{t+h}(t+h-s_n)^{-1/4} I_{n-1}(s_n;1/4,1/4)ds_n
\Bigg)^2\\
&\leq
\|u_{0}\|_{L_{\infty}((0,\pi))}^2 \, C^n(t) n^{-n/2} \, h^{3/2},
\end{split}
\end{equation}
with a suitable $C(t)$.
To estimate the second term on the right-hand side of \eqref{eq:Timedifferenceineq},
define
\begin{equation*}
\begin{split}
\mathcal{I}(t,h,s,r,x)=\int_0^{\pi}& \Big(\hker(t+h-s,x,y)-\hker(t-s,x,y)\Big)\\
&\times \Big(\hker(t+h-r,x,y)-\hker(t-r,x,y)\Big)dy.
\end{split}
\end{equation*}
By \eqref{eq:kernel1},
$$
\mathcal{I}(t,h,s,r,x)
=\frac{2}{\pi}\sum_{k\geq 1} (e^{-k^2h}-1)^2
e^{-k^2(t-s)-k^2(t-r)}\cos^2(kx).
$$
Using \eqref{ineq-exp} and taking
$0<\gamma<3/4$, we conclude that
$$
\mathcal{I}(t,h,s,r,x)\leq
h^{2\gamma}\sum_{k\geq 1} k^{4\gamma}e^{-k^2(2t-s-r)}.
$$
Then \eqref{IntComp-g} implies
$$
\mathcal{I}(t,h,s,r,x)\leq
(4\gamma+1)^{4\gamma+1}\, h^{2\gamma}\,(2t-s-r)^{-(1/2+2\gamma)}.
$$
Note that $1/2+2\gamma<2$.
We now carry out computations similar to \eqref{aux-prf1}, and
use \eqref{ab-Int-1} and \eqref{time-reg-aux000}:
\begin{equation}
\label{eq:2ndTerm}
\begin{split}
&2n!\int\limits_{(0,\pi)^n}\Bigg(\ \int\limits_{\mathbb{T}_{0,t}^n}
\Big[\hker(t+h-s_n,x,y_n)-\hker(t-s_n,x,y_n)\Big]\\
&
\times \hker(s_{n}-s_{n-1},y_{n},y_{n-1})
\cdots \hker(s_{2}-s_1,y_{2},y_1)u_{(\bold{0})}(s_1,y_1)
ds^n\Bigg)^2 dy^n\\
&\leq
2n!\|u_{0}\|_{L_{\infty}((0,\pi))}^2 (1+\sqrt{t})^{2n}\\
&\iint\limits_{\mathbb{T}^{n}_{0,t}\times\mathbb{T}^{n}_{0,t}}
\!\!\!\mathcal{I}(t,h,s_n,r_n,x)
\prod^{n-1}_{k=1}(s_{k+1}+r_{k+1}-s_k-r_k)^{-1/2}
(s_{1}+r_{1})^{-1/2}ds^ndr^n\\
&\leq h^{2\gamma} \|u_{0}\|_{L_{\infty}((0,\pi))}^2
\frac{4(4\gamma+1)^{4\gamma+1}}{\pi}\, (1+\sqrt{t})^{2n}\,
n!\,I_n^2\big(t;1/4+\gamma, 1/4\big)\\
&\leq
\|u_{0}\|_{L_{\infty}((0,\pi))}^2 C^n(t) n^{-n/2} \, h^{2\gamma},
\end{split}
\end{equation}
with a suitable $C(t)$.
Combining \eqref{Time-FirstTerm} and \eqref{eq:2ndTerm},
\bel{t-incr}
\sum_{|\boldsymbol{\alpha}|=n}
\left|u_{\boldsymbol{\alpha}}(t+h,x)-
u_{\boldsymbol{\alpha}}(t,x)\right|^2\leq h^{3/2-2\varepsilon}\,
\|u_{0}\|_{L_{\infty}((0,\pi))}^2C^n(t,\varepsilon)n^{-n/2},
\ee
$\varepsilon \in (0,3/4),\ n\geq 1,$ and then, by \eqref{det-time-reg} and Proposition \ref{prop:LcInLp},
\begin{equation*}
\begin{split}
\Big(\mathbb{E}\left|u(t+h,x)-u(t,x)\right|^p\Big)^{1/p}
\leq&\sum_{n=0}^{\infty}(p-1)^{n/2}\left(\sum_{|\boldsymbol{\alpha}|=n}
\left|u_{\boldsymbol{\alpha}}(t+h,x)-
u_{\boldsymbol{\alpha}}(t,x)\right|^2\right)^{1/2}\\
\leq& C(p,T,\varepsilon)\|u_{0}\|_{L_{\infty}((0,\pi))}\,h^{3/4-\varepsilon},
\end{split}
\end{equation*}
for all $1< p<+\infty,\ t\in (0,T),\ \varepsilon \in (0,3/4),\ 0<h<1$,
completing the proof of Theorem \ref{th:TimeReg}.
\end{proof}
\section{Space Regularity of the Chaos Solution}
\label{sec:Space}
The objective of this section is to show that, for every $t>0$,
the chaos solution
$$
u(t,x)=\sum_{\ba\in \mathcal{J}} u_{\ba}(t,x)\xi_{\ba}
$$
of \eqref{eq:main} has a modification that is
in $\mathcal{C}^{3/2-}((0,\pi))$.
As in the previous section, we will not distinguish between different modifications of $u$.
To streamline the presentation, we will break the argument in two parts:
existence of $u_x$ as a random field, followed by H\"{o}lder$(1/2-)$
regularity of $u_x$ in space.
Define
$$
v(t,x)=\sum_{\ba\in \mathcal{J}} v_{\ba}(t,x)\xi_{\ba},
$$
where
\begin{equation*}
\label{eq:va-gen}
\begin{split}
v_{\ba}\left(t,x\right) &=\frac{\partial u_{\ba}(t,x)}{\partial x}\\
&=
\frac{1}{\sqrt{\ba!}}\sum_{\sigma
\in{\mathcal{P}}_{n}}\int_{(0,\pi)^n}\int_{\bT^n_{0,t}}
\hker_x(t-s_{n},x,y_n)\mfk{m}_{k_{\sigma(n)}}(y_n)\\
\times &\hker(s_{n}-s_{n-1},y_n,y_{n-1})\mfk{m}_{k_{\sigma(n-1)}}(y_{n-1})
\cdots\hker(s_{2}-s_{1},y_2,y_1)\mfk{m}_{k_{\sigma(1)}}(y_1)
u_{(\bold{0})}(s_1,y_1) \, ds^n\,dy^n.
\end{split}
\end{equation*}
\begin{theorem}
Assume that $u_0 \in L_p((0,\pi))$ for some $1\leq p\leq \infty$.
Then, for every $t>0$ and $x\in (0,\pi) $,
$$
u_x(t,x) \in \bigcap_{q>1} L_{2,q}(W;\mathbb{R}).
$$
\end{theorem}
\begin{proof}
By construction, $v=u_x$ as generalized processes. It remains to
show that
\bel{vx-reg}
v(t,x) \in \bigcap_{q>1} L_{2,q}(W;\mathbb{R}).
\ee
Similar to \eqref{mod-a-gen-tx},
\begin{equation}
\label{eq:vxn}
\begin{split}
\sum_{|\ba|=n} |v_{\ba}(t,x)|^2
\leq & n!\int\limits_{(0,\pi)^n}\Bigg(\int\limits_{\mathbb{T}_{0,t}^n} \hker_x(t-s_n,x,y_n)\hker(s_n-s_{n-1},y_n,y_{n-1})\\
\cdots &\hker(s_{2}-s_1,y_{2},y_1)u_{(\bold{0})}(s_1,y_1)ds^n\Bigg)^2dy^n.
\end{split}
\end{equation}
Using \eqref{Cpst},
\begin{equation*}
\begin{split}
&\sum_{|\ba|=n} |v_{\ba}(t,x)|^2\\
\leq& n!\|u_{0}\|_{L_{p}((0,\pi))}^2\pi^2(1+\sqrt{t})^2
\int\limits_{(0,\pi)^n}\!\!\Bigg(\ \int\limits_{\mathbb{T}^{n}_{0,t}}
\hker_x(t-s_{n},x,y_n)\hker(s_{n}-s_{n-1},y_{n},y_{n-1})\\
&\ \ \ \ \ \
\cdots \hker(s_{2}-s_1,y_{2},y_1)s_1^{-1/2}ds^{n}\Bigg)^2dy^n.
\end{split}
\end{equation*}
By \eqref{eq:kernel1},
$$
\int_0^{\pi}\hker_x(t,x,y)\hker_x(s,x,y)dy
=\frac{2}{\pi}\sum_{k=1}^{\infty} k^2 e^{-k^2(t+s)}\sin^2(kx)
\leq \frac{27}{(t+s)^{3/2}},
$$
and then
\begin{eqnarray*}
&&\sum_{|\ba|=n} |v_{\ba}(t,x)|^2
\leq 27n! \pi^{2}(1+\sqrt{t})^{2n}
\|u_{0}\|_{L_p((0,\pi))}^2\\
&\times
& \left(\int_{\mathbb{T}^{n}_{0,t}} (t-s_{n})^{-3/4}(s_{n}-s_{n-1})^{-1/4}
\cdots (s_2-s_1)^{-1/4}s_1^{-1/2}ds^{n}\right)^2\\
&=&27\pi^{2}(1+\sqrt{t})^{2n}\|u_{0}\|_{L_p((0,\pi))}^2\
n!\,I_n^2\big(t;3/4,1/2\big)
\leq \|u_{0}\|_{L_p((0,\pi))}^2 C^n(t)n^{-n/2}
\end{eqnarray*}
with a suitable $C(t)$; cf. \eqref{ab-Int-1}. Then \eqref{vx-reg} follows
in the same way as \eqref{eq:Lq-int} followed from
\eqref{mod-a-gen-1-int}.
\end{proof}
\begin{remark}Similar to the proof of
Theorem \ref{th:TimeReg}, an interested reader can confirm that
$u_x(\cdot,x)\in \mathcal{C}^{1/4-}([\delta,T])$ for every $x\in [0,\pi]$
and $T>\delta>0$.
\end{remark}
\begin{theorem}
\label{th:SpaceReg}
If $u_0\in L_p((0,\pi))$ for some $1\leq p\leq \infty$,
then, for every $t>0$,
$$
u_x(t,\cdot)\in \mathcal{C}^{1/2-}((0,\pi)).
$$
\end{theorem}
\begin{proof}
We continue to use the notation $v=u_x$.
Then the objective is to show that, for every sufficiently small $h>0$ and
every $x\in (0,\pi)$, $t>0$, $p>1$, and $\gamma \in (0,1/2),$
\bel{KC-vincr}
\Big(\,\bE |v(t,x+h)-v(t,x)|^p \Big)^{1/p}
\leq C(t,p,\gamma) h^{\gamma};
\ee
then the conclusion of the theorem will follow from the Kolmogorov continuity
criterion.
Similar to \eqref{mod-a-gen-tx},
\begin{equation*}
\begin{split}
&\sum_{|\ba|=n}
\left|v_{\ba}(t,x+h)
-v_{\ba}(t,x)\right|^2\\
\leq &
n!\int_{(0,\pi)^n}
\Bigg(\int_{\mathbb{T}^n_{0,t}}
\left[\hker_x(t-s_n,x+h,y_n)
-\hker_x(t-s_n,x,y_n)\right]\\
\times& \hker(s_{n}-s_{n-1},y_{n},y_{n-1}) \cdots
\hker(s_{2}-s_1,y_{2},y_1)u_{(\bold{0})}(s_1,y_1)
ds^n\Bigg)^2 dy^n,
\end{split}
\end{equation*}
and then
\begin{equation}
\label{ux-incr}
\begin{split}
&\sum_{|\ba|=n}
\left|v_{\ba}(t,x+h)-v_{\ba}(t,x)\right|^2\\
\leq& n!\pi^2\big(1+\sqrt{t}\,\big)^2\|u_0\|_{L_p((0,\pi))}^2
\int\limits_{(0,\pi)^n}\!\!
\Bigg(\,\int\limits_{\mathbb{T}^{n}_{0,t}}
\left[\hker_x(t-s_{n},x+h,y_n)
-\hker_x(t-s_{n},x,y_n)\right]\\
\times& \hker(s_{n}-s_{n-1},y_{n},y_{n-1})
\cdots \hker(s_{2}-s_1,y_{2},y_1)s_1^{-1/2}
ds^{n}\Bigg)^2dy^n;
\end{split}
\end{equation}
cf. \eqref{eq:vxn}.
Next, define
\begin{equation*}
\begin{split}
&J(t,s,r,x,y,h)\\
&=
\int_0^{\pi} \left(\hker_x(t-s,x+h,y)
-\hker_x(t-s,x,y)\right)
\left(\hker_x(t-r,x+h,y)
-\hker_x(t-r,x,y)\right)dy.
\end{split}
\end{equation*}
From \eqref{eq:kernel1},
\begin{equation*}
\begin{split}
J(t,s,r,x,y,h)
=\frac{2}{\pi}
\sum_{k\geq 1} k^2e^{-k^2(2t-s-r)}
\big(\sin(k(x+h))-\sin(kx)\big)^2.
\end{split}
\end{equation*}
Using
$$
\sin \varphi - \sin \psi = 2\cos((\varphi+\psi)/2)\, \sin ((\varphi-\psi)/2),
$$
inequality \eqref{ineq-trig}, and \eqref{IntComp-g},
and taking $\gamma\in (0,1/2)$,
$$
J(t,s,r,x,y,h)\leq C(\gamma)\,h^{2\gamma} (2t-s-r)^{-3/2-\gamma}.
$$
Note that
$$
3/2+\gamma<2.
$$
After expanding the square and using the semigroup property,
\eqref{ux-incr} becomes
\begin{equation}
\label{sp-incr}
\begin{split}
&\sum_{|\ba|=n}
\left|v_{\ba}(t,x+h)
-v_{\ba}(t,x)\right|^2\\
&\leq C(\gamma)\, h^{2\gamma}\,n!\,
\pi^2\big(1+\sqrt{t}\,\big)^{2n}\|u_0\|_{L_p((0,\pi))}^2\\
&\iint\limits_{\mathbb{T}^{n}_{0,t}\times \mathbb{T}^{n}_{0,t}}
(2t-s_{n}-r_{n})^{-3/2-\gamma}
\prod^{n-1}_{k=1}(s_{k+1}+r_{k+1}-s_k-r_k)^{-1/2}
s_{1}^{-1/2}r_1^{-1/2}ds^{n}dr^{n}\\
&\leq C(\gamma)\, h^{2\gamma}\
\pi^2\big(1+\sqrt{t}\,\big)^{2n}\|u_0\|_{L_p((0,\pi))}^2\
n!\,I_n^2\big(t;3/4+(\gamma/2),1/2\big) \leq
h^{2\gamma}\,\|u_0\|_{L_p((0,\pi))}^2\,C^n(t,\gamma)n^{-n/2};
\end{split}
\end{equation}
cf. \eqref{aux-rf1} and \eqref{ab-Int-1}. Then Proposition \ref{prop:LcInLp}
implies \eqref{KC-vincr}, completing the proof of Theorem \ref{th:SpaceReg}.
\end{proof}
\section{The Fundamental Chaos Solution}
\label{sec:FS}
\begin{definition}
The fundamental chaos solution of \eqref{eq:main} is the
collection of functions
$$
\{\mathfrak{P}_{\ba}(t,x,y),\ t>0, \ x,y\in [0,\pi],\ \ba\in \cJ\}
$$
defined by
\begin{equation}
\label{eq:ua-fund-1}
\begin{split}
\mathfrak{P}_{\zm}(t,x,y)&=\hker(t,x,y),\\
\mathfrak{P}_{\ba}(t,x,y)&=\frac{1}{\sqrt{\ba!}}\sum_{\sigma\in \mathcal{P}_n}
\int_{(0,\pi)^n}\int_{\bT^n_{0,t}}
\hker(t-s_n,x,y_n)\mfk{m}_{k_{\sigma(n)}}(y_n)\cdots\\
&\cdots \hker(s_{2}-s_1,y_2,y_1) \mfk{m}_{k_{\sigma(1)}}(y_1)
\hker(s_1,y_1,y)\, ds^n\, dy^n.
\end{split}
\end{equation}
\end{definition}
The intuition behind this definition is that \eqref{eq:ua-fund-1} is
the chaos solution of \eqref{eq:main} with initial condition $u_0(x)=\delta(x-y)$.
More precisely, it follows from \eqref{eq:ua-gen-1} that if
\bel{eq:FS-Main}
\mathfrak{P}(t,x,y)=\sum_{\ba\in \cJ} \mathfrak{P}_{\ba}(t,x,y)
\xi_{\ba},
\ee
then
\bel{FCS-det}
u(t,x)=\int_0^{\pi}
\mathfrak{P}(t,x,y)u_{0}(y)dy
\ee
is the chaos solution of \eqref{eq:main} with non-random initial condition
$u(0,x)=u_0(x)$. Before
developing these ideas any further, let us apply the results of
Sections \ref{sec:CSol}--\ref{sec:Space}
to the random function $\mathfrak{P}$.
\begin{theorem}
\label{prop:FS}
The function $\mathfrak{P}$ defined by \eqref{eq:FS-Main} has
the following properties:
\begin{align}
\label{FS-reg1}
&\mathfrak{P}(t,x,y) \in \bigcap_{q>1} L_{2,q}(W;\mathbb{R}),\
t>0,\ \ {\rm \ uniformly\ in \ } x,y\in [0,\pi];\\
\label{FS-sym-pos}
&\mathfrak{P}(t,x,y)\geq 0,\ \ \mathfrak{P}(t,x,y)=\mathfrak{P}(t,y,x),
\ \ \ t>0,\ x,y\in [0,\pi];\\
\label{FS-reg2}
& \mathfrak{P}(\cdot,x,y) \in \mathcal{C}^{3/4-}((\delta, T)),
0<\delta<T,\ \ x,y\in [0,\pi];\\
\label{FS-reg3}
&\mathfrak{P}(t,\cdot;y) \in \mathcal{C}^{3/2-}((0,\pi)),\
t>0,\ y\in [0,\pi].
\end{align}
\end{theorem}
\begin{proof} Using \eqref{eq:hker0}, \eqref{mod-a-gen},
\eqref{aux-time}, and \eqref{ab-Int-1},
\begin{equation*}
\begin{split}
&\sum_{|\boldsymbol{\alpha}|=n}
|\mathfrak{P}_{\boldsymbol{\alpha}}(t,x,y)|^2
\leq n!\int\limits_{(0,\pi)^n}
\int\limits_{\bT^n_{0,t}}\int\limits_{\bT^n_{0,t}}
\Big(
\hker(t-s_n,x,y_n)\cdots \hker(s_{2}-s_1,y_{2},y_1)\hker(s_1,y_1,y)\\
&
\qquad
\times \hker(t-r_n,x,y_n)\cdots \hker(r_{2}-r_1,y_{2},y_1)\hker(r_1,y_1,y)
\Big)\,
ds^n\, dr^n\ dy^n\\
&\qquad
\leq n! (1+\sqrt{t})^{2(n+1)} I_n^2\big(t;1/4,1/2\big)\leq \frac{C^n(t)}{n^{n/2}},
\end{split}
\end{equation*}
from which \eqref{FS-reg1} follows.
To establish \eqref{FS-sym-pos},
note that \eqref{FCS-det} and Theorem \ref{th:positivity} imply $\mathfrak{P}\geq 0$,
whereas, by \eqref{mod-a-gen-tx}, using $\hker(t,x,y)=\hker(t,y,x)$ and
a suitable change of the time variables in the integrals,
$$
\sum_{|\ba|=n}
|\mathfrak{P}_{\boldsymbol{\alpha}}(t,x,y)-
\mathfrak{P}_{\boldsymbol{\alpha}}(t,y,x)|^2=0, \ n\geq 1,
$$
which implies $\mathfrak{P}(t,x,y)=\mathfrak{P}(t,y,x)$.
To establish \eqref{FS-reg2} and \eqref{FS-reg3}, we compute, for $n\geq 1$,
\begin{align*}
\sum_{|\boldsymbol{\alpha}|=n}&
|\mathfrak{P}_{\boldsymbol{\alpha}}(t+h,x,y) -
\mathfrak{P}_{\boldsymbol{\alpha}}(t,x,y)|^2
\leq h^{2\gamma} C^n(t,\gamma)n^{-n/2},\ \gamma\in (0,3/4);
\\
&{\rm \ cf. \ \eqref{t-incr},\ \ and }\\
\sum_{|\boldsymbol{\alpha}|=n}&
|\mathfrak{P}_{\boldsymbol{\alpha},x}(t,x+h,y)-
\mathfrak{P}_{\boldsymbol{\alpha},x}(t,x,y)|^2
\leq h^{2\gamma}C^n(t,\gamma)n^{-n/2},\ \gamma\in (0,1/2);\\
& {\rm \ cf. \ \eqref{sp-incr}. }
\end{align*}
Note that $\mathfrak{P}_{\zm}(t,x,y)=\hker(t,x,y)$ is infinitely
differentiable in $t$ and $x$ for $t>0$ but is unbounded as $t\searrow 0$;
cf. \eqref{eq:hker0}.
\end{proof}
Now we can fully justify calling $\mathfrak{P}$
the fundamental chaos solution of equation \eqref{eq:main}.
\begin{theorem}
If $u_0\in L_{2,q}\big(W;L_2((0,\pi))\big)$ for some $q>1$, then
the chaos solution of \eqref{eq:main} with initial condition
$u(0,x)=u_0(x)$ is
\bel{Gen-CS}
u(t,x)=\int_0^{\pi} \mathfrak{P}(t,x,y)\diamond u_{0}(y)dy,
\ee
and
\bel{Gen-CS-reg}
u(t,x) \in L_{2,p}(W;\bR)
\ee
for every $p<q$, $t>0$, and $x\in [0,\pi]$.
\end{theorem}
\begin{proof}
Let
$$
u_0(x)=\sum_{\ba\in \cJ} u_{0,\ba}(x)\xi_{\ba}
$$
be the chaos expansion of the initial condition.
By definition, the chaos solution of \eqref{eq:main} is
$$
u(t,x)=\sum_{\ba\in \cJ} u_{\ba}(t,x)\xi_{\ba},
$$
where
\begin{equation*}
\begin{split}
\frac{\partial {u}_{\zm}(t,x)}{\partial t}&=
\frac{\partial^2{u}_{\zm}(t,x)}{\partial x^2} ,\
u_{\zm}(0,x)=u_{0,\zm}(x);\\
\frac{\partial{u}_{\ba}(t,x)}{\partial t}&=
\frac{\partial^2 u_{\ba}(t,x)}{\partial x^2}
+\sum_k \sqrt{\alpha_k}\
\mfk{m}_k(x)u_{\ba^-(k)}(t,x),\\
& u_{\ba}(0,x)=u_{0,\ba}(x),\ |\ba|>0.
\end{split}
\end{equation*}
By \cite[Theorem 9.8]{LR_shir},
if $u(0,x)=f(x)\xi_{\boldsymbol{\beta}}$ for
some $f\in {L_2((0,\pi))}$ and ${\boldsymbol{\beta}}\in \cJ$, then
$$
u(t,x)=\int_0^{\pi} \mathfrak{P}(t,x,y)\diamond \xi_{\boldsymbol{\beta}}
f(y)dy.
$$
Then \eqref{Gen-CS} follows by linearity.
Next, given $1\leq p<q$, take $p'=qp/(q-p)$, so that $p'^{-1}+q^{-1}
=p^{-1}$. Then, by \eqref{FS-reg1} and \cite[Theorem 4.3(a)]{LRS},
$$
\mathfrak{P}(t,x,\cdot)\diamond u_{0}\in L_{2,p}\big(W;L_2((0,\pi))\big),
$$
which implies \eqref{Gen-CS-reg}.
\end{proof}
\section{Further Directions}
\label{sec:FD}
A natural question is whether the results of
Sections \ref{sec:CSol}--\ref{sec:FS} extend
to a more general equation
$$
\frac{\partial u(t,x)}{\partial t} =
\mathcal{L} u(t,x)+
u(t,x)\diamond \dot{W}(x),\ t>0,\ \ x\in G,
$$
where $\mathcal{L}$ is a second-order linear ordinary differential operator
and $G\subseteq \bR$.
\subsection{Equation on a Bounded Interval}
Consider a second-order differential operator
$$
f\mapsto \mathcal{L}f=\rho(x)f''+r(x)f'+c(x)f = (\rho f')'+(r-\rho')f'+cf,\ \rho>0,\ x\in (a,b),
$$
$-\infty<a<b<+\infty$.
A change of variables $f(x)=g(x)\exp\left(
-\int\frac{r(x)-\rho'(x)}{2\rho(x)}dx\right)$
leads to the symmetric operator
$$
\tilde{\mathcal{L}}g= (\rho g')'+\tilde{c} g,
$$
with
$$
\tilde{c}(x)= c(x)+ \frac{\rho(x) H''(x)+r(x)H'(x)}{H(x)},\ \
H(x)=\exp\left(-\int\frac{r(x)-\rho'(x)}{2\rho(x)}dx\right).
$$
The most general form of the (real) homogenous
boundary conditions for the
operator $\tilde{\mathcal{L}}$ is as follows:
\bel{BC-gen}
AY(a)+BY(b)=0,
\ee
where $A,B\in \bR^{2\times 2}$, $Y(x)=\big(g(x) \ \ \rho(x)g'(x)
\big)^{\top}$. If
the matrix $[A\ B] \in \bR^{2\times 4}$ has rank 2 and
\bel{eq:SABC}
AEA^{\top}=BEB^{\top},\ \
E=
\left(
\begin{array}{cc}
0 & -1 \\
1 & 0
\end{array}
\right),
\ee
then boundary conditions \eqref{BC-gen} allow a self-adjoint
extension of the operator $\tilde{\mathcal{L}}$ to $L_2((a,b))$;
cf. \cite[Section 4.2]{Zettl}.
Particular cases of \eqref{BC-gen} satisfying \eqref{eq:SABC}
are {\tt separated boundary conditions}
$$
c_1g(a)+c_2\rho(a)g'(a)=0,\ c_3g(b)+c_4\rho(b)g'(b)=0,\
$$
when both matrix products in \eqref{eq:SABC} are zero,
and {\tt periodic boundary conditions}
$$
g(a)=g(b), \ \ g'(a)=g'(b),
$$
when $A=-B$ (for example, $A=I$, $B=-I$).
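For example, the zero Neumann boundary conditions in \eqref{eq:main}
are separated boundary conditions of the form \eqref{BC-gen} with
$$
A=
\left(
\begin{array}{cc}
0 & 1 \\
0 & 0
\end{array}
\right),\ \
B=
\left(
\begin{array}{cc}
0 & 0 \\
0 & 1
\end{array}
\right),
$$
so that $AY(a)+BY(b)=\big(\rho(a)g'(a)\ \ \rho(b)g'(b)\big)^{\top}=0$,
the matrix $[A\ B]$ has rank $2$, and $AEA^{\top}=BEB^{\top}=0$.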
Consider the eigenvalue problem for $\tilde{\mathcal{L}}$:
$$
-\tilde{\mathcal{L}}\mfk{m}_k(x)=\lambda^2_k\mfk{m}_k,\
k=1,2,\ldots.
$$
The following properties of the eigenvalues $\lambda_k$
and the eigenfunctions $\mfk{m}_k$ ensure that the results
of Sections \ref{sec:CSol}--\ref{sec:FS} extend to equation
$$
u_t=\tilde{\mathcal{L}}u + u\diamond \dot{W}(x).
$$
\begin{itemize}
\item[{[EVS]}] The eigenvalues $\lambda_k$ satisfy
$$
\lim_{k\to \infty} \frac{\lambda^2_k}{k^2}=\bar{\lambda}>0;
$$
\item[{[CEF]}]
The set of normalized eigenfunctions $\{\mfk{m}_k,\ k\geq 1\}$
is complete in $L_2((a,b))$;
\item[{[MP]}] The kernel
$$
\hker(t,x,y)=\sum_{k=1}^{\infty}
e^{-\lambda_k^2t}\;\mfk{m}_k(x)\mfk{m}_k(y)
$$
of the semi-group generated by $\tilde{\mathcal{L}}$
is non-negative: $\hker(t,x,y)\geq 0$, $t>0,\ x,y\in (a,b)$;
\item[{[UB]}] The eigenfunctions $\mfk{m}_k$ are uniformly
bounded:
$$
\sup_{k\geq 1,\ x\in (a,b)} |\mfk{m}_k(x)|<\infty.
$$
\end{itemize}
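For the Neumann problem \eqref{eq:main} itself all four conditions are
immediate: the eigenvalues $\lambda^2_k=(k-1)^2$ grow quadratically in
$k$; the cosine basis \eqref{cos-basis} is complete in $L_2((0,\pi))$;
the kernel \eqref{eq:kernel1} is non-negative by \eqref{eq:hker0}; and
$\sup_{k\geq 1,\,x\in(0,\pi)}|\mfk{m}_k(x)|=\sqrt{2/\pi}$.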
The corresponding computations, although not necessarily trivial, are essentially
equivalent to what was done in Sections \ref{sec:CSol}--\ref{sec:FS}.
In the case of additive noise,
the results of \cite{Feller-1D} help with identification of the
diffusion process that, similar to the Brownian bridge in Remark
\ref{rem:compensate}, compensates the spatial derivative of the solution
to a smooth function.
There are general sufficient conditions for [EVS] and
[CEF] to hold; cf. \cite[Theorems 4.3.1 and 4.6.2]{Zettl}.
The maximum principle [MP] means special restrictions on the
boundary conditions; these restrictions are, in general, not related to
\eqref{BC-gen}; cf. \cite[Theorem 12]{Feller-1D}.
Condition [UB] appears to be the most difficult to verify without additional information about the operator $\tilde{\mathcal{L}}$.
\subsection{Equation on the Line}
All the results of Sections \ref{sec:CSol}--\ref{sec:FS} extend to the
chaos solution of the heat equation
$$
u_t=u_{xx}+u\diamond \dot{W}(x),\ t>0,\ x\in \bR,
$$
with suitable initial condition $u(0,x)=u_0(x)$. In the case of additive noise,
a two-sided Brownian motion compensates the space derivative of the
solution to a smooth function.
Further extensions, to the equation
$$
u_t=\mathcal{L}u + u\diamond \dot{W}(x),
$$
are also possible but might require additional effort.
For example, let $\mathcal{L}f=\big(c(x)f'(x)\big)'$
with a measurable function $c(x)$ such that
$$
c_1 \leq c(x) \leq c_2,\quad \mbox{ for all } x\in \mathbb{R},
$$
for some constants $c_1, c_2 >0$.
Then \cite[Theorems 4.1.11 and 4.2.9]{Stroock}
the kernel of the the corresponding semi-group satisfies
$$
\frac{\mfk{c}_1}{\sqrt{t}}\exp
\left(-\frac{(x-y)^2}{\mfk{a}_1\,t}\right)
\leq \hker(t,x,y)=\hker(t,y,x)
\leq \frac{\mfk{c}_2}{\sqrt{t}}\exp
\left(-\frac{(x-y)^2}{\mfk{a}_2\,t}\right)
$$
for some positive numbers $\mfk{c}_1,\mfk{c}_2,\mfk{a}_1,\mfk{a}_2$,
which is enough to carry out the computations from Sections \ref{sec:CSol}
and \ref{sec:RCS}.
Additional regularity of the chaos solution requires more delicate bounds on
$\hker_x(t,x,y)$ and $\hker_t(t,x,y)$.
\section{Introduction}
Image analysis, at the intersection of digital signal processing, mathematical modeling, and machine perception, has advanced a wide variety of scientific and commercial applications, including biomedicine, medicine, and industry, by assisting the extraction of semantic information from digital images. Image analysis tasks range broadly from low-level image processing, such as image sharpening and contrast enhancement, to medium- and high-level processing, including image segmentation, image registration, object tracking, and image watermarking. A central challenge in image analysis applications is to cope with the large computational complexity of almost every image processing task so that real-time applications become feasible.
On the other side, multithreading is a longstanding technique for exploiting thread-level parallelism: it allows several threads to run within the context of a single process and improves computational throughput through the concurrent execution of threads.
Over the past few years, multithreading strategies have attracted significant interest across a variety of scientific fields. This is evident from Figure 1, which shows the number of journal and conference papers on the topic published by Elsevier from 2012 to 2014.
Using multithreading techniques in the field of digital image analysis can fulfill the following objectives:
\begin{itemize} \itemsep.2em
\item [$\bullet$] To facilitate the production of real-time image analysis applications.
\item [$\bullet$] To provide a capability for improving the computational aspects of digital image analysis.
\item [$\bullet$] To create scalable image analysis applications which can expand and work across multiple clusters.
\item [$\bullet$] To make distributed infrastructures for large scale digital image analysis.
\end{itemize}
While digital image analysis algorithms and their applications have been studied for a long time, the use of multithreading techniques in image processing has so far been limited. In this work, a survey is performed to review the current knowledge and the progress made so far in combining digital image analysis with multithreading approaches. With the present paper, I also outline some possible enhancements for multithreaded image analysis.
\begin{figure}
\caption{Number of Elsevier publications on multithreading over the last three years. Results obtained by submitting the query ``multithreading'' to the Elsevier website at http://www.sciencedirect.com.}
\centering
\includegraphics [height=6.2cm, width=12cm]{fig1.png}
\end{figure}
The rest of this survey is organized as follows. In Section 2, a brief introduction to both digital image analysis and the multithreading approach is presented. Recent works and advances in the field of multithreaded image analysis are reviewed in Section 3. Section 4 discusses and concludes the work.
\section{An Introduction to Digital Image Analysis and Multithreading Approach}
While image analysis and its applications have been around for almost 50 years now, multithreading techniques date back only about 24 years. In this section, a brief introduction to these topics is presented.
\subsection{Digital Image Analysis}
Today, we can see digital images everywhere, from the biomedical and astronomical domains to artistic applications. Digital image analysis enables the multimedia technology revolution we are experiencing these days. Some important examples of image processing include image sampling and quantization, image filtering and correlation, coloring, image segmentation, morphological image processing, noise reduction and restoration, feature extraction, and object recognition tasks.
All image analysis tasks can be divided into three categories: 1) low-level, 2) medium-level, and 3) high-level \cite{lvq1} and \cite{lvq2}. Low-level image analysis techniques take a digital image as input and produce a digital image as output; that is, the input and output of this level are both digital images. Examples include noise reduction and contrast enhancement. Medium-level image analysis takes an image as input and extracts some information from it; the output of this level is not an image. Image segmentation and object detection, such as face detection, are examples. The most difficult category is high-level analysis, in which the input is an image but the output is knowledge. For instance, the input could be a portrait image of a person's face, and the output could be whether the person is happy or sad.
This is a very introductory level of digital image analysis. Readers interested in image analysis algorithms and its applications are referred to \cite{reg01} and \cite{reg02} for image registration, \cite{seg01}, \cite{seg02}, \cite{seg03}, \cite{seg04}, and \cite{seg05} for image segmentation, \cite{fr01}, \cite{fr02}, and \cite{fr03} for image forgery detection and image encryption, \cite{t01}, \cite{t02}, \cite {t03}, \cite {t04}, and \cite{sift03} for 3D surface reconstruction. Further information can be found in \cite{lvq1} and \cite{lvq2}.
\subsection{Multithreading Approach}
A thread can be defined as a path of execution through a program. A single-threaded application has only one path of execution, while a multi-threaded application has two or more paths of execution. In a traditional process there is only a single thread of control and a single program counter, so the process can perform only one task at a time and must finish each task before starting the next one in the sequence. Using multithreading techniques, the system can support multiple threads of control within a process at the same time. Multithreading offers several advantages \cite{m01}:
\begin{itemize} \itemsep.2em
\item [$\bullet$] Better utilization of system resources.
\item [$\bullet$] Improved performance and concurrency.
\item [$\bullet$] Simultaneous access to multiple applications.
\item [$\bullet$] Task parallelization.
\end{itemize}
Readers interested in multithreading techniques and their implementation can refer to \cite{m02} and \cite{m03} for more information.
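To make the preceding discussion concrete, the following minimal sketch (written for this survey purely as an illustration, and not taken from any of the reviewed works) shows how a low-level image filtering task can be split into row blocks and dispatched to a thread pool in Python; the image size, the kernel size, and the thread count are arbitrary illustrative choices.
\begin{verbatim}
# Minimal sketch: a k x k mean (box) filter applied to an image whose rows
# are partitioned into blocks, one block per worker thread.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def filter_rows(padded, out, row_start, row_end, k):
    # Each worker writes a disjoint block of rows of 'out', so no locking
    # is needed for the output array.
    for i in range(row_start, row_end):
        for j in range(out.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()

def parallel_box_filter(image, k=3, n_threads=4):
    padded = np.pad(image, k // 2, mode='edge')
    out = np.empty_like(image, dtype=float)
    bounds = np.linspace(0, image.shape[0], n_threads + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        futures = [pool.submit(filter_rows, padded, out,
                               bounds[t], bounds[t + 1], k)
                   for t in range(n_threads)]
        for f in futures:
            f.result()   # propagate any exception raised in a worker
    return out

if __name__ == '__main__':
    smoothed = parallel_box_filter(np.random.rand(512, 512))
\end{verbatim}
Note that for pure-Python inner loops the CPython global interpreter lock limits the achievable speed-up; in practice the per-block work is pushed into compiled kernels (e.g., vectorized NumPy or OpenCV calls) or distributed over processes.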
\section{Literature Review}
The following literature review aims to present the current knowledge and the progress made so far in combining multithreading strategies with digital image analysis.
In 1998, Yu et al. \cite{m04} developed an image convolution algorithm using traditional parallel methods. Their algorithm dramatically reduces the computation cost. They discussed two parallel algorithms for performing image convolution and concluded that parallel techniques speed up the image convolution task. However, they noted that selecting a single algorithm that suits all application parameters is quite hard.
In 2003, Penha et al. \cite{m05} performed a comparative study of multithreaded image analysis based on a shared-variables programming approach. They employed explicit compiler directives from multi-thread support libraries. Their comparison between the implementations was carried out on two well-known operating systems, Windows and Linux, and they examined both general performance and programmability. The image convolution experiments showed significant performance improvements over sequential implementations on both Windows and Linux. The programmability analysis showed that it is simpler for the programmer to develop pthread-based applications than applications based on other thread types such as Windows threads. In general, they showed that a multithreaded implementation improves overall performance, but the implementation itself may not be convenient or easy for developers.
In 2009, Lin et al. \cite{m06} employed a multithreading strategy as a parallel method to perform PDE-based image registration. They implemented deformable image registration and examined it on a dual-core personal computer. For the implementation, they used OpenMP, an API that supports shared-memory parallel programming in C++. Their experiments demonstrated that the method was able to handle large-size parallel image registration, reduce the computational complexity, and save nearly half of the computing time; however, the implementation was not easy.
In 2010, Slabaugh et al. \cite{m07} presented the entire pipeline of using OpenMP for image processing tasks such as image morphology, image filtering, and normalization. They summarized the general capabilities of OpenMP and showed that signal and image processing programmers can benefit dramatically from the multithreading facilities provided by OpenMP, by modifying single-threaded code to exploit parallelism.
In 2013, Kika et al. \cite{m08} used the Java programming language to deploy digital image analysis tasks on single-core and multi-core CPUs. They noted that Java is well suited to building and developing image analysis applications because of its features and the free, open-source packages it offers for this purpose. Their experimental results showed that multithreading improves the performance of image analysis algorithms on both single-core and multi-core CPU platforms, although the improvements differ. On a single core, the best results are achieved by the combination of a small image size and a less complex algorithm, while on a multi-core CPU the combination of a small image size and a more complex algorithm improves the performance. They also concluded that multithreaded programming can improve the performance on multi-core CPUs whenever complex image processing algorithms are applied.
In 2015, Smistad et al. \cite{m09} developed FAST (FrAmework for heterogeneouS medical image compuTing and visualization), an efficient open-source framework for medical image analysis and visualization, using multithreading techniques. The code examples, along with the evaluations, demonstrated that the framework is easy to use and performs better than existing frameworks such as ITK and VTK.
\section{Discussion, Conclusion, and Possible Extension}
In this work, a survey was carried out to review recent works on multithreaded image analysis. After a brief introduction to the digital image analysis and multithreading areas, six recent works on multithreaded image processing were reviewed. In almost every work, we can see that using multithreading strategies improves the general performance of digital image analysis tasks, such as image convolution, image filtering, and morphology, on both single-core and multi-core CPUs. Implementing multithreading techniques from scratch, or even using pre-built open-source libraries, still remains difficult. In general, multithreading approaches improve the performance and time efficiency of image analysis tasks and allow better resource utilization. Difficulties in implementation, debugging, and managing concurrency are among the disadvantages of multithreaded image analysis.
Performing multithreaded image analysis on Big Data infrastructures would provide better performance and reliability for processing Big Data images, such as very high resolution satellite images. Employing a SaaS (Software as a Service) architecture would also provide application-to-application interaction for such systems. These could be considered possible extensions of multithreaded image analysis.
\section{I. INTRODUCTION}
Since the successful exfoliation of various two dimensional (2D) crystals in 2005~\cite{Novoselov2005}, the layered materials in a single layer as well as bulk forms have attracted serious attention owing to their versatile physical properties~\cite{Gupta2015,Geim2013}. Among them, the layered transition metal dichalcogenides (TMDs) show various interesting electronic properties such as type-II Weyl semimetallic (WSM) energy bands~\cite{Soluyanov2015}, gate dependent collective phenomena~\cite{Ye2012,Yu2015}, and quantum spin Hall (QSH) insulating state~\cite{Qian2015} to name a few.
Because of the layered structures of TMDs, several polymorphs can exist and show characteristic physical properties depending on their structures~\cite{Kolobov2016}. A typical TMD shows the trigonal prismatic (2$H$) or the octahedral (1$T$) structures~\cite{Mak2010,Wang2012,Mattheiss1973,Wilson1969}. For MoTe$_2$ and WTe$_2$, the 2$H$ structure ($\alpha$-phase, $P$6$_{3}$/$mmc$) is a stable semiconductor while the 1$T$ form is unstable~\cite{Qian2015,Keum2015}. The unstable 1$T$ structure turns into the distorted octahedral one (1$T'$)~\cite{Eda2012,Qian2015}. The stacked 1$T'$ single layers form a three-dimensional bulk with the monoclinic structure ($\beta$-phase, $P$2$_1$/$m$) or the orthorhombic one ($\gamma$-phase, $P$$mn$2$_1$) (see Fig. 1)~\cite{Brown1966,Dawson1987,Mar1992}. Interestingly, the $\beta$ phase with a few layers is a potential candidate for a QSH insulator~\cite{Qian2015} and the bulk $\gamma$ phase shows type-II Weyl semimetallic energy bands~\cite{Sun2015,Soluyanov2015,Chang2016}, respectively. Since the structural differences between the $\beta$ and $\gamma$ phases are minute ($\sim$4$^\circ$ tilting of the out-of-plane axis in the $\beta$ phase with respect to that in the $\gamma$ phase), the sensitive change in their topological low energy electronic properties is remarkable, and the transition between the different structures can lead to an alteration of the topological properties of the system.
\begin{figure}[b]
\includegraphics[width=0.8\columnwidth]{FIG1.eps}
\caption{(Color online) Schematic atomic structures of (a) the $\beta$ and (b) the $\gamma$ phase of MoTe$_2$ and WTe$_2$ projected on the $bc$ plane. $\bf b$ and $\bf c$ denote unit vectors of the primitive unit cell ($\bf a$ is perpendicular to the $bc$ plane). The solid line indicates the unit cell. The dark (red) and bright (gray) circles represent Mo (W) and Te atoms, respectively. Te atom being close to (away from) the transition metal plane is denoted by Te$^{i(o)}$, respectively. For the $\beta (\gamma)$ phase, $d_1$$<$$d_3$ ($d_1$$>$$d_3$). The angle between $\bf b$ and $\bf c$ is (a) $\theta$ $\simeq$ 94$^\circ$ and (b) $\theta$ = 90$^\circ$.}
\end{figure}
A phase transition between the $\beta$- and $\gamma$-phase in the layered TMDs has been known for a long time~\cite{Clarke1978,Dawson1987}. MoTe$_2$ shows a first-order transition from the $\beta$- to $\gamma$-structure at around 250 K~\cite{Clarke1978} when temperature decreases. WTe$_2$, however, does not show any transition and stays at the $\gamma$-phase~\cite{Kang2015,Pan2015}. Since the structural parameters of a single layer of 1$T'$-MoTe$_2$ and 1$T'$-WTe$_2$ are almost the same~\cite{Brown1966,Mar1992,Choe2016} and Mo and W belong to the same group in the periodic table, the different phase transition behaviors are intriguing and origins of the contrasting features are yet to be clarified.
To understand the phase transition, the proper treatment of long- and short-range interlayer interactions in TMDs is essential. Most theoretical studies, however, fail to reproduce the experimental crystal structures of the two phases of MoTe$_2$ and WTe$_2$, and consequently their topological electronic structures, when crystal structures obtained from $ab$ $initio$ calculations are used~\cite{Lee2015,Zhao2015,Lv2015,Homes2015,Liu2016,Qi2016,Lu2016}. Instead, the atomic structures from experiment data are routinely used to understand and predict the low energy electronic properties~\cite{Soluyanov2015,Sun2015,Wang2016,Tamai2016,Deng2016,Huang2016,Bruno2016}. This is because the calculated lattice parameters, especially the interlayer distance, obtained by using conventional first-principles calculations~\cite{Lee2015,Zhao2015,Chang2016,supp} [even with advanced empirical van der Waals (vdW) interaction correction schemes~\cite{Qi2016,supp,Lee2015}] hardly reproduce the observed distances. Since the interlayer interaction governs the phase transition as well as the structural properties, a successful description of interlayer interactions is required to understand or predict electronic structures and topological properties. Motivated by the current situation of experimental and theoretical studies, we perform $ab$ $initio$ calculations using a new vdW density functional method for the interlayer interaction~\cite{Hamada2014} and analyze the existence and absence of the first-order structural phase transition related to various low energy topological electronic properties of MoTe$_2$ and WTe$_2$.
Here we first compute crystal structures of both compounds based on an advanced self-consistent density functional method for the vdW interaction~\cite{Hamada2014} and obtain the best agreement with the available crystal structures in experiments. Then we show theoretically that MoTe$_2$ and WTe$_2$ have distinct structural phase transitions because their interlayer bondings differ depending on the valence electron configurations of the transition metals. A critical role of low energy electronic states for crystal symmetry is further demonstrated by showing that external charge doping can alter the structural phase transition significantly. From this, our results in this Rapid Communication can provide a firm computational and theoretical basis for future development in discovering and engineering various topological electronic states in layered materials.
\begin{figure}[t]
\centering{ \includegraphics[width=1.0\columnwidth]{FIG2.eps} }
\caption{(Color online) Optimized lattice parameters $a$, $b$, and $c$ for (a) the $\beta$ and (b) the $\gamma$ structures, obtained using different exchange-correlation functionals. Experimental values for $\beta$-MoTe$_2$ and $\gamma$-MoTe$_2$ (Ref.~\cite{Tamai2016}) and those for $\gamma$-WTe$_2$(Ref.~\cite{Mar1992}) are shown by horizontal solid lines, and horizontal dotted lines, respectively.}
\end{figure}
Our {\it ab initio} calculation method employs the projector-augmented wave method~\cite{PAW} as implemented in the Vienna Ab-initio Simulation Package (${\rm VASP}$)~\cite{VASP1,VASP2}. We use the plane-wave cutoff of 450 eV and the 32$\times$16$\times$8 Monkhorst-Pack meshes for the Brillouin zone integration to achieve the convergence criterion of 0.1 meV in total energy difference ($\Delta E_{\gamma-\beta}$) between $\beta$ and $\gamma$ phase. The spin-orbit coupling (SOC) effect is included in all calculations and on-site Coulomb repulsion ($U$)~\cite{Dudarev1998} is considered for the specific cases. These parameters are fully tested to achieve a desired accuracy for the calculations, and the energy and force are converged with thresholds of 10$^{-6}$ eV and 5$\times$10$^{-3}$ eV/\AA, respectively. On top of the conventional calculation method, we use a vdW density functional (rev-vdW-DF2) method which is recently proposed by one of the authors~\cite{Hamada2014}, where the revised Becke exchange functional (B86R)~\cite{Becke1986} is adopted for exchange functional together with the second version of nonlocal vdW-DF (vdW-DF2)~\cite{Dion2004,*Dion2004e,Lee2010} as a nonlocal correlation. The rev-vdW-DF2 improves the description of the attractive vdW interaction resulting in the most accurate interlayer distances of layered materials over the various other vdW calculation methods~\cite{supp,Bjorkman2014,Peng2016}. The electron and hole dopings are simulated by adding and removing the electron and the background charge is added to keep the charge neutrality. To evaluate the vibrational energy and entropy, we use the harmonic approximation as implemented in PHONOPY package~\cite{phonopy} where the vibrational frequencies are obtained from the force constant matrix of the fully relaxed geometries using numerical derivatives of the rev-vdW-DF2 energies.
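For reference, within the harmonic approximation used here the free energy of each phase takes the standard form (written down for orientation rather than as a statement about implementation details)
\[
F(T)=E_{\rm el}+\sum_{\mathbf{q}\nu}\left[\frac{\hbar\omega_{\mathbf{q}\nu}}{2}
+k_{\rm B}T\,\ln\!\left(1-e^{-\hbar\omega_{\mathbf{q}\nu}/k_{\rm B}T}\right)\right],
\]
where $E_{\rm el}$ is the static total energy and $\omega_{\mathbf{q}\nu}$ are the harmonic phonon frequencies; the transition temperature in Fig.~3(b) is then the point where $\Delta F=F_{\gamma}-F_{\beta}$ changes sign.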
The atomic structures obtained from our calculation match the available experiment data very well. The calculated structural parameters of MoTe$_2$ in the $\beta$ phase (hereafter called $\beta$-MoTe$_2$) are summarized in Fig. 2 (a) and those for MoTe$_2$ and WTe$_2$ in the $\gamma$ phase ($\gamma$-MoTe$_2$ and $\gamma$-WTe$_2$) are summarized in Fig. 2 (b) (see also Tables SI and SII~\cite{supp}). The comparison between the optimized lattice parameters obtained with the various vdW functionals and the experimental data for the $\beta$ and $\gamma$ phases is also illustrated. We note that the inclusion of SOC improves the accuracy marginally (see Fig. 2, Tables SI and SII~\cite{supp}). Among the various vdW correction schemes, we found that the rev-vdW-DF2 outperforms several other functionals. The calculated equilibrium unit cell volume using our method yields 306.5 \AA$^3$ for $\beta$-MoTe$_2$ and 307.0 and 312.1 \AA$^3$ for $\gamma$-MoTe$_2$ and $\gamma$-WTe$_2$, respectively, in very good agreement with the experimental values of 303.6, 305.9, and 306.6 \AA$^3$, respectively. These are only 1.0, 0.4, and 1.8 ${\%}$ larger than the experimental values, respectively. From the fully optimized structures for both phases, we find that the shortest interlayer distance between Te atoms (denoted by $d_2$ in Fig. 1) changes negligibly between the two phases while the other distances ($d_1$ and $d_3$) vary significantly (see Fig. 1 and Table SIII~\cite{supp}).
\begin{figure}[b]
\centering{ \includegraphics[width=1.0\columnwidth]{FIG3.eps} }
\caption{(Color online) (a) Energy profile calculated using rev-vdW-DF2 with and without SOC and $U$ along the transition path from $\beta$- to $\gamma$-phase of MoTe$_2$ and WTe$_2$ with respect to the total energy of the $\beta$-phase. (b) Calculated free energy difference $\Delta F=F_{\gamma}-F_{\beta}$ using rev-vdW-DF2 with SOC and without $U$, where $F_{\gamma(\beta)}$ is a free energy of $\gamma(\beta)$ phase.}
\end{figure}
As the temperature increases, the stable $\gamma$-MoTe$_2$ at low temperature undergoes a first-order phase transition to the $\beta$ phase~\cite{Clarke1978,Dawson1987,Mar1992} while WTe$_2$ stays in the $\gamma$ phase~\cite{Kang2015,Pan2015}. These observations are consistent with our total energy calculation including the vdW interaction. We found that the $\gamma$ phase is energetically more stable than the $\beta$ phase by $\Delta E_{\gamma-\beta}$ = 0.40 and 0.46 meV per unit cell for MoTe$_2$ and WTe$_2$, respectively, in good agreement with other recent studies~\cite{Qi2016,Lu2016}. For MoTe$_2$, the transition state is higher in energy by 0.75 and 1.15 meV per unit cell than the $\beta$- and $\gamma$-phases, respectively, indicating that $\beta$-MoTe$_2$ is a metastable state, while WTe$_2$ shows no energy barrier, implying that $\beta$-WTe$_2$ does not exist [see Fig. 3(a)]. An atomic structure of the hypothetical $\beta$-WTe$_2$ is assumed to follow $\beta$-MoTe$_2$. We also calculated the free energy of each system without $U$ and found that the structural phase transition occurs at around 150 K for MoTe$_2$ and that there is no transition for WTe$_2$, compatible with the experiment [Fig. 3(b)].
Recent studies~\cite{Keum2015,Zheng2016} show that the insulating behavior of a few layers of MoTe$_2$ and WTe$_2$ is not described well within the mean-field treatment of Coulomb interactions. This implies a critical effect of many-body interactions. Thus, we further add the local Coulomb repulsion $U$ on top of our rev-vdW-DF2 method to reproduce the finite energy band gap obtained from previous hybrid density functional calculations~\cite{Keum2015,supp}. We set $U$ to be 5.0 and 3.0 eV for the Mo 4$d$ and W 5$d$ orbitals, respectively~\cite{supp}, and obtain further increased values of $\Delta E_{\gamma-\beta}$ = 1.9 and 1.0 meV per unit cell for MoTe$_2$ and WTe$_2$, respectively. We note that the inclusion of $U$ stabilizes the $\gamma$ phase of both materials while the transition energy barrier for MoTe$_2$ decreases with increasing $U$ [Fig. 3(a)].
\begin{figure}[t]
\centering{ \includegraphics[width=1.0\columnwidth]{FIG4.eps} }
\caption{(Color online) Band structures of (a) $\beta$-MoTe$_2$, (b) $\gamma$-MoTe$_2$, (c) $\beta$-WTe$_2$, and (d) $\gamma$-WTe$_2$ using rev-vdW-DF2 method with SOC. The Fermi energy ($E_F$) is set to zero. The bands are plotted along $Y$(0,$\frac{1}{2}$,0)$\rightarrow$$\Gamma$(0,0,0)$\rightarrow$$X$($\frac{1}{2}$,0,0) and $\Gamma$(0,0,0)$\rightarrow$$A$(0,0,$\frac{1}{2}$). The bands projected onto the $d_{xz}$ and $d_{z^2}$ orbitals of Mo and $p_x$ and $p_z$ orbitals of Te are displayed with circles whose radii are proportional to the weights of each orbital. To visualize the bonding nature of valence bands, the wave functions at the $\Gamma$ point are drawn for (e) $\psi_1$ and $\psi_2$ of $\beta$-MoTe$_2$ and (f) $\varphi_1$ and $\varphi_2$ of $\beta$-WTe$_2$ where blue (green) color denotes plus (minus) sign.}
\end{figure}
In Fig. 4, we show the low energy electronic bands near the Fermi energy ($E_F$) for the two different phases of MoTe$_2$ and WTe$_2$, respectively. We first find that the two compounds show markedly different band dispersions along the $\Gamma$-$A$ direction. For MoTe$_2$, the topmost partially occupied valence band state [denoted by $\psi_1$ in Figs. 4(a) and 4(e)] is mainly an antibonding state along the $d_1$ direction (see Fig. 1) between the hybridized states of the $p_z$ orbital of the lower Te atom (denoted by Te$^i$ in Fig. 1) and the $d_{z^2}$ orbital of Mo. The next valence band state [$\psi_2$ in Fig. 4(a)] is mainly an antibonding state between the hybridized states of the $p_x$ orbital of Te$^i$ and the $d_{xz}$ orbital of Mo [Fig. 4(e)]. We also note that, in the first two valence bands, the contribution of the $p$ orbitals of Te$^o$ (see Fig. 1) is relatively smaller than that of Te$^i$. In contrast to the case of MoTe$_2$, the topmost valence state [$\varphi_1$ state in Figs. 4(c) and 4(f)] of WTe$_2$ is similar to the second valence state ($\psi_2$) of MoTe$_2$ and vice versa [Fig. 4(f)]. Because of the different atomic orbital configurations of the Mo ([Kr]5s$^1$4d$^5$) and W ([Xe]6s$^2$4f$^{14}$5d$^4$) atoms, those two valence bands of WTe$_2$ are fully occupied along the $\Gamma$-$A$ and $\Gamma$-$Y$ directions [Figs. 4(b) and 4(d)] while those of MoTe$_2$ are partially occupied along all directions [Figs. 4(a) and 4(c)]. The estimated band width along $\Gamma$-$A$ for those two bands of MoTe$_2$ is four times larger than that of WTe$_2$. These apparent differences between the two compounds are found to originate from the fact that WTe$_2$ has a considerably smaller contribution of the Te $p$ orbitals to the first two valence states compared to that of MoTe$_2$ (Fig. S2~\cite{supp}).
We also calculated the whole band structures again using a semilocal correlation functional (Fig. S3~\cite{supp}) instead of the rev-vdW-DF2, while keeping the fully relaxed atomic structures, to check the effect of the vdW functional on the energy band structures. Changes in the band structures are found to be minimal, agreeing with previous studies~\cite{Thonhauser2007, Hamada2011}.
Since the calculated total energy difference between the two phases is very small, we do not expect significant changes between the energy bands of the different phases. Indeed, as shown in Figs. 4 and S4~\cite{supp}, there are only small modifications in the band structures between the two phases of MoTe$_2$ (WTe$_2$) except that all bands in the $\beta$ phase split into spin-polarized ones in the $\gamma$ phase due to its broken inversion symmetry. However, in MoTe$_2$, there is a small but important variation in the band structures with the transition: The partially occupied valence bands related with the interlayer antibonding states ($\psi_1$ and $\psi_2$) in $\beta$-MoTe$_2$ move down in energy (are steadily occupied) along the transition pathway to $\gamma$-MoTe$_2$ while the corresponding states in $\beta$-WTe$_2$ do not (Figs. 4 and S4~\cite{supp}). The increase in the occupancies of the first two valence bands stabilizes the antibonding states along the elongated distance $d_1$~\cite{Kim2015}. This is made possible because there is a net charge transfer from the intralayer bonding states around the $Y$-point to the interlayer anti-bonding states near the $E_F$, as shown in Fig. S4~\cite{supp}. This costs energy and explains the metastability of $\beta$-MoTe$_2$. Since those bands in WTe$_2$ are all occupied, there is no metastable phase for WTe$_2$.
\begin{figure}[t]
\centering{ \includegraphics[width=1.0\columnwidth]{FIG5.eps} }
\caption{(Color online) Calculated energy profile (with SOC and without $U$) along the transition path from $\beta$- to $\gamma$-phase of (a) MoTe$_2$ and (b) WTe$_2$ as a function of doping ($n_{3D}$) ranging from $-$3.3$\times$10$^{20}$cm$^{-3}$ to +3.3$\times$10$^{20}$cm$^{-3}$. The energy profiles with electron (positive) doping, hole (negative) doping and neutral case are drawn by red, blue, and black lines, respectively. Doping density difference between the consecutive lines is 6.6$\times$10$^{19}$ cm$^{-3}$.}
\end{figure}
Considering the crucial role of the occupancy of the interlayer bonding states near the $E_F$, we expect that external doping can control the structural phase transition. Indeed, we find that hole (electron) doping can stabilize the $\beta$($\gamma$) phase of both compounds, as shown in Fig. 5. The doping density that is necessary to invert the direction of the phase transition is about 1.0$\times$10$^{20}$ cm$^{-3}$. We note that a few recent experiments~\cite{Ye2012,Yu2015} can achieve such a level of doping for thin TMD flakes. It is anticipated that $in$ $situ$ electron or hole injection can turn on and off the QSH insulating phase and the WSM states, respectively. We also note that only electron doping can push the $E_F$ to the Weyl points of the WSM states because hole doping destroys the $\gamma$ phase.
Lastly, we comment on the existence of Weyl points calculated from our $ab$ $initio$ atomic structures of both compounds. For $\gamma$-MoTe$_2$ and $\gamma$-WTe$_2$, all the bands are split into spin polarized ones thanks to the broken inversion symmetry and SOC [Figs. 4(b) and 4(d)]. As already shown by other studies~\cite{Sun2015,Tamai2016}, we also find eight Weyl points of $\gamma$-MoTe$_2$ in the $k{_z}$ = 0 plane (see Fig. S5~\cite{supp}). Unlike the robust Weyl points in $\gamma$-MoTe$_2$, the slight overestimation of $a$ and $c$ axes (by 0.5 and 1.0$\%$) in our calculation for $\gamma$-WTe$_2$ [see Fig. 2(b)] merges the topological Weyl points with the opposite chiralities~\cite{Soluyanov2015}, highlighting their sensitivity on the detailed structure parameters. We can recover the eight Weyl points in the $k_{z}$ = 0 plane of $\gamma$-WTe$_2$ under biaxial strain ($\bf a$ and $\bf c$) of $-$1.5$\%$ (see Fig. S6~\cite{supp}).
In conclusion, using an advanced $ab$ $initio$ calculation method for the vdW interaction, we computed accurate lattice structures of MoTe$_2$ and WTe$_2$ and uncovered the origins of their disparate structural phase transition phenomena. The slight differences in the low energy states related with the interlayer bondings are shown to be pivotal in determining the symmetry of the bulk crystals. Since the structural transition intertwines their QSH phase and WSM states, our results shed light on the delicate interplay between topological electronic properties and crystal structures. Furthermore, we find that electron and hole doping alter the structural phase transitions, opening a way to control the topological electronic properties of layered TMDs using available experimental techniques.
We thank Dr. Jun-Ho Lee for fruitful discussions at an early stage of this work. Y.-W.S. was supported by the National Research Foundation of Korea funded by the Ministry of Science, ICT and Future Planning of the Korean government (QMMRC, No. R11-2008-053-01002-0). The computing resources were supported by the Center for Advanced Computation (CAC) of KIAS.
\newpage
\twocolumngrid
\section{Introduction}
It would be desirable to create an artificial system that exploits the basic principles of natural photosynthesis in order to
produce energy in a usable form \cite{GustSc89,Gust01,Crabtree07,LaVan06,Wasiel06,Hambourger09,Barber09}. Indeed, natural
photosynthetic structures efficiently convert the energy of light into chemical form \cite{Barber09,Alberts}.
The overall energy transduction process in plant photosynthesis
occurs through a number of strongly coupled successive stages (see, e.g.,
\cite{Alberts,GustSc89,Gust01}). In the first step, light of the appropriate wavelength is absorbed by a light harvesting complex.
The second step involves the conversion of electronic excitation energy to redox-potential in the form of the long-lived
transmembrane charge separation via multi-step electron transfer processes. The first two steps involve three constituents: (a)
light-absorbing pigments, (b) an electron acceptor, and (c) an electron donor. In the third step, the energy stored in the electron
subsystem is used for energetically uphill proton pumping, which generates the proton motive force across the membrane.
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig1.pdf}
\caption[]{ (Color online) The top figure presents the triad
(donor ``D", photo-sensitive part ``B,C", and acceptor ``A") and
the shuttle ``S" \cite{gali1,gali2}. These are enclosed by color
circles, which are schematically shown in the bottom figure. The
tetraarylporphyrin group acts as a photosensitive moiety (B,C)
(inside the green circle in the top structure). This is connected
to both a naphthoquinone moiety fused to a norbornene system with a
carboxylic acid group (which acts as an electron acceptor (A)) and
to a carotenoid polyene (which acts as an electron donor (D)).
2,5-diphenylbenzoquinone is the proton shuttle (S), denoted by a
pink hollow circle in the structure and by a solid pink circle in
the cartoon.}
\end{figure}
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig2.pdf}
\caption[]{(Color online) Schematic diagram of the light-induced
proton pump across the lipid bilayer in a liposomic membrane. A
molecular triad D--BC--A is symmetrically inserted in the lipid
bilayer. The different stages in the proton pumping process are
here denoted by (a,b,c,d,e,f). The two bluish vertical rectangles
on both sides schematically represent two proton reservoirs with
electrochemical potentials $\mu_{\rm{P}}$ and $\mu_{\rm{N}}$. These
two proton reservoirs correspond to the aqueous phases inside and
outside of the liposome, respectively. The shuttle molecule S, is
shown as a pink-colored oval and the protonated neutral shuttle is
shown as a yellow oval. This shuttle freely diffuses in (d) (the
black scribbled curves represent the thermal stochastic motion of
the shuttle) across the membrane to transport a proton from the
lower proton potential $\mu_{\rm{N}}$ to the higher proton
potential $\mu_{\rm{P}}$ side of the membrane, where
$(\mu_{\rm{P}}-\mu_{\rm{N}})$ denotes the total potential
difference between the two reservoirs. }
\end{figure}
The study of natural photosynthesis has inspired researchers to perform the photo-induced energy transduction processes in the
laboratory \cite{GustSc89,Gust01,gali1,gali2,Crabtree07,LaVan06,Wasiel06,Hambourger09,Barber09,moo1,Imahori04}. A convenient
approach to photosynthesis in artificial reaction centers is to use synthetic pigments, electron acceptors and electron donors that
are very similar in molecular structure to natural pigments (e.g., chlorophylls, carotenoids and quinones). In this direction, the
experimental model proposed in Refs.~\cite{gali1,gali2} provides a paradigm for the conversion of light energy to a proton potential
gradient. These seminal works \cite{gali1,gali2} have motivated research in the design and synthesis of new artificial
photosynthetic systems \cite{bho1,pal12,pol1} (i.e., light-harvesting antennas and reaction centers) and triggered considerable
experimental \cite{syk2,ima1,sah1,riz1,ima2} and theoretical \cite{sod1,cri2,oka1,Andreas} activities to investigate more
sophisticated and more efficient mechanisms for the conversion of light energy.
The transformation of light energy into the electrochemical gradient of protons across the membrane can be quantitatively
characterized by the quantum yield (or quantum efficiency), $\Phi$, of proton translocation. This parameter is defined as the total
number of translocated protons divided by the number of photons absorbed by the triad \cite{gali1}. A quantum yield of the order of
0.4\% has been measured in Ref.~\cite{gali1}. A much higher quantum efficiency, $\Phi \sim 7\%,$ for the conversion of photons into
ATP molecules, was found in Ref.~\cite{gali2}. As argued in Ref.~\cite{gali2}, the actual quantum yield of ATP formation could be of
the order of 15\%, if we take into account the real rate of light absorbance, which is $\sim$~50\%. Nearly four protons are necessary
for the synthesis of a single ATP molecule. This means that the real quantum yield $\Phi$ of proton translocation measured in
Ref.~\cite{gali2} can be about 60\%. The total thermodynamic (or power-conversion) efficiency, $\eta $, of the light-to-ATP
conversion process is estimated in Ref.~\cite{gali2} as $\eta \sim$~4\%.
In the present paper, using methods from quantum transport theory \cite{Wingr93,PumpPRE08,FlagPRE08,PumpTime08}, we analyze the
photoinduced electron and proton transfer in a molecular triad inserted into a liposomal membrane, which contains a single molecular
shuttle. We calculate the photon-to-proton quantum yield $\Phi \sim $ 55\% (and the thermodynamic efficiency $\eta \sim$ 6.3\%) for
the resonant tunneling conditions, when the reorganization energy, $\lambda$, of the electron transitions matches the detuning
$\delta$ between the electron energy levels: $\lambda \sim \delta.$
We note that due to a small optimal value of the reorganization
energy ($\lambda \sim 400$~meV), the charge recombination process
in the triad is described by the inverted region of the Marcus
formula \cite{rd,bath2,Imahori02}. This further enhances the
performance of the system. Our results explain experiments made in
Ref.~\cite{gali2} using artificial photosynthetic centers. The
obtained power-conversion efficiency corresponds to the highest
value, $\eta \sim$ 6.5\%, achieved recently with polymer solar
cells \cite{KimSc07}. It is expected that the proton current and
the efficiency should increase with an increasing number of
shuttles in the membrane.
This article is organized as follows. In Sec. II (see also the
Appendix) we introduce the basis set for the system and write the
Hamiltonian of the problem. In Sec. III, we present the master
equation for the density matrix coupled to the Langevin equation
describing the diffusive motion of the shuttle in the lipid
bilayer. In Sec. IV, we numerically solve these equations and
analyze the light-induced proton pumping process. In Sec. V we
summarize our results.
\section{Model}
We use a slightly modified version of the well-accepted model already presented, e.g., in Refs.~\cite{gali1,gali2}. In this model
the reaction center is a molecular triad containing an electron donor and an electron acceptor both
linked to a photosensitive porphyrin group (shown in Fig.~1). The triad molecule (D--BC--A) is inside the bilayer of a liposome. The
lipid bilayer also contains freely diffusing 2,5 diphenylbenzoquinones, acting as proton shuttles. The molecular triad absorbing a
photon establishes a negative charge near the outer surface and a positive charge near the inner surface of the liposome, by
generating charge separated species D$^+$--BC--A$^-$. The freely diffusing quinone shuttle translocates an electron-proton pair
across the membrane and neutralizes the molecular triads.
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig3.pdf}
\caption[]{ (Color online) Energy diagram depicting the energy
levels of states involved in an artificial photosynthetic reaction
center, \textit{before} the diffusion of the shuttle to the
P-reservoir. The subfigures (a,b,c) correspond to the stages
(a,b,c) in Fig.~2. The left and right panels represent electron and
proton energy levels, respectively. The abbreviations D, B, C, A, S
are the same as used in the text and in Fig.~1. Also, $x_{\rm{D}}$
and $x_{\rm{ A}}$ represent the spatial coordinates of the sites D
and A, respectively. The thick brown arrows denote the path the
electrons follow in this energy diagram, generating charge
separation, in (b), and shuttle charging and protonation in (c).
Initially, light excites an electron from B to C, and eventually to
A, making it A$^-$. Afterwards, in (b), the donor D loses an
electron, thus becoming D$^+$, and that electron moves to BC. Later
on, the shuttle S in (c) receives the electron from A. }
\end{figure}
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig4.pdf}
\caption[]{ (Color online) Energy levels involved in an artificial
photosynthetic reaction center. This figure is similar to Fig.~3,
but now the energy profile corresponds to the stage \emph{after}
the shuttle diffuses to the P-reservoir. Here the subfigures
(d,e,f) correspond to the stages (d,e,f) in Fig.~2. The left and
right panels represent proton and electron energy levels,
respectively. The thick brown arrows denote the path followed by
the electron (e) and proton (f). In (d), an electron on the shuttle
S moves to the donor site D, neutralizing it in (e). This electron
transition in the right panels increase the proton energy of the
shuttle, as shown in the left panels (from (d) to (e)). The proton
finally leaves the shuttle in the left panel of (f). }
\end{figure}
In Fig.~2 we schematically illustrate the process of light-induced proton
pumping in liposomes by artificial photosynthetic reaction centers \cite{gali1,gali2}. The transmembrane proton pumping requires a
symmetric arrangement of the molecular triad (of length $\sim 8$ nm) inside the bilayer and with a specific direction: with the
acceptor (A) site towards the outer membrane of the liposome (the negative (N) side of the membrane), and with the donor (D) towards
the inside of the liposome (the positive (P) side of the membrane) \cite{gali1,gali2}.
The energy diagrams of the electron and proton sites are shown in Figs.~3 and 4. There are two electrons in the system, one of which
is initially on the D site, and another electron is on the lower energy level B. The quinone molecular shuttle has one electron
state S (denoted by S, instead of S$_{\rm{e}}$), and one proton state Q (denoted here by Q instead of S$_{\rm{p}}$). Thus, S denotes
the shuttle electron state and Q denotes the shuttle proton state.
The overall process leading to the proton translocation from the N-reservoir with a lower proton potential, $\mu_{\rm{N}}$, to the
P-reservoir with a \emph{higher} electrochemical potential, $\mu_{\rm{P}}$, can be considered as a sequence of eight stages (most of
which are shown in Fig.~2).
\begin{itemize}
\item Step I: The photosensitive moiety of the molecular triad absorbs light and an electron goes from the ground state B to the
excited state C (see Fig.~3b). \item Step II: The unstable excited state C transfers the electron to the acceptor A, producing an
unstable charge-separated intermediate species D--BC$^{+}$--A$^{-}$. \item Step III: The unstable intermediate charge-separated
species is rapidly rearranged to a relatively stable charge-separated form (D$^{+}$--BC--A$^{-}$) by the thermal electron transfer
from the state D to the state B$^+$ having a lower energy than the state D (Fig.~3b). \item Step IV: The shuttle in the position
near the N-side of the membrane accepts an electron from A$^{-}$ and becomes negatively charged. \item Step V: The shuttle molecule
receives a proton from the N-reservoir and becomes neutralized (Fig.~3c right panel). \item Step VI: The neutral shuttle slowly
diffuses through the lipid bilayer and carries the electron and the proton to the P-side of the membrane and to the D-site (stage
(d) in Fig.~2). \item Step VII: The shuttle gives away the electron to the positively charged site D$^{+}$ (stage (e) in Fig.~2 and
Fig.~4e). \item Step VIII: The shuttle is deprotonated by donating the proton to the P-reservoir (Fig.~4f).
\end{itemize}
This sequence of eight steps describes the photo-induced electron transfer that generates the intra-membrane redox potential, which
in turn drives the energetically uphill vectorial translocation of protons by the shuttle.
Electrons in the states $i$ (= D,B,C,A,S) and protons in the state Q are characterized by the corresponding Fermi operators
$a_i^+,a_i$ and $b_{\rm{Q}}^+,b_{\rm{Q}}$, with the electron population operator $n_i$ and the proton population $n_{\rm{Q}}$. We
assume that each electron or proton state can be occupied by a single electron or a single proton. Spin degrees of freedom are
neglected. The proton site on the shuttle, denoted by Q, can be populated from the N-reservoir provided the shuttle is within the
transition length $L_{\rm{Q}}$ from the N-side of the membrane. The protonated shuttle, located within the transition (or tunneling)
range from the P-side of the membrane, can donate its proton to the P-reservoir. Protons in the reservoirs are described by the
Fermi operators $d_{k\alpha}^+,d_{k,\alpha}$, where $\alpha = \rm {N,P} $; and $k$ is an additional parameter which has the meaning
of a wave vector in condensed matter physics \cite{Wingr93,PumpPRE08,FlagPRE08,PumpTime08}. The number of protons in the reservoirs
is determined by the operator $\sum _{k}N_{k\alpha}$, with $N_{k\alpha}=d_{k\alpha}^+ d_{k\alpha}$.
\subsection{Hamiltonian}
The Hamiltonian of the electron-proton system,
\begin{equation}
H = H_{0} + H_{\rm {dir}} + H_{\rm{tr}} + H_{\rm{B}},
\end{equation}
has a term $H_{0}$ related to the energies $E_i$ of the electron eigenstates
($i$ = D,B,C,A,S), and to the energy $\epsilon_{\rm{Q}}$ of a proton, on the shuttle:
\begin{eqnarray}
H_{0} &=& \sum _{i} E_i n_i + \epsilon_{\rm{Q}} n_{\rm{Q}} +
u_{\rm{DB}} (1-n_{\rm {D}})(1-n_{\rm {B}}-n_{\rm{C}}) \nonumber\\
&-& u_{\rm {DA}} (1-n_{\rm {D}})n_{\rm{A}} -
u_{\rm{BA}}(1-n_{\rm{B}} -n_{\rm{C}})n_{\rm{A}} \nonumber\\
&-&u_{\rm{SQ}}\; n_{\rm{S}} \;n_{\rm{Q}}.
\end{eqnarray}
We include here the electrostatic interaction between the electron sites, $u_{\rm{DB}},u_{\rm{DA}},u_{\rm{BA}}$, and the Coulomb
attraction $u_{\rm{SQ}}$ between the electron and proton sites on the shuttle. It is assumed that the empty donor state D (with
$n_{\rm{D}}=0$) as well as the empty photosensitive group B and C ($n_{\rm {B}} + n_{\rm{C}} = 0$) have positive charges, and
$u_{\rm {DB}}=u_{\rm {DC}},\, u_{\rm{CA}}=u_{\rm{BA}}.$
The term,
\begin{eqnarray}
H_{\rm{dir}} &=&-\Delta_{\rm{DB}}\; a_{\rm{D}}^{\dag}a_{\rm{B}} -\Delta_{\rm{AC}}\; a_{\rm{A}}^{\dag} a_{\rm{C}} - \Delta_{\rm{DS}}
(x) \; a_{\rm{D}}^
{\dag} a_{\rm{S}}\nonumber \\
&-& \Delta_{\rm{AS}} (x)\; a_{\rm{A}}^{\dag} a_{\rm{S}} - F(t) \; a_{\rm{B}}^{\dag} a_{\rm{C}} + h.c.,
\end{eqnarray}
describes the tunneling of electrons between the sites D--B, C--A, A--S, and D--S, with the corresponding amplitudes
$\Delta_{ii'}$. Notice that the tunneling elements $\Delta_{\rm{DS}}(x)$ and $\Delta_{\rm{AS}}(x)$ depend on the shuttle position
$x$. The Hamiltonian $H_{\rm{dir}}$ is also responsible for the electron transitions between the states B and C induced by the
electromagnetic field (light), $F(t)= F_0 \exp(i\omega_0 t)$, with a frequency $\omega_0$ and an amplitude $F_0$. Proton transitions
between the shuttle (site Q) and the N- and P-proton reservoirs are governed by the Hamiltonian
\begin{eqnarray}
H_{\rm{tr}} =-\sum _{k\alpha}T_{k\alpha}(x)\;d_{k\alpha}^{\dag}b_{\rm{Q}} - \sum _{k\alpha}
T_{k\alpha}^*(x)\;b_{\rm{Q}}^{\dag}d_{k\alpha},
\end{eqnarray}
with the position-dependent coefficients, $T_{k\alpha}(x)$. We have chosen the following form of $T_{k\alpha}(x)$:
\begin{eqnarray}
T_{kN}(x) &=&
T_{kN} \theta[x -(x_N-L_Q)],\nonumber\\ T_{kP}(x) &=& T_{kP}
\theta[x_P+L_Q - x],\nonumber
\end{eqnarray}
where $\theta(x)$ is the Heaviside step function, and the
parameter $L_{\rm Q}$ defines the proton loading range of the shuttle.
\subsection{Interaction with the environment}
To take into consideration the effect of a dissipative environment we consider the well-known system-reservoir model
\cite{rd,bath2,bath1}, where the medium surrounding the active sites is represented by a system of harmonic oscillators with the
Hamiltonian:
\begin{eqnarray}
H_{\rm{B}}=\sum _{j}\left[ \frac{p_j^2}{2m_j}+ \frac{m_j\omega_j^2
}{2} \left( x_j +\frac{1}{2} \sum_i x_{ji}n_i\right)^2 \right],
\end{eqnarray}
where ${x_j,p_j}$ are the positions and momenta of the oscillators with effective masses $m_j$ and frequencies $\omega_j$. The
parameters $x_{ji}$ determine the strengths of the coupling between the electron subsystem and the environment. The system of
independent oscillators is conveniently characterized by the spectral functions $J_{ii'}(\omega)$, defined by
\begin{eqnarray}
J_{ii'} (\omega) = \sum_j \frac{m_j\omega_j^3 (x_{ji}-x_{ji'})^2}{2} \delta(\omega - \omega _ j),
\end{eqnarray}
so that the reorganization energy $\lambda_{ii'}$, related to the $i \rightarrow i'$ transition, has the form
\begin{eqnarray}
\lambda_{ii'} = \int_{0}^{\infty} \frac{d \omega} {\omega} J_{ii'}(\omega)
= \sum_j \frac{m_j\omega_j^2 (x_{ji}-x_{ji'})^2}{2}.
\end{eqnarray}
With the unitary transformation $\hat{U}=\prod _i \hat{U}_i$, where
\begin{eqnarray}
\hat{U}_i = \exp{\left[\frac{i}{2} \sum_{j} p_j x_{ji}n_i\right]},
\end{eqnarray}
we can transform the Hamiltonian $H$ to the form $H'=U^{\dag}HU$, becoming (after dropping the prime)
\begin{eqnarray}
H &=& H_{0} - \sum_{ii'} \Delta_{i i'}\; e^{(i/2) (\xi_{i}
-\xi_i')} \;a^{\dag}_{i'} \;a_i \nonumber\\&-& F(t)
e^{-(i/2)(\xi_{\rm{B}} -\xi_{\rm{C}})} \;a^{\dag}_{\rm{B}}
a_{\rm{C}}
-F^*(t) \;a^{\dag}_{\rm{C}} \;a_{\rm{B}}\; e^{(i/2)(\xi_{\rm{B}} -\xi_{\rm{C}})} \nonumber\\
&-& \sum_{k\alpha} T_{k\alpha}(x)\;d_{k\alpha}^ {\dag}\; b_{\rm{Q}} -
\sum_{k\alpha} T_{k\alpha}^*(x)\;b_{\rm{Q}}^{\dag} \;d_{k\alpha}\nonumber\\&+&
\sum _{j}\left(\frac{p_j^2}{2m_j}+\frac{m_j\omega_j^2 x_j^2}
{2}\right),
\end{eqnarray}
where $\alpha$ = N,P, and the tunneling coefficients, $\Delta_{i i'}^* = \Delta_{i' i}$, take non-zero values only for transitions
between the sites D and B, A and C, A and S, as well as D and S.
The stochastic phase operator $\xi_i$ is given by
\begin{eqnarray}
\xi_i = \frac{1}{\hbar} \sum_{j} p_j x_{ji}.
\end{eqnarray}
The result of this transformation follows from the fact that, for an arbitrary function $\Phi(x_j)$, the operator $\hat{U}$
produces a shift of the oscillator positions:
\begin{eqnarray}
\hat{U}^{\dag}\Phi(x_j)\hat{U}=\Phi \left(x_j + \frac{1}{2}\sum_i x_{ji}n_i\right).
\end{eqnarray}
This transformation also results in the phase factors for the electron
amplitudes (see Eq.~(9)).
The basis sets, composed of the electron-proton eigenstates, and
their corresponding energy eigenvalues are presented in an
Appendix. Thus, the reader is encouraged to read this short
Appendix before proceeding further.
\section{Time evolution of density matrix}
\subsection{Master equations}
To describe the time evolution of the diagonal elements
of the density matrix, $\langle \rho_m\rangle$, we
write the Heisenberg equation for the operators $\rho_m$ with the subsequent averaging over the environment fluctuations and over
the states of the proton reservoirs:
\begin{eqnarray}
\langle \dot{\rho}_m \rangle = - \langle i [\rho_m, H_{\rm dir}]_{-}\rangle - \langle i [\rho_m, H_{\rm tr}]_{-}\rangle.
\end{eqnarray}
The protons in the reservoirs ($\alpha$ = N,P) are characterized by the Fermi distributions,
\begin{eqnarray}
F_{\alpha}(E_{k\alpha})= \left[\exp\left(\frac{ E_{k\alpha}-\mu_{\alpha}}{T}\right)
+1 \right]^{-1},
\end{eqnarray}
with the temperature $T$ ($k_{\rm{B}} = 1$). The electrochemical potentials $\mu_{\rm{N}}$ and $\mu_{\rm{P}}$ correspond to the
negative (N) and positive (P) proton reservoirs, respectively. The proton motive force ($\Delta \mu$) across the membrane is given
by
\begin{eqnarray}
\Delta \mu = \mu_{\rm{P}}-\mu_{\rm{N}} = V - \frac{2.3 \ R T}{F}\left(\Delta pH\right), \label{dMu}
\end{eqnarray}
where $R$ and $F$ are the gas constant and Faraday constant, respectively, and $V$ is the transmembrane voltage gradient. Hereafter
we change $\Delta \mu$ by changing the $pH$ of the solution by $\Delta pH$.
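For orientation, a standard numerical estimate (not specific to the present setup): at room temperature, $T = 298$~K, one has $2.3\,RT/F \simeq 59$~mV, so each unit of $\Delta pH$ changes $\Delta \mu$ by about 59~meV per proton.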
The contribution of the transitions between the shuttle and the proton reservoirs to the time evolution of the density matrix is
described by the second term in the right hand side of Eq.~(12), which can be calculated with methods of quantum transport theory
\cite{Wingr93,PumpPRE08}
\begin{eqnarray}
\langle i [\rho_m, H_{\rm tr}]_{-}\rangle = \sum _{n} \left[ \gamma_{nm}^{\rm tr}(x)\langle \rho_m \rangle - \gamma_{mn}^{\rm
tr}(x)\langle \rho_n \rangle \right],
\end{eqnarray}
with the relaxation matrix
\begin{eqnarray}
\gamma_{mn}^{\rm tr}(x) &=& \sum_{\alpha} \Gamma_{\alpha}(x)
\left\{ |b_{Q,mn}|^2[1-F_{\alpha}(\omega_{nm})]\right. \nonumber\\
&+& \left. |b_{Q,nm}|^2 F_{\alpha}(\omega_{mn})\right\}.
\end{eqnarray}
Here we introduce the frequency-independent coefficients,
\begin{eqnarray}
\Gamma_{\alpha}(x) = 2\pi \sum_{k} |T_{k\alpha}(x)|^2 \;\delta (\omega -E_{k\alpha}),
\end{eqnarray}
which determine the transition rates between the shuttle state Q and the sides of the membrane (N- and P-reservoirs). Notice that
these coefficients are functions of the shuttle position $x$.
The transitions between the electron levels are described by the Hamiltonian $H_{\rm dir}$, which can be written as
\begin{eqnarray}
H_{\rm dir} = - \sum_{mn} {\cal A}_{mn} \;\rho_{m,n} - \sum_{mn} \rho_{n,m} \;{\cal A}_{mn}^{\dag},
\end{eqnarray}
with the functions
\begin{eqnarray}
{\cal A}_{mn} &=& Q_{\rm{DB}} (a_{\rm{B}}^{\dag}a_{\rm{D}})_{mn} +
Q_{\rm{CA}} (a_{\rm{A}}^{\dag}a_{\rm{C}})_{mn} + Q_{\rm{SA}}
(a_{\rm{A}}^{\dag}a_{\rm{S}})_{mn}
\nonumber\\
&+& Q_{\rm{SD}} (a_{\rm{D}}^{\dag}a_{\rm{S}})_{mn} + Q_{\rm{CB}} (a_{\rm{B}}^{\dag}a_{\rm{C}})_{mn},
\end{eqnarray}
which are defined as superpositions of the heat-bath operators
\begin{eqnarray}
Q_{i i'} &=& \Delta_{i' i} \exp[(i/2)(\xi_{i} - \xi_{i'})]
\nonumber\\
&=& \Delta_{i' i} \exp[(i/2)\sum_j p_j(t)(x_{j i} - x_{ji'})],
\end{eqnarray}
for the pairs of the electron sites $(i i')$ = (DB),(CA),(SA),(SD),
whereas for the pair (CB) we have
\begin{equation}
Q_{\rm{CB}} = F_0 \exp(i\omega_0 t) \exp[(i/2)\sum_j p_j(t)(x_{j C} - x_{j B})].
\end{equation}
In the case of a high-enough temperature of the bath \cite{bath2}, the cumulant functions of the unperturbed operators $Q_{i
i'}^{(0)}$ are determined by the relations:
\begin{eqnarray}
\langle Q_{i i'}^{(0)}(t),Q_{i i'}^{(0)\dag}(t')\rangle &=&
|\Delta_{i' i}|^2 e^{- i \lambda_{i i'}(t-t')} e^{-\lambda_{i i'} T
(t-t')^2},
\nonumber\\
\langle Q_{i i'}^{(0)\dag}(t),Q_{i i'}^{(0)}(t')\rangle &=&
|\Delta_{i' i}|^2 e^{ i \lambda_{i i'}(t-t')} e^{-\lambda_{i i'} T
(t-t')^2}.\nonumber\\
\end{eqnarray}
The contribution of the electron transitions to Eq.~(12) is determined by the term
\begin{eqnarray}
\langle -i [\rho_m, H_{\rm dir}]_{-}\rangle = i \sum_n \langle
{\cal A}_{mn} \rho_{mn} - {\cal A}_{nm} \rho_{nm} \rangle + h.c.
\nonumber \\
\end{eqnarray}
Within the theory of open quantum systems developed in
Refs.~\cite{PumpTime08}, the correlation function $\langle {\cal
A}_{mn} \rho_{mn}\rangle $ is proportional to the density matrix
elements of the system, $\langle \rho_m \rangle,$ with coefficients
defined by the unperturbed correlators $\langle {\cal
A}_{mn}^{(0)}(t),{\cal A}_{mn}^{(0)\dag}(t')\rangle$ of the bath
operators:
\begin{eqnarray}
\langle {\cal A}_{mn}(t) \rho_{mn}(t) \rangle &=& i \int dt_1
\theta(t-t_1) e^{i\omega_{mn}(t-t_1)}\nonumber
\\
&\times& \left\{
\langle {\cal A}_{mn}^{(0)}(t),{\cal A}_{mn}^{(0)\dag}(t_1)\rangle
\langle \rho_m (t) \rangle \right. \nonumber
\\
&-& \left.\langle {\cal
A}_{mn}^{(0)\dag}(t_1),{\cal A}_{mn}^{(0)}(t)\rangle \langle \rho_n
(t) \rangle \right\},\nonumber\\
\end{eqnarray}
where
\begin{eqnarray}
\langle {\cal A}_{mn}^{(0)}(t),{\cal A}_{mn}^{(0)\dag}(t_1)\rangle
&=& \langle Q_{\rm{CB}}^{(0)}(t),Q_{\rm{CB}}^{(0)\dag}(t_1)\rangle
|(a_{\rm{B}}^{\dag}a_{\rm{C}})_{mn}|^2 \nonumber
\\
&+& \langle Q_{\rm{DB}}^{(0)}(t),Q_{\rm{DB}}^{(0)\dag}(t_1)\rangle
|(a_{\rm{B}}^{\dag}a_{\rm{D}})_{mn}|^2 \nonumber
\\
&+& \langle Q_{\rm{CA}}^{(0)}(t),Q_{\rm{CA}}^{(0)\dag}(t_1)\rangle
|(a_{\rm{A}}^{\dag}a_{\rm{C}})_{mn}|^2 \nonumber
\\
&+& \langle Q_{\rm{SA}}^{(0)}(t),Q_{\rm{SA}}^{(0)\dag}(t_1)\rangle
|(a_{\rm{A}}^{\dag}a_{\rm{S}})_{mn}|^2 \nonumber
\\&+& \langle
Q_{\rm{SD}}^{(0)}(t),Q_{\rm{SD}}^{(0)\dag}(t_1)\rangle
|(a_{\rm{D}}^{\dag}a_{\rm{S}})_{mn}|^2,\nonumber\\
\end{eqnarray}
and the reverse expression can be obtained for the correlator $\langle {\cal A}_{mn}^{(0)\dag}(t_1),{\cal A}_{mn}^{(0)}(t)\rangle$.
The formula (24) is valid in the case of weak tunneling and weak driving force $F_0$. The effects of quantum coherence are also
neglected here.
Finally, we derive the master equation for the density matrix of the system,
\begin{eqnarray}
\langle \dot{\rho}_m\rangle + \sum_{n}\gamma_{nm}(x) \langle \rho_m\rangle = \sum_{n}\gamma_{mn}(x) \langle \rho_n\rangle,
\end{eqnarray}
with the total relaxation matrix
\begin{eqnarray}
\gamma_{mn}(x) &=& \gamma_{mn}^{\rm tr}(x) +
(\kappa_{\rm{DB}})_{mn} + (\kappa_{\rm{CA}})_{mn} \nonumber
\\ &+&
(\kappa_{\rm{SA}})_{mn} + (\kappa_{\rm{SD}})_{mn} +
(\kappa_{\rm{CB}})_{mn},
\end{eqnarray}
containing the contribution of proton transitions to and from the
shuttle, $\gamma_{mn}^{\rm tr}(x)$, together with the Marcus rate
$(\kappa_{\rm{CB}})_{mn}$ describing the light-induced electron
transfer between the sites $B$ and $C$:
\begin{eqnarray}
(\kappa_{\rm{BC}})_{mn}&=&|F_0|^{2}\sqrt{\frac{\pi}{\lambda_{\rm{BC}}T}}
|(a_{\rm{B}}^{\dag}a_{\rm{C}})_{mn}|^2 \nonumber
\\
&\times&\exp\!\left[-\;\frac{\left( \omega_{mn}+\omega_0
+\lambda_{\rm{BC}}\right)^2}{4\lambda_{\rm{BC}}T}\right] \nonumber
\\
&+& |F_0|^{2}\sqrt{\frac{\pi}{\lambda_{\rm{BC}}T}}
|(a_{\rm{B}}^{\dag}a_{\rm{C}})_{nm}|^2 \nonumber
\\
&\times&\exp\!\left[-\;\frac{\left( \omega_{mn}-\omega_0
+\lambda_{\rm{BC}}\right)^2}{4\lambda_{\rm{BC}}T}\right],
\end{eqnarray}
as well as the rates related to the electron transfers between the pairs of sites $(i i')$ = (DB),(CA),(AS), and (DS):
\begin{eqnarray}
(\kappa_{i i'})_{mn}&=& |\Delta_{i' i}|^2
\sqrt{\frac{\pi}{\lambda_{i i'}T}} \left[
|(a_{i'}^{\dag}a_i)_{mn}|^2 + |(a_{i'}^{\dag}a_i)_{nm}|^2 \right]
\nonumber
\\
&\times&\exp\left[-\frac{\left( \omega_{mn} + \lambda_{i
i'}\right)^2}{4\lambda_{i i'}T}\right].
\end{eqnarray}
We note that the tunneling coefficients $\Delta_{\rm{AS}} $ and
$\Delta_{\rm{DS}}$ depend on the shuttle position $x$.
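As an illustration, the Marcus-type rate above can be evaluated with a few lines of code. The sketch below is only meant to make the formula explicit: the function name and argument layout are ours and, as in the rest of the paper, we set $\hbar = k_B = 1$, so that all energies and the temperature are expressed in the same units. The position dependence of $\Delta_{\rm{AS}}(x)$ and $\Delta_{\rm{DS}}(x)$ enters through the argument \texttt{delta}.
\begin{verbatim}
import numpy as np

def marcus_rate(delta, lam, T, omega_mn, a2_mn, a2_nm):
    # (kappa_{ii'})_{mn} = |Delta|^2 sqrt(pi/(lam T))
    #     * (|(a_{i'}^+ a_i)_{mn}|^2 + |(a_{i'}^+ a_i)_{nm}|^2)
    #     * exp[-(omega_mn + lam)^2 / (4 lam T)]
    prefactor = abs(delta)**2 * np.sqrt(np.pi / (lam * T))
    gauss = np.exp(-(omega_mn + lam)**2 / (4.0 * lam * T))
    return prefactor * (a2_mn + a2_nm) * gauss
\end{verbatim}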
\subsection{Equation of motion for the shuttle}
We assume that the shuttle moves along the linear molecular triad (Fig.~1), and this motion can be described by the overdamped
Langevin equation for the shuttle position $x$:
\begin{eqnarray}\label{langevin}
\eta_{\rm{drag}} \;\frac{dx}{dt} = -\;\frac{dU(x)}{dx}+\zeta(t).
\end{eqnarray}
Here $\eta_{\rm{drag}}$ is the drag coefficient of the shuttle in the lipid membrane, and the thermal fluctuation of the medium is
modelled by a zero-mean delta-correlated Gaussian fluctuation force $\zeta(t)$, $\langle \zeta(t)\rangle = 0, $
\begin{eqnarray}
\langle \zeta(t)\zeta(t')\rangle = 2\eta_{\rm{drag}} T \delta(t-t'),
\end{eqnarray}
where $T$ is the temperature of the medium ($k_B$=1). The diffusion of the shuttle is determined by the diffusion coefficient
$D_{\rm{s}}=T/\eta_{\rm{drag}}$. The potential $U(x)$ in Eq.~(\ref{langevin}) is responsible for the spatial confinement of the
hydrophobic shuttle (quinone) inside the lipid membrane.
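To make the numerical treatment of Eq.~(\ref{langevin}) explicit, the sketch below shows one stochastic Heun (predictor-corrector) step with the noise amplitude fixed by the correlator above; the function and variable names are our own, and the force $-dU/dx$ has to be supplied by the user.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(x, dt, eta_drag, T, dU_dx):
    # Position noise over one step: variance 2*D_s*dt with D_s = T/eta_drag.
    dw = np.sqrt(2.0 * T * dt / eta_drag) * rng.standard_normal()
    drift = lambda y: -dU_dx(y) / eta_drag
    x_pred = x + dt * drift(x) + dw                        # predictor
    return x + 0.5 * dt * (drift(x) + drift(x_pred)) + dw  # corrector
\end{verbatim}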
\section{Results and discussions}
To analyze the light-induced proton pumping process
quantitatively, we use the standard Heun's algorithm to numerically
solve the
twenty coupled master equations (26) along with the equation (30)
for the shuttle. For initial conditions we have assumed that at
$t=0$, $\rho_{1,1} = 1$, and the other elements of the density
matrix are zero (this corresponds to one electron on site D and
another electron on site B with no electrons and no protons on the
shuttle). We also assume that at $t=0$ the shuttle is located
near the acceptor (A): $x(t=0) = x_{\rm A} \simeq x_{\rm{N}}$.
Throughout our simulation we focus on the long-term asymptotic
regime, where the effects of transient
processes have been smoothed out. The time-homogeneous statistical
properties are obtained in the long-time limit after the temporal
and ensemble averaging are performed.
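For completeness, we sketch below one Heun step for the populations $\langle\rho_m\rangle$; here \texttt{gamma} is the relaxation matrix $\gamma_{mn}(x)$, which in the full simulation is recomputed at the current shuttle position before every step and interleaved with the Langevin update for $x$. The array conventions are our own.
\begin{verbatim}
import numpy as np

def rhs(rho, gamma):
    # Convention: gamma[m, n] = gamma_mn.
    # d<rho_m>/dt = sum_n gamma_mn <rho_n> - sum_n gamma_nm <rho_m>
    return gamma @ rho - gamma.sum(axis=0) * rho

def heun_step(rho, gamma, dt):
    k1 = rhs(rho, gamma)
    k2 = rhs(rho + dt * k1, gamma)
    return rho + 0.5 * dt * (k1 + k2)
\end{verbatim}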
The efficiency (quantum yield) of the proton pumping device is defined by the formula:
\begin{eqnarray}
\Phi= {\rm \frac{ number \;of\; protons\; pumped}{number\; of\; photons \;absorbed}} \;. \nonumber
\end{eqnarray}
\noindent The photon absorption rate, $\kappa_{\rm{B\rightarrow C}}$, is approximately equal to the rate of light-induced
transitions from the state B to the state C. Thus we assume,
\begin{eqnarray}
\Phi \simeq \frac{I_{\rm{p}}}{\kappa_{\rm{B\rightarrow C}}}, \label{Eff}
\end{eqnarray}
where $I_{\rm{p}}$ is the proton current (the number of protons, $N_{\rm p},$ translocated across the membrane per unit of time).
\subsection{Diffusive motion of the shuttle in the lipid bilayer}
In Fig.~5 we present the diffusive motion (see Fig.~5(a)) of the
shuttle in the lipid bilayer together with the time dependencies
of the electron and proton populations of the shuttle (Fig.~5(b))
\begin{figure}[htp]
\centering\includegraphics[width=8.5cm,angle=0,clip]{Fig5.pdf}
\caption[]{(Color online) (a) Stochastic motion of the shuttle with
time. The horizontal black dashed lines denote the borders of the
membrane, $x_{\rm N} = 40$~\AA, $x_{\rm P} = -40$~\AA.
Via this diffusion the shuttle transports protons and electrons through the membrane.
(b) Variation of the electron and proton population on the shuttle. Note that the proton density (red curve) and the electron
density (black curve) mostly coincide in (b). (c) Number of protons pumped versus time.
The main parameters used here are the light intensity $I=0.138$
mWcm$^{-2}$, temperature $T = 298$ K, and the chemical potentials
$\mu_{\rm{P}}=110$ meV and $\mu_{\rm{N}}=-110$ meV. The light
intensity $I$ corresponds to the photosensitive BC-group with a
dipole moment $\sim |e|\times 1$~nm, where $e$ is the electron
charge.}
\end{figure}
complemented by the time evolution of the number of
pumped protons (Fig.~5(c)). We assume that the reorganization
energies for the thermal electron transfers are low enough to
provide a high performance of the system:
\begin{eqnarray}
\lambda \sim \lambda_{\rm DB} \sim \lambda_{\rm AC}
\sim \lambda_{\rm AS} \sim \lambda_{\rm DS} \sim 400 \;{\rm meV.}\nonumber
\end{eqnarray}
This value of $\lambda$ is quite common for porphyrin-quinone
dyads having a lower limit (the internal reorganization energy) of
the order of 0.3 eV \cite{Heitele94}. Even smaller reorganization
energies ($\lambda \sim$ 230 meV) have been measured for the
porphyrin-fullerene dyads \cite{ImahoriJPC01}. The initial stages
of electron transfer in bacterial reaction centers \cite{Parson98}
are also characterized by a low reorganization energy: $\lambda
\sim$ 70--300 meV, depending on the environment. This is
due to the fact that the bacteriochlorophyll molecules (and the molecules of porphyrin involved in our molecular triad)
contain highly delocalized $\pi$-electron systems. In the next section, we also analyze how sensitive the results are to changes in
the values of $\lambda$.
Electrochemical measurements \cite{GustSc89} show that the energy
of the carotene(D)-porphyrin(BC)-quinone(A) molecular triad sweeps
from the value $\sim$1.9 eV (the first excited state of the
porphyrin, D--B$^1$C--A), to the energy $\sim$1.4 eV, related to
the intermediate state D--BC$^{+}$--A$^{-}$, and, finally, to the
energy, $\sim$1.1 eV, of the charge-separated state
D$^{+}$--BC--A$^{-}$. We assume here that the energy of the first
excited state of the porphyrin, $E_{\rm C} - E_{\rm B}$, is
1908~meV, which corresponds to a photon wavelength of 650 nm as
used in experiments \cite{gali1,gali2}. We have taken the energy
gap between the sites C and A to be approximately equal to the
reorganization energy, $ (E_{\rm{C}}-E_{\rm{A}})\sim \lambda = 400$
meV. This gap is about the energy difference between the
D--B$^1$C--A and D--BC$^{+}$--A$^{-}$ states.
The energies of the electron sites S and A are comparable,
$(E_{\rm{A}}-E_{\rm{S}})\simeq 300$ meV, due to a structural
similarity of the quinone shuttle (S) and quinone moiety of the
molecular triad. The protonation of the shuttle leads to the
lowering of the electron energy on site S due to the
electron-proton Coulomb attraction \cite{gali1}, $u_{\rm{SQ}} \sim
360 $ meV. The other Coulomb interaction terms are chosen as
$u_{\rm{DB}}=u_{\rm{BA}}=120$ meV and $u_{\rm{DA}}=60$ meV. These
values correspond to the electrostatic interaction of two charges
located at distances 4 nm and 8 nm, respectively (in a medium with
a dielectric constant $\sim$ 3). Furthermore, we assume that
$E_{\rm{D}}-E_{\rm{B}} = 400$ meV and $\epsilon_{\rm{Q}} =200$~meV.
We have chosen $\epsilon_{\rm{Q}}$ such that, for the above
mentioned parameters, the device works well at the transmembrane
potential difference $\sim$ 200 mV.
We choose $\mu_{\rm{P}} = 110$ meV, $\mu_{\rm{N}}=-110$ meV, the
resonant tunneling rates $\Delta/\hbar =15\;$ns$^{-1}$, $\Gamma
/\hbar =1.5\;$ns$^{-1}$, and the reorganization energy for the
light-induced electron transfer, $\lambda_{\rm{BC}}\sim 80$ meV.
The majority of parameters in our model are deduced from
experimental data. The rates of electron transfer reactions are
given by
\begin{eqnarray}
\kappa_{C \rightarrow A} &\simeq& \kappa_{D \rightarrow B} \simeq 26 \;\mu{\rm s}^{-1} ,\nonumber \\
\kappa_{A \rightarrow S} &\simeq& \kappa_{S \rightarrow A} \simeq 20 \;\mu{\rm s}^{-1} .\nonumber
\end{eqnarray}
Therefore, the loading and unloading time scales of the shuttle are about $0.05\;\mu$s. The shuttle has enough time to be loaded and
unloaded with electrons and protons when it enters the loading/unloading domain whose size is about the electron tunneling length,
$L_{\rm tun}\sim$0.5 nm, and the proton transition length, $L_{\rm Q}\sim$0.2 nm. Figures~5(a,b) show a time synchronization between
the spatial motion of the shuttle and the time variations of the shuttle populations.
It follows from Fig.~5(c) that in 1 ms the shuttle performs about 16 trips and translocates $\sim 10$ protons through the membrane,
provided that the light intensity $I$ = 0.133 mWcm$^{-2}$. We assume that the diffusion coefficient $D_{\rm s}$ is of the order of
$2~$nm$^2 \mu {\rm s}^{-1}$ \cite{geyer1}, and the dipole moment of the BC moiety is about $|e|\times$1~nm, where $e$ is the
electron charge. The number of photons absorbed in 1 ms is $\sim \;18$. Thus, the approximate quantum yield $\Phi$ of the pumping
process is $\sim 55 \; \%$. For these parameters, the diffusive motion of the shuttle is the slow, rate-limiting step of the
pumping process.
\subsection{Robustness of the model}
To demonstrate the tolerance of the system to parameter variations, we
explore here the parameter space of our model. Keeping fixed the
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig6.pdf}
\caption[]{(Color online) Contour plots presenting the variations
of the quantum efficiency $\Phi$ with the reorganization energy
$\lambda$ and with the energy gap $\delta$, where $\delta =E_{\rm
{C}} - E_{\rm {A}} = E_{\rm {S}} - E_{\rm {D}}$. The parameters
used here are: light intensity $I=0.138$ mWcm$^{-2}$, temperature
$T = 298$ K, and chemical potentials $\mu_{\rm{P}}=110$ meV and
$\mu_{\rm{N}}=-110$ meV. The detunings take the following values:
(a) $E_{\rm{A}} - E_{\rm{S}} = 100$ meV; (b) $E_{\rm{A}} -
E_{\rm{S}} = 300$ meV; (c) $E_{\rm{A}} - E_{\rm{S}} = 500$ meV.}
\end{figure}
energy difference between the sites B and C, we calculate and plot
(see Fig.~6) the pumping efficiency $\Phi$ (photon-to-proton
quantum yield) as a function of the reorganization energy,
$\lambda$, and the energy gap, $\delta$,
\begin{eqnarray}
\lambda \sim \lambda_{\rm DB} \sim \lambda_{\rm AC}
\sim \lambda_{\rm AS} \sim \lambda_{\rm DS}, \nonumber\\
\delta = E_{\rm{C}} - E_{\rm{A}} = E_{\rm{S}} - E_{\rm{D}}, \nonumber
\end{eqnarray}
\noindent between the energy levels $E_{\rm{C}}$ and $E_{\rm{A}}$, and between the levels $E_{\rm{S}}$ and $E_{\rm{D}}$. Figures
6(a), 6(b), and 6(c) correspond to different values of the detuning between the acceptor energy level, $E_{\rm{A}}$, and the electron
energy level on the shuttle, $E_{\rm{S}}$: $E_{\rm{A}} - E_{\rm{S}} = 100$ meV (Fig.~6(a)); $E_{\rm{A}} - E_{\rm{S}} = 300$ meV
(Fig.~6(b)); $E_{\rm{A}} - E_{\rm{S}} = 500$ meV (Fig.~6(c)). These plots clearly demonstrate the existence of quite wide areas in
the plane $\lambda$ -- $\delta$, where the pump performs with maximum efficiency. For the detuning $E_{\rm {A}} - E_{\rm {S}}$ = 100
meV (Fig.~6(a)) the pumping efficiency reaches its maximum, $\Phi \sim 48 \%$, in the region of parameters (in meV): $ 270 <
\lambda < 500$, and $400 < \delta < 700$. In this region, the energy gaps between the redox sites are close to the reorganization
energy, which results in higher site-to-site tunneling rates and, consequently, in a high pumping efficiency.
A higher pumping efficiency, $\Phi\sim 55 \%$, can be achieved at the detuning $E_{\rm {A}} - E_{\rm {S}}$ = 300 meV (Fig.~6(b)).
In this case the parameter $\delta$ can be tuned in such a way that the energy gaps between all relevant electron sites are equal to
the reorganization energy:
\begin{eqnarray}
&&(E_{\rm{C}} - E_{\rm{A}}) \sim (E_{\rm{A}} - E_{\rm{S}})
\nonumber\\
&&\sim (E_{\rm{S}}
- u_{\rm{SQ}} - E_{\rm{D}}) \sim (E_{\rm{D}} - E_{\rm{B}})
\sim \lambda.\nonumber
\end{eqnarray}
\noindent We recall that a shuttle populated with a proton has the electron energy, $E_{\rm{S}} - u_{\rm{SQ}}$, which differs from
the initial value $E_{\rm{S}}$ by the charging energy, $u_{\rm{SQ}} \sim 360$ meV. Summing all the above-mentioned detunings and
taking into account the energy difference, $E_{\rm{C}} - E_{\rm{B}} = 1908$ meV, between the optically-active levels B and C, we
estimate the optimum values of the reorganization energy $\lambda $ and the detuning $\delta$:
\begin{eqnarray}
\lambda \sim \delta
\sim (E_{\rm{C}} - E_{\rm{B}} - u_{\rm{SQ}})/4 = 387 {\rm\; meV}. \nonumber
\end{eqnarray}
The maximum of the efficiency in Fig.~6(b) is observed at $\delta \sim \lambda \sim 400$ meV, which is very close to our
estimations. For a larger energy gap, $E_{\rm{A}} - E_{\rm{S}} = 500$ meV (see Fig.~6(c)), the proton pumping efficiency $\Phi$
decreases and the region of the optimum parameters shrinks compared to Fig.~6(b).
\subsection{Effects of the resonant tunneling rates}
The fine-tuning of tunneling couplings between active electron
sites is feasible in some nanostructures. This tuning can be
implemented by changing the site-to-site distance, as well as by
varying the height of the potential barriers (see, e.g.,
\cite{Milliron04}).
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig7.pdf}
\caption[]{(Color online) Proton pumping quantum efficiency $\Phi$
versus resonant tunneling rate $\Delta$, at different
reorganization energies $\lambda$, shown in (a,b), and for
different detunings $\delta$, shown in (c,d). Note that $\Delta$
here represents $\Delta/\hbar$, since we set $\hbar = 1.$ We use
the following parameters: $I=0.138$ mWcm$^{-2}$, $T = 298$ K,
$\mu_{\rm{P}}=110$ meV, $\mu_{\rm{N}}=-110$ meV, and the energy gap
$(E_{\rm {A}} - E_{\rm {S}})$ = 300 meV. Panels (a,b) are plotted
at fixed $\delta = 400$ meV, whereas in (c,d) the reorganization
energy is fixed, with $\lambda = 400$ meV.}
\end{figure}
Artificial photosynthetic systems, such as the
molecular triads, also make it possible to engineer desirable tunneling and
electrostatic properties of the structures \cite{GustSc89} with the
goal of achieving the highest possible efficiency. As in colloidal
nanocrystals \cite{Milliron04}, this can be done by inserting
additional molecular bridges between the side centers D, A and the
photosensitive part BC utilizing the exponential dependence of
electron tunneling rates on the distance \cite{Wasiel06}.
In Fig.~7 we illustrate the variation of the proton pumping
efficiency $\Phi$ as a function of the resonant tunneling rate
($\Delta/\hbar$) for different values of the reorganization energy
$\lambda$ (Figs.~7(a), 7(b)) and the energy gap $\delta$
(Figs.~7(c), 7(d)). The detuning, $E_{\rm{A}} - E_{\rm{S}}$, is
fixed to the value 300 meV for all plots in Fig.~7.
In Fig.~7(a) we plot four curves, $\Phi(\Delta)$, for the
following set of reorganization energies: $\lambda = 100, \ 130, \
200, \ 400$ meV and for a detuning $\delta = 400$ meV. In Fig.~7(b)
the efficiencies $\Phi(\Delta)$ are plotted for the reorganization
energies: $\lambda = 500, \ 800, \ 1000, \ 1200$ meV, for the same
detuning $\delta$. Similar dependencies, $\Phi(\Delta)$, are
depicted in Fig.~7(c) for $\delta = 100, \ 130, \ 200,\ 400$ meV
and in Fig.~7(d) for $\delta = 500,\ 600,\ 700,\ 800$ meV. In
both, Figs.~7(c) and 7(d), the reorganization energy $\lambda$ is
equal to 400 meV.
It follows from Fig.~7 that initially the proton pumping
efficiency rapidly increases with increasing $\Delta$, followed by
its saturation for higher values of the resonant tunneling rate.
The saturation limit depends on the reorganization energy $\lambda$
as well as on the energy gap $\delta$. For the optimum values of
$\lambda $ and $\delta$: $\lambda \sim \delta \sim$~400~{\rm meV},
the pumping efficiency is sufficiently high, $\Phi \sim 55$ \%,
even for moderate tunneling rates, $\Delta/\hbar \leq 5$ ns$^{-1}$.
\subsection{Effects of Coulomb interactions}
In Fig.~8 we plot the efficiency $\Phi$ versus the dielectric
constant $\varepsilon$ of the medium, to explore the effects of the
Coulomb couplings $u_{\rm{DB}}$, $u_{\rm{BA}}$, and $u_{\rm{DA}}$
on the performance of the proton pump. The electrostatic
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig8.pdf}
\caption[]{(Color online) (a) Coulomb energies $u_{\rm{DA}}$ and
$u_{\rm{DB}}$ versus the dielectric constant $\varepsilon$ of the
medium. (b) Proton pumping efficiency $\Phi$ versus dielectric
constant $\varepsilon$ for different values of $\delta$ and for
$\lambda = 400$ meV. (c) The pumping efficiency $\Phi$ as a
function of the dielectric constant $\varepsilon$ for different
reorganization energies $\lambda$ and at the fixed detuning $\delta
= 400$ meV. The other parameters are the same as in Fig.~7:
$I=0.138$ mWcm$^{-2}$, $T = 298$ K, $\mu_{\rm{P}}=110$ meV,
$\mu_{\rm{N}}=-110$ meV, and $E_{\rm {A}} - E_{\rm {S}}$ = 300 meV.
}
\end{figure}
interactions between the photosensitive part B and C and the donor,
$u_{\rm{DB}}$, between the sites B and C and the acceptor,
$u_{\rm{BA}}$, and between the donor and the acceptor,
$u_{\rm{DA}}$, are inversely proportional to the dielectric
constant $\varepsilon$ and to the distance between the relevant
sites. For example, we have
\begin{eqnarray}
u_{DB} = \frac{e^2}{4\pi \,\varepsilon_0\, \varepsilon\, r_{\rm{DB}}}, \nonumber
\end{eqnarray}
\noindent where $r_{\rm{DB}}$ characterizes the spatial separation of the sites D and B, and $\varepsilon_0$ is the vacuum
permittivity. The Coulomb interactions between the sites D and B and between the sites B and A (with $r_{\rm{DB}} = r_{\rm{BA}}$ = 4
nm) are decreased from 360 meV to 36 meV when the dielectric constant $\varepsilon$ scans the range from 1 to 10 (see Fig.~8(a)). We
note that in our model $r_{\rm{DA}} = 8$ nm, so that $u_{\rm{DA}} = u_{\rm{DB}}/2$. Figure~8(b) shows the efficiencies
$\Phi(\varepsilon)$ for different values of the detuning $\delta$ : $\delta = 100, \ 200,\ 400,\ 600$ meV, for $\lambda = 400$ meV.
Moreover, in Fig.~8(c) we plot the efficiencies, $\Phi(\varepsilon)$, for fixed detuning $\delta = 400$ meV and for $\lambda = 100,\
200,\ 400,\ 600$ meV.
It should be emphasized that near the optimum working point (at $\lambda \sim \delta \sim 400$ meV) the pump operates with a high
efficiency, $\Phi \sim 55$\%, which is practically independent of the dielectric properties of the medium.
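The Coulomb energies quoted in this subsection can be checked with a short estimate based on $e^{2}/(4\pi\varepsilon_{0})\simeq 1.44$~eV$\cdot$nm; the snippet below is only such a consistency check and does not enter the simulations.
\begin{verbatim}
COULOMB_eV_nm = 1.44   # e^2/(4 pi eps0) in eV*nm

def u_meV(r_nm, eps):
    return 1.0e3 * COULOMB_eV_nm / (eps * r_nm)

# r_DB = r_BA = 4 nm: u drops from ~360 meV (eps = 1) to ~36 meV (eps = 10);
# r_DA = 8 nm gives u_DA = u_DB/2.
print(u_meV(4.0, 1.0), u_meV(4.0, 10.0), u_meV(8.0, 1.0))
\end{verbatim}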
\subsection{Effect of light intensity} In Fig.~9 we plot the proton
current as a function of the light intensity for different values of the temperature. At zero light intensity the proton current is
zero. With increasing light intensity, the proton current first increases linearly and then saturates around $0.2$ mW
cm$^{-2}$. This saturation is probably caused by the slow diffusion of the shuttle inside the lipid membrane. A similar
saturation intensity ($\sim$ 0.1 mW/cm$^2$) has been observed in experiments \cite{gali2}.
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig9.pdf}
\caption[]{(Color online) (a) Proton current versus light
intensity $I$ for different temperatures, at $\mu_{\rm{N}}= - 110$
meV, and $\mu_{\rm{P}}= 110$ meV.
Notice that the proton current is roughly linear for small intensities
of light, but it saturates with higher light intensity. In this saturation
region, the proton current is larger with higher temperatures. (c)
The standard deviation, $\sigma_{\rm p},$ of the number $N_{\rm p}$ of pumped protons as a function of the light intensity $I$, for
different temperatures.
(b) The pumping quantum efficiency $\Phi$ decreases with light intensity for all temperatures shown.
}
\end{figure}
In a warm environment, the shuttle moves faster and carries more protons. To do this, the system should absorb more photons, so that
at high temperatures a full saturation takes place at higher light intensities. We note that the low saturation limit obtained above
and measured in the experiment \cite{gali2} with the carotene-porphyrin-quinone triads is far below the average intensity of solar
light, $I \sim$ 30 mW/cm$^2$ \cite{Hambourger09}. This fact points to the relative inefficiency of the energy-conversion process
under normal daylight conditions. An ideal highly-efficient photosynthetic system should not have any saturation limits for
the standard daylight intensity of light.
It is evident from Fig.~5 that the number of protons, $N_{\rm p}$, translocated across the membrane fluctuates in time. To estimate
these fluctuations we calculate the standard deviation,
\begin{equation}
\sigma_{\rm p} = \sqrt{\langle N_{\rm p}^2 \rangle - \langle N_{\rm p} \rangle^2}, \label{sigmaP}
\end{equation}
which characterizes the magnitude of its shot noise. The dependence of the noise level $\sigma_{\rm p}$ on the light
intensity $I$ is shown in Fig.~9(b). For a light intensity $I \sim$~0.14 mW/cm$^2$ (when $\sim$~10 protons are
translocated across the membrane and the efficiency, $\Phi \sim$ 55\%, is sufficiently high) the uncertainty $\sigma_{\rm p}$ in the
number of pumped protons is about 1.3. In Fig.~9(c) we demonstrate that the efficiency $\Phi$ of the light-induced pumping
decreases monotonically with increasing light intensity. At low light intensities, a relatively small number of photons are absorbed
per unit time. Thus, a higher fraction of the absorbed photons is used for the uphill pumping of the protons.
\subsection{Effect of temperature}
Figure~10 shows the effects of temperature on the pumping current
and on the efficiency of the photosynthetic device for different
values of the light intensity. The temperature effects appear in
the light-induced proton pumping dynamics through two factors: (i)
The electron transfer rates, including the loading and unloading
rates of the shuttle, increase with increasing temperature. (ii)
The diffusion coefficient of the shuttle increases with
temperature.
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig10.pdf}
\caption[]{(Color online) (a) Proton current versus temperature
for different values of the light intensity $I$. (b) Pumping
efficiency $\Phi$ versus temperature. Here, the electrochemical
gradient $\Delta \mu~=~220 $ meV ($\mu_{\rm{P}}~=~110$ meV, and
$\mu_{\rm{N}}~=~-110$ meV).}
\end{figure}
Because of this, the shuttle can perform a higher
number of trips to translocate protons at higher temperatures. Here,
the electron transfer reactions are not the rate-limiting steps. The
diffusive trips of the shuttle from the N terminal to the P
terminal dominate the transfer rate. Therefore, the increase of the
efficiency and the pumping current with temperature is due to the
increase of the number of diffusive trips of the shuttle. A
temperature increase from 200 K to 400 K results in an increase of
about a factor of two in the diffusion constant.
It is expected that the proton current should increase at the same rate. However, our calculated ratio is about 1.5 (see Fig.~10(a)).
This is probably due to the fact that at high temperatures the shuttle does not have enough time to be completely loaded with
electrons and protons near the acceptor site A and the N-side of the membrane (and unloaded near the donor site D and the P-side of
the membrane). A similar enhancement of the pumping current and the efficiency with temperature
can be useful for photosynthetic microorganisms to compensate for the leakage of protons caused by the high-temperature increase of the
membrane permeability \cite{Vossenberg95}. The simple physical features which come into play in our model are also important for
the creation of thermostable artificial photosynthetic devices efficiently converting energy of light into electrical and chemical
energy in a wide range of temperatures and light intensities.
\subsection{Effect of the electrochemical potential gradient on the proton current}
It follows from Eq.~(\ref{dMu}) that the difference, $\Delta \mu =
\mu_{\rm P} - \mu_{\rm N},$ between the electrochemical potentials
of P- and N-proton reservoirs can be changed by changing the $pH$
levels of the solutions inside and outside of the liposome.
\begin{figure}[htp]
\centering\includegraphics[width=8cm,angle=0,clip]{Fig11.pdf}
\caption[]{(Color online) Proton pumping current versus
electrochemical potential $\mu_{\rm{P}}$ of the positive side
(P-reservoir) of the membrane for different values of the potential
$\mu_{\rm{N}}$ of the negative side (N-reservoir) for the light
intensity $I=0.132$ mWcm$^{-2}$ and temperature $T=298$ K.}
\end{figure}
In
doing so, a change of one $pH$ unit corresponds to a $\sim 59$ meV
variation of the transmembrane proton gradient $\Delta \mu$ (at
standard conditions). To demonstrate the effect of the $pH$ levels
on the performance of the pump, in Fig.~11 we plot the dependencies
of the proton current on the electrochemical potential $\mu_{\rm
P}$ of the positive side of the membrane at three different values
of the N-side potential: $\mu_{\rm N}$ = --110; --140; and --200
meV. The proton current saturates when the P-side potential is
sufficiently low, $\mu_{\rm P} < 160$~meV, and goes to zero at
$\mu_{\rm P} > 200$~meV. Under this condition, the potential of the
P-side exceeds the energy, $\epsilon_Q = 200$~meV, of the proton on
the shuttle: $\mu_{\rm P} > \epsilon_Q$, so that the proton cannot
be translocated to the P-reservoir. On the other hand, the shuttle
cannot be loaded with a proton at the N-side of the membrane if the
electrochemical potential $\mu_{\rm N}$ is below the energy,
$\epsilon_Q - u_{SQ} = - 160$~meV, of the proton on the shuttle
populated with a single electron: $\mu_{\rm N} < -160$~meV. This is
the reason why the last curve in Fig.~11 (taken at $\mu_{\rm N}$ =
--200 meV) goes far below the other two curves (plotted for
$\mu_{\rm N} > -160$~meV).
\section{Conclusions}
We have analyzed a simple model for light-induced proton pumps in artificial photosynthetic systems. This model has five electron
sites [four sites (D,B,C,A) for the triad molecule and one site for the shuttle (S)] and one proton-binding site on the shuttle (Q).
The shuttle exhibits diffusive motion in the lipid bilayer, so that the electron and proton populations of the shuttle depend on the
shuttle position. Based on the methods of quantum transport theory we have derived and solved numerically a system of master
equations for electron and proton state probabilities evolving in time together with the Langevin equation for the position of the
shuttle. This allows us to calculate the proton current and the pumping efficiency of the system and determine their dependence on
the intensity of light, the temperature, and the electrochemical potential gradient.
For a reasonable set of parameters, closely related to the experimental setup, we demonstrate that this photosynthetic device can
translocate protons against an electrochemical gradient of the order of 220 meV with an efficiency (photon-to-proton quantum yield)
exceeding 55\%. Our results explain experiments on artificial photosynthetic reaction centers \cite{gali2}. We predict that both
the proton current and the pumping efficiency grow linearly with temperature due to the related increase of the number of diffusive
trips of the shuttle. We also show that the pumping current increases linearly with the light intensity and saturates at the
experimentally-observed limit, which is lower than the average intensity of solar light.
\section*{Acknowledgments}
We acknowledge partial support from the National Security Agency
(NSA), Laboratory for Physical Sciences (LPS), Army Research Office
(ARO), and National Science Foundation (NSF) grant No. 0726906. We
also acknowledge the RIKEN Super Combined Cluster System for
computational facilities.
|
1,108,101,564,294 | arxiv | \section{Introduction}
\label{sec:intro}
Automatic Speech Recognition (ASR) \cite{lideng}, thanks to the substantial performance improvement achieved with modern deep learning technologies \cite{Goodfellow-et-al-2016-Book}, has recently been applied in several fields,
and it is currently used by millions of users worldwide.
Nevertheless, most state-of-the-art systems are still based on close-talking solutions, forcing the user to speak very close to a microphone-equipped device.
It is easy to predict, however, that in the future users will prefer to relax the constraint of handling or wearing any device to access speech recognition services, requiring technologies able to cope with a distant-talking (far-field) interaction.
In the last decade, several efforts have been devoted to improving Distant Speech Recognition (DSR) systems. Valuable examples include the AMI/AMIDA projects \cite{ami}, which focused on automatic meeting transcription, DICIT \cite{dicit_1}, which investigated voice-enabled TVs, and, more recently, DIRHA, which addressed speech-based domestic control.
The progress in the field was also fostered by the considerable success of some international challenges such as CHiME \cite{chime,chime3} and REVERB \cite{revch_full}.
Despite the great progress made in the past years, current systems still exhibit a significant lack of robustness to acoustic conditions characterized by non-stationary noises and acoustic reverberation \cite{adverse}.
To counteract such adversities, even the most recent DSR systems \cite{nakatani}
must rely on a combination of several interconnected technologies, including for instance speech enhancement \cite{BrandWard}, speech separation \cite{bss}, acoustic event detection and classification \cite{aed1,eusipco}, speaker identification \cite{Beigi}, speaker localization \cite{gcf,hscma}, just to name a few.
A potential limitation of most current solutions lies in the weak matching and communication between the various modules being combined.
For example, speech enhancement and speech recognition are often designed independently and, in several cases, the enhancement system is tuned according to metrics which are not directly correlated with the final ASR performance.
An early attempt to mitigate this issue was published in \cite{limabeam}. In LIMABEAM, the goal was to tune the parameters of a microphone array beamformer by maximizing the likelihood
obtained through a GMM-based speech recognizer. Another approach was proposed in \cite{droppo}, where a front-end for feature extraction and a GMM-HMM back-end were jointly trained using maximum mutual information.
An effective integration between the various systems, however, was very difficult for many years, mainly due to the different nature of the technologies involved at the various steps.
Nevertheless, the recent success of deep learning has not only largely contributed to the substantial improvement of the speech recognition part of a DSR system \cite{pawel2,hain,dnn_rev,dnn_rev2,dnn3,rav_in14,ravanelli15}, but has also enabled the development of competitive DNN-based speech enhancement solutions \cite{dnn_se1,dnn_se2,dnn_se3}.
Within the DNN framework, one way to achieve a fruitful integration of the various components is joint training.
The core idea is to pipeline a speech enhancement and a speech recognition deep neural networks and to jointly update their parameters as if they were within a single bigger network. Although joint training for speech recognition is still an under-explored research direction, such a paradigm is progressively gaining more attention and some interesting works in the field have been recently published \cite{joint2,joint1,joint3,joint6,joint7,joint4,joint5}.
In this paper, we contribute to this line of research by proposing an approach based on joint training of a speech enhancement and a speech recognition DNN coupled with batch normalization in order to help making one network less sensitive to changes in the other. Batch normalization \cite{batchnorm}, which has recently been proposed in the machine learning community, has been shown crucial to significantly improve both the convergence and the performance of the proposed joint training algorithm.
Differently from previous works \cite{joint1,joint3}, thanks to batch normalization, we are able to effectively train the joint architecture even without any pre-training steps.
Another interesting aspect concerns a deeper study of a gradient weighting strategy, which turned out to be particularly effective in improving performance.
The experimental validation has been carried out in a distant-talking scenario considering different training datasets, tasks and acoustic conditions.
\section{Batch-normalized joint training}
The proposed architecture is depicted in Fig.~\ref{fig:arch}. A bigger joint DNN is built by concatenating a speech enhancement and a speech recognition MLP. The speech enhancement DNN is fed with the noisy features $x_{noise}$ gathered within a context window and tries to reconstruct at the output the original clean speech (regression task).
The speech recognition DNN is fed by the enhanced features $x_{enh}$ estimated at the previous layer and performs phone predictions $y_{pred}$ at each frame (classification task). The architecture of Fig. \ref{fig:arch} is trained with the algorithm described in Alg. \ref{alg}.
\label{sec:format}
\begin{figure}[t!]
\centering
\includegraphics[width=0.42\textwidth]{prop_sys2.png}
\caption{The DNN architecture proposed for joint training.}
\label{fig:arch}
\end{figure}
The basic idea is to perform a forward pass, compute the loss functions at the output of each DNN (mean-squared error for speech enhancement and negative multinomial log-likelihood for speech recognition), compute and weight the corresponding gradients, and back-propagate them.
In the joint training framework, the speech recognition gradient is also back-propagated through the speech enhancement DNN. Therefore, at the speech enhancement level, the parameter updates not only depend on the speech enhancement cost function but also on the speech recognition loss, as shown by Eq.~\ref{eq:updates}:
\begin{equation}
\theta_{SE} \gets \theta_{SE}- lr * (g_{SE}+\lambda g_{SR}) \,.
\label{eq:updates}
\end{equation}
In Eq.~\ref{eq:updates}, $\theta_{SE}$ are the parameters of the speech enhancement DNN, $g_{SE}$ is the gradient of such parameters computed from the speech enhancement cost function (mean squared error), while $g_{SR}$ is the gradient of $\theta_{SE}$ computed from the speech recognition cost function (multinomial log-likelihood). Finally, $\lambda$ is a hyperparameter for weighting $g_{SR}$ and $lr$ is the learning rate.
The key intuition behind joint training is that since the enhancement process is in part guided by the speech recognition cost function, the front-end would hopefully be able to provide enhanced speech which is more suitable and discriminative for the subsequent speech recognition task.
From a machine learning perspective, this solution can also be considered as a way of injecting a useful task-specific prior knowledge into a deep neural network.
On the other hand, it is well known that training deep architectures is easier when some hints are given about the targeted function \cite{know_matter}.
As shown previously \cite{know_matter}, such prior knowledge becomes progressively more precious as the complexity of the problem increases and can thus be very helpful for a distant speech recognition task. Similarly to the current work, in \cite{know_matter,Romero-et-al-ICLR2015-small} a task-specific prior knowledge has been injected into an intermediate layer of a DNN for better addressing an image classification problem.
In our case, we exploit the prior assumption that to solve our specific problem, it is reasonable to first enhance the features and, only after that, perform the phone classification.
Note that this is certainly not the only way of solving the problem, but among all the possible functions able to fit the training data, we force the system to choose from a more restricted subset, potentially making training easier.
On the other hand, good prior knowledge is helpful to defeat the curse of dimensionality, and
a complementary view is thus to consider the proposed joint training as a regularizer.
According to this vision, the weighting parameter $\lambda$ of Eq. \ref{eq:updates} can be regarded as a regularization hyperparameter, as will be better discussed in Sec. \ref{sec:gw}.
\begin{algorithm}[t!]
\caption{Pseudo-code for joint training}
\label{alg}
\begin{algorithmic}[1]
\State \textbf{DNN initialization}
\For {i in minibatches}
\State \textbf{Forward Pass:}
\State Starting from the input layer do a forward pass
\State (with batch normalization) through the networks.
\State \textbf{Compute SE Cost Function:}
\State $MSE_i=\frac{1}{N}\sum_{n=1}^{N}(x_{enh}^i-x_{clean}^i)^2$
\State \textbf{Compute SR Cost Function:}
\State $NLL_i=-\frac{1}{N}\sum_{n=1}^{N}y_{lab}^i log(y_{pred}^i)$
\State \textbf{Backward Pass:}
\State Compute the grad. $g_{SE}^i$ of $MSE_i$ and backpropagate it.
\State Compute the grad. $g_{SR}^i$ of $NLL_i$ and backpropagate it.
\State \textbf{Parameters Updates:}
\State $\theta_{SE}^i \gets \theta_{SE}^i - lr * (g_{SE}^i+\lambda g_{SR}^i)$
\State $\theta_{SR}^i \gets \theta_{SR}^i - lr * g_{SR}^i$
\EndFor
\State Compute NLL on the development dataset
\If {$NLL_{dev} < NLL_{dev}^{prev}$}
\State Train for another epoch (go to 2)
\Else
\State Stop Training
\EndIf
\end{algorithmic}
\end{algorithm}
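To make the updates of Alg.~\ref{alg} and Eq.~\ref{eq:updates} concrete, we report below a simplified sketch of a single mini-batch update written in PyTorch-style Python. Our actual implementation is based on Theano, as detailed later; the layer sizes, the values of $lr$ and $\lambda$, and the omission of batch normalization and dropout are simplifications made purely for illustration.
\begin{verbatim}
import torch
import torch.nn as nn

se_dnn = nn.Sequential(nn.Linear(21*39, 1024), nn.ReLU(),
                       nn.Linear(1024, 11*39))        # speech enhancement
sr_dnn = nn.Sequential(nn.Linear(11*39, 1024), nn.ReLU(),
                       nn.Linear(1024, 2000))         # speech recognition
mse, nll = nn.MSELoss(), nn.CrossEntropyLoss()        # softmax folded in nll
lr, lam = 0.08, 0.1                                   # illustrative values

def joint_update(x_noise, x_clean, y_lab):
    x_enh  = se_dnn(x_noise)                          # forward pass, SE DNN
    y_pred = sr_dnn(x_enh)                            # forward pass, SR DNN
    loss_se, loss_sr = mse(x_enh, x_clean), nll(y_pred, y_lab)
    g_se    = torch.autograd.grad(loss_se, list(se_dnn.parameters()),
                                  retain_graph=True)
    g_sr_se = torch.autograd.grad(loss_sr, list(se_dnn.parameters()),
                                  retain_graph=True)
    g_sr    = torch.autograd.grad(loss_sr, list(sr_dnn.parameters()))
    with torch.no_grad():
        for p, a, b in zip(se_dnn.parameters(), g_se, g_sr_se):
            p -= lr * (a + lam * b)       # theta_SE update (weighted grads)
        for p, g in zip(sr_dnn.parameters(), g_sr):
            p -= lr * g                   # theta_SR update
\end{verbatim}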
\subsection{Batch normalization} \label{sec:batchnorm}
Training DNNs is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change.
This problem, known as \textit{internal covariate shift}, slows down the training of deep neural networks.
Batch normalization \cite{batchnorm}, which has been recently proposed in the machine learning community, addresses this issue by normalizing the mean and the variance of each layer for each training mini-batch, and back-propagating through the normalization step. It has long been known
that network training converges faster if the inputs are properly normalized \cite{yann}; in this sense, batch normalization extends the normalization to all the layers of the architecture. However, since a per-layer normalization may impair the model capacity, a trainable scaling parameter $\gamma$ and a trainable shifting parameter $\beta$ are introduced in each layer to restore the representational power of the network.
The idea of using batch normalization for the joint training setup is motivated by a better management of the internal covariate shift
problem, which might be crucial when training our (very) deep joint architecture.
As will be shown in Sec.\ \ref{sec:bn_exp}, batch normalization allows us to significantly improve the performance of the system, to speed-up the training, and to avoid any time-consuming pre-training steps.
Particular attention should anyway be devoted to the initialization of the $\gamma$ parameter. Contrary to \cite{batchnorm}, where it was initialized to unit variance ($\gamma=1$), in this work we have observed better performance and convergence properties with a smaller variance initialization ($\gamma=0.1$).
A similar outcome has been found in \cite{initbn}, where fewer vanishing gradient problems are empirically observed with small values of $\gamma$ in the case of recurrent neural networks.
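A possible realization of such a batch-normalization layer, with the scale parameter initialized to $\gamma=0.1$ as discussed above, is sketched below in PyTorch-style Python (again purely for illustration, since our system is implemented in Theano).
\begin{verbatim}
import torch.nn as nn

def bn_layer(num_units, gamma_init=0.1):
    bn = nn.BatchNorm1d(num_units)
    nn.init.constant_(bn.weight, gamma_init)   # gamma (scale), set to 0.1
    nn.init.zeros_(bn.bias)                    # beta (shift)
    return bn
\end{verbatim}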
\subsection{System details}
The features considered in this work are standard 39 Mel-Cepstral Coefficients (MFCCs) computed every 10 ms with a frame length of 25 ms. The speech enhancement DNN is fed with a context of 21 consecutive frames and predicts (every 10 ms) 11 consecutive frames of enhanced MFCC features. The idea of predicting multiple enhanced frames was also explored in \cite{joint3}.
All the layers used Rectified Linear Units (ReLU), except for the output of the speech enhancement (linear) and the output of speech recognition (softmax).
Batch normalization \cite{batchnorm} is employed for all the hidden layers, while dropout \cite{dropout} is adopted in all parts of the architecture, except for the output layers.
The datasets used for joint training are obtained through a contamination of clean corpora (i.e., TIMIT and WSJ) with noise and reverberation.
The labels for the speech enhancement DNN (denoted as $x_{clean}$ in Alg.1) are the MFCC features of the original clean datasets.
The labels for the speech recognition DNN (denoted as $y_{lab}$ in Alg.1) are derived by performing a forced alignment procedure on the original training datasets. See the standard s5 recipe of Kaldi for more details \cite{kaldi}.
The weights of the network are initialized according to the \textit{Glorot} initialization \cite{xavier}, while biases are initialized to zero.
Training is based on a standard Stochastic Gradient Descend (SGD) optimization with mini-batches of size 128. The performance on the development set is monitored after each epoch and the learning rate is halved when the performance improvement is below a certain threshold. The training ends when no significant improvements have been observed for more than four consecutive epochs.
The main hyperparameters of the system (i.e., learning rate, number of hidden layers, hidden neurons per layer, dropout factor and $\lambda$) have been optimized on the development set.
The proposed system, which has been implemented with Theano \cite{theano},
has been coupled with the Kaldi toolkit \cite{kaldi} to form a context-dependent DNN-HMM speech recognizer.
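As a final illustration of the feature pipeline described at the beginning of this subsection, the sketch below assembles the 21-frame noisy input contexts and the 11-frame clean target windows from the MFCC matrices; centring both windows on the current frame is an assumption made here for simplicity.
\begin{verbatim}
import numpy as np

def make_io_pairs(mfcc_noisy, mfcc_clean, in_ctx=21, out_ctx=11):
    # mfcc_* have shape (num_frames, 39); both windows are centred on frame t.
    hin, hout = in_ctx // 2, out_ctx // 2
    xs, ys = [], []
    for t in range(hin, mfcc_noisy.shape[0] - hin):
        xs.append(mfcc_noisy[t - hin:t + hin + 1].ravel())    # x_noise
        ys.append(mfcc_clean[t - hout:t + hout + 1].ravel())  # x_clean target
    return np.array(xs), np.array(ys)
\end{verbatim}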
\subsection{Relation to prior work}
Similarly to this paper, a joint training framework has been explored in \cite{joint2,joint1,joint3,joint6,joint7,joint4,joint5}. A key difference from previous works is that we propose to combine joint training with batch normalization.
In \cite{joint1,joint3}, for instance, the joint training was actually performed as a fine-tuning procedure, which was carried out only after training the two networks independently. A critical aspect of such an approach is that the learning rate adopted in the fine-tuning step has to be properly selected in order to really take advantage of pre-training. With batch normalization we are able not only to significantly improve the performance of the system, but also to perform joint training from scratch, skipping any pre-training phase.
Another interesting aspect of this work is a deeper study of the role played by the gradient weighting factor $\lambda$.
\section{Corpora and tasks}
\label{sec:corpora}
In order to provide an accurate evaluation of the proposed technique, the experimental validation has been conducted using different training datasets, different tasks and various environmental conditions\footnote{To allow reproducibility of the results reported in this paper, the code of our joint-training system will be available at \url{https://github.com/mravanelli}. In the same repository, all the scripts needed for the data contamination will be available. The public distribution of the DIRHA-English dataset is under discussion with the Linguistic Data Consortium (LDC).}.
The experiments with TIMIT are based on a phoneme recognition task (aligned with the Kaldi s5 recipe). The original training dataset has been contaminated with a set of realistic impulse responses measured in a real apartment. The reverberation time ($T_{60}$) of the considered room is about 0.7 seconds. Development and test data have been simulated with the same approach. More details about the data contamination approach can be found in \cite{IRs_paper,lrec,rav_is16}.
The WSJ experiments are based on the popular wsj5k task (aligned with the CHiME 3 \cite{chime3} task) and are conducted under two different acoustic conditions. For the \textit{WSJ-Rev} case, the training set is contaminated with the same set of impulse responses adopted for TIMIT. For the \textit{WSJ-Rev+Noise} case, we also added non-stationary noises recorded in a domestic context (the average SNR is about 10 dB). The test phase is carried out with the DIRHA English Dataset, consisting of 409 WSJ sentences uttered by six native American speakers in the above mentioned apartment. For more details see \cite{dirha_asru,rav_is16}.
\section{Experiments}
\subsection{Close-talking baselines}
\label{sec:ct_baseline}
The Phoneme Error Rate (PER\%) obtained by decoding the original test sentences of TIMIT is $19.5\%$ (using DNN models trained with the original dataset). The Word Error Rate (WER\%) obtained by decoding the close-talking WSJ sentences is $3.3\%$. It is worth noting that, under such favorable acoustic conditions, the DNN model leads to a very accurate sentence transcription, especially when coupled with a language model.
\subsection{Joint training performance}
\label{sec:jt_pers}
In Table \ref{tab:res1}, the proposed joint training approach is compared with other competitive strategies.
\begin{table}[t!]
\centering
\tabcolsep=0.28cm
\begin{tabular}{ | l | c | c | c | c | }
\cline{1-4}
\multirow{2}{*}{\backslashbox{\em{System}}{\em{Dataset}}} & \multicolumn{1}{ | c |}{TIMIT} & \multicolumn{1}{ | c |}{WSJ} & \multicolumn{1}{ | c |}{WSJ} \\ \cline{2-4}
& \textit{Rev} & \textit{Rev} & \textit{Rev+Noise} \\ \hline
Single big DNN & 31.5 & 8.1 & 14.3 \\ \hline
SE + clean SR & 31.1 & 8.5 & 15.7 \\ \hline
SE + matched SR & 30.1 & 8.0 & 13.7 \\ \hline
SE + SR joint training & \textbf{29.2} & \textbf{7.8} & \textbf{12.7} \\ \hline
\end{tabular}
\caption{Performance of the proposed joint training approach compared with other competitive DNN-based systems.}
\label{tab:res1}
\end{table}
\label{sec:bn_exp}
\begin{table}[t!]
\centering
\tabcolsep=0.108cm
\begin{tabular}{ | l | c | c | c | c | c |}
\cline{1-5}
\multirow{2}{*}{\backslashbox{\em{Dataset}}{\em{System}}} & \multicolumn{2}{ | c |}{Without Pre-Training} & \multicolumn{2}{ | c |}{With Pre-Training} \\ \cline{2-5}
& \textit{no-BN} & \textit{with-BN} & \textit{no-BN} & \textit{with-BN} \\ \hline
TIMIT-Rev & 34.2 & 29.2 & 32.6 & 29.5 \\ \hline
WSJ-Rev & 9.0 & 7.8 & 8.8 & 7.8 \\ \hline
WSJ-Rev+Noise & 15.7 & 12.7 & 15.0 & 12.9 \\ \hline
\end{tabular}
\caption{Analysis of the role played by batch normalization within the proposed joint training framework.}
\label{tab:test2}
\end{table}
In particular, the first line reports the results obtained with a single neural network. The size of the network has been optimized on the development set (4 hidden layers of 1024 neurons for TIMIT, 6 hidden layers of 2048 neurons for WSJ cases). The second line shows the performance obtained when the speech enhancement neural network (4 hidden layers of 2048 neurons for TIMIT, 6 hidden layers of 2048 neurons for WSJ) is trained independently and later coupled with the close-talking DNN of Sec.~\ref{sec:ct_baseline}. These results are particularly critical because, especially in adverse acoustic conditions, the speech enhancement model introduces significant distortions that a close-talking DNN trained in the usual ways is not able to cope with. To partially mitigate such a critical mismatch, one approach is to first train the speech enhancement, then pass all the training features through the speech enhancement DNN, and, lastly, train the speech recognition DNN with the dataset processed by the speech enhancement. The third line shows results obtained with such a matched training approach. The last line reports the performance achieved with the proposed joint training approach. Batch normalization is adopted for all the systems considered in Table \ref{tab:res1}.
Although joint training exhibits in all the cases the best performance, it is clear that such a technique is particularly helpful especially when challenging acoustic conditions are met. For instance, a relative improvement of about $8\%$ over the most competitive matched training system is obtained for the WSJ task in noisy and reverberant conditions.
\subsection{Role of batch normalization}
In Table \ref{tab:test2}, the impact of batch normalization on the joint training framework is shown.
The first two columns report, respectively, the results obtained with and without batch normalization when no pre-training techniques are employed. The impact of pre-training is studied in the last two columns. The pre-training strategy considered here consists of initializing the two DNNs with the matched training system discussed in Sec.~\ref{sec:jt_pers}, and performing a fine-tuning phase with a reduced learning rate. The column corresponding to the pre-training without batch normalization represents a system that most closely matches the approaches followed in \cite{joint1,joint3}.
Table~\ref{tab:test2} clearly shows that batch normalization is particularly helpful. For instance, a relative improvement of about 23\% is achieved when batch normalization is adopted for the WSJ task in a noisy and reverberant scenario. The key importance of batch normalization is also highlighted in Fig.~\ref{fig:bn_frame}, where the evolution during training of the frame-level phone error rate (for the TIMIT-Rev dataset) is reported with and without batch normalization. From the figure it is clear that batch normalization, when applied to the considered deep joint architecture, ensures a faster convergence and a significantly better performance. Moreover, as shown in Table~\ref{tab:test2}, batch normalization eliminates the need for DNN pre-training, since similar (or even slightly worse) results are obtained when pre-training and batch normalization are used simultaneously.
\begin{figure}
\centering
\includegraphics[width=0.52\textwidth]{batch_norm_fig.png}
\caption{Evolution of the test frame error rate across various training epochs with and without batch normalization.}
\label{fig:bn_frame}
\end{figure}
\subsection{Role of the gradient weighting}
\label{sec:gw}
In Fig. \ref{fig:grad_w}, the role of the gradient weighting factor $\lambda $ is highlighted.
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{lambda}
\caption{Training and development frame error rates obtained on the TIMIT-Rev dataset for different values of $\lambda$.}
\label{fig:grad_w}
\end{figure}
From the figure one can observe that small values of $\lambda$ lead to a situation close to underfitting, while higher values of $\lambda$ cause overfitting. The latter result is somewhat expected since, intuitively, with very large values of $\lambda$ the speech enhancement information tends to be neglected and training relies on the speech recognition gradient only.
In the present work, we have seen that values of $\lambda$ ranging from 0.03 to 0.1 provide the best performance. Note that these values are smaller than that considered in \cite{joint1,joint2}, where a pure gradient summation ($\lambda=1$) was adopted. We argue that this result is due to the fact that, as observed in \cite{initbn}, the norm of the gradient decays very slowly when adopting batch normalization with a proper initialization of $\gamma$, even after the gradient has passed through many hidden layers. This causes the gradient backpropagated through the speech recognition network and into the speech enhancement network to be very large.
\section{Conclusion}
In this paper, a novel approach for joint training coupled with batch normalization is proposed. The experimental validation, conducted considering different tasks, datasets and acoustic conditions, showed that batch-normalized joint training is particularly effective in challenging acoustic environments, characterized by both noise and reverberation. In particular, batch normalization was of crucial importance for improving the system performance. A remarkable result is the relative improvement of about 23\% obtained for the WSJ task in a noisy and reverberant scenario when batch normalization is used within the joint training framework.
This system can be seen as a first step towards a better and more fruitful integration of the various technologies involved in current distant speech recognition systems. Future efforts for improving the current solution will be devoted to progressively involving different NN architectures or to embedding other technologies such as speech separation, speaker identification and acoustic scene analysis.
\bibliographystyle{IEEEbib}
|
1,108,101,564,295 | arxiv |
\section{For every submission}
\subsection{Did you discuss the \textit{limitations} of your work?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{mainClaims}{Yes,No,N/A}\\[0.2cm]
\tf[0.85]{mainClaimsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss any potential \textit{risks} of your work?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{risks}{Yes,No,N/A}\\[0.2cm]
\tf[0.85]{risksJustification}
\end{tabular}
\end{Form}
\subsection{Do the abstract and introduction summarize the paper’s main claims?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{abstractIntro}{Yes,No,N/A}\\[0.2cm]
\tf[0.85]{abstractIntroJustification}
\end{tabular}
\end{Form}
\section{Did you use or create \textit{scientific artifacts}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{createArtifacts}{Yes,No}\\[0.2cm]
\end{tabular}
\end{Form}
If yes:
\subsection{Did you cite the creators of artifacts you used?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{citeCreators}{Yes,No,N/A}\\[0.2cm]
\tf{citeCreatorsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the \textit{license or terms} for use and/or distribution of any artifacts?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{legalGrounds}{Yes,No,N/A}\\[0.2cm]
\tf{legalGroundsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss if your use of existing artifact(s) was consistent with their \textit{intended use}, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{intendedUse}{Yes,No,N/A}\\[0.2cm]
\tf{intendedUseJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the steps taken to check whether the data that was collected/used contains any \textit{information that names or uniquely identifies individual people} or \textit{offensive content}, and the steps taken to protect / anonymize it?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{personallyIdentifiableInformationOrOffensiveContent}{Yes,No,N/A}\\[0.2cm]
\tf{personallyIdentifiableInformationOrOffensiveContentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{documentation}{Yes,No,N/A}\\[0.2cm]
\tf{documentationJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report relevant statistics like the number of examples, details of train/test/dev splits, etc. for the data that you used/created?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{relevantStatistics}{Yes,No,N/A}\\[0.2cm]
\tf{relevantStatisticsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\section{Did you run \textit{computational experiments}?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{computationalExperiments}{Yes,No}
\end{tabular}
\end{Form}
If yes:
\subsection{Did you report the \textit{number of parameters} in the models used, the \textit{total computational budget} (e.g., GPU hours), and \textit{computing infrastructure} used?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{reportReproducibility}{Yes,No,N/A}\\[0.2cm]
\tf{reportReproducibilityJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss the experimental setup, including \textit{hyperparameter search} and \textit{best-found hyperparameter} values?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{bestFoundHyperparameter}{Yes,No,N/A}\\[0.2cm]
\tf{bestFoundHyperparameterJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report \textit{descriptive statistics} about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{descriptiveStatistics}{Yes,No,N/A}\\[0.2cm]
\tf{descriptiveStatisticsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{existingPackages}{Yes,No,N/A}\\[0.2cm]
\tf{existingPackagesJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\section{Did you use \textit{human annotators} (e.g., crowdworkers) or \textit{research with human subjects}?} If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, you can skip the rest of this section. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{hummanAnnotators}{Yes,No}\\
\end{tabular}
\end{Form}
If yes:
\subsection{Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{fullTextInstructions}{Yes,No,N/A}\\[0.2cm]
\tf{fullTextInstructionsJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such \textit{payment is adequate} given the participants’ demographic (e.g., country of residence)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{payment}{Yes,No,N/A}\\[0.2cm]
\tf{paymentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you discuss whether and how \textit{consent} was obtained from people whose data you're using/curating (e.g., did your instructions explain how the data would be used)?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{consent}{Yes,No,N/A}\\[0.2cm]
\tf{consentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Was the data collection protocol \textit{approved (or determined exempt)} by an ethics review board?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{ethicsAmountSpent}{Yes,No,N/A}\\[0.2cm]
\tf{ethicsAmountSpentJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\subsection{Did you report the basic demographic and geographic characteristics of the \textit{annotator} population that is the source of the data?}
If you answer {\bf Yes}, provide the section number; if you answer {\bf No}, provide a justification. \\[0.3cm]
\begin{Form}
\begin{tabular}{l}
\cm{annotator}{Yes,No,N/A}\\[0.2cm]
\tf{annotatorJustification}
\end{tabular}
\end{Form} \\[0.3cm]
\end{document}
\section{Introduction}
Peer review is increasingly coming under criticism for its arbitrariness. Two NeurIPS experiments \cite{Price_2014_NIPS_experiment,CortesLawrence_2021_Inconsistency_in_Conference_Peer_Review_Revisiting_2014_NeurIPS_Experiment,BeygelzimerDauphinEtAl_2021_NeurIPS_2021_Consistency_Experiment} have shown that the reviewers are good at identifying papers that are clearly bad, but the agreement on the ``good'' papers appears to be close to random. Among the likely reasons for that are cognitive and social biases of NLP reviewers \cite[see overview by][]{RogersAugenstein_2020_What_Can_We_Do_to_Improve_Peer_Review_in_NLP}, fundamental disagreements in such an interdisciplinary field as NLP,
and acceptance rates that are kept low\footnote{\url{https://twitter.com/tomgoldsteincs/status/1388156022112624644}} irrespective of the ratio of high-quality submissions.
Such arbitrariness leads to understandable frustration on the part of the authors whose jobs and graduation depend on publications, and it also means lost time and opportunities \cite{AczelSzasziEtAl_2021_billion-dollar_donation_estimating_cost_of_researchers_time_spent_on_peer_review,GordonPoulin_2009_Cost_of_NSERC_Science_Grant_Peer_Review_System_Exceeds_Cost_of_Giving_Every_Qualified_Researcher_Baseline_Grant} for science overall. Reviews written by someone who does not have the requisite expertise, or does not even consider the given type of research a contribution, are a loss for all parties: the authors do not get the intellectual exchange that could improve their projects and ideas, and the reviewers simply lose valuable time without learning anything they could use. It is also a loss for the field overall: less popular topics could be systematically disadvantaged, leading to ossification of the field \cite{ChuEvans_2021_Slowed_canonical_progress_in_large_fields_of_science}.
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{figures/venndiagram2.pdf}
\caption{Overview of all respondents and overlap of their roles for their last experience at NLP venues.}
\label{fig:venn}
\end{figure}
This paper contributes a snapshot of this problem in NLP %
venues, based on a survey of authors, reviewers and area chairs (ACs). We collected 180 responses, which is comparable to the volume of feedback collected for implementing the ACL Rolling Review (ARR). The overall distribution of respondents' roles is shown in \cref{fig:venn}. We present the commonly reported issues and community preferences for different paper assignment workflows (\cref{sec:results}). We derive actionable recommendations for how peer review in NLP could be improved (\cref{sec:recommendations}),
discuss the limitations of survey methodology (\cref{sec:limitation}), and conclude with desiderata for interpretable peer review assignments (\cref{sec:similarity}).
\section{Background: Peer Review in NLP}
\label{sec:background}
Paper-reviewer assignments are matches between submissions to conferences or journals and their available pool of reviewers, taking into account the potential conflicts of interest (COI) and reviewer assignment quotas.
Among the systems used in recent NLP conferences, the Softconf matching algorithm takes into account bidding, quotas, and manual assignments, and randomly assigns the remaining papers as evenly as possible%
\footnote{\url{https://www.softconf.com/about/index.php/start/administration-view}}. NAACL and ACL 2021 used SoftConf, but also provided their ACs with affinity scores produced by a ``paraphrastic similarity'' system based on an LSTM encoder, which is trained on Semantic Scholar abstracts
\cite{wieting-etal-2019-simple,2021_ACL_Reviewer_Matching_Code}. Affinity scores are scores indicating how well a given submission matches a given reviewer. They are typically computed as the similarity (e.g. cosine similarity) between the embeddings of certain information about the submission and the reviewer's publication history (e.g. abstracts and titles).
ARR switched to OpenReview and currently uses\footnote{Source: personal communication with the ARR team.} their SPECTER-MFR system \cite{2021_Paper-reviewer_affinity_modeling_for_OpenReview} which is based on SPECTER \cite{specter2020cohan} and MFR embeddings \cite{mfr}
for computing affinity scores. The assignments are then made with the MinMax matching algorithm\footnote{\url{https://github.com/openreview/openreview-matcher}}.
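
As a rough illustration of how such affinity scores can be computed, the sketch below derives a paper-reviewer affinity matrix from document embeddings via cosine similarity. It is only a simplified stand-in: the embedding model and the aggregation over a reviewer's publication history (here, the maximum) are placeholder assumptions, not the actual SPECTER-MFR or MinMax implementation.
\begin{verbatim}
import numpy as np

def cosine_affinity(paper_embs, reviewer_pub_embs):
    """Return a (papers x reviewers) affinity matrix.

    paper_embs        : (n_papers, d) array, one embedding per submission
    reviewer_pub_embs : list of (n_pubs_i, d) arrays, one per reviewer,
                        embeddings of that reviewer's publications
    """
    papers = paper_embs / np.linalg.norm(paper_embs, axis=1, keepdims=True)
    affinity = np.zeros((papers.shape[0], len(reviewer_pub_embs)))
    for j, pubs in enumerate(reviewer_pub_embs):
        pubs = pubs / np.linalg.norm(pubs, axis=1, keepdims=True)
        sims = papers @ pubs.T             # cosine similarities
        affinity[:, j] = sims.max(axis=1)  # aggregate over publication history
    return affinity
\end{verbatim}
An assignment can then be obtained by maximising the total affinity subject to reviewer quotas and COI constraints, which is the role the matching algorithm plays on top of these scores.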
The problem of paper-reviewer assignment is by itself an active area of research (see overview of key issues for CS conferences by \citet{shah2022overview}). There are many proposals for paper-reviewer assignment systems \cite[][inter alia]{HartvigsenWeiEtAl_1999_Conference_Paper-Reviewer_Assignment_Problem,WangShiEtAl_2010_comprehensive_survey_of_reviewer_assignment_problem,LiWatanabe_2013_Automatic_Paper-to-reviewer_Assignment_based_on_Matching_Degree_of_Reviewers}, some of which also consider the problem of ``fair'' assignments %
\cite{LongWongEtAl_2013_On_Good_and_Fair_Paper-Reviewer_Assignment,StelmakhShahEtAl_2019_PeerReview4All_Fair_and_Accurate_Reviewer_Assignment_in_Peer_Review}.
Such studies tend to be hypothesis-driven: they make an assumption about what criteria should be taken into account, design a system and evaluate it. To the best of our knowledge, ours is the first study in the field to address the opposite question: what criteria should be taken into account, given the diversity of perspectives in an interdisciplinary field? We take that question to the community.
\section{Methodology: survey structure and distribution}
We developed three separate surveys for the main groups of stakeholders in the peer review process: authors, reviewers and ACs.
They follow the same basic structure: consent to participation (see Impact Statement), background information, questions on the most recent experience in the role to which the survey pertains, and on how the respondents believe paper-reviewer matching should be performed.
Most questions are asked to respondents in all three roles, reformulated to match their different perspectives.
The responses were collected in late 2021, and all respondents were required to confirm that their most recent experience as an AC/reviewer/author was in 2019-2021.
The full surveys and response data are publicly available\footnote{\url{https://github.com/terne/Paper-Reviewer-Matching-Surveys}}.
\paragraph{Participant background.}
All surveys include questions on career status and the number of times the respondents have been ACs/reviewers/authors at NLP venues. We ask what venues they have experience with (as broad categories) and what types of contributions they make in their work.
\paragraph{Participant experience with peer review.} We further ask the respondents a range of questions about their experience as AC/reviewer/author:
how satisfied they are with the process, what issues they have experienced, what was the assignment load (ACs and reviewers), how paper-reviewer matching was done, %
how they would prefer it to be done, and which factors they believe to be important for paper-review matching. Most of the questions are multiple-choice, with addition of some open-ended
questions where appropriate, so that respondents can elaborate their answers or add to the available options.
Whenever possible, the question formulations were taken from the question bank of UK Data Service \cite{hyman2006use}. Attitude questions use a 5-point Likert scale.%
Limited memory is an important concern in surveys \cite{10.2307/2284504,Ayhan2005}, and we cannot expect the respondents to accurately recall all their experience
with peer review. To reduce memory recall errors, the survey focuses on
the respondent's most recent experience, but they also have a chance to reflect on prior experience in open-ended questions,
and to report whether they experienced certain issues at any time in their career.
\paragraph{Survey distribution.}
We distributed the surveys via three channels: by handing out flyers at EMNLP 2021, through mailing lists (ML-news, corpora list, Linguist list), and through Twitter with the hashtag \#NLProc. Participation was voluntary, with no incentives beyond potential utility of this study for improving NLP peer review.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/careerstripplot3.pdf}
\caption{Career status of the respondents vs their experience receiving peer review. Numerical data is available in \cref{tab:background} in the appendix.}
\label{fig:career}
\end{figure}
\paragraph{Data validation.}
Given that links to surveys were distributed openly and that we did not ask for any identifiable information, the surveys needed to include other means of validation to ensure that the responses included in the analysis were from attentive, relevant individuals. Our approach for validating the data quality follows \textit{satisficing theory} \cite{doi:10.1177/1470785317744856}, with the main safeguards being 1) the checking of response consistency, including a few ``traps'' where inconsistent or illogical responses can be exposed, and 2) the inclusion of open-ended questions.
73\% of ACs, 40\% of reviewers, and 33\% of authors provided at least one response to our open-ended questions, and we did not find any meaningless or incoherent comments, nor comments that did not address the question. For consistency checks, all respondents were asked:
\begin{itemize}[leftmargin=*,noitemsep,topsep=0pt]
\item How many times they have been an AC/reviewer/author. One of the options was ``0'', contradicting the earlier confirmation of experience in a given role.
\item When was the last time they were an AC/reviewer/author. One of the options was ``earlier than 2019'', contradicting the earlier confirmation of peer review experience in 2019-2021.
\item Whether they have performed the other roles. New authors may not have reviewed or AC-ed, but reviewers should also have been authors, and ACs should have experience with all roles.
\end{itemize}
\section{Results}
\label{sec:results}
Overall we received 38 responses from ACs, 87 from reviewers and 81 from authors (206 in total). After removing 20 incomplete responses and 8 responses inconsistent with the ``trap'' questions, we report the results for 30
responses from ACs, 77 from reviewers and 73 from authors (180 in total).
\subsection{Who are the respondents?}\label{sec:who}
According to past conference statistics, we could expect that many submissions would be primarily authored by students, while reviewers are generally expected to be relatively senior, which should correspond to their having gone through peer review more often. We can use this expected pattern as an extra validation step for the survey responses.
\Cref{fig:career} shows that the responses are in line with this expected pattern. We received the most responses from academic researchers (62), PhD students (54), and postdocs (32). Most academic researchers and postdocs, but not PhD students, have had their work reviewed more than 10 times. At the same time 65\% of the PhD students who served as reviewers went through peer review more than 5 times, as opposed to 24.2\% of PhD students in the author role. Fewer industry than academic researchers responded to the survey. This could be related to the fact that a large part of the ``academic'' demographic are students -- and in 2020-2021 the ACL membership among students was equal to or exceeding other demographics \cite{acl2021-members}.
\begin{figure}[!b]
\centering
\includegraphics[width=.9\linewidth]{figures/papertypespercent.pdf}
\caption{Types of research performed by respondents (multiple options could be selected).}
\label{fig:papertypes}
\end{figure}
\subsection{Paper types}
The next question is to see what kinds of research papers the respondents to our surveys have authored: engineering experiment, survey, position paper etc., according to the COLING taxonomy by \citet{BenderDerczynski_2018_Paper_Types}%
. We expect that more senior researchers will have more experience with different types of work.
Indeed, on average the authors have worked with 2.5 types of papers, vs. 3.0 for reviewers and 3.6 for ACs. The distribution is shown in \cref{fig:papertypes}. Most respondents have authored engineering experiment papers (with the authors reporting the most such work).
Note that this only indicates whether the respondents to our surveys have or have not authored certain types of papers, rather than how many. In terms of volume, the engineering papers are a lot more prevalent: e.g. at ACL 2021 the ``Machine learning'' track had 332 submissions, vs 168 in the ``Resources and evaluation'' track \cite{XiaLiEtAl_2021_ACL-IJCNLP_2021_Program_Chair_Report}.
\subsection{What kinds of problems do people report?}
As with any voluntary feedback, our surveys were likely to receive more responses from people who had a grievance with the current process. Indeed, we find that only 6.7\% of ACs, 20.5\% of authors, and 22.1\% of reviewers say that they have not had any issues in their last encounter with NLP venues.
The overall distribution for the types of problems reported by the authors, reviewers and ACs in their last and overall experience
is shown in \cref{fig:lastissues}.
Given that at the time of this survey the ARR was recently deployed as the only ACL submission channel, we highlight the responses from the people for whom the most recent venue was ARR: 28\% %
reviewers, 18\% %
authors, 50\% ACs. %
The key takeaways are as follows:
\begin{itemize}[leftmargin=*,noitemsep,topsep=0pt]
\item Two of the most frequent complaints of ACs (about 50\% of the respondents) are insufficient information about reviewers and clunky interfaces;
\item Many paper-reviewer mismatches (about 30\%, if the report of the last experience is representative) are \textit{avoidable}: they should have been clear from the reviewers' publication history;
\item Over a third of the author respondents in their last submission (about 50\% over all history) received reviews from reviewers lacking either expertise or interest, and that is supported by the reviewers' reports of being assigned papers that were mismatched on one of these dimensions;
\item The authors report that many reviews (over a third in the last submission, close to 50\% over time) are biased or shallow, which might be related to the above mismatches in expertise or interest;
\item Two patterns are exclusive to ARR: insufficient time for ACs\footnote{ARR has since switched to 6-week cycles, which might help to address this issue (\url{https://aclrollingreview.org/six-week-cycles/}).}, and zero authors with no issues.
\end{itemize}
\subsection{Knowledge of the workflow}
Our next question is what methods NLP venues use to match submissions to reviewers, and to what extent the stakeholders (authors and reviewers) are aware of how it is done.
We find that relatively few authors (23.3\%) and reviewers (23.4\%) know for sure what process was used, which calls for \textit{more transparency in the conference process}. The ACs report that the most frequent case (37\%) is a combination of automated and manual assignments. Interestingly, most reviewers believe that their assignments were automated (36\%), and only 28\% believe they were automated+manual. See App. \Cref{fig:whatis} for the full distribution.
\begin{figure}[t!]
\includegraphics[width=0.95\linewidth]{figures/WhatshouldbethemethodSMALL.pdf}
\begin{subfigure}[t]{\linewidth}
\begin{mdframed}
\scriptsize
\begin{center}\textbf{Topics mentioned in the open-ended comments}%
(See supplementary materials for full categorized comments)
\end{center}
\vspace{.5em}
\textbf{ACs:} bidding (2), similarity+manual (1), similarity+bidding+manual (5), keyword-based filtering + bidding (2), similarity (1), tracks (1), other info (2), ARR (1), interface (1)
\vspace{.5em}
\textbf{Reviewers:} manual (2), similarity + bidding (3), similarity+bidding+manual (3), keywords (1), keywords+similarity (1), tracks (2), tracks+bidding (1), other (4)
\vspace{.5em}
\textbf{Authors:} against similarity (2), similarity + bidding (2), similarity+bidding+manual (2), ARR (2), random (2)
\end{mdframed}
\end{subfigure}
\begin{minipage}{3.1cm}
\vfill
\end{minipage}
\caption{%
\textit{Which of the following options would you consider best for assigning reviewers to submissions?}%
}
\label{fig:whatshould}
\end{figure}
\section{The Ideal Process}
\label{sec:recommendations}
\subsection{Ideal workflow}
When asked about what paper-assignment process they would prefer (given that fully manual
\onecolumn
\begin{figure}
\centering
\includegraphics[width=\textwidth]{figures/issues.pdf}
\begin{mdframed}[userdefinedwidth=\textwidth]
\scriptsize
\begin{center}\textbf{Topics mentioned in the open-ended comments}
(The full comments categorized by these topics can be found in the survey data \href{https://github.com/terne/Paper-Reviewer-Matching-Surveys}{repository})
\end{center}
\vspace{1em}
\textbf{Area chairs:} interface issues (7), bad reviewers/reviews (5), workload issues (6), issues with ARR (4), lacking information on reviewers (4), communication issues between both systems and other human agents (4), lack of qualified reviewers in the pool (3), issues with meta-reviews (2), affinity score complaints (2), affinity score for finding reviewers the AC does not know personally (1), preference for manually recruited reviewers (1), papers assigned to ACs outside their area of expertise (1), too many declines (1), mismatch in goals of reviewers and authors (1), emergency reviews (1), bidding enabling bias (1).
\vspace{1em}
\textbf{Reviewers:} choices forced by ACs (5), preference for bidding (4), areas of past expertise not currently of interest (4), lack of interest in the paper (3), methodological mismatch between generations of NLP researchers (3), mismatch in research methods (2), publication records as an unreliable indicator for assignments (1), mismatch in languages (1), time issues (1), reviewer bias (1)
\vspace{1em}
\textbf{Authors:} reviewer expectation for a certain kind of research (6), inattentive reviews (5), short reviews (3), mismatch between the score and the text of the review (3), requests for irrelevant citations (2), confirmation bias (1), non-constructive criticism (1), shallow reviews (1), lack of reviewer competence (2), missing reviews (2), requests for irrelevant comparisons (1), ``wild'' estimates of impact (1), unannounced policy changes (1)
\end{mdframed}
\caption{The issues with peer review process, reported by ACs, reviewers and authors, in their last (on the left) versus historical (on the right) experience with CL/NLP venues.}
\label{fig:lastissues}
\begin{multicols}{2}
\justifying
\noindent matching is impractical for large conferences), most ACs and authors opted for the automated+manual process, but for the reviewers this is the second preferred process (26\%), with 30\% opting for bidding + manual checks (see \cref{fig:whatshould}). There was also relatively large support for pure bidding (13-18\% of respondents in all roles), and cumulatively pure bidding and bidding with manual adjustments have as much or more support from all respondent categories than automated matching + manual assignments.
The analysis of open-ended comments suggested that the respondents were aware that bidding is quite labor-intensive on the part of the reviewers. 5 ACs, 3 reviewers and 2 authors suggested using affinity scores to filter the papers on which bids would be requested, followed by manual checking. Another suggestion was keywords or more fine-grained areas/tracks, potentially as an alternative to affinity scores for filtering down the list of papers to bid on. One AC suggested \textit{``an extensive, but still finite, set of tags (e.g. an ACL-version of}
\end{multicols}
\end{figure}
\twocolumn
\noindent\textit{ACM CCS concepts, or FAccT's submission tags''}. One reviewer stressed that the keywords should be provided by the authors, to match what \textit{they} perceive to be the salient aspects of the paper.
1 reviewer and 1 author suggested looking at whether the paper \textit{cites} the potential reviewer\footnote{We believe this is an interesting idea, but it could lead to authors strategically placing citations to maximize the chances of acceptance, or being punished for citing work
that they may criticize or claim to improve upon.}, as this could be a good indicator for the reviewer's interest. 1 reviewer and 2 authors voiced support for some randomness in the assignments (given a track-level match): \textit{``Bidding + some random assignment to ensure diversity in the matching. We don't want reviewers to review only papers they *want* to review. However these random assignments should be clearly indicated to all, and treated accordingly.''}
\subsection{Ideal assignment criteria} \label{sec:ideal}
\textbf{AC past experience.} \Cref{fig:lastissues} shows that one of the most common problems for the ACs is that they were not provided with enough information to facilitate the paper-reviewer matching. The follow-up question is what information they \textit{are} provided with, and how useful they find it.
\Cref{fig:usefulness} shows that the types of information with the highest utility are links to reviewer profiles, bidding information, and affinity scores.
\noindent But affinity scores are also the most controversial: they are the type of information that the largest share of ACs (20\%) find ``not very useful'' or ``not useful at all''.
Overall the results suggest that ACs are presented with little structured information about reviewers, and have to identify the information they need from a glance at the reviewers' publication record. Seniority, expertise, and reviewer history notes from other ACs are all reported to be useful, but many ACs were never directly provided with them.
An avenue for future research is offered by three types of information that most ACs are not sure about, presumably because they are rarely provided: structured information about the methods that the reviewers were familiar with, the languages they spoke, and affinity score explanations. We will show below that there is much support for taking such methods into account. For the languages, this might be due to the ``default'' status of English \cite{Bender_2019_BenderRule_On_Naming_Languages_We_Study_and_Why_It_Matters}. We hypothesize that providing this information would make it easier to provide better matches for papers on other languages, which would in turn encourage the authors to submit more such work. Affinity scores will be discussed in \cref{sec:similarity}.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/usefulnessforACPercent.pdf}
\begin{mdframed}
\scriptsize
\textbf{Topics mentioned in the open-ended comments:} reviewer history (2), number of assigned papers (1), being able to ask SACs for advice (1), reviewer affiliation (e.g. academic or industry) (1), correct area match for both ACs and reviewers (1).
\end{mdframed}
\caption{
The diverging bars show the experienced utility of different kinds of information about reviewers that ACs may have been presented with to assist in manual checks of paper-reviewer matches. If the respondent had never been presented with the specific kind of information, they chose ``Never provided''.
\label{fig:usefulness}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{figures/importances4.pdf}
\caption{Question: \textit{How important do you think the following factors are for a good paper-reviewer match?}}
\label{fig:importances}
\end{figure*}
\paragraph{Stakeholder preferences.} We then asked the respondents
what factors they believe are important for paper-reviewer assignments. Their answers are shown in \cref{fig:importances}. The overall mean importance rankings (on a scale of 0-5) are as follows:
\begin{mdframed}[topline=false,rightline=false,leftline=false,bottomline=false,skipbelow=-0.5em,skipabove=-2.5em]
\footnotesize
\begin{tabular}{p{1pt}p{.3cm}p{6cm}}%
\tikzmark{a}{} & 3.95 & Reviewer has worked on the same task\\
& 3.85 & Reviewer bid on the paper\\
& 3.72 & Reviewer has worked with the same method \\
& 3.32 & Reviewer has authored the same type of paper \\
& 3.11 & AC knows \& trusts the reviewer \\
& 2.81 & Reviewer has worked with the same kind of data \\
\tikzmark{c}{} & 1.99 & The affinity score is high \\
\end{tabular}
\link{a}{c}
\end{mdframed}
The fact that affinity scores rank the least important for NLP researchers (who would know the most about them) is interesting, and perhaps related to the fact that evaluation of paper-reviewer matching systems remains an open problem, with little empirical evidence for how well our current systems really work. In the absence of such evidence, our results suggest that the respondents across all groups are not very positive about their experience with such systems. In the authors' personal experience, when the conference chairs provide automated affinity scores, they caution the area chairs against fully relying on them and urge them to adjust the assignments manually.
Our data suggests that within groups of stakeholders the individual variation in importance of different factors is higher for some factors and stakeholders than others: e.g. ACs vary within 1 point on the importance of knowing the data, but only within 0.74 points on the importance of knowing the tasks. This has implications for approaches that would rely on AC assignments as ground truth for automated assignment systems: they could end up modeling the annotator instead of the task \cite{GevaGoldbergEtAl_2019_Are_We_Modeling_Task_or_Annotator_Investigation_of_Annotator_Bias_in_Natural_Language_Understanding_Datasetsa}. See App. \cref{tab:meanimportancescores} for the full data.
We then explored the question of whether the experience of having authored research of a certain type correlates with any changes in the attitude towards some of these paper-reviewer matching factors. For each pair of type of research and matching factor, we ran two-sided Fisher's Exact tests for all respondents who have authored (or not) the types of research and the importance they attached to different factors in paper-reviewer assignment (binning on less than moderately important and more than moderately important).
For some pairs there were statistically significant differences: e.g. the respondents who have authored reproduction papers were significantly more likely to believe it important that the reviewer has worked with the same kind of data ($p=0.004$), and respondents who authored position papers were significantly \textit{less} likely to believe a high automated affinity score is important ($p=0.003$).
See \cref{tab:fisher1} in the appendix for all $p$-values and more details on the tests. We note that the relationships are not necessarily causal.
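
For readers who wish to reproduce this kind of test, the sketch below shows the corresponding computation with SciPy on a single 2$\times$2 contingency table; the counts are illustrative placeholders, not our survey data, and the binning is the one described above.
\begin{verbatim}
from scipy.stats import fisher_exact

# Rows: has / has not authored reproduction papers.
# Columns: rates "worked with the same kind of data" above /
#          at-or-below "moderately important".
# The counts below are made up for illustration only.
table = [[30, 10],
         [60, 80]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
\end{verbatim}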
We conclude that our sample does provide evidence (the first, to our knowledge) that researchers in interdisciplinary fields who perform different kinds of research may have differing preferences for what information should be taken into account for paper-reviewer assignments. If that effect is robust, it should be considered in assignment systems for interdisciplinary fields. We hope that this finding would be explored in a larger study, taking into account both the experience of authoring a given type of paper and how central that type of research is for a given researcher (a factor that we did not consider). Another direction for future work is exploring this question from the perspective of demographic characteristics and the type of institution the respondents work in. Should there be significant differences, more targeted assignments could be a powerful tool for diversifying the field.
\subsection{Ideal workload}
\label{sec:workload}
We asked our reviewer and AC respondents how many assignments they received at their most recent NLP venue, and what would be the optimal number (given a month to review, and a week for AC assignments). For ACs, the mean optimal number of assignments is 8.5$\pm$4.2 vs. 9.1$\pm$5.1 they received at the most recent venue, and for reviewers it is 2.8$\pm$1.0 vs. 3.3$\pm$1.8. Whether this is an issue depends on how much time a given venue allows. The ARR reviewers have even less than a month, and they indicated preference for fewer assignments than they received (2.4$\pm$1.0 vs 3.3$\pm$1.9).
See App. \cref{fig:revload} for data on other venues.
The lack of reviewers is a well-known problem. One of the possible causes is that many authors are students not yet ready to be reviewers. To investigate that, we asked the authors if they also reviewed for the venues where they last submitted a paper, and asked the reviewers and ACs whether they also submitted.
If the core problem is that many authors are not qualified, we would expect more non-student authors to also be reviewers. Among all respondents, 24\% are authors who submit to a venue but do not review there or help in some other role (\cref{fig:venn}), but if we consider only non-student respondents, that ratio is still 18\% (see non-student role distribution in App. \cref{fig:vennSenior}). This suggests that \textit{many qualified people do not review}.
\section{Discussion}
\subsection{Reviewer interests}
Our results suggest that a lack of interest is one of the most common problems in paper-reviewer matching, for both authors and reviewers. The authors are aware of this problem and sometimes try to optimize for it by pursuing the ``safe'', popular topics. Unenthusiastic reviewers will likely produce shallow, heuristic-based reviews, essentially penalizing non-mainstream research. Both tendencies contribute to ossification of the field \cite{ChuEvans_2021_Slowed_canonical_progress_in_large_fields_of_science}, and generally need to be minimized.
It is in the AC's interest to find interested reviewers, since that minimizes late reviews, but they need to know who finds what interesting. That is not as simple as a match by topic/methodology, clear from the publication record. Interests change not only gradually over time but also according to what is popular or \textit{salient} at the given moment \cite{10.2307/1738360,DaiJane2020Psif}, or even in seemingly irrational ways (e.g. by being sensitive to the framing of the problem) \cite{TverskyA1981TFoD}. But although experience and knowledge may provide more stable descriptions of a reviewer, looking into dated publication records may be fundamentally counter-productive. According to one of our respondents: \textit{``I prefer the conferences who offer bidding processes to select the papers to review... I am more enthusiastic to review the papers compared to conferences that assign papers based on what my interests were x years ago.''}
Bidding, however, has its own set of problems, including the practical impossibility of eliciting all preferences over a big set of papers, the possibility of collusion rings \cite{Littman_2021_Collusion_rings_threaten_integrity_of_computer_science_research}, and, as one of our respondents put it, ``\textit{biases towards/against certain paper types when bidding is enabled}''. But these problems potentially have solutions: there is work on detecting collusion rings \cite{BoehmerBredereckEtAl_2022_Combating_Collusion_Rings_is_Hard_but_Possible}, and several respondents suggested that bidding could be facilitated by subsampling with either keyword- or affinity-score-based approaches.
We support some of our respondents' recommendation for a combination of interest-based and non-interest-based (within a matching area) assignments, with the latter clearly marked as such for ACs and reviewers, and separate playbooks for the two cases. The reviewer training programs should aim to develop the expectation that peer review is something that combines utility and exploration.
\subsection{Limitations}
\label{sec:limitation}
We readily acknowledge that, as with any survey with voluntary participation, our sample of respondents may not be representative of the field overall, since people who have had issues with the peer review system are more incentivized to respond. However, precisely for that reason this methodology can be used to learn about the commonly reported types of problems, which was our goal. Our response rate turned out to be comparable to the response rate of the official ACL survey soliciting feedback on its peer review reform proposal \cite{Neubig_2020_ACL_Rolling_Review_Proposal}, %
which received 199 responses.
It is an open problem how future conferences could systematically improve, if they cannot rely on surveys to at least reliably estimate at what scale an issue occurs. Asking about satisfaction with reviews does not seem to produce reliable results \cite{Some_NAACL_2013_statistics_on_author_response_review_quality_etc,ACL_2018_Report_on_Review_Process_of_ACL_2018}. Our survey included a question about satisfaction with the paper-reviewer matching, and whether the most recent experience was better or worse than on average. Both reviewers and authors were more satisfied than dissatisfied, and considered the recent experience better than on average, despite reporting so many issues (see App. \cref{fig:satisfaction} for the distribution).
\subsection{Interpretable Paper-Reviewer Matching: Problem Formulation}
\label{sec:similarity}
There already are many proposed solutions for paper-reviewer matching (see \cref{sec:background}), but their evaluation is the more difficult problem. The obvious approach would be to use bidding information or real assignments made by ACs as ground truth, but this data is typically not shared to protect reviewer anonymity.
It would also provide a very noisy signal not just due to different assignment strategies between ACs, but also different quality of assignments depending on how much time they have on a given day. Both ACs and bidding reviewers are also likely\footnote{Position bias is well documented in search \& recommendation systems \cite{CraswellZoeterEtAl_2008_experimental_comparison_of_click_position-bias_models,CollinsTkaczykEtAl_2018_Study_of_Position_Bias_in_Digital_Library_Recommender_Systems}.} to favor top-listed candidates. And, as our findings suggest, the optimal assignment strategies in an interdisciplinary field might genuinely vary between different types of papers and tracks. A system unaware of that might systematically disadvantage whole research agendas.
Given that even the human experts cannot tell what the best possible assignments are, we propose to reformulate the problem as \textit{interpretable paper-reviewer matching}.
That problem is \textit{not} the same as the problem of faithfully explaining why a given paper-reviewer matching system produced a certain score, for which we have numerous interpretability techniques \cite{Sogaard_2021_Explainable_Natural_Language_Processing}. The AC goal is fundamentally different: not to understand the system, but to quickly find the information that the AC\footnote{Or the program chairs, should the conference aim to have consistent policies for all ACs.} considers relevant for making the best possible match. Therefore \textit{the task of interpretable paper-reviewer matching is rather to help to identify the information that the stakeholders wish the decisions to be based on, and to provide that information as justification for the decisions}.
\section{Conclusion}
We present the results of the first survey on paper-reviewer assignment from the perspective of three groups of stakeholders in the NLP community: authors, reviewers, and ACs. The results point to a host of issues, some immediately actionable (e.g. providing the ACs with better information), some normative (e.g. different kinds of research may need different assignment strategies), and some open (e.g. how do we evaluate the effect of any changes to the peer review process?). A big issue for both authors and reviewers is mismatches due to lack of interest, which is in tension with the explorative aspects of peer review. We recommend addressing this issue with a combination of bidding-based assignments and random matches within an area, backed up by reviewer training.
\section*{Acknowledgments}
Many thanks to Marzena Karpinska, Friedolin Merhout, and the anonymous reviewers for their insightful comments. We would also like to thank all our survey respondents, without whom this study would not have been possible.
\section*{Impact Statement}\label{sec:impact}
\paragraph{Broader impact.} The study identifies types of information that could be used to provide better paper-reviewer matches. Used strategically by a conference, it could be a powerful tool for diversifying the field, by helping the non-mainstream papers find the reviewers more open to them. By the same token, if the entity organizing the review process aimed for suppressing such research, de-prioritising this information could harm such papers. Our proposal of interpretable paper-reviewer assignments would mitigate this potential risk by requiring the organizers to disclose their rationale for any given match.
\paragraph{Personal data.} The surveys are designed to not solicit any personally identifiable information (including comments about individual peer review cases in the past conferences), or demographic information about participants.
\paragraph{Potential risks.} The respondents are participants in an anonymous peer review process, and being tracked back to individual peer review cases could expose them to retaliation. The survey therefore did not solicit information about specific venues (only broader categories such as ``*ACL conferences''), and we manually verified that the open-ended comments also do not contain references to specific cases. We thus foresee no potential risks from deanonymization of the respondents.
\paragraph{Informed consent.} The respondents are informed about the organizers and the objective of the study: to identify current practices of paper-reviewer assignment in CL/NLP conferences and ways in which this process can be improved. Responses are anonymous and respondents consent to the use and sharing of their responses for research purposes. Respondents must give consent to continue the survey.
\paragraph{Intended use.} The survey data and forms will be made publicly available for research purposes.
\paragraph{Institutional approval.} The study was approved by the Research Ethics Committee at the authors' institution.
To answer the questions concerning cosmic star formation history and galaxy evolution, it is critical to have a comprehensive understanding of the infrared (IR) luminous population of galaxies \citep{Casey14,Kirkpatrick12,Madau14,Sanders14}.
They are presumably star-forming systems containing a large amount of dust, where the critical phases of star formation (SF) or active galactic nucleus (AGN) activity take place, hidden behind the obscuring dust \citep{Galliano18, Goto10, Hickox18, Lutz14}.
Wide-field cosmological surveys at IR wavelengths are the most efficient way to collect data for various populations of galaxies, especially for dusty star-forming galaxies (dusty SFGs, DSFGs) and obscured AGNs, at different cosmological epochs \citep{Matsuhara06,HHwang07,HHwang10,Toba15}.
Statistically significant samples of dusty galaxies based on large-area surveys covering significant cosmological volumes have to be obtained.
Also, follow-up surveys should be made to sample spectral energy distributions (SEDs): a comprehensive physical description requires wide wavelength coverage to capture the range of processes involved.
Most importantly, a deep optical follow-up survey is necessary because optical identification is an essential prerequisite to understanding the nature of the sources \citep{Sutherland92, HHwang12}, e.g., for star-galaxy separation or for deriving photometric redshifts (photo-z).
\begin{figure*}
\begin{center}
\resizebox{0.99\textwidth}{!}{\includegraphics{f01_NEP_maps.pdf}}
\caption{An overall map showing a variety of surveys around the NEP. The red circular area shows the AKARI NEP-Wide field \citep{K12}. The green and grey (meshed) areas represent the optical surveys done with the CFHT MegaCam \citep{H07} and the Maidanak SNUCam \citep{J10}, respectively. The yellow square shows a slightly deeper observation with the MegaCam as well as the WIRCam on the NEP-Deep field \citep{Oi14}. The area surrounded by the blue line shows the additional $u$-band observation with the MegaPrime \citep{Huang20}. The pink shaded area indicates the recent near-IR survey with Spitzer \citep{Nayyeri18}. The small black square inside the yellow box shows the area observed by the Herschel/PACS \citep{Pearson19}. A broken magenta circle overlaid with the black square indicates the area observed by S2CLS \citep{Geach17}. Nine brown circles around the S2CLS show the areas surveyed by the NEPSC2 850 $\mu$m mapping program with SCUBA-2 \citep{Shim20}.
The largest rhombus (brown long-dashed line) shows the Herschel/SPIRE coverage \citep{Pearson17}.
\label{fig01}}
\end{center}
\end{figure*}
The north ecliptic pole (NEP; $\alpha = 18^{h}00^{m}00^{s}$, $\delta = 66^{\circ}33^{\prime}38^{\prime\prime}$) has been a good target for deep, unbiased, and contiguous surveys for extra-galactic objects such as galaxies, galaxy clusters and AGNs because the NEP is a natural deep field location for a wide class of observatory missions \citep{Serjeant12}. Many astronomical satellites, such as ROSAT \citep{Henry06}, GALEX\footnote{http://www.galex.caltech.edu/} \citep{Burgarella19}, \textit{Spitzer}\footnote{http://www.spitzer.caltech.edu/} Space Telescope \citep{Jarrett11,Nayyeri18}, have accumulated a large number of exposures towards the NEP area because the Earth-orbiting satellites pass over the ecliptic poles and, for the Earth-trailing satellites, these poles are always in the continuous viewing zone.
The AKARI \citep{Murakami07} also devoted a large amount of observing time (a span of more than a year) to cover a wide area over the NEP using the infrared camera \citep[IRC,][]{Onaka07} with excellent visibility thanks to its polar sun-synchronous orbit \citep{Matsuhara06}.
A noticeable aspect of this NEP field surveyed by AKARI is that the ancillary optical data sets are abundant, supporting the identification of the infrared sources and enabling proper subsequent analyses \citep{K12}. In addition, many other surveys or follow-up observations (although over limited areas) have been carried out from X-ray to radio wavelengths to cover the NEP area \citep{Krumpe15,Burgarella19,Pearson19,Geach17,White10,White17} since AKARI obtained these valuable data sets (see Figure 1).
However, a fraction ($\sim 30\%$ at $N4$) of the IR sources detected by AKARI has been left unidentified by optical data because of the insufficient depths and incomplete areal coverage of the previous optical surveys. The different photometric filter systems used in different surveys also hampered homogeneous analyses based on unbiased sample selection; therefore, a deeper and more coherent optical survey of this field was required.
A new deep optical survey consistently covering the entire NEP-Wide (NEPW) field was carried out by the Hyper Suprime-Cam \citep[HSC:][]{Miyazaki18} with five photometric filter bands ($g$, $r$, $i$, $z$ and $y$). These HSC data were reduced \citep{Oi20} by the recent version of the pipeline \citep[\texttt{v6.5.3},][]{Bosch18}, which allowed the new optical data to reach $\sim$ 2 mag deeper (at the $g$ and $i$ bands) than the previous optical survey with the Canada-France-Hawaii Telescope \citep[CFHT,][]{H07}.
In addition, a supplemental observation using the CFHT/MegaPrime \citep{Huang20} filled in the insufficient $u^*$-band coverage of the previous CFHT surveys, which, together with the new HSC data, improves the photo-z accuracy \citep{Ho20}. The source matching and band merging process (see Section 2 for the details) has been enabling various subsequent works, such as the recent luminosity function (LF) update \citep{Goto19}, properties of mid-IR (MIR) galaxies detected at 250$\mu$m \citep{Kim19}, estimation of the number fraction of AGN populations \citep{Chiang19}, a study of the high-z population \citep{Barrufet20}, obscured AGN activity \citep{Wang20}, the merger fraction depending on star formation mode (Kim et al. in prep), AGN activities depending on the environments (Santos et al. in prep), machine learning algorithms to classify/separate IR sources (Poliszczuk et al. in prep; Chen et al. in prep), cluster candidate finding (Huang et al. in prep), and even work on the AKARI sources without any HSC counterpart \citep{Toba20}.
The science on the NEP initiated by AKARI is now entering a new era with momentum driven by the Subaru/HSC observations as well as current survey projects, such as a homogeneous spectroscopic survey (MMT2020A/B, PI: H. S. Hwang) and the 850 $\mu$m mapping over the entire NEP area with the Submillimetre Common-User Bolometer Array 2 (SCUBA-2) at the James Clerk Maxwell Telescope \citep{Shim20}.
More extensive imaging observations with HSC are still ongoing, along with spectroscopy with Keck/DEIMOS+MOSFIRE, as part of the Hawaii Two-O project (H20)\footnote{https://project.ifa.hawaii.edu/h20}. Spitzer also recently finished its ultra-deep NIR observations over this field as one of its Legacy Surveys (PI: Capak),
carried out as a precursor survey for Euclid \citep{Laureijs11}, the {\it James Webb Space Telescope} \citep[{\it JWST},][]{Gardner06}, and the Wide Field InfraRed Survey Telescope \citep[WFIRST:][]{Spergel15}, before it was retired in early 2020.
Spektr-RG was launched in 2019 to the L2 point of the Sun-Earth system (1.5 million km away from us), and eROSITA \citep{Merloni12} started its survey mission covering the NEP.
The Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer \citep[SPHEREx:][]{Dore16,Dore18} is also planning to target this field.
The main goal of this work is to identify optical counterparts of the AKARI/NEPW sources with the more reliable optical photometry of the HSC images (even for the faint NIR sources), and to cross-check them against all available supplementary data covering this field to build a panchromatic data set.
We briefly describe various data supporting AKARI/NEPW data, but mostly focus on explaining how we matched sources and combined all the data together.
This paper is organised as follows. Sec. 2 introduces the HSC and AKARI data, and details how we cross-matched the sources between them. In Sec. 3, we present the complementary data sets used to construct the multi-band catalogue. We describe their optical-IR properties (in a statistical sense) in Sec. 4. Sec. 5 gives the summary and conclusions. All magnitudes are presented in the AB magnitude system.
\section{ Identification of the AKARI's NEP-Wide sources using deep HSC data }
\subsection{AKARI NEP-Wide survey data}
The NEP-Wide (NEPW) survey \citep{Matsuhara06, Lee09, K12}, as one of the large area survey missions
of the AKARI space telescope \citep{Murakami07}, has provided us with a unique IR data set, sampling the near- to mid-IR wavelength range without large gaps between the filter bands (the circular area surrounded by a thick red line in Figure 1). In this program, they observed the 5.4 deg$^2$ circular area centred at the NEP using nine photometric bands covering the range from 2 to 25 $\mu$m continuously. The overall strategy of the survey was explained by \cite{Lee09}. \cite{K12} presented the description of the data reduction, source detection, photometry, and catalogue. They also combined the nine separated catalogues (i.e., for $N2$, $N3$, $N4$ in the NIR, $S7$, $S9W$, $S11$ in the MIR-S, and $L15$, $L18W$, $L24$ in the MIR-L channel) along with the optical identification/photometry. Before combining, they carefully discriminated the spurious objects and false detection, in order to confirm the validity of the IR sources: they tried to identify the optical counterparts using the CFHT \citep{H07} and Maidanak \citep{J10} data and then cross-checked against the NIR $J$, $K$ band data obtained from the KPNO/FLAMINGOS \citep{Jeon14}.
The numbers of sources detected in the nine IRC bands (\texttt{DETECT\_THRESH=3},
\texttt{DETECT\_MINAREA=5}) by
\texttt{SExtractor} \citep{Bertin96}
are 87858, 104170, 96159, 15390, 18772, 15680, 13148, 15154, and 4019 (the detection limits are 21 mag in NIR, 19.5 - 19 mags in MIR-S, and 18.6 - 17.8 mags in MIR-L), respectively \citep{K12}. A significant fraction of these sources (17 \% of the $N2$, 26 \% of the $N3$, and 30 \% of $N4$ sources) did not have optical data (mostly because they are not detected in optical surveys). In addition, $\sim$ 4 \% of the NIR sources were finally rejected because they are detected at only one NIR band (e.g., $N2$, $N3$, or $N4$) and are strongly suspected to be false objects: they suffered from the `multiplexer bleeding trails' (MUX-bleeding) due to the characteristics of the InSb detector array \citep{Holloway86,Offenberg01} so that the source detection near the MUX-bleeding was strongly affected by artificial effects and spurious objects near the abnormally bright pixels. Also, false detections caused by cosmic ray hits were serious at the $N4$ band mostly because the telescope time to take dithering frames was sometimes assigned to the telescope maneuvering (by the IRC03 observation template). If a certain source were detected at only one NIR band, it could potentially be an artifact or false detection. Therefore, the sources detected at only one NIR band were excluded in the first release of the band-merged NEPW catalogue. In the MIR bands, cosmic ray hits and other artifacts are not numerous, and so the sources detected by only one MIR band were included.
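
For illustration, an analogous detection step can be sketched in Python with the \texttt{sep} package, a library reimplementation of the SExtractor core algorithms. This is not the pipeline actually used for the IRC images; the background handling is simplified, and the threshold and minimum area are simply set to values analogous to the \texttt{SExtractor} parameters quoted above.
\begin{verbatim}
import numpy as np
import sep

def detect_sources(image):
    """Thresholded detection, roughly analogous to
    DETECT_THRESH=3 and DETECT_MINAREA=5 in SExtractor."""
    data = np.ascontiguousarray(image, dtype=np.float64)
    bkg = sep.Background(data)        # model the smooth background
    data_sub = data - bkg.back()      # subtract the background map
    # threshold of 3 times the global background RMS,
    # at least 5 connected pixels per detection
    objects = sep.extract(data_sub, 3.0, err=bkg.globalrms, minarea=5)
    return objects
\end{verbatim}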
Note that the AKARI NEP-Deep (NEPD) survey data \citep{Wada08,Takagi12}, which is similar to the NEPW survey (with the consistent photometries and the same astrometric accuracy) but different in terms of the coverage (0.7 deg$^2$) and the depth ($\sim$ 0.5 mag deeper), is not included in this work.
We expect that the new optical data obtained by the HSC \citep{Oi20} will allow us to identify more IR sources, most of which are faint in the IR as well as in the optical bands. Also, we may be able to examine whether there are any real NIR sources that have been rejected just because they did not have any counterpart in the other bands (from the optical to the MIR). We, therefore, repeated the merging of the nine AKARI bands without any exclusion process, in order to recover possibly real AKARI sources not included in the study of \cite{K12}.
The sources detected in at least one AKARI band can be included in the new AKARI nine-band merged catalogue. Spurious objects or artifacts can be excluded later if we find any during further analyses.
When we carried out this procedure, we began with the matching between the $N2$ and $N3$ bands. After that, we matched these results against the $N4$ band using the $N3$ coordinates. In cases without $N3$ coordinates (i.e., a source detected at $N2$ but not at $N3$), we took the $N2$ coordinates for the matching against $N4$. This process continued down to $L18W$ and $L24$. In the resulting catalogue, we kept the coordinates from the shortest and the longest bands in this matching process. Therefore, if a certain source was detected at neither $N2$ nor $L24$ but at the other bands, then the coordinate information for the shortest band comes from $N3$ and that for the longest from $L18W$.
There is no systematic offset among the astrometry from different AKARI bands (all of the detectors are fixed on the same focal plane). We eventually registered 130,150 IR sources in the new NEPW catalogue, which were detected in at least one of the AKARI bands from $N2$ to $L24$.
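
As a minimal sketch of one step of this cascaded matching, the code below matches a shorter-wavelength band catalogue against a longer-wavelength one by sky position using Astropy; the 3 arcsec matching radius here is an assumed placeholder for illustration, not necessarily the tolerance adopted in the actual band merging.
\begin{verbatim}
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_bands(ra_short, dec_short, ra_long, dec_long,
                radius_arcsec=3.0):
    """For each short-band source, return the index of its nearest
    long-band neighbour and a flag for matches within the radius."""
    short = SkyCoord(ra=ra_short * u.deg, dec=dec_short * u.deg)
    long_ = SkyCoord(ra=ra_long * u.deg, dec=dec_long * u.deg)
    idx, sep2d, _ = short.match_to_catalog_sky(long_)
    matched = sep2d < radius_arcsec * u.arcsec
    return idx, matched
\end{verbatim}
Repeating this step from $N2$ through $L24$, carrying forward the coordinates of the shortest available band, reproduces the cascade described above.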
\begin{figure}
\begin{center}
\resizebox{\columnwidth}{!}{\includegraphics{f02_NEP_HSCmap.pdf}}
\caption{The map showing the HSC coverage over the NEPW survey area (a red circular area; Kim et al. 2012). The areas marked by yellow dashed circles show the region observed by the HSC $r$-band, while the four blue solid circles indicate the region observed by the $g$, $i$, $z$, and $y$-band (Oi et al. 2020).
\label{fig02}}
\end{center}
\end{figure}
\subsection{Deep Optical Survey with HSC over the NEP}
A deep optical survey covering the whole NEP field with the HSC was proposed \citep{Goto17} in order to detect all the IR sources observed by AKARI. Two optical data sets, one obtained with the CFHT \citep[the central green square in Figure 1;][]{H07} and the other with the Maidanak telescope \citep[the grey area in Figure 1;][]{J10}, have been supporting the AKARI IR data. However, the depths (5$\sigma$) of these optical surveys (25.4 and 23.1 AB mag at the $r^{\prime}$ and $R$ band, respectively) were insufficient to identify all the AKARI IR sources. A slightly deeper observation with MegaCam \citep{Oi14} was also carried out, but its areal coverage (0.67 deg$^2$) was limited to the NEPD field (the yellow box in Figure 1).
\cite{Goto17} exploited the large field of view (FoV; 1.5 deg in diameter, see Figure 2) of the HSC so that the entire NEPW field of AKARI could be covered with only four FoVs (for the $g$, $i$, $z$, and $y$ bands; the blue circles in Figure 2). In total, ten FoVs, including the $r$ band, were allocated over six nights \citep{Goto17, Oi20} to cover the whole NEPW field in these five HSC filter bands.
The $r$-band imaging was taken earlier, during the first observing run in 2014, which suffered from air disturbance (including a dome-shutter opening error); this made the $r$-band seeing (1$^{\prime\prime}$.25) worse than that of the other four bands (0.7 -- 0.8$^{\prime\prime}$) obtained later in the second run (Aug. 2015) \citep{Oi20}.
The data reduction was carried out with the official HSC pipeline, \texttt{hscPipe} 6.5.3 \citep{Bosch18}.
Apart from the fundamental pre-processing procedures (e.g., bias, dark, flat, fringe, etc.), the performance of the sky subtraction and artifact rejection was enhanced in this recent version. In particular, a peak-culling procedure was included to remove spurious or unreliable detections, which improved the source detection in the pipeline process. Owing to the bad seeing ($\sim 1^{\prime\prime}.25$) in the $r$ band, the source detection was carried out on the $gizy$-band stacked image, while forced photometry was performed in all five bands. All these procedures in the recent pipeline (e.g., the updated flag information of \texttt{hscPipe} 6.5.3 \citep{Bosch18}) eventually helped resolve the issues regarding false detections that had been reported for a few years (e.g., damaged sources near the boundary or along the image edge of each frame, effects from the saturation of bright stars, etc.). The 5$\sigma$ detection limits are 28.6, 27.3, 26.7, 26.0, and 25.6 mag at $g$, $r$, $i$, $z$, and $y$, respectively. The limiting magnitudes of the $g$, $r$, $i$, and $z$ bands of the previous CFHT data were 26.1, 25.6, 24.8, and 24.0 mag, respectively. We therefore obtained 1.7 -- 2.5 mag deeper optical data in the corresponding filter bands (even though the effective wavelengths of the filters are slightly different; see Table 1).
Finally, we catalogued 3.25 million sources observed by the optical survey with the HSC, along with a large number of parameters appended from the HSC data pipeline.
The magnitude in each band is given in terms of the \texttt{Cmodel} photometry, which performs well for galaxies and asymptotically approaches PSF photometry for compact sources. The colours estimated with the \texttt{Cmodel flux}\footnote{AB mag $ = -2.5 $ log$_{10}$(\texttt{Cmodel flux}) + 27} are robust against seeing conditions \citep{Huang18}. As mentioned in the previous paragraph, the seeing conditions for the $r$ band differ from those of the other four bands; therefore, computing colours from the \texttt{Cmodel flux} compensates for these different seeing conditions.
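For reference, the conversion quoted in the footnote, and a colour computed from two such fluxes, can be written as the following minimal snippet (Python; the variable and function names are ours):
\begin{verbatim}
import numpy as np

def cmodel_flux_to_ab(flux):
    # AB mag = -2.5 log10(Cmodel flux) + 27 (zero point from the footnote)
    return -2.5 * np.log10(flux) + 27.0

def colour(flux_blue, flux_red):
    # e.g. a g - r colour from the g- and r-band Cmodel fluxes
    return cmodel_flux_to_ab(flux_blue) - cmodel_flux_to_ab(flux_red)
\end{verbatim}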
\begin{figure*}
\begin{center}
\resizebox{0.8\textwidth}{!}{\includegraphics{f03_HSC_AKR_m_radius.pdf}}
\caption{The positional offset distribution of the matched sources between the HSC and AKARI data. In the bottom left panel (a), the dotted contours with numbers represent the density levels normalised by the central peak density. The yellow circle is the matching radius determined from the 3-$\sigma$ widths of the Gaussians (magenta curves) fitted to the histograms in the top left (b) and bottom right (c) panels, shown with 0.2$^{\prime \prime}$ bins. In the top right panel (d), the bars show how many sources were matched within 0.5$^{\prime \prime}$ annuli (the green bars indicate the number of clean sources, and the red bars above the green ones show the increments due to the flagged sources; see Sec. 2.3 for details).
\label{fig03}}
\end{center}
\end{figure*}
\subsection{ Matching of the AKARI Infrared sources against the HSC optical data}
After the AKARI band merging (Section 2.1), we performed source matching between the AKARI and HSC data. To identify the counterparts by positional matching, a reasonable search radius had to be assigned. Figure 3 summarises how we decided the radius for the source matching between the AKARI and the HSC data.
The bottom left panel (Figure 3a) shows the distribution of the positional offsets of the matched sources in the $\Delta$RA versus $\Delta$Dec plane (before fixing the radius, we used 3$^{\prime \prime}$ as a tentative criterion). The reddish dotted contours with numbers represent the number density of the green dots normalised by the peak value at the center.
A yellow circle indicates the 3-$\sigma$ radius determined from the Gaussians (magenta curves) fitted to the histograms in the top left (Figure 3b) and bottom right (Figure 3c) panels.
Here, the 3-$\sigma$ radius corresponds to 1$^{\prime \prime}$.78, at which the source density in the $\Delta$RA$\cos$(Dec) versus $\Delta$Dec plane drops to the 1\% level relative to the density peak.
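A minimal sketch of this radius determination is given below (Python; the Gaussian fit to the offset histograms follows the description above, but the bin width, function names, and use of \texttt{scipy.optimize.curve\_fit} are our own illustrative choices):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def three_sigma_radius(d_ra_cosdec, d_dec, bin_width=0.2):
    """Fit a Gaussian to the histograms of the positional offsets (arcsec)
    of a tentative match and return the 3-sigma radius."""
    sigmas = []
    for offsets in (d_ra_cosdec, d_dec):
        bins = np.arange(offsets.min(), offsets.max() + bin_width, bin_width)
        counts, edges = np.histogram(offsets, bins=bins)
        centres = 0.5 * (edges[:-1] + edges[1:])
        popt, _ = curve_fit(gauss, centres, counts,
                            p0=[counts.max(), 0.0, np.std(offsets)])
        sigmas.append(abs(popt[2]))
    return 3.0 * np.mean(sigmas)
\end{verbatim}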
Within this matching radius, we have 111,535 AKARI sources matched against the HSC optical data, which were finally divided into two groups: the clean (91,861) vs. flagged (19,674) sources
based on the HSC flag information such as
\texttt{base$\_$PixelFlags$\_$flag$\_$bad} (to discriminate bad pixels in the source footprint),
\texttt{base$\_$PixelFlags$\_$flag$\_$edge} (to indicate a source is in the masked edge region or has no data),
\texttt{base$\_$PixelFlags$\_$flag$\_$saturatedCenter} (to notice that saturated pixels are in the center of a source).
These flags helped us to discriminate unreliable measurements, such as saturated sources or sources lying at the image edge/border.
In this work, we construct a band-merged catalogue only for the ``clean" sources, excluding the flagged ones, because the derivation of photo-z and physical modelling by SED fitting require accurate photometry.
The remaining 23,620 sources were not matched to any HSC source (hereafter, none-HSC); some of them appear to be optically obscured objects residing in the high-z ($>1$) universe \citep{Toba20}.
\begin{figure*}
\begin{center}
\resizebox{0.85\textwidth}{!}{\includegraphics{f04_number_hist_N2.pdf}}
\caption{The number distribution of $N2$ sources as a function of magnitude (in units of counts per 0.2 mag per deg$^{2}$). All $N2$ sources are divided into three sub-categories according to the matching results against the HSC data: clean (solid black), flagged (dotted blue), and none-HSC (dashed red). The violet line shows the optically matched sources (i.e., the sum of the clean and flagged sources). The grey line shows all the $N2$ sources (i.e., the sum of the clean, flagged, and none-HSC sources).
The green line represents the $N2$ sources matched to the previous optical data from the CFHT or Maidanak \citep{K12}.
\label{fig04}}
\end{center}
\end{figure*}
The histogram in the top right panel (Figure 3d) shows the distribution of the matched sources as a function of radial offset. The green (red) bars show the number of clean (flagged) sources matched in each 0.5$^{\prime \prime}$-wide radius bin. A yellow mark indicates the matching radius (1.78$^{\prime\prime}$; therefore, all the sources in the 1.5--2.0$^{\prime\prime}$ bin are matched within this radius). The green histogram shows that about half of the clean sources (48\%, 43,789) are matched within 0.5$^{\prime\prime}$ positional offsets and 34\% (31,215) are matched with offsets between 0.5 and 1.0$^{\prime\prime}$. Therefore, within a 1$^{\prime\prime}$ radius, we have 82\% (75,000) of the sources matched between the AKARI and HSC data without flagging. Within a 1.5$^{\prime\prime}$ radius, 95\% of the sources (87,645) are matched.
\begin{figure*}
\begin{center}
\resizebox{0.95\textwidth}{!}{\includegraphics{f05_number_hist_all.pdf}}
\caption{The number distribution of the sources as a function of magnitude, plotted in the same fashion as Figure 4, but for the sources in the other AKARI bands. The distributions of the three sub-categories are given in the same colours as in Figure 4. The vertical dot-dashed line in each panel represents the detection limit in Table 1.
\label{fig05}}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\resizebox{0.95\textwidth}{!}{\includegraphics{f06_number_histg2b.pdf}}
\caption{Selected narrow magnitude ranges from Figure 5 near the peaks of the histograms (to show the green and violet lines), comparing the sources matched to the HSC data (violet) with those matched to the previous optical data obtained with the CFHT and Maidanak (green). The small numbers written on the green histogram represent the difference between the violet and green histograms in each magnitude bin, i.e., the sources newly matched to the HSC data.
The sum of these numbers is presented in the middle of each panel. In this range, the violet histogram falling below the green one (on the bright side) is mostly due to the flagged sources (blue dotted line in Figure 5), which are excluded in this work.
\label{fig06}}
\end{center}
\end{figure*}
We describe the details of the matching results in Figure 4, which compares the distributions of the three sub-categories of sources (clean, flagged, and none-HSC) as a function of $N2$ magnitude. The grey histogram indicates all the $N2$ sources, i.e., the sum of the three groups (clean $+$ flagged $+$ none-HSC). The violet histogram shows the $N2$ sources that have an HSC counterpart (clean $+$ flagged). The green line shows the distribution of the $N2$ sources matched to the previous optical data (CFHT and/or Maidanak).
In the bright magnitude range (up to 14.8 mag), there are no clean sources: all the sources are `flagged' (blue dotted) or `none-HSC' (red dashed), which means that the bright $N2$ sources either are accompanied by one of the three HSC flags used to filter out problematic sources or have no HSC counterpart at all. In other words, their HSC counterparts were affected by saturated/bad pixels or by the masked/edge regions, or they were not matched to any HSC source. Between 13.5 and 17.5 mag, the flagged sources prevail.
The clean sources (black line) begin to appear around 15 mag and exceed the flagged sources at 17.5 mag, and this predominance continues to the faint end. The $N2$ source counts decrease rapidly before the detection limit owing to source confusion \citep{K12}. We did not include sources fainter than the detection limit (i.e., the small number of objects fainter than the vertical dot-dashed line shown in Figure 4).
Some of the none-HSC sources have optical counterparts in the previous CFHT/Maidanak data (in the brightest range) because they are located in regions that the $gizy$ bands (used for the HSC source detection) did not cover but where the CFHT/Maidanak surveys provided normal photometric measurements (see the small bump in the red dashed histogram below the green one around 13 -- 14 mag).
These sources are beyond the scope of this work; the AKARI sources classified into the flagged and/or none-HSC groups may be discussed later in separate works.
The same description for the other AKARI bands is summarised in Figure 5. The overall trend in the NIR bands is similar to that of $N2$: the red histogram in the top left panel (Figure 5a, for the $N4$ band) shows a smaller bump around 14 mag and another peak around 20 mag. The smaller bump is ascribed to the unobserved area; otherwise, these are probably the brightest stars \citep[almost all the NIR sources brighter than 15 mag appear to be stars;][]{K12} that were rejected by the pipeline or classified as flagged.
Figure 5a also shows that there is no clean source brighter than 16 mag. This trend at the bright end weakens and disappears as we move to the mid-IR bands. It seems that the saturation levels of the HSC bands \citep[17 - 18 mag,][]{Aihara18} correspond to the valley between the two red bumps in the NIR bands, where the clean sources begin to appear.
The smaller red bump fades out in the longer-wavelength (MIR) bands: it becomes weak in $S7$ (Figure 5b), weaker still in $S9W$ (Figure 5c), and completely disappears in the MIR-L bands, implying that the Rayleigh-Jeans tails of the stars fade out.
In the NIR bands, the number of clean sources (black solid line) is higher than that of flagged sources (blue dotted) near the broad peak. In the $S9W$ band, these two classes become comparable, while the number of none-HSC sources is much lower.
In the $S9W$ and $S11$ bands, the flagged sources (blue dotted) lie between the black and red lines. In the MIR-L bands, the blue dotted line falls below the none-HSC sources (red line).
While some of the bright IR sources are not fully available in this work because they are unobserved/rejected or were eventually classified as flagged, many more faint sources are newly matched to the deeper HSC data, as shown in Figure 6 (for example, 19,771 at $N4$ and 2,593 at $L15$). Only small magnitude ranges are presented in the figure: note the ranges where the violet histogram is higher than the green one. The number in each magnitude bin indicates the sources newly matched to the HSC data.
However, the larger red bumps (in Figure 5) near the faint ends indicate that our HSC survey was still not deep enough to identify all the faint IR sources, leaving some of the AKARI sources unmatched to the HSC data.
In the AKARI colour-colour diagrams (Section 4), these faint IR sources are located in the same area as the other (optically identified) sources, which implies that they have similar IR properties. They are probably infrared-luminous SFGs that have dropped out of the HSC bands, possibly a class of highly obscured dusty systems at high redshift. A detailed discussion based on a selected sample is presented in \cite{Toba20}.
\begin{table*}
\centering
\caption{Summary of the multiwavelength data sets: the detection limits }
\label{tab1}
\begin{tabular}{cccc}
\hline
\hline
Data & Band & Effective wavelength & (5$\sigma$) detection limit \\
& & ($\mu$m) & AB / $\mu$Jy \\
\hline
& $N2$ & 2.3 & 20.9 / 15.4 \\
& $N3$ & 3.2 & 21.1 / 13.3 \\
AKARI/IRC & $N4$ & 4.1 & 21.1 / 13.6 \\
NEP-Wide Survey & $S7$ & 7 & 19.5 / 58.6 \\
5.4 deg$^2$ & $S9W$ & 9 & 19.3 / 67.3 \\
\citep{K12} & $S11$ & 11 & 19.0 / 93.8 \\
& $L15$ & 15 & 18.6 / 133 \\
& $L18W$ & 18 & 18.7 / 120 \\
& $L24$ & 24 & 17.8 / 274 \\
\hline
& $g$ & 0.47 & 28.6 / 0.01 \\
Subaru/HSC & $r$ & 0.61 & 27.3 / 0.04 \\
5.4 deg$^2$ & $i$ & 0.76 & 26.7 / 0.08 \\
\citep{Oi20} & $z$ & 0.89 & 26.0 / 0.14 \\
& $y$ & 0.99 & 25.6 / 0.21 \\
\hline
CFHT/MegaPrime &\multirow{2}{*}{$u$} &\multirow{2}{*}{0.36} & \multirow{2}{*} {25.4 / 0.25 } \\
3.6 deg$^2$ \citep{Huang20} & & & \\
\hline
& $u^{*}$ & 0.39 & 26.0 / 0.16 \\
CFHT/MegaCam$^{\rm a}$ & $g$ &0.48 & 26.1 / 0.13 \\
2 deg$^2$ \citep{H07}&$r$& 0.62 & 25.6 / 0.21 \\
0.7 deg$^2$ \citep{Oi14} &$i$& 0.75 & 24.8 / 0.39 \\
& $z$ & 0.88 & 24.0 / 0.91 \\
\hline
Maidanak/SNUCam & $B$ & 0.44 & 23.4 / 1.58 \\
4 deg$^2$ \citep{J10} &$R$& 0.61 & 23.1 / 2.09 \\
& $I$ & 0.85 & 22.3 / 4.36 \\
\hline
KPNO/FLAMINGOS & $J$ & 1.2 & 21.6 / 8.32 \\
5.1 deg$^2$ \citep{Jeon14} &$H$& 1.6 & 21.3 / 10.96 \\
\hline
CFHT/WIRCam & $Y$ & 1.02 & 23.4 / 1.58 \\
0.7 deg$^2$ \citep{Oi14} & $J$ & 1.25 & 23.0 / 2.29 \\
& $K_S$ & 2.14 & 22.7 / 3.02 \\
\hline
Spitzer/IRAC & IRAC1 & 3.6 & 21.8 / 6.45 \\
7 deg$^2$ \citep{Nayyeri18} & IRAC2 & 4.5 & 22.4 / 3.95 \\
\multirow{2}{*}{0.4 deg$^2$ \citep{Jarrett11}}& IRAC3 & 5.8 & 20.3 / 27.0 \\
& IRAC4 & 8 & 19.8 / 45.0 \\
\hline
& W1 & 3.4 & 18.1 / 18 \\
WISE & W2 & 4.6 & 17.2 / 23 \\
\citep{Jarrett11} & W3& 12 & 18.4 / 139 \\
& W4 & 22 & 16.1 / 800 \\
\hline
Herschel/PACS$^{\rm b}$ & Green & 100 & 14.7 / 4.6 mJy \\
0.44 deg$^2$ \citep{Pearson19} & Red & 160 & 14.1 / 8.7 mJy \\
\hline
Herschel/SPIRE$^{\rm c}$ & PSW & 250 & 14 / 9.0 mJy \\
9 deg$^2$ \citep{Pearson17}& PMW & 350 & 14.2 / 7.5 mJy \\
& PLW & 500 & 13.8 / 10.8 mJy \\
\hline
SCUBA-2/NEPSC2$^{\rm d} $ & \multirow{2}{*}{850} & \multirow{2}{*}{850} & \multirow{2}{*} {1.0 - 2.3 mJy } \\
2 deg$^2$ \citep{Shim20} & & & \\
\hline
\end{tabular}
(a) The detection limits refer to the 4$\sigma$ flux over a circular area with a diameter of 1$^{\prime\prime}$.
(b) The detection limits refer to 3$\sigma$ instrumental noise sensitivities.
(c) The detection limits refer to the Open Time 2 (OT2) sensitivity.
(d) The detection limits refer to the 1-$\sigma$ rms noise (or 4.7-11 mJy at 80\% completeness).
\end{table*}
\section {Complementary Data Sets}
After the identification of the AKARI IR sources with the HSC optical data, we used all the available photometric catalogues/data over the NEPW field to construct a multi-band catalogue. In this section, we briefly describe the data sets used in this work. Just as Figure 1 showed the coverage of the various surveys, Table 1 and Figure 7 summarise the photometric bands and depths of the surveys. Figure 7a also shows why the source detection differs among the different instrument/filter systems.
\subsection{Ancillary Optical Data: CFHT and Maidanak }
It is not easy to cover the entire 5.4 deg$^2$ area in a uniform manner without a large-FoV instrument equipped with an appropriate filter system covering a sufficient wavelength range. This is why our previous optical surveys were split into two different data sets 10 years ago: one obtained with MegaCam ($u^{*}$, $g$, $r$, $i$, $z$) over the central 2 deg$^2$ area, and the other with SNUCAM ($B$, $R$, and $I$) \citep{Im10} of the Maidanak observatory over the remaining part of the NEPW field. Detailed descriptions of these two surveys can be found in \cite{H07} and \cite{J10}, respectively. However, the western half of the central CFHT field was not observed in the $u^*$ band. Also, owing to the different filter systems and depths of these two optical surveys, a homogeneous analysis of the optical counterparts over the whole field was practically impossible.
Another optical survey was carried out later \citep{Oi14} on the NEPD field ($\sim$0.7 deg$^2$) and finally provided MegaCam $u^*$ data for the western half of the area as well as supplementary WIRCam data ($Y, J, K_{s}$).
In addition, a CFHT MegaPrime $u$-band observation was performed over a 3.6 deg$^2$ area on the eastern side of the NEPW field \citep{Huang20}\footnote{http://doi.org/10.5281/zenodo.3980635} to supplement the insufficient coverage (central 2 deg$^2$ only; see Figure 1) of the MegaCam $u^*$ band. Because photo-z calibration is a significant issue when a huge number of sources remain without redshift information, the availability of $u$-band data is crucial for checking the UV extinction properties and improving the photo-z accuracy \citep{Ho20}.
We combined all these supplementary optical data: a small systematic WCS shift ($< 1^{\prime\prime}$) of each optical data set with respect to the HSC was corrected first, and the matching radius for each data set was decided based on the mean positional differences (see Figure 8).
The numbers of sources matched to the HSC data are summarised in Table 2.
\begin{figure*}
\begin{center}
\resizebox{0.95\textwidth}{!}{\includegraphics{f07_panchromatica.pdf}}
\caption{(Top) The depths of the various surveys with different instruments/filter systems over the NEP, in terms of the 5-$\sigma$ limiting magnitude. The cross symbols (\textbf{+}) indicate that the survey covered only the NEPD field. Typical templates \citep{Polletta07} are given to show how a local ULIRG (e.g., Arp220-type at z=0.3, faintest thick line), a type-1 Seyfert (at z=0.6, dark grey), or a dusty torus model (at z=1.0, black thin line) looks in this plot. All of them are normalised to the $N2$ detection limit. (Middle) The system transmission/filter shapes. (Bottom) Comparison of the IR spectral ranges to be covered by future space missions, shown by the horizontal bars.
\label{fig07}}
\end{center}
\end{figure*}
\begin{figure*}[h]
\includegraphics[width=0.33\textwidth]{astrmtroffs_a.png}
\includegraphics[width=0.33\textwidth]{astrmtroffs_b.png}
\includegraphics[width=0.33\textwidth]{astrmtroffs_c.png}
\includegraphics[width=0.33\textwidth]{astrmtroffs_d.png}
\includegraphics[width=0.33\textwidth]{astrmtroffs_e.png}
\includegraphics[width=0.33\textwidth]{astrmtroffs_f.png}
\includegraphics[width=0.33\textwidth]{astrmtroffs_g1.png}
\caption{Examples of the astrometric-offset distributions of the sources matched between the clean (HSC-AKARI) catalogue and the other supplementary data. The matching circles (or ellipses) were decided based on the representative width of the offset histograms, i.e., the 3-$\sigma$ derived from Gaussian fits to the histograms of $\Delta$RA and $\Delta$Dec. Most of them appear circular, except for the matching against the CFHT data. (a) HSC vs Maidanak (b) HSC vs CFHT (2007) (c) HSC vs CFHT (2014) (d) AKARI vs FLAMINGOS (e) AKARI vs WISE (f) AKARI vs Spitzer (g) AKARI (MIR-L) vs SPIRE.
\label{fig08}}
\end{figure*}
\begin{table*}
\centering
\caption{Summary of the Matching against the Supplementary Data }
\label{table2}
\begin{tabular}{cccccc}
\hline
\hline
\multirow{2}{*}{Main Data}&Supplementary Data &3-$\sigma$ Radius & PSF Size & Number of&\\
& (Reference) & ($^{\prime\prime}$) & (FWHM, $^{\prime\prime}$) & matched sources &\\
\hline
\multirow{5}{*}{HSC}& Maidanak/SNUCam \citep{J10} & 1.14 & 1.1 - 1.4 & 33,485 & \\
{Subaru} & CFHT/MegaCam \citep{H07} &0.54/0.78$^{\rm a}$ & 0.7 - 1.1 & 23,432 & \\
& CFHT/Mega-WIR \citep{Oi14} &0.28/0.44$^{\rm a}$ & 0.8 - 0.9 & 15,261 & \\
& CFHT/MegaPrime-u \citep{Huang20} &0.43/0.55$^{\rm a}$ & 0.8 - 1 & 31,851 & \\
& GALEX \citep{Burgarella19} & 3.2 & 5.0 & 58 & \\
\hline
\multirow{6}{*}{AKARI} & KPNO/FLAMINGOS \citep{Jeon14} & 2.2 & 1.7 - 1.8 & 46,544 \\
& WISE \citep{Jarrett11} & 0.9 & $\sim$ 6 & 60,062 \\
& Spitzer \citep{Nayyeri18} & 1.2 & 1.78 & 79,070 \\
&PACS \citep{Pearson19} & 3.6/ 6.3$^{\rm b}$ & 6.8/ 11.3 & 882/ 463 \\
&SPIRE \citep{Pearson17} & 8.1$^{\rm c}$ & 17.6 & 3,109 \\
\hline
\hline
\end{tabular}\\
(a) The matching radii along the RA and Dec are not the same (see Figure 8).
(b) The radii for the 100 $\mu$m and 160 $\mu$m band, respectively.
(c) The source extraction was done on the 250 $\mu$m map, and the sources were catalogued with photometry in all three SPIRE bands.
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\textwidth]{f09_random_mtch2.pdf}
\caption{(Left) Random matching rate as a function of source density and matching radius. The open boxes are taken from Fig. 14 in \citet{K12} and represent actual tests, i.e., number counts of the sources randomly matched to each data set using a 3$^{\prime\prime}$ radius. The grey thick line shows that these measurements are described by a simple relation ($n\pi r^{2}$). If we use a 2$^{\prime\prime}$ or 1$^{\prime\prime}$ radius, the random matching rate decreases as described by the cyan dashed and green dot-dashed lines. The faint dotted lines between them indicate 0.5$^{\prime \prime}$ increments. The filled boxes in different colours show the random matching estimates when a source is matched against each data set with the radii determined in this work. The grey histogram is taken from Fig. 3d. (Right) The grey histogram embedded in the left panel, re-plotted against the random matching rate. The magenta curve shows the random matching rate when the source number density is 4.5$\times 10^{5}$ deg$^{-2}$ (corresponding to the magenta vertical line in the left panel). The vertical yellow line represents the matching radius from Figure 3. }
\label{fig09}
\end{center}
\end{figure*}
\subsection{Spectroscopic and Photometric Redshifts }
Following the optical identification of the AKARI sources with the deep HSC data, we incorporated all the available spectroscopic redshift (spec-z) data into the clean AKARI-HSC sources (therefore, redshift information matched to the flagged sources is not included here).
There have been many spectroscopic observations over the AKARI NEP area. The most extensive campaign, covering the entire NEPW field, was carried out by \citet{Shim13}: they targeted NEPW sources selected primarily by their MIR fluxes at 11 $\mu$m ($S11<18.5$ mag) and at 15 $\mu$m ($L15<17.9$ mag) in order to study the properties of MIR-selected SFGs.
Most of these flux-limited sources turned out to be various types of IR-luminous galaxy populations. A smaller number of secondary targets (35\% of their targets) were also selected to catch some rare types of objects, such as obscured AGNs, BzKs, supercluster candidates, etc. They provided the spectra of 1796 sources (primary targets: 1155, secondary targets: 641), and redshifts were measured for 1645 sources. These spectroscopic sources are classified into several types (e.g., star, type-1 AGN, type-2 AGN, galaxy, unknown). Recently, a new spectroscopic campaign over the whole NEPW area with the MMT/Hectospec has been initiated to carry out a homogeneous survey of the 9 $\mu$m selected galaxies (MMT2020A/B, PI: H. S. Hwang).
We also took the redshift/type information from many other spectroscopic surveys on the NEPD field. For example, Keck/DEIMOS observations were conducted in order to measure the spectroscopic redshift and calibrate photo-zs for MIR galaxies (DEIMOS08) \citep{Takagi10}, and to measure [OII] luminosity against 8$\mu$m luminosity (DEIMOS11) \citep{Shogaki18}, and more recently, to check the line emission evidence of AGNs and metallicity diagnostics of SFGs, etc. (DEIMOS14, DEIMOS15, DEIMOS17) \citep{HKim18}.
Another series of spectroscopic observations with Gran Telescope Canarias (GTC)/OSIRIS were carried out between 2014 and 2017 (e.g., GTC7-14AMEX, GTC4-15AMEX, GTC4-15BMEX, and GTC4-17MEX) \citep{DiazTello17} to see the X-ray signatures of highly obscured and/or Compton-thick (CT) AGNs along with the identification by the \textit{Chandra} data \citep{Krumpe15}.
Subaru/FMOS spectroscopy was also obtained to investigate the mass-metallicity relation of IR SFGs in the NEP field \citep{Oi17}. \citet{Ohyama18} provided a sample of polycyclic aromatic hydrocarbon (PAH) galaxies with redshift measurements through the SPICY project carried out with AKARI slitless spectroscopy. We combined all of this redshift information.
Using these spectroscopic redshifts as a calibration sample, \citet{Ho20} estimated photo-zs using the photometry from $u^{*}$-band to the NIR band (IRAC2 and/or WISE2).
They checked the photometry by comparing colours/magnitudes to discriminate against unreasonable data, so that reliable results could be obtained with the software \texttt{Le PHARE}. After the photo-zs were assigned, they presented an effective star-galaxy separation scheme based on the $\chi^2$ values.
\subsection{Supplementary Near-/Mid-IR Data}
While the optical data are crucial for identifying the nature of the corresponding AKARI sources,
the $J$-, $H$-, and $K$-band data are useful for bridging the wavelength gap between the optical $y$ and the $N2$ band. The CFHT/WIRCam covered a limited area but provided useful $J$- and $K$-band photometry \citep{Oi14}. A NIR ($J$, $H$) survey covering almost the entire NEPW area ($\sim$ 5.2 deg$^{2}$) was carried out with FLAMINGOS on the Kitt Peak National Observatory (KPNO) 2.1 m telescope \citep{Jeon14}, although its depth is shallower than that of the WIRCam data
(see Figures 1 and 7).
For complementary photometry in the near- to mid-IR, we also included the publicly available data taken by Spitzer \citep{Werner04} and WISE\footnote{Also see https://wise2.ipac.caltech.edu/docs/release/allwise/expsup/sec2\_1.html} \citep{Wright10}.
The catalogue by \cite{Nayyeri18} provides 380,858 sources covering the entire NEPW field ($\sim$ 7 deg$^2$) with higher sensitivity (21.9 and 22.4 mag at IRAC1 and IRAC2, respectively) and slightly better spatial resolution than the $N3$ and $N4$ bands, which is useful for cross-checking against the longer-wavelength data with larger PSFs (e.g., the SPIRE or SCUBA-2 data).
\subsection{FIR/Smm Data from the Herschel and SCUBA-2}
Herschel carried out the 0.44 deg$^2$ and 9 deg$^2$ surveys over the NEP field with the Photoconductor Array Camera and Spectrometer \citep[PACS:][]{Poglitsch10} and Spectral and Photometric Imaging REceiver instrument \citep[SPIRE:][]{Griffin10}, respectively.
From the PACS NEP survey \citep{Pearson19, Burgarella19}, the Green (100 $\mu$m) and Red (160 $\mu$m) bands provide 1380 and 630 sources over the NEPD field, with the flux densities of 6 mJy and 19 mJy at the 50\% completeness level, respectively.
SPIRE also carried out an NEP survey as an Open Time 2 program (PI: S. Serjeant, in 2012), completely covering the entire NEPW field at 250, 350, and 500 $\mu$m (achieving 9.0, 7.5, and 10.8 mJy sensitivities in the respective bands). Source extraction was carried out on the 250 $\mu$m map, and approximately 4800 sources were catalogued with photometry in all three SPIRE bands. A more detailed description of the data reduction and photometry can be found in \cite{Pearson17}.
Compared to the optical or NIR data, the Herschel (PACS or SPIRE) data have larger positional uncertainties and much larger PSF sizes. This can make the identification of sources against the AKARI data ambiguous in the positional matching, even though the radius was determined reasonably (the 3-$\sigma$ radii are, in general, smaller than the PSF sizes, as shown in Table 2).
In our catalogue, cases in which multiple AKARI clean sources lie within the search radius around a SPIRE/PACS position were not included, so that only one AKARI counterpart is clearly chosen for each Herschel source.
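A sketch of this uniqueness requirement is given below (Python; the separation matrix and function name are our own illustrative choices):
\begin{verbatim}
import numpy as np

def unique_counterparts(sep_matrix, radius_arcsec):
    """sep_matrix[i, j]: separation (arcsec) between Herschel source i and
    AKARI clean source j.  Return, for each Herschel source, the index of
    its AKARI counterpart, or -1 if zero or several lie within the radius."""
    inside = sep_matrix < radius_arcsec
    n_inside = inside.sum(axis=1)
    best = np.argmin(sep_matrix, axis=1)
    return np.where(n_inside == 1, best, -1)
\end{verbatim}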
The 850 $\mu$m submillimetre (sub-mm) mapping of the NEPW field is currently ongoing as one of the large programs with the JCMT/SCUBA-2 \citep{Shim20}. They first released a mosaic map and a catalogue for the central 2 deg$^2$ area, providing 549 sources above 4-$\sigma$ with a depth of 1.0--2.3 mJy per beam. The source matching against the AKARI-HSC clean catalogue was carried out based on the likelihood ratio. We derived the counterpart probability for each 850 $\mu$m source using the magnitude distributions and colours of the IRAC1 and IRAC2 bands (which are deeper than the AKARI NIR bands) as well as those of the three SPIRE bands. We took 46 sources as robust AKARI counterparts of the 850 $\mu$m sources because they are matched to both IRAC and SPIRE with high (95\%) probability. We also included 16 sources as decent AKARI counterparts because in these cases there is only one IRAC/SPIRE source within the 850 $\mu$m beam. Lastly, 4 sources are matched to IRAC with high probability, but we are uncertain about their SPIRE cross-identification.
However, when multiple optical sources are associated with a given SPIRE or SCUBA-2 source and the real optical counterpart has already been classified as a flagged source, the identification can become complicated and remains a potential issue.
\section{The Properties of the AKARI Sources Identified by the HSC Survey Data}
\begin{figure*}
\begin{center}
\resizebox{0.9\textwidth}{!}{\includegraphics{f10_cc_diags_Opt.png}}
\caption{Colour-colour diagrams based on the HSC optical and AKARI NIR bands. Violet dots represent the sources classified as star-like, and black dots represent the extragalactic sources with z$_{\rm phot} < 1$, while the grey dots are the sources with z$_{\rm phot} > 1$ \citep{Ho20}. Cyan dots are high-stellarity (sgc$>0.8$) sources. Yellow stars represent the Galactic stars observed by the spectroscopic survey \citep{Shim13}. Red crosses are AGNs (type 1), also confirmed by \citet{Shim13}. Green boxes are AGNs that have X-ray data \citep{Krumpe15,DiazTello17}. Salmon diamonds are galaxies observed by the SCUBA-2 survey \citep{Shim20}. All axes are in units of AB mag. }
\label{fig10}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\resizebox{0.9\textwidth}{!}{\includegraphics{f11_cc_diags_AKRf.png}}
\caption{Colour-colour diagrams based on the HSC and various NIR bands (AKARI, Spitzer, and WISE), with the symbols as in Figure 10, illustrating the star-galaxy separation and how the source locations change in the different colour-colour diagrams. For the top panels (a and b), we present the evolutionary tracks of several model templates from \citet{Polletta07}. All axes are in units of AB mag except for the lower right panel (d), which is given in units of Vega mag for comparison with the diagram of \citet{Wright10}. The magnitude offsets ($\Delta m$) between the AB and Vega systems ($m_{\textrm{AB}} = m_{\textrm{Vega}} + \Delta m$) are 2.70, 3.34, and 5.17 mag for W1, W2, and W3, respectively. }
\label{fig11}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\resizebox{0.9\textwidth}{!}{\includegraphics{f12_cc_diags_MIRs.png}}
\caption{Colour-colour diagrams based on various combinations of NIR to MIR bands, in the same fashion as in Figures 10 and 11. Stars begin to fade out in the MIR bands. For the top panels (a and b), we also present the evolutionary tracks (a$^{\prime}$ and b$^{\prime}$) of several model templates from \citet{Polletta07}. All axes are in units of AB mag. }
\label{fig12}
\end{center}
\end{figure*}
\subsection{Overview: reliability vs random matching }
When we search for a counterpart in a given data set using a search radius ($r$), there is always a possibility of encountering a random source (one not associated with the real counterpart) inside the circle of area $\pi r^{2}$.
This is the probability that a source is captured simply by this circular area placed at a random position in the data.
The higher the source density of the data we match against, the higher the random matching rate becomes; if we use a larger radius, the probability also increases.
\cite{K12} showed that this rate can be expressed in terms of the number density ($n$) of the data and the cross-section, i.e., $n\pi r^{2}$. See the open boxes and the grey solid line in Figure 9a, which are taken from Figure 14 of \cite{K12}.
In practice, these values have to be regarded as upper limits on the false-match probability (the worst case) because, in general, the two data sets being matched are correlated with each other: they are obtained from the same field of view, not from two arbitrary sky positions. Therefore, the matching results are generally better than this estimate (as indicated by the downward arrows).
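As a concrete illustration of this worst-case relation (Python; the numbers correspond to the HSC source density and the radii discussed in this section):
\begin{verbatim}
import math

def random_match_probability(n_per_deg2, radius_arcsec):
    # worst-case false-match probability n * pi * r^2
    r_deg = radius_arcsec / 3600.0
    return n_per_deg2 * math.pi * r_deg ** 2

n_hsc = 4.5e5                     # HSC source density (deg^-2)
for r in (3.0, 1.78, 1.0):        # K12 radius, our radius, 1 arcsec
    print(r, random_match_probability(n_hsc, r))
# roughly 0.98, 0.35 and 0.11, i.e. the upper limits discussed above
\end{verbatim}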
To interpret the source matching in a way consistent with \cite{K12}, we repeated the same analysis using the matching radii determined in Sections 2 and 3 and compared it with the plots in the previous work.
\cite{K12} used a uniform radius (3$^{\prime\prime}$; the open boxes in Figure 9a). In this work, however, the matching radii were chosen based on the mean positional offsets of the matched sources between the data sets. This gives much smaller radii than those in \cite{K12}, even smaller than the PSF sizes, which are also frequently used as matching criteria.
In Figure 9a, the highest number density, that of the HSC data ($\sim 4.5\times10^{5}$ sources per deg$^{2}$), implies a high random matching rate (the filled blue box). Here, we should recall that only 5\% of the sources are matched outside 1.5$^{\prime\prime}$ and 82\% are matched within 1$^{\prime\prime}$, as shown by the grey background histogram, which is the same as in Figure 3d (but with the x- and y-axes transposed/rotated). Therefore, the actual random matching rate is better than that indicated by the filled symbol on the green dot-dashed line.
In the right panel of Figure 9 (b), this grey histogram is re-plotted against the random matching rate, which gives a more straightforward description.
However, it should be noted that this is only an estimate of the probability obtained when the test is performed at random positions; in reality, only a small fraction of the sources suffer from random matching.
The sources matched within 0$^{\prime\prime}$.5 are clearly reliable (below the open circle). In the same fashion, the matching with the other supplementary data (the green, grey, and salmon boxes, and so on), which have much lower number densities, is less affected by random matching and is relatively safe compared to the HSC data.
\subsection{Colour-colour Diagrams}
We describe the photometric properties of the NEPW infrared sources matched to the HSC optical data using various colour-colour diagrams (Figures 10 to 12). The colour-colour diagrams are also helpful for checking, from a statistical standpoint, whether the source matching was accomplished properly. In each diagram, we use several different colours and symbols to distinguish between the different types of sources. Violet dots indicate the sources classified as star-like, which were fitted better by the stellar model templates than by the galaxy templates, following the diagnostic ($\chi^{2}_{\rm star} <\chi^{2}_{\rm galaxy}$) of \cite{Ilbert09}, when the photo-z estimation was performed with \texttt{Le PHARE} \citep{Ho20}.
The sources fitted better by the galaxy templates are divided into two groups by redshift: black dots represent the local (z$_{\rm phot}<1$) galaxies and grey dots represent the high-z ($>1$) objects. To check whether the star-like sources are classified appropriately, we over-plotted (in cyan) the sources with high stellarity (star-galaxy classifier sgc$>0.85$), measured with \texttt{SExtractor} on the CFHT data \citep{H07,J10}. The stellar sequence appears prominently in the optical colour space because stellar SEDs show blackbody-like behaviour, which naturally generates a systematically and statistically well-defined sequence.
In our spectroscopic sample (as explained in Sec. 3.2), we have various types classified by line-emission diagnostics.
We also plot some of them here: spectroscopically confirmed stars (yellow stars), type-1 AGNs (red crosses), AGNs identified in X-ray data (green squares), as well as the sub-mm galaxies detected in the SCUBA-2 survey (salmon diamonds).
Figure 10 shows the colour-colour diagrams using the photometry in the HSC and AKARI NIR bands. The violet dots form a distinct track, the cyan dots trace this track exactly, and five spectroscopic stars are overlaid on them.
These star-like sources (as well as the point sources) seem to be representative of the Galactic stars, although obviously not all of them are stars (i.e., quite a few real galaxies that just happen to fall on the stellar locus lie in the vicinity of the sequence).
This implies that the source matching was properly achieved and the star-galaxy separation was also effectively done.
In the optical colour-colour diagrams, the stellar sequence overlaps with the extragalactic sources, positionally entangled with them in the same area (Figures 10a and 10b), but when the NIR bands are involved, the stellar sequence becomes separated from the extragalactic populations (Figures 10c and 10d). In the NIR colour-colour diagrams, however, the stars gather in a rather circular/elliptical area instead of forming a track (Figure 11). They gradually disappear at longer wavelengths (Figure 12).
In the optical colour-colour diagrams (Figure 10), the black and grey dots (local and high-z galaxies, respectively) gather mostly in the same area, but the grey dots appear more widespread. In Figure 11, the black dots gather in a place apparently different from the grey dots, which spread towards redder colours, consistent with the photo-z classification and implying high-z populations. This separation becomes obvious in the $N2-N4$ colour (Figures 12b and 12c), which appears to be a very good selector of high-z objects or AGNs, seemingly related to hot dust heated by energetic sources \citep{Fritz06,Lee09}.
In Figures 11 and 12, we present redshift tracks ($0<z<5.5$) for comparison with the colour-colour diagrams based on the HSC and AKARI NIR bands (see Figures 11a$^{\prime}$, 11b$^{\prime}$, 12a$^{\prime}$, and 12b$^{\prime}$). They enable us to grasp the characteristics of the sources by comparing the trajectories of typical models with the real galaxies observed by the optical-IR surveys (as well as with the symbols in the top panels, Figures 11a, 11b, 12a, and 12b). In Figure 12, it should be noted that the model tracks overlap in certain areas, suggesting a partial mixture of SFGs and AGNs. For selecting AGN types, the $N2-N4$ colour seems more effective when used in combination with the MIR bands (e.g., $N4-S7$ or $S7-S11$).
The AGN types and SMM galaxies stay close to the black/grey dots, and it is not easy to discern clear trends. However, the green boxes overlap widely with all the extragalactic sources (black and grey dots), while the red crosses (type 1) tend to stay in a specific area throughout the colour-colour diagrams. The salmon diamonds (SMM galaxies) are also spread over the black and grey dots, but more widely than the green boxes, and they tend to stay around the grey dots, implying that the SMM galaxies are more likely to be high-z populations.
On the other hand, it will be interesting to see in follow-up studies (e.g., Poliszczuk et al. in prep.; Chen et al. in prep.)
whether machine learning algorithms, such as support vector machines or deep neural networks, can perform more effective separations in various colour/parameter spaces (not just in two-dimensional projections of them).
\section{Summary and Conclusion }
The NEP field has been a long-standing target since it was surveyed by the legacy program of the AKARI space telescope \citep{Serjeant12}. The previous optical surveys \citep{H07,J10} were incomplete, which was a strong motivation to obtain deep Subaru/HSC optical data covering the entire NEP field \citep{Goto17}.
We achieved faint detection limits \citep{Oi20}, which enabled us to identify faint AKARI sources in the near- and mid-IR bands and initiated a variety of new studies.
We constructed a band-merged catalogue containing photometric information for 42 bands from UV to the sub-mm 500$\mu$m.
The photo-zs for the NEPW sources were derived based on this data with all available redshift information \citep{Ho20}, and were incorporated into the catalogue as well.
We investigated the photometric properties of the NEPW sources observed by the HSC using colour-colour diagrams based on this band-merged catalogue.
We can roughly see how the shape of the stellar sequence changes and which areas the AGN types prefer in the different colour spaces as the observed wavelength increases, although it is difficult to identify clear trends for the extragalactic populations because no quantitative analysis has been made.
This band-merging gives us the benefits of constructing full SEDs for abundant dusty galaxy samples for SED modeling, e.g., using CIGALE \citep{Boquien19} or MAGPHYS \citep{daCunha08}, especially taking advantage of the uniqueness of the continuous MIR coverage as well as a wide range of panchromatic photometry. It provides more opportunities to disentangle otherwise degenerate properties of galaxies or to excavate hidden information for a better understanding of the physics behind various IR features.
Owing to the unique filter coverage of AKARI, this legacy data set remains the only survey with continuous mid-IR imaging until JWST carries out its first look at the sky. The science in this NEP field is currently driven by Subaru/HSC \citep{Oi20}, SCUBA-2 \citep{Shim20}, and homogeneous spectroscopic surveys.
Since many future space missions are planning to conduct deep observations of this area -- e.g., Euclid \citep{Laureijs11}, JWST \citep{Gardner06}, SPHEREx \citep{Dore16,Dore18}, etc.,
a great deal of synergy is expected together with the legacy data as well as our ongoing campaigns.
\section*{Acknowledgements}
We thank the referees for the careful reading and constructive suggestions to improve this paper.
This work is based on observations with AKARI, a JAXA project with the participation of ESA, universities, and companies in Japan, Korea, and the UK. This work is also based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. Support for this work was provided by NASA
through an award issued by JPL/Caltech. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator
consortia and with important participation from NASA. TG acknowledges the support of the Ministry of Science and Technology of Taiwan through grants 105-2112-M-007-003-MY3 and 108-2628-M-007-004-MY3. HShim acknowledges the support from the National Research Foundation of Korea grant No. 2018R1C1B6008498. TH is supported by the Centre for Informatics and Computation in Astronomy (CICA) at National Tsing Hua University (NTHU) through a grant from the Ministry of Education of the Republic of China (Taiwan).
\section*{Data availability}
The band-merged catalogue in this work is available at Zenodo (https://zenodo.org/record/4007668$\#$.X5aG8XX7SuQ). Other data addressed in this work will be shared on reasonable request to the corresponding author.
|
1,108,101,564,297 | arxiv | \section{Introduction}
Let $ 0 < T < \infty $ and let $ \Omega \subset \mathbb R^d $ ($d=1,2,3$) be a
bounded domain with Lipschitz continuous boundary. Assume that $ \mathcal A $ is
the realization of a second-order partial differential operator with homogeneous
Dirichlet boundary condition in $ L^2(\Omega) $. We consider the following
fractional diffusion equation:
\begin{equation}
\label{eq:model}
\D_{0+}^\alpha y (t) - \mathcal A y (t) = f (t), \quad 0 < t \leqslant T,
\quad\text{ with } y(0) = 0,
\end{equation}
where $ 0 < \alpha < 1 $, $ \D_{0+}^\alpha $ is a Riemann-Liouville fractional
differential operator of order $ \alpha $, and $ f $ is a given function.
The L1 scheme is one of the most popular numerical methods for fractional
diffusion equations. Lin and Xu \cite{Lin2007} analyzed the L1 scheme for the
fractional diffusion equation and obtained the temporal accuracy $
O(\tau^{2-\alpha})$ with $0< \alpha <1$, where $\tau$ denotes the time step
size. Sun and Wu \cite{sun2006fully} proposed the L1 scheme and derived temporal
accuracy $ O(\tau^{3-\alpha})$ with $1<\alpha <2$ for the fractional wave
equation. The analyses in the above two papers both assume that the underlying
solution is sufficiently smooth. However, Jin et~al.~\cite{Jin2016-L1} proved
that the L1 scheme is of only first-order temporal accuracy for fractional
diffusion equations with non-vanishing initial value, and Jin et~al.~\cite[Lemma
4.2]{Jin2018} derived only first-order temporal accuracy for an inhomogeneous
fractional equation. This phenomenon is caused by the well-known fact that the
solution of a fractional diffusion equation generally has singularity in time no
matter how smooth the data are, and it indicates that numerical analysis without
regularity restrictions on the solution is important for the fractional
diffusion equation. Recently, Yan et al.~\cite{Yan2018} proposed a modified L1
scheme for a fractional diffusion equation, which has $ (2-\alpha) $-order
temporal accuracy. For the L1 scheme with nonuniform grids, we refer the reader
to \cite{Stynes2017,Liao2018}; we also note that analyzing the L1 scheme with
nonuniform grids for a fractional diffusion equation with nonsmooth initial
value remains an open problem.
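For the reader's convenience, we recall the standard uniform-grid form of the L1 approximation (the notation here is ours and is given only for orientation). With step size $ \tau = T/J $, grid points $ t_j = j\tau $ and vanishing initial value (so that the Riemann-Liouville and Caputo derivatives coincide), replacing $ y $ by its piecewise linear interpolant on the temporal grid yields
\[
\D_{0+}^\alpha y(t_j) \approx
\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}
\sum_{k=1}^{j} b_{j-k} \big( y(t_k) - y(t_{k-1}) \big),
\qquad
b_i := (i+1)^{1-\alpha} - i^{1-\alpha}.
\]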
Although sectorial operators are considered, the theoretical results in
\cite{Jin2016-L1,Yan2018} cannot be applied to a fractional diffusion equation
with an arbitrary sectorial operator, since they require the spectral angle of
the sectorial operator to be no greater than $ \pi/4 $ (cf.~\cite[Remark
3.8]{Jin2016-L1}); that is, the resolvent set of this operator must contain $ \{z
\in \mathbb C \setminus \{0\}:\, \snm{\operatorname{Arg} z} < 3\pi/4\} $. In our
work, the analysis applies to an arbitrary sectorial operator with spectral
angle $ < \pi/2 $.
As the fractional diffusion equation is an extension of the normal diffusion
equation, the solution of a fractional diffusion equation will naturally
converge to the solution of a normal diffusion equation as $ \alpha \to {1-} $,
and hence the L1 scheme is expected to be robust as $ \alpha \to {1-}
$. Recently, Huang et~al.~\cite{Huang2020} obtained an $ \alpha $-robust error
estimate for a multi-term fractional diffusion problem. However, to the best of our
knowledge, the $ \alpha $-robust convergence of the L1 scheme with an arbitrary
sectorial operator is not available in the literature. Here we note that the
constants in the error estimates in \cite{Lubich1996,Jin2016,Jin2016-L1,Yan2018}
all depend on $ \alpha $ and that the constants in the error estimates in
\cite{Jin2018-time-dependtn} will clearly blow up as $ \alpha \to {1-} $. This
motivates us to develop new techniques to analyze the convergence of the L1
scheme with an arbitrary sectorial operator and to investigate the robustness of
the L1 scheme as $ \alpha \to {1-} $.
The theory of inverse problems for differential equations has been extensively
developed within the framework of mathematical physics. One important class of
inverse problems for parabolic equations is to reconstruct the source term, the
initial value or the boundary conditions from the value of the solution at the
final time; see \cite{Prilepko2000,Samarskii2007}. The time fractional diffusion
equation is an extension of the normal diffusion equation, widely used to model
the physical phenomena with memory effect. Hence, this paper considers the
source term identification of a time fractional diffusion equation, based on the
value of the solution at the final time. For the related theoretical results, we
refer the reader to \cite{Jin2012,Liu2010,Murio2007,Tuan2016,Tuan2017,Wei2014}
and the references therein. We apply the famous Tikhonov regularization
technique to this inverse problem and establish the convergence of its temporal
discretization that uses the L1 scheme.
The main contributions of this paper are as follows:
\begin{enumerate}
\item the convergence of the L1 scheme for solving time fractional diffusion
equations with an arbitrary sectorial operator of spectral angle $ < \pi/2 $
is established;
\item the constants in the derived error estimates will not blow up as $
\alpha \to {1-} $, which shows that the L1 scheme is robust as $ \alpha \to
{1-} $;
\item the convergence analysis of a temporally discrete inverse problem
subject to a fractional diffusion equation is provided.
\end{enumerate}
Moreover, a feature of the error estimates in this paper is that, by passing to
the limit $ \alpha \to {1-} $, they immediately yield the corresponding error
estimates for the backward Euler scheme.
Before concluding this section, we would also like to mention two important
algorithms for solving fractional diffusion equations. The first algorithm uses
the convolution quadrature proposed by Lubich \cite{Lubich1986,Lubich1988}.
Lubich et al.~\cite{Lubich1996,Cuesta2006} first used the convolution
quadrature to design numerical methods for fractional diffusion-wave equations,
and then Jin et al.~\cite{Jin2016,Jin-corre-2017} further developed these
algorithms. The second algorithm employs the Galerkin methods to discretize the
time fractional operators, which were first developed by McLean and
Mustapha
\cite{Mclean2009Convergence,Mustapha2009Discontinuous,Mustapha2011Piecewise,Mustapha2014A}.
The rest of the paper is organized as follows. Section 2 introduces some
conventions, the definitions of $ \mathcal A $ and $ \mathcal A^* $, the
Riemann-Liouville fractional operators and the mild solution theory of
linear fractional diffusion equations. Section 3 derives the convergence of the
L1 scheme. Section 4 investigates an inverse problem of a fractional diffusion
equation and establishes the convergence of a temporally discrete inverse
problem. Finally, Section 5 presents three numerical experiments that verify the
theoretical results.
\section{Preliminaries}
\label{sec:pre}
Throughout this paper, we will use the following conventions: for each linear
vector space, the scalars are the complex numbers; $ H_0^1(\Omega) $ is
a standard complex-valued Sobolev space, and $ H^{-1}(\Omega) $ is the usual
dual space of $ H_0^1(\Omega) $; $ \mathcal L(L^2(\Omega)) $ is the space of all
bounded linear operators on $ L^2(\Omega) $; for a Banach space $\mathcal{B} $,
we use $ \dual{\cdot,\cdot}_\mathcal{B} $ to denote a duality paring between $
\mathcal{B}^* $ (the dual space of $ \mathcal{B} $) and $\mathcal{B} $; for a
Lebesgue measurable subset $ \mathcal D \subset \mathbb R^l $, $ 1 \leqslant l
\leqslant 4 $, $ \dual{p, q}_{\mathcal D} $ means the integral $ \int_{\mathcal
D} p \overline q $, where $ \overline q $ is the conjugate of $ q $; for a
function $ v $ defined on $ (0,T) $, by $ v(t-) $, $ 0 < t \leqslant T $, we
mean the limit $ \lim_{s \to t-} v(s) $; the notations $c_\times, d_\times, C_\times $ mean some
positive constants and their values may differ
at each occurrence. In addition, for any $ 0 < \theta < \pi $, define
\begin{align}
\Sigma_\theta &:= \{
z \in \mathbb C \setminus \{0\}:
-\theta < \operatorname{Arg} z < \theta
\}, \label{eq:Sigma_theta-def} \\
\Gamma_{\theta} &:= \{
z \in \mathbb C \setminus \{0\}:\
\snm{\operatorname{Arg} z} = \theta
\} \cup \{0\} \label{eq:Upsilon-def} \\
\Upsilon_\theta &:= \{
z \in \Gamma_{\theta}:\
-\pi \leqslant \Im z \leqslant \pi
\}, \label{eq:Upsilon1-def}
\end{align}
where $ \Gamma_{\theta} $ and $ \Upsilon_\theta $ are so oriented that the
negative real axis is to their left. For the integral $ \int_{\Gamma_\theta} v
\, \mathrm{d}z $ or $ \int_{\Upsilon_\theta} v \, \mathrm{d}z $, if $ v $ has a
singularity or is not defined at the origin, then $ \Gamma_\theta $ or $
\Upsilon_\theta $ should be deformed so that the origin is to its left; for
example, $ \Gamma_\theta $ is deformed to
\[
\{
z \in \mathbb C: \, \snm{z} > \epsilon, \,
\snm{\operatorname{Arg} z} = \theta
\} \cup \{
z \in \mathbb C: \, \snm{z} = \epsilon, \,
\snm{\operatorname{Arg} z} \leqslant \theta
\},
\]
where $ 0 < \epsilon < \infty $.
\medskip\noindent{\bf Riemann-Liouville fractional calculus operators.} Assume
that $ -\infty \leqslant a < b \leqslant \infty $ and $ X $ is a Banach space.
For any $ \gamma > 0 $, define
\begin{align*}
\left( \D_{a+}^{-\gamma} v\right)(t) &:=
\frac1{ \Gamma(\gamma) }
\int_a^t (t-s)^{\gamma-1} v(s) \, \mathrm{d}s,
\quad \text{a.e.}~t \in (a,b), \\
\left(\D_{b-}^{-\gamma} v\right)(t) &:=
\frac1{ \Gamma(\gamma) }
\int_t^b (s-t)^{\gamma-1} v(s) \, \mathrm{d}s,
\quad\text{a.e.}~t \in (a,b),
\end{align*}
for all $ v \in L^1(a,b;X) $, where $ \Gamma(\cdot) $ is the gamma function. In
addition, let $ \D_{a+}^0 $ and $ \D_{b-}^0 $ be the identity operator on $
L^1(a,b;X) $. For $ j - 1 < \gamma \leqslant j $, $ j \in \mathbb N_{>0} $,
define
\begin{align*}
\D_{a+}^\gamma v & := \D^j \, \D_{a+}^{\gamma-j}v, \\
\D_{b-}^\gamma v & := (-\D)^j \, \D_{b-}^{\gamma-j}v,
\end{align*}
for all $ v \in L^1(a,b;X) $, where $ \D $ is the first-order differential
operator in the distribution sense.
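As a simple illustration of these definitions, we record the well-known identities for power functions: for $ v(t) = (t-a)^\beta $ with $ \beta > -1 $ and $ \gamma > 0 $,
\[
\big( \D_{a+}^{-\gamma} v \big)(t)
= \frac{\Gamma(\beta+1)}{\Gamma(\beta+\gamma+1)} (t-a)^{\beta+\gamma},
\qquad
\big( \D_{a+}^{\gamma} v \big)(t)
= \frac{\Gamma(\beta+1)}{\Gamma(\beta-\gamma+1)} (t-a)^{\beta-\gamma},
\]
with the convention that $ 1/\Gamma $ vanishes at the poles of $ \Gamma $. In particular, $ \D_{a+}^\gamma $ applied to a nonzero constant does not vanish when $ \gamma \notin \mathbb N $, in contrast to the integer-order case.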
\medskip\noindent{\bf Definitions of $ \mathcal A $ and $ \mathcal A^* $.} Let $
\mathcal A: H_0^1(\Omega) \to H^{-1}(\Omega) $ be a second-order partial
differential operator of the form
\[
\mathcal A v := \sum_{i,j=1}^d \frac{\partial}{\partial x_i}
(a_{ij}(x) \frac{\partial}{\partial x_j} v ) +
b(x) \cdot \nabla v + c(x)v,
\quad \forall v \in H_0^1(\Omega),
\]
where $ a_{ij} \in L^\infty(\Omega) $, $ b \in [L^\infty(\Omega)]^d $ and $
c \in L^\infty(\Omega) $ are real-valued. Assume that $ \mathcal A:
H_0^1(\Omega) \to H^{-1}(\Omega) $ is a sectorial operator satisfying that
\begin{subequations}
\begin{numcases}{}
\rho(\mathcal A) \supset \Sigma_{\omega_0},
\label{eq:rho(A)} \\
\nm{R(z,\mathcal A)}_{\mathcal L(L^2(\Omega))} \leqslant
\mathcal M_0 \, |z|^{-1} \quad
\forall z \in \Sigma_{\omega_0},
\label{eq:R(z,A)} \\
\Re \dual{\mathcal Av,v}_{H_0^1(\Omega)} \leqslant 0,
\quad \forall v \in H_0^1(\Omega),
\label{eq:A-positive}
\end{numcases}
\end{subequations}
where $ \rho(\mathcal A) $ is the resolvent set of $ \mathcal A $, $ \pi/2 <
\omega_0 < \pi $, $ R(z,\mathcal A) := (z-\mathcal A)^{-1} $, and $ \mathcal M_0
$ is a positive constant. Define the adjoint operator $ \mathcal A^*:
H_0^1(\Omega) \to H^{-1}(\Omega) $ of $ \mathcal A $ by
\[
\mathcal A^* v := \sum_{i,j=1}^d \frac{\partial}{\partial_{x_j}}
(a_{ij}(x) \frac{\partial}{\partial x_i} v ) -
\nabla \cdot ( b(x) v) + c(x)v,
\quad \forall v \in H_0^1(\Omega).
\]
It is evident that
\[
\dual{\mathcal A v, w}_{H_0^1(\Omega)} =
\overline{
\dual{\mathcal A^*w, v}_{H_0^1(\Omega)}
}
\quad\text{ for all } v, w \in H_0^1(\Omega).
\]
\medskip\noindent{\bf Solutions of the fractional diffusion equation.} For any $ t >
0 $, define
\begin{equation}
\label{eq:E-def}
E(t) := \frac1{2\pi i} \int_{\Gamma_{\omega_0}}
e^{tz} R(z^\alpha,\mathcal A) \, \mathrm{d}z.
\end{equation}
By (\ref{eq:R(z,A)}), it is evident that $ E $ is an $ \mathcal L(L^2(\Omega))
$-valued analytic function on $ (0,\infty) $. Moreover, a direct computation
gives the following two estimates (cf.~Jin et al.~\cite{Jin2016}): for any $ t >
0 $,
\begin{align}
\nm{E(t)}_{\mathcal L(L^2(\Omega))}
& \leqslant C_{\omega_0,\mathcal M_0} t^{\alpha-1},
\label{eq:E} \\
\nm{E'(t)}_{\mathcal L(L^2(\Omega))}
& \leqslant C_{\omega_0,\mathcal M_0} t^{\alpha-2}.
\label{eq:E'}
\end{align}
For any $ g \in L^1(0,T;L^2(\Omega)) $, we call
\begin{equation}
\label{eq:Sg-l1}
(Sg)(t) := (E*g) (t) = \int_0^t E(t-s) g(s) \, \mathrm{d}s,
\quad \text{a.e.}~0 < t \leqslant T,
\end{equation}
the mild solution to the following fractional diffusion equation
\begin{equation}
\label{eq:linear}
(\D_{0+}^\alpha - \mathcal A) w = g, \quad \mbox{with} \; w(0)=0,
\end{equation}
where the symbol $*$ denotes the convolution.
If $ g = v \delta_0 $ with $ v \in L^2(\Omega) $ and $ \delta_0 $ being the
Dirac measure (in time) concentrated at $ t=0 $, then we call
\begin{equation}
\label{eq:Sdelta}
(S(v\delta_0))(t) := E(t) v, \quad 0 < t \leqslant T,
\end{equation}
the mild solution to equation \eqref{eq:linear}. Symmetrically, for any $ g \in
L^1(0,T;L^2(\Omega)) $, we call
\begin{equation}
\label{eq:S*g}
(S^*g)(t) := \int_t^T E^*(s-t) g(s) \, \mathrm{d}s,
\quad \text{a.e.}~0 < t < T,
\end{equation}
the mild solution to the following backward fractional diffusion equation:
\begin{equation}
\label{eq:linear*}
(\D_{T-}^\alpha - \mathcal A^*) w = g, \quad \mbox{with} \; w (T)=0.
\end{equation}
If $ g = v\delta_T $ with $ v \in L^2(\Omega) $ and $ \delta_T $ being the Dirac
measure (in time) concentrated at $ t=T $, then we call
\begin{equation}
\label{eq:S*delta}
(S^*(v\delta_T))(t) := E^*(T-t) v, \quad 0 < t \leqslant T,
\end{equation}
the mild solution to equation \eqref{eq:linear*}. The above $ E^* $ is defined by
\begin{equation}
\label{eq:E*-def}
E^*(t) := \frac1{2\pi i} \int_{\Gamma_{\omega_0}}
e^{tz} R(z^\alpha,\mathcal A^*) \, \mathrm{d}z,
\quad t > 0.
\end{equation}
Similarly to \eqref{eq:E}, \eqref{eq:E'}, for any $ t>0 $, we have
\begin{align}
\nm{E^*(t)}_{\mathcal L(L^2(\Omega))}
& \leqslant C_{\omega_0,\mathcal M_0} t^{\alpha-1},
\label{eq:E*} \\
\nm{(E^*)'(t)}_{\mathcal L(L^2(\Omega))}
& \leqslant C_{\omega_0,\mathcal M_0} t^{\alpha-2}.
\label{eq:E*'}
\end{align}
Evidently, for any $ t > 0 $, $ E^*(t) $ is the adjoint operator of $ E(t) $ in
the sense that
\begin{equation}
\label{eq:E-E*}
\dual{E(t)v, w}_\Omega = \dual{v, E^*(t)w}_\Omega
\quad \forall v, w \in L^2(\Omega).
\end{equation}
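When $ \mathcal A $ is replaced by a matrix $ A $ (for instance a finite
difference discretization of the Laplacian), $ E(t)v $ can be approximated by
discretizing the contour in \eqref{eq:E-def}. The Python sketch below is only
our own illustration and is not used later: it assumes that $ A $ is real with
spectrum in $ (-\infty,0) $, so that the lower ray of $ \Gamma_{\omega_0} $
contributes the complex conjugate of the upper ray, and it uses an ad hoc
truncation radius and a midpoint rule.
\begin{verbatim}
import numpy as np

def E_apply(t, v, A, alpha, omega0=0.75*np.pi, R=200.0, n=2000):
    """Approximate E(t) v for a real matrix A with spectrum in (-inf, 0):
    discretize the upper ray z = r*exp(i*omega0), 0 < r <= R, by the midpoint
    rule; the lower ray contributes the complex conjugate, so
    E(t) v = (1/pi) * Im( int_0^R exp(t z) (z^alpha I - A)^{-1} v e^{i omega0} dr )."""
    I = np.eye(A.shape[0])
    dr = R / n
    acc = np.zeros(A.shape[0], dtype=complex)
    for k in range(n):
        r = (k + 0.5) * dr
        z = r * np.exp(1j * omega0)
        acc += (np.exp(t*z) * np.linalg.solve(z**alpha * I - A, v)
                * np.exp(1j*omega0) * dr)
    return acc.imag / np.pi

# small demonstration: 1-D finite-difference Laplacian, Dirichlet conditions
m = 50
h = 1.0 / (m + 1)
A = (np.diag(-2.0*np.ones(m)) + np.diag(np.ones(m-1), 1)
     + np.diag(np.ones(m-1), -1)) / h**2
x = np.linspace(h, 1.0 - h, m)
v = np.sin(np.pi * x)
w1 = E_apply(0.1, v, A, alpha=0.5, n=2000)
w2 = E_apply(0.1, v, A, alpha=0.5, n=4000)
print(np.linalg.norm(w1 - w2))   # self-consistency of the quadrature
\end{verbatim}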
\begin{remark}
By \eqref{eq:E}, a routine calculation (cf.~\cite[Theorem 2.6]{Diethelm2010})
yields that
\begin{equation}
\label{eq:Sg-C}
\nm{Sg}_{C([0,T];L^2(\Omega))} \leqslant
C_{\alpha,q,\omega_0,\mathcal M_0,T} \nm{g}_{L^q(0,T;L^2(\Omega))}
\end{equation}
for all $ g \in L^q(0,T;L^2(\Omega)) $ with $ q > 1/\alpha $.
\end{remark}
\begin{remark}
For the above solution theory of fractional diffusion equations, we refer the
reader to \cite{Lubich1996,McLean2010-B,Jin2016}.
\end{remark}
\medskip\noindent{\bf The L1 scheme.} Let $ J \in \mathbb N_{>0} $ and define $
t_j := j\tau $ for each $j=0, 1, 2, \dots, J$, where $ \tau := T/J $. Define $
b_j := j^{1-\alpha}/\Gamma(2-\alpha)$ for each $ j \in \mathbb N $. Assume that
$ g \in L^1(0,T;H^{-1}(\Omega)) $. Applying the L1 scheme \cite{Lin2007} to
problem \eqref{eq:linear} yields the following discretization: seek $
\{W_j\}_{j=1}^J \subset H_0^1(\Omega) $ such that, for any $ 1 \leqslant k
\leqslant J $,
\begin{equation}
\label{eq:L1}
b_1 W_k + \sum_{j=1}^{k-1} (b_{k-j+1}-2b_{k-j}+b_{k-j-1})
W_j - \tau^\alpha \mathcal A W_k =
\tau^{\alpha-1} \int_{t_{k-1}}^{t_k} g(t) \, \mathrm{d}t
\end{equation}
in $ H^{-1}(\Omega) $, where $ W_{j} $, $ 1 \leqslant j \leqslant J $, is an
approximation of $ w(t_{j}) $. Symmetrically, applying the L1 scheme to problem
\eqref{eq:linear*} yields the following discretization: seek $ \{\mathcal
W_j\}_{j=1}^J \subset H_0^1(\Omega) $ such that, for any $ 1 \leqslant k
\leqslant J $,
\begin{equation}
\label{eq:L1*}
b_1 \mathcal W_k + \sum_{j=k+1}^J ( b_{j-k+1} - 2b_{j-k} + b_{j-k-1} )
\mathcal W_j - \tau^\alpha \mathcal A^* \mathcal W_k =
\tau^{\alpha-1} \int_{t_{k-1}}^{t_k} g(t) \, \mathrm{d}t
\end{equation}
in $ H^{-1}(\Omega) $. For each $ 1 \leqslant j \leqslant J $, we will use $
S_{\tau,j} g $ and $ S_{\tau,j}^* g $ to denote the above $ W_j $ and $ \mathcal
W_j $, respectively, that is
\begin{equation} \label{eq:Staug}
S_{\tau,j} g := W_j, \quad S_{\tau,j}^* g := \mathcal W_j.
\end{equation}
In addition, for each $ 1 \leqslant j \leqslant J $, we
define
\begin{equation}
\label{eq:Stau-delta}
S_{\tau,j}(v\delta_0) :=
S_{\tau,j}(v\widehat\delta_0), \quad
S_{\tau,j}^*(v\delta_T) :=
S_{\tau,j}^*(v\widehat\delta_T),
\end{equation}
where $ v \in H^{-1}(\Omega) $ and
\begin{align}
\widehat\delta_0(t) &:=
\begin{cases}
\tau^{-1} & \text{ if } 0 < t < t_1, \\
0 & \text{ if } t_1 < t < T,
\end{cases} \label{eq:delta_0} \\
\widehat\delta_T(t) &:=
\begin{cases}
0 & \text{ if } 0 < t < t_{J-1}, \\
\tau^{-1} & \text{ if } t_{J-1} < t < T.
\end{cases}
\label{eq:delta_T}
\end{align}
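In matrix form, with $ \mathcal A $ replaced by a matrix $ A $ arising from a
spatial discretization, \eqref{eq:L1} is a time marching scheme in which every
step requires one linear solve with the fixed matrix $ b_1 I - \tau^\alpha A $.
The following Python sketch is only our own illustration (the finite difference
Laplacian and the source are placeholders); the backward scheme \eqref{eq:L1*}
is implemented analogously by reversing the time index.
\begin{verbatim}
import numpy as np
from math import gamma

def l1_solve(A, G, T, alpha):
    """March the L1 scheme (eq:L1): for k = 1..J,
       b_1 W_k + sum_{j<k} (b_{k-j+1} - 2 b_{k-j} + b_{k-j-1}) W_j
               - tau^alpha A W_k = tau^(alpha-1) * int_{t_{k-1}}^{t_k} g dt.
    G[k-1] must contain the integral of g over (t_{k-1}, t_k) (a vector)."""
    J = len(G)
    tau = T / J
    d = A.shape[0]
    b = np.array([j**(1.0 - alpha) / gamma(2.0 - alpha) for j in range(J + 2)])
    M = b[1] * np.eye(d) - tau**alpha * A     # same matrix at every step
    W = []
    for k in range(1, J + 1):
        rhs = tau**(alpha - 1.0) * G[k - 1]
        for j in range(1, k):                 # history term
            rhs = rhs - (b[k-j+1] - 2.0*b[k-j] + b[k-j-1]) * W[j-1]
        W.append(np.linalg.solve(M, rhs))
    return W

# demonstration: 1-D Laplacian and a time-independent source g(x) = x^(-0.49)
m, J, T, alpha = 200, 128, 0.1, 0.5
h = 1.0 / (m + 1)
A = (np.diag(-2.0*np.ones(m)) + np.diag(np.ones(m-1), 1)
     + np.diag(np.ones(m-1), -1)) / h**2
x = np.linspace(h, 1.0 - h, m)
tau = T / J
G = [tau * x**(-0.49) for _ in range(J)]      # exact time integrals of g
W = l1_solve(A, G, T, alpha)
print(len(W), W[-1].shape)
\end{verbatim}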
\section{Convergence of the L1 scheme}
\label{sec:L1}
\begin{theorem}
\label{thm:conv-Stau}
Let $ 0< \alpha <1$. Let $Sg$ and $S_{\tau, j} g$ be defined by \eqref{eq:Sg-l1}
and \eqref{eq:Staug}, respectively. Then we have the following estimates:
\begin{enumerate}
\item
For any $ g \in L^\infty(0,T;L^2(\Omega)) $,
\begin{small}
\begin{equation}
\label{eq:S-Stau-g}
\max_{1 \leqslant j \leqslant J}
\nm{(Sg)(t_j)-S_{\tau,j}g}_{L^2(\Omega)}
\leqslant C_{\omega_0,\mathcal M_0}
\tau^\alpha \Big(
\frac1\alpha + \frac{1-J^{\alpha-1}}{1-\alpha}
\Big) \nm{g}_{L^\infty(0,T;L^2(\Omega))}.
\end{equation}
\end{small}
\item
For any $ v \in L^2(\Omega) $,
\begin{small}
\begin{align}
\max_{1 \leqslant j \leqslant J} j^{2-\alpha} \nm{
S(v\delta_0)(t_j) - S_{\tau,j}(v\delta_0)
}_{L^2(\Omega)} & \leqslant
C_{\omega_0,\mathcal M_0} \tau^{\alpha-1}
\nm{v}_{L^2(\Omega)},
\label{eq:S-Stau-vdelta} \\
\sum_{j=1}^J \nm{
S(v\delta_0) - S_{\tau,j}(v\delta_0)
}_{L^1(t_{j-1},t_j;L^2(\Omega))} & \leqslant
C_{\omega_0,\mathcal M_0} \tau^\alpha
\Big(
\frac1\alpha + \frac{1-J^{\alpha-1}}{1-\alpha}
\Big) \nm{v}_{L^2(\Omega)}.
\label{eq:S-Stau-vdelta-l1}
\end{align}
\end{small}
\end{enumerate}
\end{theorem}
\begin{remark}
Assume that $ g \in L^\infty(0,T;L^2(\Omega)) $. As $ \alpha \to {1-} $ we have
$ b_j \to 1 $ for every $ j \geqslant 1 $ while $ b_0 = 0 $, so in \eqref{eq:L1}
the history weights $ b_{k-j+1}-2b_{k-j}+b_{k-j-1} $ vanish in the limit except
for $ j = k-1 $, where they tend to $ -1 $. Passing to the limit $ \alpha
\to {1-} $ in \eqref{eq:L1} and \eqref{eq:S-Stau-g} therefore yields that, for the parabolic equation
\[
w' - \mathcal A w = g, \quad\text{with $ w(0) = 0 $},
\]
and the corresponding backward Euler scheme
\[
\begin{cases}
W_0 = 0, \\
W_k - W_{k-1} - \tau \mathcal A W_k =
\int_{t_{k-1}}^{t_k} g(t) \, \mathrm{d}t,
\quad 1 \leqslant k \leqslant J,
\end{cases}
\]
one has the error estimate, noting that $\lim_{\alpha \to 1} \frac{1-J^{\alpha-1}}{1-\alpha} = \ln J$,
\[
\max_{1 \leqslant j \leqslant J}
\nm{w(t_j)-W_j}_{L^2(\Omega)}
\leqslant C_{\omega_0,\mathcal M_0}
(1 + \ln J) \tau \nm{g}_{L^\infty(0,T;L^2(\Omega))}.
\]
\end{remark}
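The limit described in this remark can also be observed numerically on a scalar
test problem. The following Python sketch is only our own illustration
($ \mathcal A = -1 $, constant source, arbitrary step number): the printed
maximal differences between the L1 solution and the backward Euler solution
become small as $ \alpha \to 1- $.
\begin{verbatim}
import numpy as np
from math import gamma

def l1_scalar(alpha, lam, g, J, T):
    """L1 scheme (eq:L1) for the scalar problem D^alpha w - lam*w = g, w(0) = 0,
    with a constant source g."""
    tau = T / J
    b = [j**(1.0 - alpha) / gamma(2.0 - alpha) for j in range(J + 2)]
    W = []
    for k in range(1, J + 1):
        rhs = tau**(alpha - 1.0) * (tau * g)
        for j in range(1, k):
            rhs -= (b[k-j+1] - 2.0*b[k-j] + b[k-j-1]) * W[j-1]
        W.append(rhs / (b[1] - tau**alpha * lam))
    return np.array(W)

def backward_euler(lam, g, J, T):
    tau = T / J
    w, W = 0.0, []
    for _ in range(J):
        w = (w + tau * g) / (1.0 - tau * lam)
        W.append(w)
    return np.array(W)

J, T, lam, g = 64, 1.0, -1.0, 1.0
be = backward_euler(lam, g, J, T)
for alpha in [0.9, 0.99, 0.999]:
    print(alpha, np.max(np.abs(l1_scalar(alpha, lam, g, J, T) - be)))
\end{verbatim}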
\begin{remark}
Let us consider the following time fractional diffusion equation
\[
\D_{0+}^\alpha(y - y_0)(t) - \mathcal Ay(t) = 0, \quad
0 < t \leqslant T, \quad
\text{with } y(0) = y_0,
\]
where $ y_0 \in L^2(\Omega) $ is given. Applying the L1 scheme to this
equation yields the following discretization: seek $ \{W_j\}_{j=1}^J \subset
H_0^1(\Omega) $ such that, for any $ 1 \leqslant k \leqslant J $,
\begin{equation*}
b_1 W_k + \sum_{j=1}^{k-1} (b_{k-j+1}-2b_{k-j}+b_{k-j-1})
W_j - \tau^\alpha \mathcal A W_k =
\tau^{\alpha-1} (b_k - b_{k-1}) y_0
\end{equation*}
in $ H^{-1}(\Omega) $. Following the proof of \cite[Theorem 3.1]{Jin2016-L1},
we can use the technical results in Subsection 3.1 to derive that, for any $ 1
\leqslant j \leqslant J $,
\[
\nm{y(t_j) - W_j}_{L^2(\Omega)} \leqslant
C_{\omega_0,\mathcal M_0} \tau t_j^{-1} \nm{y_0}_{L^2(\Omega)}.
\]
\end{remark}
The main task of the rest of this section is to prove the above theorem.
\subsection{Some technical results}
\label{ssec:foo}
Define the discrete Laplace transform of $ \{b_j\}_{j=1}^\infty $ by
\[
\widehat b(z) := \sum_{j=1}^\infty b_j e^{-jz},
\quad z \in \Sigma_{\pi/2}.
\]
By the analytic continuation technique, $ \widehat b $ has an analytic
continuation (cf.~\cite[Equation (21)]{Mclean2015Time})
\begin{equation}
\label{eq:wtb-int}
\widehat b(z) = \frac1{2\pi i}
\int_{-\infty}^{(0+)} \frac{e^{w-z}}{1-e^{w-z}}
w^{\alpha-2} \, \mathrm{d}w,
\quad z \in \Sigma_{\pi},
\end{equation}
where $ \int_{-\infty}^{({0+})} $ denotes an integral along a piecewise smooth,
non-self-intersecting path that encloses the negative real axis and is oriented
counterclockwise, such that $ 0 $ and the points $ \{z+2k\pi i \neq 0: k \in \mathbb Z\} $
lie on different sides of this path. Define
\begin{equation}
\label{eq:psi-def}
\psi(z) := (e^z-1)^2 \, \widehat b(z),
\quad z \in \Sigma_\pi.
\end{equation}
For $ z = x + iy \in \mathbb C \setminus (-\infty,0] $, we have that
(cf.~\cite[Equation (3.7)]{Jin2016-L1})
\begin{small}
\begin{equation}
\label{eq:re-psi}
\Re \big( e^{-z} \psi(z) \big) =
\frac{\sin(\pi(1\!-\!\alpha))}\pi \!
\int_0^\infty \! \frac{
s^{\alpha-2}(1\!-\!e^{-s})
(1\!+\!e^{-2x-s} \!-\! e^{-x-s}\cos y \!-\! e^{-x}\cos y)
}{
1-2e^{-x-s}\cos y + e^{-2x-2s}
} \mathrm{d}s.
\end{equation}
\end{small}
\begin{lemma}
\label{lem:wtb}
For any $ L > 0 $, we have
\begin{equation}
\label{eq:wtb}
\sup_{0 < \alpha < 1} \quad \sup_{
\substack{
z \in \Sigma_\pi \\
-L \leqslant \Re z \leqslant 0 \\
-\pi \leqslant \Im z \leqslant \pi
}
} \Snm{\widehat b(z) - z^{\alpha-2}} = C_L.
\end{equation}
\end{lemma}
\begin{proof}
For any $ z \in \Sigma_\pi $ satisfying $ -L \leqslant \Re z \leqslant 0 $ and
$ 0 \leqslant \Im z \leqslant \pi $, by \eqref{eq:wtb-int}, Cauchy's integral
theorem and the residue theorem we obtain
\begin{small}
\begin{align*}
\widehat b(z) & = z^{\alpha-2} + \frac1{2\pi i}
\int_{-\infty -i\pi}^{1-i\pi}
\frac{e^{w-z}}{1-e^{w-z}} w^{\alpha-2} \, \mathrm{d}w +
\frac1{2\pi i} \int_{1-i\pi}^{1+i3\pi/2}
\frac{e^{w-z}}{1-e^{w-z}} w^{\alpha-2} \, \mathrm{d}w \\
& \qquad {} + \frac1{2\pi i}\int_{1+i3\pi/2}^{-\infty + i3\pi/2}
\frac{e^{w-z}}{1-e^{w-z}} w^{\alpha-2} \, \mathrm{d}w \\
& =: z^{\alpha-2} + G(\alpha,z).
\end{align*}
\end{small}
A routine calculation verifies that $ G $ is continuous on
\[
[0,1] \times \{
\xi \in \mathbb C:\
-L \leqslant \Re \xi \leqslant 0,
0 \leqslant \Im \xi \leqslant \pi
\},
\]
and so
\[
\sup_{0 < \alpha < 1} \quad \sup_{
\substack{
-L \leqslant \Re z \leqslant 0 \\
0 \leqslant \Im z \leqslant \pi
}
} \snm{G(\alpha,z)} = C_L.
\]
It follows that
\[
\sup_{0 < \alpha < 1} \sup_{
\substack{
z \in \Sigma_\pi \\
-L \leqslant \Re z \leqslant 0 \\
0 \leqslant \Im z \leqslant \pi
}
} \Snm{\widehat b(z) - z^{\alpha-2}} = C_L.
\]
Similarly,
\[
\sup_{0 < \alpha < 1} \sup_{
\substack{
z \in \Sigma_\pi \\
-L \leqslant \Re z \leqslant 0 \\
-\pi \leqslant \Im z \leqslant 0
}
} \Snm{\widehat b(z) - z^{\alpha-2}} = C_L.
\]
Combining the above two estimates proves \eqref{eq:wtb} and hence this lemma.
\end{proof}
\begin{lemma}
\label{lem:psi}
For any $ 0 < \delta < \pi $ and $ L > 0 $, we have
\begin{align}
\inf_{0 < \alpha < 1} \quad \inf_{
\delta \leqslant y \leqslant \pi
} \Re \big( e^{-iy}\psi(iy) \big) & = C_\delta,
\label{eq:psi-1} \\
\sup_{0 < \alpha < 1} \quad \sup_{
\substack{
-L \leqslant \Re z \leqslant 0 \\
\delta \leqslant \Im z \leqslant \pi
}
}\Snm{
\frac{\mathrm{d}}{\mathrm{d}z} (e^{-z}\psi(z))
} & = C_{\delta,L}.
\label{eq:psi-2}
\end{align}
\end{lemma}
\begin{proof}
For any $ \delta \leqslant y \leqslant \pi $, we have, by \eqref{eq:re-psi}
with $ z= 0+i y $,
\begin{align*}
& \Re \big ( e^{-iy} \psi (iy) \big )
= \frac{\sin (\pi (1- \alpha))}{\pi} \int_{0}^{\infty}
\frac{
s^{\alpha-2} (1- e^{-2s}) (1- \cos y)
}{1- 2 e^{-s} \cos y + e^{-2 s}} \, \mathrm{d}s \\
& > \frac{\sin(\pi(1-\alpha))}\pi (1-\cos\delta)
\int_0^\infty \frac{s^{\alpha-2}(1-e^{-2s})}{
1+2e^{-s}+e^{-2s}
} \, \mathrm{d}s \\
&= \frac{\sin(\pi(1-\alpha))}\pi (1-\cos\delta)
\Big [
\int_0^1 \frac{s^{\alpha-2}(1-e^{-2s})}{
1+2e^{-s}+e^{-2s}
} \, \mathrm{d}s
+
\int_1^\infty \frac{s^{\alpha-2}(1-e^{-2s})}{
1+2e^{-s}+e^{-2s}
} \, \mathrm{d}s
\Big ].
\end{align*}
In view of the two simple estimates
\begin{align*}
\int_0^1 \frac{s^{\alpha-2}(1-e^{-2s})}{
1+2e^{-s}+e^{-2s}
} \, \mathrm{d}s
& > \int_0^1 \frac{s^{\alpha-2}(e^{-2s} 2s)}{4 } \, \mathrm{d}s
= \int_0^1 \frac{s^{\alpha-1}(e^{-2s})}{2 } \, \mathrm{d}s \\
& > \int_0^1 \frac{s^{\alpha-1}(e^{-2})}{2
} \, \mathrm{d}s
= \frac{e^{-2}}{2 \alpha}
\end{align*}
and
\begin{align*}
&\int_1^\infty \frac{s^{\alpha-2}(1-e^{-2s})}{
1+2e^{-s}+e^{-2s}
} \, \mathrm{d}s
> \int_1^{\infty} s^{\alpha-2} \frac{1-e^{-2}}{4} \, \mathrm{d}s =
\frac{1-e^{-2}}{4(1-\alpha)},
\end{align*}
we then obtain, for any $ \delta \leqslant y \leqslant \pi $,
\begin{align*}
\Re \big( e^{-iy} \psi(iy) \big)
\geqslant \frac{\sin(\pi(1-\alpha))}\pi
(1-\cos\delta) \Big(
\frac{e^{-2}}{2\alpha} + \frac{1-e^{-2}}{4(1-\alpha)}
\Big) \geqslant C_\delta.
\end{align*}
This implies inequality \eqref{eq:psi-1}.
Now let us prove \eqref{eq:psi-2}. For any $ z \in \mathbb C $ satisfying $
\delta \leqslant \Im z \leqslant \pi $, using the residue theorem yields, by
\eqref{eq:wtb-int}, that
\begin{equation}
\label{eq:wtb-series}
\widehat b(z) = \sum_{k = -\infty}^\infty
(z+2k\pi i)^{\alpha-2},
\end{equation}
and hence
\[
\widehat b'(z) = (\alpha-2)\sum_{k = -\infty}^\infty
(z+2k\pi i)^{\alpha-3}.
\]
A simple calculation then gives
\[
\sup_{0 < \alpha < 1} \quad \sup_{
\substack{
-L \leqslant \Re z \leqslant 0 \\
\delta \leqslant \Im z \leqslant \pi
}
} \snm{e^{-z}(e^z-1)^2 \widehat b'(z)} = C_{\delta,L}.
\]
In addition, Lemma \ref{lem:wtb} implies
\[
\sup_{0 < \alpha < 1} \quad \sup_{
\substack{
-L \leqslant \Re z \leqslant 0 \\
\delta \leqslant \Im z \leqslant \pi
}
}\snm{(e^z-e^{-z})\widehat b(z)} = C_{\delta,L}.
\]
Consequently, \eqref{eq:psi-2} follows from the equality
\[
\frac{\mathrm{d}}{\mathrm{d}z} (e^{-z}\psi(z)) =
(e^z-e^{-z})\widehat b(z) +
e^{-z}(e^z-1)^2 \widehat b'(z),
\quad z \in \Sigma_\pi.
\]
This completes the proof.
\end{proof}
\begin{lemma}
\label{lem:psi-esti}
Assume that $ \pi/2 < \theta_0 < \pi $. Then there exists $ \pi/2 < \theta^*
\leqslant \theta_0 $ depending only on $ \theta_0 $ such that
\begin{equation}
\label{eq:psi_sector}
e^{-z} \psi(z) \in \Sigma_{\theta_0}
\quad\text{ for all } z \in \Sigma_{\theta^*}
\text{ with } -\pi \leqslant \Im z \leqslant \pi
\end{equation}
and
\begin{equation}
\label{eq:psi-esti}
\snm{e^{-z}\psi(z)} \geqslant C_{\theta_0}
\snm{z}^\alpha \quad \text{for all }
z \in \Upsilon_{\theta^*} \setminus \{0\}.
\end{equation}
\end{lemma}
\begin{proof}
Step 1. By \eqref{eq:re-psi}, a simple calculation gives
\[
\Re \big( e^{-z}\psi(z) \big) > 0
\text{ for all } z \in D_1,
\]
so that
\begin{equation}
\label{eq:bj-1}
e^{-z}\psi(z) \in \Sigma_{\pi/2} \quad\text{for all } z \in D_{1},
\end{equation}
where
\[
D_{1}= \{
z \in \mathbb{C}: \Re z \geqslant 0, \, 0 \leqslant \Im z
\leqslant \pi, \, z \ne 0
\}.
\]
Step 2. From \eqref{eq:psi-def} and \eqref{eq:wtb} we conclude that there
exists a continuous function $ G $ on $ (0,1) \times D_2 $, such that
\begin{equation}
\label{eq:64}
e^{-z}\psi(z) = z^\alpha\big(1 + z G(\alpha,z)\big)
\quad \forall z \in D_2
\end{equation}
and that
\[
\sup_{0 < \alpha <1} \sup_{
z \in D_2
} \snm{G(\alpha,z)} = C_{\theta_0},
\]
where
\[
D_2 := \{
\xi \in \mathbb C \setminus \{0\}:
\ \pi/2 \leqslant \mbox{Arg}(\xi) \leqslant \theta_0,\,
0 < \Im \xi \leqslant \pi
\}.
\]
Hence, there exists $ 0 < \epsilon_0 < \pi $, depending only on $ \theta_0
$, such that
\begin{align*}
& \Snm{
\mbox{Arg}(1+zG(\alpha,z))
} \leqslant (\theta_0-\pi/2)/2 \quad \text{ and } \quad
\snm{e^{-z}\psi(z)} \geqslant C_{\theta_0}
\snm{z}^\alpha \\
& \text{ for all } z \in \Sigma_{\theta_0} \setminus
\Sigma_{\pi/2} \text{ with }
0 < \Im z \leqslant \epsilon_0.
\end{align*}
Since
\begin{align*}
& \mbox{Arg} \big ( e^{-z} \psi (z) \big ) =
\mbox{Arg} \big ( z^{\alpha} ( 1+ zG(\alpha,z)) \big )
\quad \text{(by \eqref{eq:64})} \\
={} &
\alpha \mbox{Arg}(z) + \mbox{Arg} \big ( 1 + zG(\alpha,z) \big ),
\end{align*}
it follows that
\begin{equation}
\label{eq:bj-2}
\begin{aligned}
& e^{-z}\psi(z) \in \Sigma_{\theta_0} \text{ and }
\snm{e^{-z}\psi(z)} \geqslant C_{\theta_0} \snm{z}^\alpha \\
& \text{for all} \,
z \in \Sigma_{(\theta_0+\pi/2)/2} \setminus \Sigma_{\pi/2} \, \text{ with }
0 < \Im z \leqslant \epsilon_0.
\end{aligned}
\end{equation}
Step 3. Note that $ \epsilon_0 $ is a constant depending only on $ \theta_0 $.
By \eqref{eq:psi-1} we have
\[
\inf_{0 < \alpha < 1} \, \inf_{
\substack{
\Re z =0 \\
\epsilon_0 \leqslant \Im z \leqslant \pi
}
} \Re \big ( e^{-z}\psi(z) \big ) = C_{\theta_0}.
\]
From \eqref{eq:psi-2} we then conclude that there exists $ 0 < \epsilon_1 <
\pi $,
depending only on $ \theta_0 $, such that
\[
\inf_{0 < \alpha < 1} \, \inf_{
\substack{
-\epsilon_1 \leqslant \Re z \leqslant 0 \\
\epsilon_0 \leqslant \Im z \leqslant \pi
}
} \Re \big ( e^{-z}\psi(z) \big ) = C_{\theta_0} > 0.
\]
It follows that
\begin{equation}
\label{eq:bj-31}
\begin{aligned}
& e^{-z} \psi(z) \in \Sigma_{\pi/2} \text{ and }
\snm{e^{-z}\psi(z)} \geqslant C_{\theta_0} \text{ for all } \\
& z \in \Sigma_{(\theta_0+\pi/2)/2}
\setminus \Sigma_{\pi/2} \, \text{ with } -\epsilon_1 \leqslant \Re z
\leqslant 0 \text{ and }
\epsilon_0 \leqslant \Im z \leqslant \pi.
\end{aligned}
\end{equation}
Letting $ \theta^* := \pi/2 + \arctan(\epsilon_1/\pi) $, by
\eqref{eq:bj-1}, \eqref{eq:bj-2} and \eqref{eq:bj-31} we obtain that
\begin{equation}
\label{eq:zq-1}
e^{-z} \psi(z) \in \Sigma_{\theta_0} \quad
\text{for all } z \in \Sigma_{\theta^*}
\text{ with } 0 \leqslant \Im z \leqslant \pi
\end{equation}
and that
\begin{equation}
\label{eq:zq-2}
\snm{e^{-z}\psi(z)} \geqslant C_{\theta_0}
\snm{z}^\alpha \text{ for all } z \in \Upsilon_{\theta^*}
\,\text{ with }\, 0 < \Im z \leqslant \pi.
\end{equation}
Step 4. By the fact that
\[
\overline{e^{-z}\psi(z)} =
e^{-\overline{z}} \psi(\overline z)
\quad\text{ for all } z \in \Sigma_\pi,
\]
using \eqref{eq:zq-1} and \eqref{eq:zq-2} proves \eqref{eq:psi_sector} and
\eqref{eq:psi-esti}, respectively. This completes the proof.
\end{proof}
By \eqref{eq:psi-def} and Lemma \ref{lem:wtb}, a routine calculation gives the following lemma.
\begin{lemma}
Assume that $ \pi/2 < \theta < \pi $. Then
\begin{equation}
\label{eq:129}
\snm{\psi(z) - z^\alpha} \leqslant
C_\theta \snm{z}^{\alpha+1}
\end{equation}
for all $ z \in \Upsilon_\theta \setminus \{0\} $.
\end{lemma}
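The estimate \eqref{eq:129} describes the local behaviour $ \psi(z) = z^\alpha +
O(\snm{z}^{\alpha+1}) $ near the origin. This behaviour can be observed
numerically by evaluating $ \widehat b $ through its defining series, which
converges for $ \Re z > 0 $. The Python sketch below is only our own sanity
check (the ray, the truncation level and the sample points are arbitrary; the
ray is taken in the right half-plane, where the series converges, rather than on
$ \Upsilon_\theta $).
\begin{verbatim}
import numpy as np
from math import gamma

alpha = 0.4

def psi(z, N=200000):
    """psi(z) = (e^z - 1)^2 * b_hat(z), with b_hat evaluated through its
    defining series (convergent for Re z > 0); N is the truncation level."""
    j = np.arange(1, N + 1)
    b_hat = np.sum(j**(1.0 - alpha) / gamma(2.0 - alpha) * np.exp(-j * z))
    return (np.exp(z) - 1.0)**2 * b_hat

# along a ray in the right half-plane the ratio |psi(z) - z^alpha| / |z|^(alpha+1)
# is expected to stay bounded as |z| -> 0 (compare eq:129, which concerns the
# rays |Arg z| = theta with theta > pi/2)
for eps in [0.2, 0.1, 0.05, 0.025]:
    z = eps * np.exp(0.25j * np.pi)
    print(eps, abs(psi(z) - z**alpha) / abs(z)**(alpha + 1.0))
\end{verbatim}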
\begin{remark}
In Lemma \ref{lem:psi-esti} we prove that, for any given $\theta_{0} \in (\pi/2, \pi)$,
there exists $\pi/2 < \theta^{*} \leqslant \theta_{0}$ such that
$e^{-z} \psi (z) \in \Sigma_{\theta_{0}}$ for all $z \in \Sigma_{\theta^{*}}$ with
$-\pi \leqslant \Im z \leqslant \pi$. Therefore our error estimates hold for any
elliptic operator $\mathcal{A}$ whose resolvent set contains $\Sigma_{\theta_{0}}$.
The techniques used in the proof of Lemma \ref{lem:psi-esti} are new and may be
extended to the error analysis of higher order L-type schemes. Let us recall an
approach available in the literature for proving results of this kind. In Jin et
al.~\cite{Jin2016-L1} the authors use the following two steps to show
$e^{-z} \psi (z) \in \Sigma_{\theta_{0}}$:
Step 1. For $z$ on the ray $\{ z: \operatorname{Arg} (z) = \pi/2 \}$, prove that
$e^{-z} \psi (z) \in \Sigma_{\theta_{0}}$ for some suitable $\theta_{0} \in (\pi/2, \pi)$.
Step 2. By the continuity of $e^{-z} \psi (z)$ with respect to $\operatorname{Arg} z$,
conclude that $e^{-z} \psi (z) \in \Sigma_{\theta_{0}}$ also for $z$ with
$\operatorname{Arg} z = \theta^{*}$, provided $\theta^{*} \in (\pi/2, \pi)$ is
sufficiently close to $\pi/2$.
Using this approach, Jin et al.~\cite{Jin2016-L1} obtain $\theta_{0} = 3 \pi/4 - \epsilon$,
with $\epsilon >0$, which implies that the approach does not work for an elliptic
operator $\mathcal{A}$ whose resolvent set only contains $\Sigma_{\theta_{0}}$ with
$\theta_{0} < 3 \pi/4$. It also seems difficult to prove results similar to
Lemma \ref{lem:psi-esti} for higher order L-type schemes by the approach in
\cite{Jin2016-L1}. The new techniques developed in the proof of Lemma \ref{lem:psi-esti}
may therefore open a door to the numerical analysis of higher order L-type schemes
for time fractional partial differential equations.
\end{remark}
\subsection{Proof of Theorem \ref{thm:conv-Stau}}
By Lemma \ref{lem:psi-esti}, there exists $ \pi/2 < \omega^* \leqslant \omega_0 $,
depending only on $ \omega_0 $, such that
\begin{equation}
\label{eq:0}
e^{-z} \psi(z) \in \Sigma_{\omega_0}
\text{
for all $ z \in \Sigma_{\omega^*} $
with $ -\pi \leqslant \operatorname{Im} z \leqslant \pi $
}
\end{equation}
and that
\begin{equation}
\label{eq:psi>}
\snm{e^{-z}\psi(z)} \geqslant C_{\omega_0}
\snm{z}^\alpha \quad \text{for all }
z \in \Upsilon_{\omega^*} \setminus \{0\}.
\end{equation}
Define
\begin{equation}
\label{eq:calE-def}
\mathcal E(t) := \tau^{-1} \mathcal E_{\lfloor t/\tau \rfloor},
\quad t > 0,
\end{equation}
where $ \lfloor \cdot \rfloor $ is the floor function and
\begin{equation}
\label{eq:calEj}
\mathcal E_j := \frac1{2\pi i} \int_{\Upsilon_{\omega^*}}
e^{jz} R(\tau^{-\alpha}e^{-z}\psi(z), \mathcal A) \, \mathrm{d}z,
\quad j \in \mathbb N.
\end{equation}
Note that (\ref{eq:rho(A)}) and \eqref{eq:0} guarantee that the above $ \mathcal
E_j $ is well defined, and we recall that $ \psi $ is defined by
\eqref{eq:psi-def}.
\begin{lemma}
For any $ g \in L^1(0,T;L^2(\Omega)) $, we have
\begin{equation}
\label{eq:Stau-g}
S_{\tau,j}g = \int_0^{t_j} \mathcal E(t_j-t) g(t) \, \mathrm{d}t
\quad \forall 1 \leqslant j \leqslant J.
\end{equation}
\end{lemma}
\begin{proof}
Since the techniques used in this proof are standard in the theory of Laplace
transform, we only provide a brief proof; see
\cite{Mclean2015Time,Jin2015IMA,Yan2018} for more details. Extend $ g $ to $
(T,\infty) $ by zero and define $ t_j := j\tau $ for each $ j > J $. Define $
\{W_k\}_{k=1}^\infty \subset H_0^1(\Omega) $ by requiring that, for any $ k \geqslant 1
$,
\begin{equation}
\label{eq:W}
b_1 W_k + \sum_{j=1}^{k-1} (b_{k-j+1}-2b_{k-j}+b_{k-j-1}) W_j
- \tau^\alpha \mathcal A W_k =
\tau^{\alpha-1} \int_{t_{k-1}}^{t_k} g(t) \, \mathrm{d}t
\end{equation}
in $ H^{-1}(\Omega) $. By definition,
\begin{equation}
\label{eq:Staujg}
S_{\tau,j} g = W_j, \quad \forall 1 \leqslant j \leqslant J.
\end{equation}
The rest of this proof is divided into three steps.
Step 1. We prove that the following discrete Laplace transform of $
\{W_k\}_{k=1}^\infty $ is analytic on $ \Sigma_{\pi/2} $:
\begin{equation}
\label{eq:wtW}
\widehat W(z) := \sum_{k=1}^\infty e^{-kz} W_k,
\quad z \in \Sigma_{\pi/2}.
\end{equation}
Note first that we can assume that $ g \in L^\infty(0,\infty;L^2(\Omega)) $.
Since
\[
\sup_{a > 0} \, \nm{g}_{{}_0H^{-\alpha/2}(0,a;L^2(\Omega))}
< \infty,
\]
by the techniques used to prove \eqref{eq:Stau-Stauj} and \eqref{eq:Stau-stab-infty}
we can obtain
\[
\sup_{k \geqslant 1} \, \nm{W_k}_{L^2(\Omega)} < \infty.
\]
Therefore, it is evident that $ \widehat W $ is analytic on $ \Sigma_{\pi/2}
$.
Step 2. Let us prove that, for any $ 1 \leqslant j \leqslant J $,
\begin{equation}
\label{eq:eve}
W_j = \sum_{k=1}^J \frac{\tau^{-1}}{2\pi i}
\int_{1-\pi i}^{1+\pi i} R(\tau^{-\alpha}e^{-z}\psi(z),\mathcal A)
e^{(j-k)z} \, \mathrm{d}z
\int_{t_{k-1}}^{t_k} g(t) \, \mathrm{d}t.
\end{equation}
Multiplying both sides of \eqref{eq:W} by $ e^{-kz} $ and summing over $ k $
from $1$ to $\infty$, we obtain
\[
\big(
(e^z-2+e^{-z})\widehat b(z) - \tau^\alpha \mathcal A
\big) \widehat W(z) =
\tau^{\alpha-1} \sum_{k=1}^\infty
\int_{t_{k-1}}^{t_k} g(t) \, \mathrm{d}t
e^{-kz}, \quad \forall z \in \Sigma_{\pi/2},
\]
which, together with \eqref{eq:psi-def}, yields
\begin{equation}
\label{eq:lxy}
(e^{-z}\psi(z) - \tau^\alpha \mathcal A) \widehat W(z) =
\tau^{\alpha-1} \sum_{k=1}^\infty
\int_{t_{k-1}}^{t_k} g(t) \, \mathrm{d}t
e^{-kz}, \quad \forall z \in \Sigma_{\pi/2}.
\end{equation}
Hence, from (\ref{eq:rho(A)}), \eqref{eq:0} and the fact $ g|_{(T,\infty)} = 0
$, it follows that
\begin{align*}
\widehat W(z) &= \tau^{-1} R(\tau^{-\alpha} e^{-z}\psi(z),\mathcal A)
\sum_{k=1}^\infty \int_{t_{k-1}}^{t_k}
g(t) \, \mathrm{d}t e^{-kz} \\
& = \tau^{-1} R(\tau^{-\alpha} e^{-z}\psi(z),\mathcal A)
\sum_{k=1}^J \int_{t_{k-1}}^{t_k} g(t) \, \mathrm{d}t e^{-kz}
\end{align*}
for all $ z \in \Sigma_{\pi/2} $ with $ -\pi \leqslant \operatorname{Im} z
\leqslant \pi $. Therefore, \eqref{eq:eve} follows from the equality
\[
W_j = \frac1{2\pi i} \int_{1-\pi i}^{1+\pi i}
\widehat W(z) e^{jz} \, \mathrm{d}z,
\]
which is evident by \eqref{eq:wtW}.
Step 3. By Cauchy's integral theorem, for any $ a > 1 $ and any $ k \geqslant j+1 $
we have
\begin{align}
&\qquad \Big \| \int_{1-\pi i}^{1+\pi i}
R(\tau^{-\alpha}e^{-z}\psi(z),\mathcal A)
e^{(j-k)z} \, \mathrm{d}z \Big \|_{\mathcal L(L^2(\Omega))} \notag \\
& =\Big \| \int_{a-\pi i}^{a+\pi i}
R(\tau^{-\alpha}e^{-z}\psi(z),\mathcal A)
e^{(j-k)z} \, \mathrm{d}z \Big \|_{\mathcal L(L^2(\Omega))} \notag \\
& \leqslant \mathcal M_0 e^{(j-k) a}
\int_{a-\pi i}^{a+\pi i} \frac{|dz|}{\tau^{-\alpha} | e^{-z} \psi (z) |}
\quad\text{(by \eqref{eq:R(z,A)}).}
\label{eq:zq}
\end{align}
Since \eqref{eq:re-psi} implies
\[
\snm{e^{-z}\psi(z)} \geqslant C_\alpha
\quad\text{ for all $ z \in \mathbb C $ with $ \Re z \geqslant 1 $},
\]
passing to the limit $ a \to \infty $ in \eqref{eq:zq} yields
\[
\int_{1-\pi i}^{1+\pi i}
R(\tau^{-\alpha}e^{-z}\psi(z),\mathcal A)
e^{(j-k)z} \, \mathrm{d}z = 0, \quad \mbox{for} \; k \geqslant j+1.
\]
Thus from \eqref{eq:eve} we obtain
\begin{align*}
W_j &= \sum_{k=1}^j \frac{\tau^{-1}}{2\pi i}
\int_{1-\pi i}^{1+\pi i} R(\tau^{-\alpha}e^{-z}\psi(z),\mathcal A)
e^{(j-k)z} \, \mathrm{d}z
\int_{t_{k-1}}^{t_k} g(t) \, \mathrm{d}t \\
&= \sum_{k=1}^j \mathcal E_{j-k} \int_{t_{k-1}}^{t_k}
g(t) \, \mathrm{d}t =
\int_0^{t_j} \mathcal E(t_j-t) g(t) \, \mathrm{d}t.
\end{align*}
Here we have used the equality
\[
\int_{1-\pi i}^{1+\pi i}
R(\tau^{-\alpha} e^{-z} \psi(z), \mathcal A) e^{(j-k)z} \, \mathrm{d}z
= \int_{\Upsilon_{\omega^*}}
R(\tau^{-\alpha} e^{-z} \psi(z), \mathcal A) e^{(j-k)z} \, \mathrm{d}z,
\]
which can be easily verified by Cauchy's integral theorem. By
\eqref{eq:Staujg}, this proves \eqref{eq:Stau-g} and thus completes the proof.
\end{proof}
\begin{remark}
In \eqref{eq:Stau-g} we use the piecewise constant kernel function $\mathcal{E}(t)$
to express the discrete solution $S_{\tau, j}g$. This differs from the expressions
of the discrete solution in the literature \cite{Jin2016-L1,Yan2018}, where the
authors assume that $g$ has more regularity at $0$ and admits a Taylor expansion
at $0$, and then apply convolution techniques to obtain the discrete solution. In
our paper we only assume that $ g \in L^\infty(0,T;L^2(\Omega))$, and we do not
use the convolution techniques of \cite{Jin2016-L1,Yan2018}. A similar idea may
be used to treat more general data $g$; for example, $g = \frac{\mathrm{d} W(t)}{\mathrm{d}t}$,
where $W$ is a Hilbert space valued cylindrical Wiener process.
\end{remark}
\begin{lemma}
\label{lem:resolvent1}
For any $ z \in \Upsilon_{\omega^*} \setminus \{0\} $,
\begin{equation}
\label{eq:lsj-1}
\nm{
e^{z} R(\tau^{-\alpha}z^\alpha,\mathcal A) -
R(\tau^{-\alpha}e^{-z}\psi(z),\mathcal A)
}_{\mathcal L(L^2(\Omega))}
\leqslant C_{\omega_0,\mathcal M_0}
\snm{z}^{1-\alpha} \tau^{\alpha}.
\end{equation}
\end{lemma}
\begin{proof}
We have
\begin{align*}
& e^{z}R(\tau^{-\alpha}z^\alpha,\mathcal A) -
R(\tau^{-\alpha}e^{-z}\psi(z),\mathcal A) \\
={} &
\big(
\tau^{-\alpha} \big( \psi(z) - z^\alpha) + (1-e^{z})\mathcal A
\big) R(\tau^{-\alpha} z^\alpha, \mathcal A)
R(\tau^{-\alpha}e^{-z}\psi(z),\mathcal A) \\
={} &
\mathbb I_1 + \mathbb I_2,
\end{align*}
where
\begin{align*}
\mathbb I_1 &:= \tau^{-\alpha}(\psi(z)-z^\alpha)
R(\tau^{-\alpha}z^\alpha,\mathcal A)
R(\tau^{-\alpha} e^{-z}\psi(z),\mathcal A), \\
\mathbb I_2 &:= (1-e^z)\mathcal A R(\tau^{-\alpha}z^\alpha,\mathcal A)
R(\tau^{-\alpha} e^{-z}\psi(z),\mathcal A).
\end{align*}
Note that \eqref{eq:R(z,A)}, \eqref{eq:0} and \eqref{eq:psi>} imply
\begin{align}
\nm{R(\tau^{-\alpha}z^\alpha, \mathcal A)}_{\mathcal L(L^2(\Omega))}
& \leqslant C_{\mathcal M_0}
\snm{z}^{-\alpha} \tau^{\alpha}, \label{eq:lxy-1} \\
\nm{
R(\tau^{-\alpha}e^{-z}\psi(z), \mathcal A)
}_{\mathcal L(L^2(\Omega))}
& \leqslant C_{\omega_0,\mathcal M_0}
\snm{z}^{-\alpha} \tau^{\alpha}. \label{eq:lxy-2}
\end{align}
By \eqref{eq:129}, \eqref{eq:lxy-1} and \eqref{eq:lxy-2} we have
\begin{align*}
\nm{\mathbb I_1}_{\mathcal L(L^2(\Omega))}
& \leqslant C_{\omega_0,\mathcal M_0}
\snm{z}^{1-\alpha} \tau^\alpha.
\end{align*}
Since
\begin{align*}
& \nm{
\mathcal AR(\tau^{-\alpha}z^\alpha,\mathcal A)
R(\tau^{-\alpha}e^{-z}\psi(z), \mathcal A)
}_{\mathcal L(L^2(\Omega))} \\
={} &
\nm{
(\tau^{-\alpha}z^\alpha R(\tau^{-\alpha}z^\alpha,\mathcal A) - I)
R(\tau^{-\alpha}e^{-z}\psi(z), \mathcal A)
}_{\mathcal L(L^2(\Omega))} \\
\leqslant{} &
C_{\omega_0,\mathcal M_0}
\snm{z}^{-\alpha} \tau^\alpha
\quad\text{(by \eqref{eq:lxy-1} and \eqref{eq:lxy-2}),}
\end{align*}
we obtain
\[
\nm{\mathbb I_2}_{\mathcal L(L^2(\Omega))}
\leqslant C_{\omega_0,\mathcal M_0}
\snm{z}^{1-\alpha} \tau^\alpha.
\]
Combining the above estimates of $ \mathbb I_1 $ and $ \mathbb I_2 $ proves
\eqref{eq:lsj-1} and hence this lemma.
\end{proof}
\begin{lemma}
\label{lem:E-calE}
For any $ 1 \leqslant j \leqslant J $,
\begin{equation}
\label{eq:E-calE}
\nm{E(t_j) - \mathcal E(t_j-)}_{\mathcal L(L^2(\Omega))}
\leqslant C_{\omega_0,\mathcal M_0}
\tau^{\alpha-1} j^{\alpha-2}.
\end{equation}
\end{lemma}
\begin{proof}
Inserting $ t = t_j $ into \eqref{eq:E-def} yields
\[
E(t_j) = \frac1{2\pi i} \int_{\Gamma_{\omega^*}} e^{t_jz}
R(z^\alpha,\mathcal A) \, \mathrm{d}z =
\frac{\tau^{-1}}{2\pi i}
\int_{\Gamma_{\omega^*}} e^{jz}
R(\tau^{-\alpha}z^\alpha, \mathcal A) \, \mathrm{d}z,
\]
so that from \eqref{eq:calE-def} and \eqref{eq:calEj} it follows that
\[
E(t_j) - \mathcal E(t_j-) = \mathbb I_1 + \mathbb I_2,
\]
where
\begin{align*}
\mathbb I_1 &:= \frac{\tau^{-1}}{2\pi i}
\int_{\Gamma_{\omega^*}\setminus\Upsilon_{\omega^*}}
e^{jz} R(\tau^{-\alpha}z^\alpha,\mathcal A) \, \mathrm{d}z, \\
\mathbb I_2 &:= \frac{\tau^{-1}}{2\pi i}
\int_{\Upsilon_{\omega^*}} e^{(j-1)z} \big(
e^z R(\tau^{-\alpha}z^\alpha,\mathcal A) -
R(\tau^{-\alpha}e^{-z}\psi(z),\mathcal A)
\big) \, \mathrm{d}z.
\end{align*}
For $ \mathbb I_1 $, we have, by \eqref{eq:R(z,A)},
\begin{align*}
& \nm{\mathbb I_1}_{\mathcal L(L^2(\Omega))} \leqslant
C_{\mathcal M_0} \tau^{-1} \int_{\pi/\sin\omega^*}^\infty
e^{j\cos\omega^* r} (\tau^{\alpha}r^{-\alpha} ) \, \mathrm{d}r \\
\leqslant {}
& C_{\mathcal M_0} \tau^{\alpha-1}
\int_{\pi/\sin\omega^*}^\infty
e^{j\cos\omega^* r} r^{-\alpha} \, \mathrm{d}r \\
\leqslant {}
& C_{\mathcal M_0} \tau^{\alpha-1}
\int_{\pi/\sin\omega^*}^\infty e^{j\cos\omega^* r} r^{1-\alpha} \, \mathrm{d}r
\quad \mbox{(since } r \geqslant \pi/\sin\omega^* > 1 \mbox{)}
\\
\leqslant {} & C_{\omega_0,\mathcal M_0} \tau^{\alpha-1}
j^{\alpha-2} e^{j\pi\cot\omega^*}
\leqslant C_{\omega_0,\mathcal M_0} \tau^{\alpha-1}
j^{\alpha-2}.
\end{align*}
For $ \mathbb I_2 $, by \eqref{eq:lsj-1} we obtain
\begin{align*}
\nm{\mathbb I_2}_{\mathcal L(L^2(\Omega))}
& \leqslant C_{\omega_0,\mathcal M_0}
\tau^{-1} \int_0^{\pi/\sin\omega^*}
e^{(j-1)\cos\omega^* r} r \big (\tau^{\alpha} r^{-\alpha} \big ) \, \mathrm{d}r \\
& \leqslant C_{\omega_0,\mathcal M_0}
\tau^{\alpha-1} \int_0^{\pi/\sin\omega^*}
e^{(j-1)\cos\omega^* r} r^{1-\alpha} \, \mathrm{d}r \\
& \leqslant C_{\omega_0,\mathcal M_0}
\tau^{\alpha-1} \int_0^{\pi/\sin\omega^*}
e^{ j\cos\omega^* r} r^{1-\alpha} \, \mathrm{d}r
\leqslant C_{\omega_0,\mathcal M_0}
\tau^{\alpha-1} j^{\alpha-2}.
\end{align*}
Combining the above estimates of $ \mathbb I_1 $ and $ \mathbb I_2 $ yields
\eqref{eq:E-calE} and thus concludes the proof.
\end{proof}
\begin{lemma}
\label{lem:int-E-calE}
We have
\begin{small}
\begin{equation}
\label{eq:int-E-wtE}
\nm{E-\mathcal E}_{L^1(0,T;\mathcal L(L^2(\Omega)))}
\leqslant C_{\omega_0,\mathcal M_0} \,
\Big(
\frac1\alpha + \frac{1-J^{\alpha-1}}{1-\alpha}
\Big) \tau^\alpha.
\end{equation}
\end{small}
\end{lemma}
\begin{proof}
By \eqref{eq:E} we have
\begin{equation}
\label{eq:E-Et1}
\nm{E - E(t_1)}_{L^1(0,t_1;\mathcal L(L^2(\Omega)))}
\leqslant C_{\omega_0,\mathcal M_0}
\tau^\alpha \alpha^{-1},
\end{equation}
and a straightforward calculation gives, by \eqref{eq:E'},
\begin{align}
& \sum_{j=2}^J \nm{
E - E(t_j)
}_{L^1(t_{j-1},t_j;\mathcal L(L^2(\Omega)))}
\leqslant \tau \nm{E'}_{L^1(t_1,T;\mathcal L(L^2(\Omega)))} \notag \\
\leqslant{} &
C_{\omega_0,\mathcal M_0} \tau \int_{t_1}^T t^{\alpha-2} \, \mathrm{d}t
= C_{\omega_0,\mathcal M_0} \tau^\alpha (1-J^{\alpha-1})(1-\alpha)^{-1}.
\label{eq:E-Ej}
\end{align}
It follows that
\begin{align*}
& \sum_{j=1}^J \nm{E-E(t_j)}_{
L^1(t_{j-1},t_j;\mathcal L(L^2(\Omega)))
}
\leqslant
C_{\omega_0,\mathcal M_0} \tau^\alpha \Big(
\alpha^{-1} + (1-J^{\alpha-1})(1-\alpha)^{-1}
\Big).
\end{align*}
Further we have, by Lemma \ref{lem:E-calE},
\begin{align*}
\sum_{j=1}^J \tau \nm{E(t_j) - \mathcal E(t_j-)}_{
\mathcal L(L^2(\Omega))
} & \leqslant C_{\omega_0,\mathcal M_0}
\tau^\alpha \sum_{j=1}^J j^{\alpha-2} \\
& \leqslant C_{\omega_0,\mathcal M_0}
\tau^\alpha (1-J^{\alpha-1})(1-\alpha)^{-1}.
\end{align*}
Thus we get
\begin{align*}
& \nm{E-\mathcal E}_{L^1(0,T;\mathcal L(L^2(\Omega)))} \\
\leqslant{} & \sum_{j=1}^J \Big(
\nm{E-E(t_j)}_{L^1(t_{j-1},t_j;\mathcal L(L^2(\Omega)))} +
\tau \nm{ E(t_j)-\mathcal E(t_j-) }_{
\mathcal L(L^2(\Omega))
}
\Big) \\
\leqslant{} & C_{\omega_0,\mathcal M_0} \tau^\alpha
\Big(
\alpha^{-1} +
(1-J^{\alpha-1})(1-\alpha)^{-1}
\Big).
\end{align*}
This proves \eqref{eq:int-E-wtE} and hence this lemma.
\end{proof}
Finally, we are in a position to conclude the proof of Theorem \ref{thm:conv-Stau} as
follows. By \eqref{eq:Sg-l1} and \eqref{eq:Stau-g} we have
\begin{align*}
\max_{1 \leqslant j \leqslant J}
\nm{(Sg)(t_j)-S_{\tau,j}g}_{L^2(\Omega)} \leqslant
\nm{E-\mathcal E}_{L^1(0,T;\mathcal L(L^2(\Omega)))}
\nm{g}_{L^\infty(0,T;L^2(\Omega))},
\end{align*}
so that \eqref{eq:S-Stau-g} follows from \eqref{eq:int-E-wtE}. By
\eqref{eq:calE-def} we see that $ \mathcal E $ is piecewise constant, and then by
\eqref{eq:Stau-delta}, \eqref{eq:Stau-g} and \eqref{eq:delta_0} we obtain $ S_{\tau,j}(v\delta_0) =
\mathcal E(t_j-) v $, $ 1 \leqslant j \leqslant J $. Hence, a straightforward
computation yields, by \eqref{eq:Sdelta},
\begin{small}
\begin{align*}
& \max_{1 \leqslant j \leqslant J} j^{2-\alpha} \nm{
S(v\delta_0)(t_j) \!-\! S_{\tau,j}(v\delta_0)
}_{L^2(\Omega)} \! \leqslant \!
\max_{1 \leqslant j \leqslant J} j^{2-\alpha}
\nm{E(t_j) \!-\! \mathcal E(t_j\!-)}_{\mathcal L(L^2(\Omega)\!)}
\nm{v}_{L^2(\Omega)}, \\
& \sum_{j=1}^J \nm{
S(v\delta_0) - S_{\tau,j}(v\delta_0)
}_{ L^1(t_{j-1},t_j;L^2(\Omega)) } \leqslant
\nm{E-\mathcal E}_{L^1(0,T;\mathcal L(L^2(\Omega)))}
\nm{v}_{L^2(\Omega)}.
\end{align*}
\end{small}
Therefore, \eqref{eq:S-Stau-vdelta}, \eqref{eq:S-Stau-vdelta-l1} follow from
\eqref{eq:E-calE}, \eqref{eq:int-E-wtE}, respectively. This completes the proof of
Theorem \ref{thm:conv-Stau}.
\section{An inverse problem of a fractional diffusion equation}
\label{sec:inverse}
\subsection{Continuous problem}
We consider reconstructing the source term of a fractional diffusion equation
from the value of the solution at a fixed time; more precisely, the task is to
seek a suitable source $ f $ to ensure that the solution of problem
\eqref{eq:model} achieves a given value $ y_d $ at the final time $ T $. Applying
the well-known Tikhonov regularization technique to this inverse problem yields
the following minimization problem:
\begin{equation}
\label{eq:inverse}
\min\limits_{
\substack{
u \in U_{\text{ad}} \\
y \in C((0,T];L^2(\Omega))
}
} J(y,u) := \frac12 \nm{y(T) - y_d}_{L^2(\Omega)}^2 +
\frac\nu2 \nm{u}_{L^2(0,T;L^2(\Omega))}^2,
\end{equation}
subject to the state equation
\begin{equation}
(\D_{0+}^\alpha - \mathcal A) y = u,
\quad\text{ with } y(0) = 0,
\end{equation}
where $ y_d \in L^2(\Omega) $, $ \nu > 0 $ is a regularization parameter, and
\begin{align*}
U_{\text{ad}} &:= \left\{
v \in L^2(0,T;L^2(\Omega)):\
u_* \leqslant v \leqslant u^* \text{ a.e.~in } \Omega \times (0,T)
\right\},
\end{align*}
with $ u_* $ and $ u^* $ being two given constants.
\begin{remark}
We refer the reader to \cite{Prilepko2000,Samarskii2007} for the inverse
problems of parabolic partial differential equations, and refer the reader to
\cite[Chapter 3]{Troltzsh2010} for the linear-quadratic parabolic control
problems.
\end{remark}
We call $ u \in U_\text{ad} $ a mild solution to problem \eqref{eq:inverse} if $
u $ solves the following minimization problem:
\begin{equation}
\label{eq:mild_sol}
\min_{u \in U_\text{ad}}
J(u) := \frac12 \nm{(Su)(T) - y_d}_{L^2(\Omega)}^2 +
\frac\nu2 \nm{u}_{L^2(0,T;L^2(\Omega))}^2,
\end{equation}
where we recall that $ S $ is defined by \eqref{eq:Sg-l1}.
\begin{lemma}
\label{lem:Sg-dual-weakly}
Assume that $ g \in L^q(0,T;L^2(\Omega)) $ with $ q > 1/\alpha $. Then
\begin{equation}
\label{eq:Sdelta-dual}
\dual{(Sg)(T), v}_\Omega =
\dual{g, S^*(v\delta_T)}_{\Omega \times (0,T)}
\end{equation}
for all $ v \in L^2(\Omega) $.
\end{lemma}
\begin{proof}
By \eqref{eq:Sg-l1} and \eqref{eq:Sg-C}, $ Sg \in C([0,T];L^2(\Omega)) $ and
\begin{small}
\[
(Sg)(T) = \int_0^T E(T-t) g(t) \, \mathrm{d}t,
\]
\end{small}
so that
\begin{small}
\[
\dual{(Sg)(T), v}_\Omega =
\Dual{\int_0^T E(T-t) g(t) \, \mathrm{d}t, \, v }_\Omega =
\int_0^T \dual{E(T-t)g(t), v}_\Omega \, \mathrm{d}t.
\]
\end{small}
Because \eqref{eq:E-E*} implies
\[
\dual{E(T-t)g(t), v}_\Omega = \dual{g(t),E^*(T-t)v}_\Omega,
\quad \text{a.e.}~t \in (0,T),
\]
it follows that
\begin{align*}
\dual{(Sg)(T), v}_\Omega & =
\int_0^T \dual{g(t), E^*(T-t) v}_\Omega \, \mathrm{d}t
= \dual{g, S^*(v\delta_T)}_{\Omega \times (0,T)}
\quad\text{(by \eqref{eq:S*delta}),}
\end{align*}
namely, \eqref{eq:Sdelta-dual} holds indeed. This completes the proof.
\end{proof}
Assume that $ q > 1/\alpha $ and $ q \geqslant 2 $. By \eqref{eq:Sg-C}, $
(S\cdot)(T) $ is a bounded linear operator from $ L^q(0,T;L^2(\Omega)) $ to $
L^2(\Omega) $. Clearly, $ J $ in \eqref{eq:mild_sol} is a strictly convex
functional on $ L^q(0,T;L^2(\Omega)) $, and $ U_\text{ad} $ is a convex, bounded
and closed subset of $ L^q(0,T;L^2(\Omega)) $. By Lemma \ref{lem:Sg-dual-weakly}, a
routine argument (cf.~\cite[Theorems 2.14 and 2.21]{Troltzsh2010}) yields the
following theorem.
\begin{theorem}
\label{thm:basic-regu} Problem \eqref{eq:mild_sol} admits a unique mild
solution $ u \in U_\text{ad} $, and the following first-order optimality
condition holds:
\begin{subequations}
\begin{numcases}{}
y = Su, \label{eq:optim-y} \\
p = S^*\big( (y(T)-y_d)\delta_T \big), \label{eq:optim-p} \\
\Dual{p + \nu u, v-u}_{\Omega \times (0,T)}
\geqslant 0 \quad \text{ for all } v \in U_\text{ad}.
\label{eq:optim-u}
\end{numcases}
\end{subequations}
\end{theorem}
\begin{remark}
Assume that $ u $, $ y $ and $ p $ are defined in Theorem \ref{thm:basic-regu}. By
(\ref{eq:optim-u}) we have $ u = f(p) $, where
\begin{small}
\begin{equation}
\label{eq:f}
f(r) := \begin{cases}
u_* & \text{ if } r > -\nu u_*, \\
-r/\nu & \text{ if } -\nu u^* \leqslant r \leqslant -\nu u_*, \\
u^* & \text{ if } r < -\nu u^*.
\end{cases}
\end{equation}
\end{small}
Noting that $ f $ is Lipschitz continuous with Lipschitz constant $ 1/\nu $,
we obtain
\begin{small}
\[
u'(t) = f'(p(t)) p'(t) \text{ in } L^2(\Omega),
\quad \text{a.e.}~0 < t < T,
\]
\end{small}
and hence $ \nm{u'(t)}_{L^2(\Omega)} \leqslant \nu^{-1}
\nm{p'(t)}_{L^2(\Omega)} $, a.e.~$ 0 < t < T $. It follows from
(\ref{eq:optim-p}), \eqref{eq:S*delta} and \eqref{eq:E*'} that
\[
\nm{u'(t)}_{L^2(\Omega)} \leqslant
C_{\omega_0,\mathcal M_0} \nu^{-1} (T-t)^{\alpha-2}
(\nm{y(T)}_{L^2(\Omega)} + \nm{y_d}_{L^2(\Omega)}),
\quad\text{a.e.}~0 < t < T.
\]
Since (\ref{eq:optim-y}), \eqref{eq:Sg-l1}, \eqref{eq:E} and the fact $ u \in
U_\text{ad} $ imply
\begin{equation}
\label{eq:yT}
\nm{y(T)}_{L^2(\Omega)} \leqslant
C_{u_*,u^*,\omega_0,\mathcal M_0,T,\Omega}
\alpha^{-1},
\end{equation}
we conclude therefore that
\begin{small}
\begin{equation}
\label{eq:u}
\nm{u'(t)}_{L^2(\Omega)} \leqslant
C_{u_*,u^*,\omega_0,\mathcal M_0,T,\Omega} \nu^{-1}
(T-t)^{\alpha-2}(\alpha^{-1} + \nm{y_d}_{L^2(\Omega)}),
\quad\text{a.e.}~0 < t < T.
\end{equation}
\end{small}
\end{remark}
\begin{remark}
\label{rem:nu=0}
Let $ u_\nu $ be the mild solution of problem \eqref{eq:mild_sol}. A standard
argument yields that there exists $ y_T \in L^2(\Omega) $ such that
\begin{equation}
\label{eq:731}
\nm{(Su_\nu)(T)-y_T}_{L^2(\Omega)} \leqslant
C_{u_*,u^*,T,\Omega} \sqrt\nu.
\end{equation}
Since $ U_\text{ad} $ is a convex, bounded and closed subset of $
L^q(0,T;L^2(\Omega)) $, $ q > 1/\alpha $, there exist $ u_0 \in U_\text{ad} $
and a decreasing sequence $ \{\nu_n\}_{n=0}^\infty \subset (0,\infty) $ with
limit zero such that
\[
\lim_{n \to \infty} u_{\nu_n} = u_0 \quad\text{ weakly in }
L^q(0,T;L^2(\Omega)).
\]
As $ (S \cdot)(T) $ is a bounded linear operator from $ L^q(0,T;L^2(\Omega)) $
to $ L^2(\Omega) $, we have that $ (Su_{\nu_n})(T) $ converges to $ (Su_0)(T)
$ weakly in $ L^2(\Omega) $ as $ n \to \infty $, so that \eqref{eq:731} implies
$ (Su_0)(T) = y_T $. Furthermore, a trivial calculation yields that $ u_0 $ is
a mild solution of problem \eqref{eq:inverse} with $ \nu=0 $.
\end{remark}
\subsection{Temporally discrete problem}
\label{ssec:discr}
Define
\[
W_\tau := \{
V \in L^\infty(0,T; H_0^1(\Omega)):\,
V \text{ is constant on } (t_{j-1},t_j)
\quad \forall 1 \leqslant j \leqslant J
\}.
\]
For any $ g \in W_\tau^* $, define $ S_\tau g \in W_{\tau} $ and $ S_{\tau}^*g
\in W_{\tau} $, respectively, by that
\begin{align}
\dual{ \, \D_{0+}^\alpha S_\tau g, V}_{\Omega \times (0,T)} -
\dual{\mathcal AS_\tau g, V}_{L^2(0,T; H_0^1(\Omega))} =
\dual{g,V}_{W_\tau}, \label{eq:Stau} \\
\dual{ \, \D_{T-}^\alpha S_\tau^* g, V}_{\Omega \times (0,T)} -
\dual{\mathcal A^*S_\tau^* g, V}_{L^2(0,T; H_0^1(\Omega))} =
\dual{g, V}_{W_\tau},
\label{eq:S*tau}
\end{align}
for all $ V \in W_{\tau} $. By \eqref{eq:dual} we have that
\begin{equation}
\label{eq:Stau-dual}
\dual{S_{\tau}f, g}_{\Omega \times (0,T)} =
\dual{f, S_{\tau}^*g}_{\Omega \times (0,T)}
\quad \forall f, g \in L^1(0,T;L^2(\Omega)).
\end{equation}
A direct calculation yields that (cf.~\cite[Remark 3]{Jin-maximal2018}), for any
$ g \in W_\tau^* $,
\begin{equation}
\label{eq:Stau-Stauj}
(S_\tau g)(t_j-) = S_{\tau, j} g \quad \forall 1 \leqslant j \leqslant J.
\end{equation}
Hence, from Theorem \ref{thm:conv-Stau}, we readily conclude the following two
estimates: for any $ g \in L^\infty(0,T;L^2(\Omega)) $,
\begin{equation}
\label{eq:S-Stau-g-2}
\nm{(Sg)(T) - (S_\tau g)(T-)}_{L^2(\Omega)}
\leqslant C_{\omega_0, \mathcal M_0} \tau^\alpha
\Big(
\frac1\alpha + \frac{1-J^{\alpha-1}}{1-\alpha}
\Big) \nm{g}_{L^\infty(0,T;L^2(\Omega))};
\end{equation}
for any $ v \in L^2(\Omega) $,
\begin{equation}
\label{eq:S-Stau-delta}
\nm{S(v\delta_0) - S_\tau(v\widehat\delta_0)}_{L^1(0,T;L^2(\Omega))}
\leqslant C_{\omega_0,\mathcal M_0} \tau^\alpha
\Big(
\frac1\alpha + \frac{1-J^{\alpha-1}}{1-\alpha}
\Big) \nm{v}_{L^2(\Omega)}.
\end{equation}
Furthermore, we have the following stability estimate.
\begin{lemma}
\label{lem:Stau-stab-infty}
Assume that $ g \in {}_0H^{-\alpha/2}(0,T;L^2(\Omega)) $. Then, for any $ 1
\leqslant j \leqslant J $,
\begin{equation}
\label{eq:Stau-stab-infty}
\nm{(S_\tau g)(t_j-)}_{L^2(\Omega)} \leqslant
C_\alpha \tau^{(\alpha-1)/2}
\nm{g}_{{}_0H^{-\alpha/2}(0,T;L^2(\Omega))}.
\end{equation}
\end{lemma}
\begin{proof}
We only prove \eqref{eq:Stau-stab-infty} with $ j=J $, since the other cases $
1 \leqslant j < J $ can be proved analogously. Let $ v:= (S_\tau g)(t_j-) $.
We have
\begin{align*}
& \nm{v}_{L^2(\Omega)}^2 =
\dual{v\widehat\delta_T, S_\tau g}_{\Omega \times (0,T)} \\
={} & \dual{
\D_{T-}^{\alpha/2} \D_{T-}^{-\alpha/2} (v\widehat\delta_T),
S_\tau g
}_{\Omega \times (0,T)} \\
={} &
\dual{
\D_{T-}^{-\alpha/2}(v\widehat\delta_T),
\D_{0+}^{\alpha/2} S_\tau g
}_{\Omega \times (0,T)} \quad\text{(by \eqref{eq:dual})} \\
\leqslant{} & \nm{\D_{0+}^{\alpha/2} S_\tau g}_{L^2(0,T;L^2(\Omega))}
\nm{\D_{T-}^{-\alpha/2}(v\widehat\delta_T)}_{L^2(0,T;L^2(\Omega))},
\end{align*}
where we recall that $ \widehat\delta_T $ is defined by \eqref{eq:delta_T}.
Since inserting $ V:= S_\tau g $ into \eqref{eq:Stau} yields, by
\eqref{eq:dual}, \eqref{eq:coer} and \eqref{eq:A-positive}, that
\[
\nm{\D_{0+}^{\alpha/2}S_\tau g}_{L^2(0,T;L^2(\Omega))}
\leqslant C_{\alpha} \nm{g}_{{}_0H^{-\alpha/2}(0,T;L^2(\Omega))},
\]
it follows that
\[
\nm{v}_{L^2(\Omega)}^2
\leqslant
C_\alpha
\nm{g}_{{}_0H^{-\alpha/2}(0,T;L^2(\Omega))}
\nm{\D_{T-}^{-\alpha/2}(v\widehat\delta_T)}_{L^2(0,T;L^2(\Omega))}.
\]
It suffices, therefore, to prove
\begin{equation}
\label{eq:frac-z}
\nm{\D_{T-}^{-\alpha/2}(v\widehat\delta_T)}_{L^2(0,T;L^2(\Omega))}
\leqslant C_\alpha \tau^{(\alpha-1)/2} \nm{v}_{L^2(\Omega)}.
\end{equation}
To this end, we note that
\begin{align*}
& \nm{\D_{T-}^{-\alpha/2}(v\widehat\delta_T)}_{L^2(0,T;L^2(\Omega))}^2 \\
={}& \left(
\frac{\nm{v}_{L^2(\Omega)}}{\Gamma(\alpha/2)}
\right)^2 \tau^{-2} \int_0^T \Snm{
\int_t^T (s-t)^{\alpha/2-1} \widehat\delta_T(s) \, \mathrm{d}s
}^2 \, \mathrm{d}t \\
={}& \left(
\frac{\nm{v}_{L^2(\Omega)}}{\Gamma(\alpha/2)}
\right)^2 \tau^{-2} (\mathbb I_1 + \mathbb I_2),
\end{align*}
where
\begin{small}
\begin{align*}
\mathbb I_1 &:= \int_0^{T-\tau} \Snm{
\int_{T-\tau}^T (s-t)^{\alpha/2-1} \, \mathrm{d}s
}^2 \, \mathrm{d}t, \\
\mathbb I_2 &:= \int_{T-\tau}^T \Snm{
\int_t^T (s-t)^{\alpha/2-1} \, \mathrm{d}s
}^2 \, \mathrm{d}t.
\end{align*}
\end{small}
A straightforward calculation gives
\begin{align*}
\mathbb I_1 &= 4/\alpha^2 \int_0^{T-\tau} \big(
(T-t)^{\alpha/2} - (T-\tau-t)^{\alpha/2}
\big)^2 \, \mathrm{d}t \\
&= 4/\alpha^2 \tau^{1+\alpha}
\int_1^{T/\tau} \big(
s^{\alpha/2} - (s-1)^{\alpha/2}
\big)^2 \, \mathrm{d}s \\
&< 4/\alpha^2 \tau^{1+\alpha}
\int_1^\infty \big(
s^{\alpha/2} - (s-1)^{\alpha/2}
\big)^2 \, \mathrm{d}s = C_\alpha \tau^{1+\alpha}
\end{align*}
and
\begin{align*}
\mathbb I_2 = 4/\alpha^2 \int_{T-\tau}^T
(T-t)^\alpha \, \mathrm{d}t =
C_\alpha \tau^{1+\alpha}.
\end{align*}
Combining the above estimates of $ \mathbb I_1 $ and $ \mathbb I_2 $ proves
\eqref{eq:frac-z} and hence this lemma.
\end{proof}
\begin{remark}
We note that if the temporal grid is nonuniform, then \eqref{eq:Stau} is not
equivalent to the L1 scheme for fractional diffusion equations. For the
numerical analysis of \eqref{eq:Stau} with nonuniform temporal grid, we refer
the reader to \cite{Li2019SIAM,Li-Wang-Xie2020}.
\end{remark}
Following the idea in \cite{Hinze2005}, we consider the following temporally
discrete problem:
\begin{equation}
\label{eq:numer_opti}
\min\limits_{U \in U_{\text{ad}}} J_{\tau}(U) :=
\frac12 \nm{ ( S_{\tau}U)(T-) - y_d }_{L^2(\Omega)}^2 +
\frac\nu2 \nm{U}_{L^2(0,T;L^2(\Omega))}^2.
\end{equation}
Note that $ U_\text{ad} $ is a convex, bounded and closed subset of $
L^2(0,T;L^2(\Omega)) $ and that $ (S_\tau\cdot)(T-) $ is, by
\eqref{eq:Stau-stab-infty}, a bounded linear operator from $ L^2(0,T;L^2(\Omega))
$ to $ L^2(\Omega) $. Hence, applying \cite[Theorems 2.14 and
2.21]{Troltzsh2010} to problem \eqref{eq:numer_opti} yields the following
theorem.
\begin{theorem}
\label{thm:regu-U}
Problem \eqref{eq:numer_opti} admits a unique solution $ U \in U_\text{ad} $,
and the following optimality condition holds:
\begin{subequations}
\begin{numcases}{}
Y = S_\tau U, \label{eq:optim-Y} \\
P = S_\tau^*\big( (Y(T-)-y_d)\widehat\delta_T \big), \label{eq:optim-P} \\
\Dual{P + \nu U, V-U}_{\Omega \times (0,T)}
\geqslant 0 \quad \text{ for all } V \in U_\text{ad},
\label{eq:optim-U}
\end{numcases}
\end{subequations}
where $ \widehat\delta_T $ is defined by \eqref{eq:delta_T}.
\end{theorem}
\begin{theorem}
\label{thm:conv}
Let $ u $ and $ y $ be defined in Theorem \ref{thm:basic-regu}, and let $ U $ and $ Y
$ be defined in Theorem \ref{thm:regu-U}. Then
\begin{equation}
\label{eq:conv}
\begin{aligned}
& \nm{(y-Y)(T-)}_{L^2(\Omega)} +
\sqrt\nu \nm{u-U}_{L^2(0,T;L^2(\Omega))} \\
\leqslant{} &
C_{y_d,u_*,u^*,\omega_0,\mathcal M_0,T,\Omega}
\left(
\frac1\alpha + \left(\frac{1-J^{\alpha-1}}{1-\alpha}\right)^{1/2} +
\frac{1-J^{\alpha-1}}{1-\alpha} \tau^{\alpha/2}
\right) \tau^{\alpha/2}.
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}
Since the idea of this proof is standard (cf.~\cite[Theorem
3.4]{Hinze2009}), we only provide a brief proof. Let us first prove that
\begin{align}
& \nm{(Su)(T) - (S_\tau U)(T-)}_{L^2(\Omega)} +
\sqrt\nu \nm{u-U}_{L^2(0,T;L^2(\Omega))} \notag \\
\leqslant{} &
C_{u_*,u^*,\Omega} \nm{
S^*((y(T)-y_d)\delta_T) -
S_{\tau}^*((y(T)-y_d)\widehat\delta_T)
}_{L^1(0,T;L^2(\Omega))}^{1/2} \notag \\
& \quad {} + 2\nm{(Su)(T) - (S_{\tau}u)(T-)}_{L^2(\Omega)}.
\label{eq:ava}
\end{align}
By \eqref{eq:optim-u} and \eqref{eq:optim-U}, we have
\begin{align*}
\Dual{
S^*\big( (y(T) - y_d)\delta_T \big) + \nu u, U-u
}_{\Omega \times (0,T)} \geqslant 0,\\
\Dual{
S_{\tau}^* \big( ( Y(T-) - y_d) \widehat\delta_T \big) + \nu U, u-U
}_{\Omega \times (0,T)} \geqslant 0,
\end{align*}
so that
\begin{equation}
\label{eq:I1+I2}
\nu \nm{u-U}_{L^2(0,T;L^2(\Omega))}^2
\leqslant \mathbb I_1 + \mathbb I_2,
\end{equation}
where
\begin{align*}
\mathbb I_1 &:= \dual{
S^* \big( (y(T)-y_d)\delta_T \big) -
S_{\tau}^* \big( (y(T)-y_d)\widehat\delta_T \big),
U-u
}_{\Omega \times (0,T)}, \\
\mathbb I_2 &:=
\dual{
S_{\tau}^* \big( (y(T) - Y(T-))\widehat\delta_T \big),
U-u
}_{\Omega \times (0,T)}.
\end{align*}
It is clear that
\[
\mathbb I_1 \leqslant C_{u_*,u^*,\Omega}
\nm{
S^*((y(T)-y_d)\delta_T) -
S_\tau^*((y(T)-y_d)\widehat\delta_T)
}_{L^1(0,T;L^2(\Omega))},
\]
by the fact that $ u, U \in U_\text{ad} $. A straightforward computation
yields
\begin{align*}
\mathbb I_2 & = \dual{
(y(T) - Y(T-))\widehat\delta_T,
S_\tau(U-u)
}_{\Omega \times (0,T)} \quad\text{(by \eqref{eq:Stau-dual})} \\
&= \dual{ y(T)-Y(T-), (S_{\tau}(U-u))(T-) }_{\Omega}
\quad\text{(by \eqref{eq:delta_T})} \\
&= \dual{ (Su)(T)-(S_\tau U)(T-), (S_{\tau}(U-u))(T-) }_{\Omega}
\quad \text{(by (\ref{eq:optim-y}) and (\ref{eq:optim-Y}))} \\
&= \dual{
(Su)(T) - (S_{\tau}u)(T-),
(S_{\tau}(U-u))(T-)
}_\Omega -
\nm{(S_{\tau}(u-U))(T-)}_{L^2(\Omega)}^2 \\
& \leqslant
\frac12 \nm{(Su)(T) - (S_{\tau}u)(T-)}_{L^2(\Omega)}^2 -
\frac12 \nm{(S_{\tau}(u-U))(T-)}_{L^2(\Omega)}^2 \\
& \leqslant
\nm{(Su)(T) - (S_{\tau}u)(T-)}_{L^2(\Omega)}^2 -
\frac12 \nm{(Su)(T) - (S_\tau U)(T-)}_{L^2(\Omega)}^2.
\end{align*}
Combining \eqref{eq:I1+I2} and the above estimates of $ \mathbb I_1 $ and $
\mathbb I_2 $ gives \eqref{eq:ava}.
Then, by the symmetric version of \eqref{eq:S-Stau-delta} we obtain
\begin{align*}
& \nm{
S^*((y(T)-y_d)\delta_T) -
S_{\tau}^*((y(T)-y_d)\widehat\delta_T)
}_{L^1(0,T;L^2(\Omega))} \\
\leqslant{} &
C_{\omega_0,\mathcal M_0} \tau^\alpha
\left(
\frac1\alpha + \frac{1-J^{\alpha-1}}{1-\alpha}
\right) \nm{y(T) - y_d}_{L^2(\Omega)},
\end{align*}
so that \eqref{eq:yT} implies
\begin{align}
& \nm{
S^*((y(T)-y_d)\delta_T) - S_{\tau}^*((y(T)-y_d)\widehat\delta_T)
}_{L^1(0,T;L^2(\Omega))} \notag \\
\leqslant{} &
C_{u_*,u^*,\omega_0,\mathcal M_0,T,\Omega} \tau^\alpha
\left(
\frac1\alpha + \frac{1-J^{\alpha-1}}{1-\alpha}
\right) (1/\alpha + \nm{y_d}_{L^2(\Omega)}) \notag \\
\leqslant{} &
C_{y_d,u_*,u^*,\omega_0,\mathcal M_0,T,\Omega}
\left(
\frac1{\alpha^2} + \frac{1-J^{\alpha-1}}{1-\alpha}
\right) \tau^\alpha.
\label{eq:eve-1}
\end{align}
We obtain from \eqref{eq:S-Stau-g-2} that
\begin{equation}
\label{eq:eve-2}
\nm{(Su)(T) - (S_\tau u)(T-)}_{L^2(\Omega)}
\leqslant C_{u_*,u^*,\omega_0,\mathcal M_0,\Omega}
\left(
\frac1\alpha + \frac{1-J^{\alpha-1}}{1-\alpha}
\right) \tau^\alpha.
\end{equation}
Finally, combining \eqref{eq:ava}, \eqref{eq:eve-1} with \eqref{eq:eve-2} gives
\begin{align*}
& \nm{(Su)(T) - (S_\tau U)(T-)}_{L^2(\Omega)} +
\sqrt\nu \nm{u-U}_{L^2(0,T;L^2(\Omega))} \\
\leqslant{} &
C_{y_d,u_*,u^*,\omega_0,\mathcal M_0,T,\Omega}
\left(
\frac1\alpha + \left(\frac{1-J^{\alpha-1}}{1-\alpha}\right)^{1/2} +
\frac{1-J^{\alpha-1}}{1-\alpha} \tau^{\alpha/2}
\right) \tau^{\alpha/2},
\end{align*}
which, together with (\ref{eq:optim-y}) and (\ref{eq:optim-Y}), implies
\eqref{eq:conv}. This completes the proof.
\end{proof}
\begin{remark}
Let $ y_T $ be defined in Remark \ref{rem:nu=0}. Combining \eqref{eq:731} and \eqref{eq:conv}
yields
\begin{small}
\begin{align*}
& \nm{y_T - Y({T-})}_{L^2(\Omega)} \\
\leqslant{} &
C_{y_d,u_*,u^*,\omega_0,\mathcal M_0,T,\Omega} \left(
\sqrt\nu +
\left(
\frac1\alpha + \left(\frac{1-J^{\alpha-1}}{1-\alpha}\right)^{1/2} +
\frac{1-J^{\alpha-1}}{1-\alpha} \tau^{\alpha/2}
\right) \tau^{\alpha/2}
\right).
\end{align*}
\end{small}
\end{remark}
\section{Numerical experiments}
\label{sec:numer}
This section presents three numerical experiments in one space dimension to
verify the theoretical results, in the following settings: $ T= 0.1 $; $ \Omega
= (0,1) $; $ \mathcal A = \Delta $; the spatial discretization is a standard
Galerkin finite element method with the space
\[
\mathcal V_h := \left\{
v_h \in H_0^1(0,1): v_h \text{ is linear on }
\big( (m\!-\!1)/2^{10}, m/2^{10} \big) \,\,
\text{for all $ 1 \leqslant m \leqslant 2^{10} $}
\right\}.
\]
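One possible realization of this setting (our own sketch, not the code that
produced the reported results) assembles the standard P1 mass and stiffness
matrices on a uniform grid and combines them with the L1 marching \eqref{eq:L1};
each time step then requires one linear solve with the fixed matrix
$ b_1 M + \tau^\alpha K $, where $ M $ is the mass matrix and $ K $ the stiffness
matrix of $ -\Delta $. For readability the sketch below uses a coarser grid than
the $ 2^{10} $ cells above and approximates the load vector by nodal
interpolation of $ g $.
\begin{verbatim}
import numpy as np
from math import gamma

# P1 finite elements on (0,1); the experiments above use 2^10 cells, we take
# 2^7 here to keep this sketch light (interior nodes, Dirichlet conditions)
n_cells = 2**7
h = 1.0 / n_cells
m = n_cells - 1
M = (np.diag(2.0/3.0*h*np.ones(m)) + np.diag(h/6.0*np.ones(m-1), 1)
     + np.diag(h/6.0*np.ones(m-1), -1))                   # mass matrix
K = (np.diag(2.0*np.ones(m)) + np.diag(-np.ones(m-1), 1)
     + np.diag(-np.ones(m-1), -1)) / h                    # stiffness of -Laplacian
x = np.linspace(h, 1.0 - h, m)

def l1_fem(alpha, J, T, load):
    """L1/P1 marching for (D^alpha - Laplace) y = g, y(0) = 0: load(k) must
    return the vector of int_{t_{k-1}}^{t_k} (g(t,.), phi_i) dt."""
    tau = T / J
    b = np.array([j**(1.0 - alpha) / gamma(2.0 - alpha) for j in range(J + 2)])
    S = b[1]*M + tau**alpha * K                            # fixed step matrix
    W = []
    for k in range(1, J + 1):
        rhs = tau**(alpha - 1.0) * load(k)
        for j in range(1, k):
            rhs = rhs - (b[k-j+1] - 2.0*b[k-j] + b[k-j-1]) * (M @ W[j-1])
        W.append(np.linalg.solve(S, rhs))
    return W

# Experiment 2 setup: g(t,x) = x^(-0.49); (g, phi_i) approximated by M @ g(x_i)
T, J, alpha = 0.1, 2**7, 0.5
gv = x**(-0.49)
W = l1_fem(alpha, J, T, load=lambda k: (T / J) * (M @ gv))
print(len(W), float(np.max(np.abs(W[-1]))))
\end{verbatim}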
\noindent{\it Experiment 1.} The purpose of this experiment is to verify
\eqref{eq:S-Stau-vdelta} and \eqref{eq:S-Stau-vdelta-l1}. We set $ v(x) := x^{-0.49} $, $ 0 <
x < 1 $, and let
\begin{align*}
e_T &:= \nm{
S_{\tau,J}(v\delta_0) -
S_{\tau^*,J^*}(v\delta_0)
}_{L^2(\Omega)}, \\
e_{l1} &:= \sum_{j=1}^{J^*} T/J^*
\nm{
S_{\tau, \left\lceil j J/J^* \right\rceil}(v\delta_0) -
S_{\tau^*, j} (v\delta_0)
}_{L^2(\Omega)},
\end{align*}
where $ J^* := 2^{15} $, $ \tau^* = T/J^* $, and $ \lceil \cdot \rceil $ is the
ceiling function. Table \ref{tab:Svdelta-alpto1} shows that $
e_T/(\tau^{\alpha-1}J^{\alpha-2}) $ does not blow up as $ \alpha \to {1-} $,
which agrees well with \eqref{eq:S-Stau-vdelta}. The numerical results in Figure
\ref{fig:ex1} illustrate that $ e_T $ is close to $ O(\tau) $, and this also agrees
well with \eqref{eq:S-Stau-vdelta}. The numerical results in Figure
\ref{fig:ex1_l1} demonstrate that $ e_{l1} $ is close to $ O(\tau^\alpha) $,
and this is in good agreement with \eqref{eq:S-Stau-vdelta-l1}.
\begin{table}[H]
\caption{$ e_T / (\tau^{\alpha-1} J^{\alpha-2}) $ of Experiment 1.}
\label{tab:Svdelta-alpto1}
\begin{center}
\begin{tabular}{cccc}
\toprule
$\alpha$ & $J=2^7$ & $J=2^8$ & $J=2^9$ \\
\midrule
$0.90$ & 5.35e-3 & 5.19e-3 & 5.03e-3 \\
$0.95$ & 5.13e-3 & 4.90e-3 & 4.74e-3 \\
$0.99$ & 4.37e-3 & 4.10e-3 & 3.94e-3 \\
$0.999$ & 4.10e-3 & 3.82e-3 & 3.66e-3 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\begin{figure}[H]
\begin{minipage}[t]{0.5\linewidth}
\includegraphics[scale=0.5]{ex1.eps}
\caption{$e_T$ of Experiment 1.}
\label{fig:ex1}
\end{minipage}%
\begin{minipage}[t]{0.5\linewidth}
\includegraphics[scale=0.5]{ex1_l1}
\caption{$e_{l1}$ of Experiment 1.}
\label{fig:ex1_l1}
\end{minipage}
\end{figure}
\noindent{\it Experiment 2.} The purpose of this experiment is to verify
\eqref{eq:S-Stau-g}. To this end, we set
\[
f(t,x) := x^{-0.49}, \quad 0 < t < T, \quad 0 < x < 1,
\]
and define
\[
e_\infty := \max_{1 \leqslant j \leqslant J}
\nm{
S_{\tau,j}f - S_{\tau^*,\lceil jJ^*/J \rceil} f
}_{L^2(\Omega)},
\]
where $ J^* = 2^{15} $ and $ \tau^* = T/J^* $. The numerical results in
Figure \ref{fig:ex2} show that $ e_\infty $ decays at a rate close to $ O(\tau^\alpha) $,
which is in good agreement with \eqref{eq:S-Stau-g}.
\begin{figure}[H]
\centering
\includegraphics[scale=.5]{einfty.eps}
\caption{$e_\infty$ of Experiment 2.}
\label{fig:ex2}
\end{figure}
\medskip\noindent{\it Experiment 3.} The purpose of this experiment is to verify
Theorem \ref{thm:conv}, in the following settings: $ a=0 $; $ b=10 $; $ \nu = 10 $; $
y_d :\equiv 1 $. The discretization \eqref{eq:numer_opti} is solved by the following
iteration algorithm (cf.~\cite[Algorithm 3.2]{Hinze2009}):
\begin{small}
\begin{align*}
& U_0 := 0, \\
& U_j = f(S_\tau^*(((S_\tau U_{j-1})(T-)-y_d)\widehat\delta_T)),
\quad 1 \leqslant j \leqslant k,
\end{align*}
\end{small}
where $ f $ is defined by \eqref{eq:f} and $ k $ is large enough such that
\[
\nm{U_k - U_{k-1}}_{L^\infty(0,T;L^\infty(\Omega))} < 10^{-12}.
\]
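For clarity, the structure of the above iteration is summarized in the following Python sketch; this is only an illustration. The callables \texttt{S\_tau\_terminal}, \texttt{S\_tau\_star} and \texttt{f} are hypothetical placeholders for $ U \mapsto (S_\tau U)(T-) $, for $ w \mapsto S_\tau^*(w\widehat\delta_T) $, and for the map defined by \eqref{eq:f}, respectively; the actual discrete operators are those defined earlier in the paper.
\begin{verbatim}
import numpy as np

def solve_control(S_tau_terminal, S_tau_star, f, y_d, U0,
                  tol=1e-12, max_iter=10000):
    # Fixed-point iteration U_j = f( S_tau_star( S_tau_terminal(U_{j-1}) - y_d ) ),
    # stopped once successive iterates differ by less than tol in the max norm.
    U_prev = U0
    for _ in range(max_iter):
        U = f(S_tau_star(S_tau_terminal(U_prev) - y_d))
        if np.max(np.abs(U - U_prev)) < tol:
            return U
        U_prev = U
    return U_prev
\end{verbatim}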
The ``Error'' in Figure \ref{fig:ex3} denotes
\[
\nm{Y(T-)-Y^*(T-)}_{L^2(\Omega)} +
\nm{U - U^*}_{L^2(0,T;L^2(\Omega))},
\]
where $ U^* $ and $ Y^* $ are the numerical solutions computed with $ J =2^{15} $. The
theoretical convergence rate $ O(\tau^{\alpha/2}) $ is observed in Figure
\ref{fig:ex3}.
\begin{figure}[H]
\centering
\includegraphics[scale=.5]{ex3.eps}
\caption{Numerical results of Experiment 3.}
\label{fig:ex3}
\end{figure}
\bibliographystyle{plain}
\section{Introduction}\label{sec:intro}
Given a bounded domain $\Omega \subset \Cb^d$, let $K_\Omega: \Omega\times \Omega \rightarrow \Rb_{\geq 0}$ be the Kobayashi distance and let $k_\Omega: \Omega\times \Cb^d \rightarrow \Rb_{\geq 0}$ be the infinitesimal Kobayashi metric (also known as the Kobayashi--Royden metric). One aim of this paper is to present some new techniques\,---\,which we shall use to study the behavior of holomorphic maps into a bounded domain $\Omega$\,---\,that arise from certain intuitions in metric geometry applied to the metric space $(\Omega, K_{\Omega})$. A class of domains that has been studied extensively in the literature is the class of bounded pseudoconvex domains of finite type (which includes the class of bounded strongly pseudoconvex domains). For such a domain $\Omega$, there exist constants $c, \epsilon>0$ such that
\begin{align*}
k_\Omega(x;v) \geq \frac{c\norm{v}}{\delta_\Omega(x)^\epsilon} \; \; \text{for all $x \in \Omega$ and $v \in \Cb^d$},
\end{align*}
where $\delta_\Omega(x)$ is the distance from $x$ to $\partial\Omega$ (see \cite{C1992}; also see Section~\ref{sec:examples} for explanations). Moreover, in this case, because $\partial\Omega$ has (at least) $C^2$ regularity, it is easy to establish that for each $x_0 \in \Omega$ there exists a constant $C> 0$ (depending on $x_0$) such that
\begin{align*}
K_\Omega(x_0, x) \leq C + \frac{1}{2}\log \frac{1}{\delta_\Omega(x)} \; \; \text{for all $x \in \Omega$}.
\end{align*}
We shall show that similar bounds on the growth of $k_\Omega$ and $K_\Omega$ as one approaches the boundary underlie a variety of results on the complex geometry and complex dynamics associated to $\Omega$ (for any bounded domain $\Omega$ admitting such bounds).
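For the reader's convenience, here is a sketch of the second estimate above; it is only an illustration, carried out under the simplifying assumption that $\Omega$ satisfies a uniform interior-ball condition with some radius $\rho > 0$ (which the $C^2$ regularity of $\partial\Omega$ provides). Given $x\in \Omega$ with $\delta_\Omega(x) < \rho$, pick $\xi\in \partial\Omega$ with $\norm{x-\xi} = \delta_\Omega(x)$, and let $B\subset \Omega$ be the ball of radius $\rho$ that is internally tangent to $\partial\Omega$ at $\xi$, with centre $p$. Then $x$ lies on the segment joining $\xi$ to $p$, whence
\begin{align*}
K_\Omega(x_0, x) \leq K_\Omega(x_0, p) + K_B(p, x)
= K_\Omega(x_0, p) + \tanh^{-1}\!\left(\frac{\rho - \delta_\Omega(x)}{\rho}\right)
\leq K_\Omega(x_0, p) + \frac{1}{2}\log\frac{2\rho}{\delta_\Omega(x)},
\end{align*}
and $K_\Omega(x_0, p)$ is bounded uniformly in $x$ since $p$ varies in a compact subset of $\Omega$; the points $x$ with $\delta_\Omega(x)\geq \rho$ also form a compact subset of $\Omega$, on which $K_\Omega(x_0, \boldsymbol{\cdot})$ is bounded.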
To measure the growth rate of the Kobayashi metric as one approaches the boundary we introduce the following function for a bounded domain $\Omega \subset \Cb^d$:
\begin{align*}
M_\Omega(r) := \sup\left\{ \frac{1}{k_{\Omega}(x;v)} : \delta_\Omega(x) \leq r, \norm{v}=1\right\}.
\end{align*}
We are now in a position to define the following class of domains:
\begin{definition}\label{def:good_domains}
A bounded domain $\Omega \subset \Cb^d$ is a \emph{Goldilocks domain} if
\begin{enumerate}
\item for some (hence any) $\epsilon >0$ we have
\begin{align*}
\int_0^\epsilon \frac{1}{r} M_\Omega\left(r\right) dr < \infty,
\end{align*}
\item for each $x_0 \in \Omega$ there exist constants $C, \alpha > 0$ (that depend on $x_0$) such that
\begin{align*}
K_\Omega(x_0, x) \leq C + \alpha \log \frac{1}{\delta_\Omega(x)}
\end{align*}
for all $x \in \Omega$.
\end{enumerate}
\end{definition}
\begin{remark}
Given a bounded domain $\Omega$, the function $M_{\Omega}$ is clearly monotone non-decreasing, hence is Lebesgue measurable. Thus, the integral in (1) of Definition~\ref{def:good_domains} is well defined.
\end{remark}
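As a simple illustration of Definition~\ref{def:good_domains}, consider the unit disc $\Delta\subset \Cb$ (the constants below are not optimized). Since $k_\Delta(x; v) = \abs{v}/(1-\abs{x}^2)$ and $\delta_\Delta(x) = 1-\abs{x}$, we have
\begin{align*}
\frac{1}{k_\Delta(x; 1)} = (1-\abs{x})(1+\abs{x}) \leq 2\,\delta_\Delta(x),
\end{align*}
whence $M_\Delta(r)\leq 2r$ and Condition~1 holds. Moreover,
\begin{align*}
K_\Delta(0, x) = \frac{1}{2}\log\frac{1+\abs{x}}{1-\abs{x}} \leq \frac{1}{2}\log 2 + \frac{1}{2}\log\frac{1}{\delta_\Delta(x)},
\end{align*}
so Condition~2 holds as well; thus $\Delta$ is a Goldilocks domain. (The unit ball $\Bc\subset \Cb^d$ is a Goldilocks domain too, as follows, for instance, from the discussion of finite-type domains in the opening paragraph of this section.)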
\begin{remark}\label{rem:just_right}
The slightly unusual phrase ``Goldilocks domain'' is intended to point to the fact that if $\Omega$ is a Goldilocks domain, then $\partial\Omega$ lies in between (and avoids) the extremes of having outward-pointing cusps and of having points at which $\partial\Omega$ is flat to infinite order and is, in a precise sense, too flat. A classical argument for planar domains, for instance, implies that the first situation is ruled out by Condition~2 above. Condition~1, it turns out, rules out domains that are not pseudoconvex: i.e., \emph{Goldilocks domains are pseudoconvex}. We discuss all this in more detail in Section~\ref{sec:examples}.
\end{remark}
We have \emph{deliberately} chosen to define Goldilocks domains in rather abstract terms. One objective of this work is to introduce methods that are rooted in metric geometry and are applied to the metric space $(\Omega, K_\Omega)$. The crucial properties that animate these methods are most clearly illustrated when working with domains that satisfy the conditions in Definition~\ref{def:good_domains}.
One deduces from the first paragraph of this section that bounded pseudoconvex domains of finite type are always Goldilocks domains. We shall see, in Section~\ref{sec:examples}, a diverse range of other domains\,---\,described in more geometrically explicit terms\,---\,that are Goldilocks domains. Consequently, we are able to establish, among several other results, extensions of the following widely-studied phenomena:
\begin{itemize}
\item Wolff--Denjoy theorems in higher dimensions,
\item continuous boundary-extension of proper holomorphic maps
\end{itemize}
to a new range of domains. To give a sense of the uses that the methods hinted at are put to: a pseudoconvex domain $\Omega\Subset \Cb^d$, $d\geq 2$, of finite type is, in general, non-convex and there may be points $\xi \in \partial\Omega$ around which $\Omega$ is not locally convexifiable. Such an $\Omega$ has \emph{very little} resemblance to a convex domain, yet a form of the Wolff--Denjoy theorem holds on $\Omega$\,---\,see Corollary~\ref{cor:WD_finite-type}.
We now introduce the main theorems of this paper.
\subsection{Negative curvature and the Kobayashi metric} It is well known that the unit ball $\Bc \subset \Cb^d$ endowed with the Kobayashi metric is isometric to complex hyperbolic space, which is a canonical example of a negatively curved Riemannian manifold. Based on this example, it is natural to conjecture that domains that are similar to the unit ball also have a negatively curved Kobayashi metric. One problem with this conjecture is that the infinitesimal Kobayashi metric on a general domain is not Riemannian and has low regularity, making a notion of infinitesimal curvature difficult to define. One remedy to this problem is to consider a coarse notion of negative curvature introduced by Gromov~\cite{G1987} which is now called Gromov hyperbolicity.
Along these lines Balogh and Bonk~\cite{BB2000} proved that the Kobayashi distance on a bounded strongly pseudoconvex domain is Gromov hyperbolic. The Kobayashi distance is also Gromov hyperbolic for bounded convex domains whose boundary has finite type in the sense of D'Angelo~\cite{Z2014}.
We will show, in Section~\ref{sec:examples}, some examples of Goldilocks domains where the Kobayashi distance is not Gromov hyperbolic. However, a key part of this paper is to show that the Kobayashi metric on a Goldilocks domain does have some negative-curvature-like behavior. In particular we are motivated by a definition of Eberlein and O'Neill~\cite{EO1973} who call a non-positively curved simply connected Riemannian manifold $X$ \emph{a visibility space} if for every two points $\xi, \eta$ in the ideal boundary $\partial X$ and neighborhoods $V_\xi, V_\eta$ of $\xi,\eta$ in $X \cup \partial X$ so that $\overline{V_\xi} \cap \overline{V_\eta} = \emptyset$ there exists a compact set $K \subset X$ with the following property: if $\sigma: [0,T] \rightarrow X$ is a geodesic with $\sigma(0) \in V_\xi$ and $\sigma(T) \in V_\eta$ then $\sigma \cap K \neq \emptyset$ (see also ~\cite[page 54]{BGS1985} or~\cite[page 294]{BH1999}). Informally, this states that geodesics between two distinct points at infinity bend into the space.
It is well known that a complete negatively curved simply connected Riemannian manifold $X$ is always a visibility space and, more generally, a proper geodesic Gromov hyperbolic metric space always satisfies a visibility type condition (see for instance~\cite[page 428]{BH1999}).
In the context of a Goldilocks domain $\Omega$, we do not know that the metric space $(\Omega, K_\Omega)$ is Cauchy complete and in particular we do not know whether or not every two points can be joined by a geodesic. This leads us to consider a more general class of curves which we call \emph{almost-geodesics} (defined in Section~\ref{sec:curves}). We will then prove:
\begin{theorem}\label{thm:visible} (see Section~\ref{sec:visible})
Suppose $\Omega \subset \Cb^d$ is a Goldilocks domain and $\lambda \geq 1$, $\kappa \geq 0$. If $\xi,\eta \in \partial\Omega$ and $V_\xi, V_\eta$ are neighborhoods of $\xi,\eta$ in $\overline{\Omega}$ so that $\overline{V_\xi} \cap \overline{V_\eta} = \emptyset$ then there exists a compact set $K \subset \Omega$ with the following property: if $\sigma: [0,T] \rightarrow \Omega$ is a $(\lambda, \kappa)$-almost-geodesic with $\sigma(0) \in V_\xi$ and $\sigma(T) \in V_\eta$ then $\sigma \cap K \neq \emptyset$.
\end{theorem}
This theorem makes intuitive sense: on a Goldilocks domain the Kobayashi metric grows rapidly as one approaches the boundary and so length minimizing curves wish to spend as little time as possible near the boundary. This leads to the phenomenon of such curves bending into the domain and intersecting some fixed compact set. A key point of this paper is giving a precise condition on the rate of blow-up, namely Definition~\ref{def:good_domains}, which leads to this behavior.
There are several visibility type results in the literature. Chang, Hu, and Lee studied the limits of complex geodesics in strongly convex domains and proved a visibility type result for complex geodesics; see~\cite[Section 2]{CHL1988}. Mercer~\cite{M1993} extended these results to \emph{$m$-convex domains}, that is, bounded convex domains $\Omega$ for which there exist constants $C > 0$ and $m > 0$ such that
\begin{equation}
\label{est:m_convex}
\inf\left\{ \norm{x-\xi} : \xi \in \partial \Omega \cap (x + \Cb \boldsymbol{\cdot} v)\right\} \leq C\delta_{\Omega}(x)^{1/m} \; \;
\forall x\in \Omega \text{ and } \forall v\in \Cb^d\setminus\{0\}.
\end{equation}
Notice that every strongly convex set is $2$-convex. Finally, Karlsson~\cite[Lemma 36]{K2005b} proved a visibility result for geodesics in bounded domains $\Omega$ that satisfy estimate~\eqref{est:m_convex}, have Cauchy-complete Kobayashi metric, and have $C^{1,\alpha}$ boundary.
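To illustrate estimate~\eqref{est:m_convex} (a sketch; the constant is not optimized), consider the unit ball $\Bc\subset \Cb^d$. Given $x\in \Bc$ and a unit vector $v$, write $x = x_{\parallel} + x_{\perp}$ with $x_{\parallel}\in \Cb\boldsymbol{\cdot} v$ and $x_{\perp}\perp v$; the set $\partial\Bc\cap (x+\Cb\boldsymbol{\cdot} v)$ is a circle and a short computation gives
\begin{align*}
\inf\left\{ \norm{x-\xi} : \xi \in \partial \Bc \cap (x + \Cb \boldsymbol{\cdot} v)\right\}
= \sqrt{1-\norm{x_\perp}^2} - \norm{x_\parallel}
\leq \sqrt{1-\norm{x}^2}
\leq \sqrt{2}\,\delta_{\Bc}(x)^{1/2},
\end{align*}
with equality in the first inequality when $v\perp x$. Thus $\Bc$ is $2$-convex, and the exponent cannot be improved.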
\subsection{Continuous extensions of proper holomorphic maps}
The earliest result on the continuous extension up to the boundary of a proper holomorphic map between a pair of domains $D$ and $\Omega$ in $\Cb^d$, $d\geq 2$, with no other assumptions on the map, was established by Pin{\v{c}}uk \cite{P1974}. In that work, the domains $D$ and $\Omega$ are assumed to be strongly pseudoconvex with $C^2$ boundaries. Owing to strong pseudoconvexity, it is shown that the continuous extension of the map from $D$ to $\Omega$ satisfies a H{\"o}lder condition on $\overline{D}$ with H{\"o}lder exponent $1/2$. Soon thereafter, the focus of the question of boundary regularity of a proper holomorphic map between a pair of domains of the same dimension shifted largely to domains with $C^\infty$ boundaries, and to obtaining \emph{smooth} extension to the boundary, of the given proper map, largely due to various Bergman-kernel methods introduced by Fefferman \cite{F1974}\,---\,who considered biholomorphisms\,---\,and by Bell--Ligocka \cite{BL1980} and Bell \cite{B1981}. The literature on the smooth extension of proper holomorphic maps is truly enormous, and we refer the interested reader to the survey \cite{F1993} by Forstneri{\v{c}}.
The latter methods are not helpful if the boundary of either of $D$ or $\Omega$ has low regularity, or if either domain is assumed to be merely pseudoconvex (i.e., without any finite-type condition on the boundary). In this context, the methods of Diederich--Forn{\ae}ss \cite{DF1979} are helpful. The idea of using the Kobayashi metric was first introduced in \cite[Theorem~1]{DF1979}, and, in that theorem, the requirement of strong pseudoconvexity of $D$ in Pin{\v{c}}uk's theorem is dropped. In this paper, we generalize \cite[Theorem~1]{DF1979} by allowing the target domain to have non-smooth boundary. We point to Section~\ref{sec:examples} for a sense of how irregular $\partial\Omega$, for $\Omega$ as in Theorem~\ref{thm:proper} below, can be. One frequently encounters proper holomorphic maps of smoothly-bounded domains whose images have non-smooth boundary; consider the various examples of maps between Reinhardt domains. In the latter setting and in low dimensions, continuous extension up to the boundary follows from an explicit description of the proper map in question\,---\,see \cite{IK2006}, for instance. Even in $\Cb^2$ though, establishing such descriptions as in \cite{IK2006} is a highly technical effort. In contrast, the question of continuous extension\,---\,and for a variety of boundary geometries for the target space\,---\,is settled by the following theorem.
\begin{theorem}\label{thm:proper} (see Section~\ref{sec:proper})
Let $D$ and $\Omega$ be bounded domains in $\Cb^d$. Suppose $D$ is pseudoconvex with $C^2$-smooth boundary, and $\Omega$ is a Goldilocks domain satisfying an interior-cone condition. Any proper holomorphic map $F: D\rightarrow \Omega$ extends to a continuous map on $\conj{D}$.
\end{theorem}
We refer the reader to Section~\ref{sec:examples} for a definition of the interior-cone condition. This cone condition on $\Omega$ above allows us to adapt certain ideas in \cite{DF1979}. Here is a sketch of the proof: using a type of Hopf lemma, which is available due to the cone condition on $\Omega$, we first show that $\delta_\Omega(F(z)) \leq c \delta_D(z)^{\eta}$ for some $c, \eta > 0$. Now suppose $\xi \in \partial D$ and $\boldsymbol{\nu}(\xi)$ is the inward-pointing normal ray, then the rapid growth of the Kobayashi metric is used to show that the curve $F(\xi+t\boldsymbol{\nu}(\xi))$ does not oscillate very much as $t \searrow 0$ and, in particular, one obtains a continuous extension to $\partial D$. As in the Theorem~\ref{thm:visible} above, the key point is to have the precise rate of blow-up necessary to obtain such behavior.
\subsection{Continuous extensions of quasi-isometric embeddings}\label{ssec:cont_extn}
We can also prove continuous extensions of certain non-holomorphic maps between domains of different dimensions. A map $F: (X, d_X) \rightarrow (Y, d_Y)$ between two metric spaces is called a \emph{$(\lambda, \kappa)$-quasi-isometric embedding} if there exist constants $\lambda \geq 1$ and $\kappa \geq 0$ so that
\begin{align*}
\frac{1}{\lambda} d_Y(F(x_1), F(x_2))-\kappa \leq d_X(x_1, x_2) \leq \lambda d_Y(F(x_1), F(x_2))+\kappa
\end{align*}
for all $x_1,x_2\in X$.
There are two motivations for investigating continuous extensions of quasi-isometric embeddings. Our main motivation stems from Lempert's theorem \cite{L1981}\,---\,and its generalization to convex domains with non-smooth boundaries by Royden and Wong \cite{RW1983}\,---\,which establish that there exist complex geodesics between any pair of points of a convex domain in $\Cb^d$, $d\geq 2$. A \emph{complex geodesic} of a domain $\Omega$ is a holomorphic map from $\Delta$ (the open unit disk in $\Cb$) to $\Omega$ that is an isometric embedding of $(\Delta, K_{\Delta})$ into $(\Omega, K_{\Omega})$. It is natural to ask whether a complex geodesic extends continuously up to $\partial\Delta$. This question has been examined\,---\,beginning with Lempert's result for strongly convex domains with $C^3$-smooth boundary \cite{L1981}\,---\,from various perspectives \cite{M1993, B2016, Z2015}, but:
\begin{itemize}
\item a comprehensive answer to this question is still forthcoming;
\item little is known in general, at present, for domains that are \emph{non-convex} and admit complex geodesics.
\end{itemize}
We ought to mention that all complex geodesics of the symmetric twofold product of $\Delta$ (also known as the \emph{symmetrized bidisc})\,---\,which is not biholomorphic to any convex domain; see \cite{C2004}\,---\,extend continuously up to $\partial\Delta$. This follows from the work of Agler--Young \cite{AY2004} and Pflug--Zwonek \cite{PZ2005}. Those results rely heavily on the specific properties of the symmetrized bidisc.
However, there is a general approach to answering this question. When $(X, d_X)$ is a proper geodesic Gromov hyperbolic metric space, $X$ has a natural boundary $X(\infty)$ ``at infinity'', and the set $X\cup X(\infty)$ has a topology that makes it a compactification of $X$ (see Section~\ref{sec:gromov_prod} below for more details). One of the fundamental properties of this compactification is the extension of quasi-isometries:
\begin{result} \label{res:gromov_qi_ext}(see for instance~\cite[Chapter III.H, Theorem 3.9]{BH1999})
Suppose $(X,d_X)$ and $(Y,d_Y)$ are two proper geodesic Gromov hyperbolic metric spaces. Then any continuous quasi-isometric embedding $F:(X, d_X) \rightarrow (Y,d_Y)$ extends to a continuous map $\wt{F} :X \cup X(\infty) \rightarrow Y \cup Y(\infty)$.
\end{result}
It is very easy to see that $(\Delta, K_{\Delta})$ is Gromov hyperbolic, with $\Delta(\infty) = \partial\Delta$. Thus, if one could show that $(\Omega, K_{\Omega})$ satisfies all the conditions in Result~\ref{res:gromov_qi_ext} and that $\Omega(\infty) = \partial\Omega$\,---\,where $\Omega$ is a domain that admits complex geodesics\,---\,then one would have an answer to the above question.
However, by the main theorem of \cite{Z2014}, if $\Omega\subset \Cb^d$, $d\geq 2$, is a smoothly bounded convex domain having infinite-type points $\xi\in \partial\Omega$ (i.e., $T_\xi(\partial\Omega)$ has infinite order of contact with $\partial\Omega$ along a \emph{complex} direction in $T_\xi(\partial\Omega)$) then $(\Omega, K_{\Omega})$ is not Gromov hyperbolic. Thus, approaches other than Result~\ref{res:gromov_qi_ext} are of interest. Independently of all this, it would be interesting in itself to prove an analogue of Result~\ref{res:gromov_qi_ext} in which, working in the category of domains in $\Cb^d$, the Gromov-hyperbolicity assumption on either of $(X, d_X)$ or $(Y, d_Y)$ (or both) is supplanted by a \emph{strictly weaker} assumption by taking advantage of our knowledge of the Kobayashi metric. The latter is further motivation for the following analogue of Result~\ref{res:gromov_qi_ext}:
\begin{theorem}(see Theorem~\ref{thm:quasi_isometry_ext} below)\label{thm:qi_ext}
Let $D$ be a bounded domain in $\Cb^k$ and suppose $(D, K_D)$ is a proper geodesic Gromov hyperbolic metric space. Let $\Omega\subset \Cb^d$ be a Goldilocks domain. If $F :(D, K_D) \rightarrow (\Omega, K_\Omega)$ is a continuous quasi-isometric embedding, then there exists a continuous extension $\wt{F} : D \cup D(\infty) \rightarrow \overline{\Omega}$.
\end{theorem}
\begin{remark} \
\begin{enumerate}
\item Our proof of Theorem~\ref{thm:qi_ext} will follow from that of Theorem~\ref{thm:quasi_isometry_ext} below. Theorem~\ref{thm:quasi_isometry_ext} is \emph{much more general} and the techniques used in its proof apply to a wide range of metric spaces (even with $(D, K_D)$ replaced by more general metric spaces) and compactifications. However, we shall focus on domains in this paper.
\item Theorem~\ref{thm:qi_ext} and Theorem~\ref{thm:quasi_isometry_ext} both represent applications of visibility. A key step of the proof is Proposition~\ref{prop:qg_visibility} below, where we establish a visibility result for quasi-geodesics.
\item If $\Omega$ is strongly pseudoconvex or convex with finite-type boundary, then $(\Omega,K_\Omega)$ is a proper geodesic Gromov hyperbolic metric space: see \cite{BB2000} and \cite{Z2014}, respectively. Hence in these cases, Theorem~\ref{thm:qi_ext} follows directly from Result~\ref{res:gromov_qi_ext}. However, proving that the Kobayashi metric is Gromov hyperbolic in either case is very involved, and our approach of using a visibility condition is much more direct.
\end{enumerate}
\end{remark}
Going back to our initial motivation for Theorem~\ref{thm:qi_ext}: it follows from this theorem that if $\varphi: \Delta \rightarrow \Omega$ is a complex geodesic into a Goldilocks domain, then $\varphi$ extends to a continuous map $\wt{\varphi} : \overline{\Delta} \rightarrow \overline{\Omega}$. This, in fact, extends known results to the case when $\Omega$ is not necessarily convex. We refer the reader to subsection~\ref{ssec:implications} below.
\subsection{Wolff--Denjoy Theorems}\label{ssec_WD} There has been considerable interest in understanding the behavior of iterates of a holomorphic map $f:\Omega \rightarrow \Omega$ on a bounded domain $\Omega$. Since $\Omega$ is bounded, for any subsequence $n_i \rightarrow \infty$ one can always find a subsequence $n_{i_j} \rightarrow \infty$ so that $f^{n_{i_j}}$ converges locally uniformly to a holomorphic map $F:\Omega \rightarrow \overline{\Omega}$. The general goal is to show that the behavior of each convergent subsequence is identical. This is demonstrated in the classical Wolff--Denjoy theorem:
\begin{result}[\cite{D1926, W1926}]\label{res:WD}
Suppose $f:\Delta \rightarrow \Delta$ is a holomorphic map then either:
\begin{enumerate}
\item $f$ has a fixed point in $\Delta$; or
\item there exists a point $\xi \in \partial \Delta$ so that
\begin{equation*}
\lim_{n \rightarrow \infty} f^n(x) = \xi
\end{equation*}
for any $x \in \Delta$, this convergence being uniform on compact subsets of $\Delta$.
\end{enumerate}
\end{result}
The above result was extended to the unit (Euclidean) ball in $\Cb^d$, for all $d$, by Herv{\'e} \cite{H1963}.
It was further generalized by Abate\,---\,see \cite{A1988} or \cite[Chapter~4]{A1989}\,---\,to strongly convex domains. The above theorem was later generalized to contractible strongly pseudoconvex domains by Hua~\cite{H1984} and to a variety of different types of convex domains (see for instance~\cite{A2014} and the references therein). Wolff--Denjoy theorems are also known to hold on certain metric spaces where a boundary at infinity replaces the topological boundary, see for instance~\cite{K2001} or~\cite{B1997}.
Using the visibility result, we will prove two Wolff--Denjoy theorems for Goldilocks domains. The first theorem concerns holomorphic maps on taut Goldilocks domains, while the second theorem considers maps that are 1-Lipschitz with respect to the Kobayashi distance on Goldilocks domains $\Omega$ for which $(\Omega, K_{\Omega})$ is Cauchy complete. Since every holomorphic map is 1-Lipschitz with respect to the Kobayashi distance, our second theorem considers a more general class of maps. On the other hand, because $\Omega$ is taut whenever $(\Omega, K_{\Omega})$ is Cauchy complete, our first theorem considers a more general class of domains.
It is not hard to see that the dichotomy presented by Result~\ref{res:WD} fails in general if the domain in question is not contractible. The following theorems present the dichotomy relevant to more general circumstances. Here are the precise statements:
\begin{theorem}\label{thm:WD} (see Section~\ref{sec:WD} below)
Suppose $\Omega \subset \Cb^d$ is a taut Goldilocks domain. If $f:\Omega \rightarrow \Omega$ is a holomorphic map then either:
\begin{enumerate}
\item for any $x \in \Omega$ the orbit $\{ f^n(x): n \in \Nb\}$ is relatively compact in $\Omega$; or
\item there exists $\xi \in \partial \Omega$ so that
\begin{equation*}
\lim_{n \rightarrow \infty} f^n(x) = \xi
\end{equation*}
for any $x \in \Omega$, this convergence being uniform on compact subsets of $\Omega$.
\end{enumerate}
\end{theorem}
\begin{theorem}\label{thm:m_WD} (see Section~\ref{sec:WD} below)
Suppose $\Omega \subset \Cb^d$ is a Goldilocks domain such that $(\Omega, K_\Omega)$ is Cauchy complete. If $f:\Omega \rightarrow \Omega$ is 1-Lipschitz with respect to the Kobayashi distance then either:
\begin{enumerate}
\item for any $x \in \Omega$ the orbit $\{ f^n(x): n \in \Nb\}$ is relatively compact in $\Omega$; or
\item there exists $\xi \in \partial \Omega$ so that
\begin{equation*}
\lim_{n \rightarrow \infty} f^n(x) = \xi
\end{equation*}
for any $x \in \Omega$, this convergence being uniform on compact subsets of $\Omega$.
\end{enumerate}
\end{theorem}
The assumption of Cauchy completeness of $(\Omega, K_{\Omega})$ provides tools, namely work of Ca{\l}ka \cite{C1984b}, that do not have analogues in the taut setting. In particular, the proof of Theorem~\ref{thm:WD} is much more intricate than the proof of Theorem~\ref{thm:m_WD}. However, tautness is a rather mild condition: for instance a bounded pseudoconvex domain with $C^1$ boundary is known to be taut~\cite{KR1981} (whereas it is unknown whether $(\Omega, K_{\Omega})$ is Cauchy complete if $\Omega$ is a weakly pseudoconvex domain of finite type in $\Cb^d$, $d > 2$). This allows one to state various types of corollaries of Theorem~\ref{thm:WD}\,---\,for instance, see Corollary~\ref{cor:WD_finite-type} below.
\subsection{Basic notations} We end the introduction by fixing some very basic notations.
\begin{enumerate}
\item For $z \in\Cb^d$, $\norm{z}$ will denote the standard Euclidean norm and, for $z_1, z_2 \in \Cb^d$, $d_{\Euc}(z_1, z_2) = \norm{z_1 - z_2}$ will denote the standard Euclidean distance.
\item $\Delta \subset \Cb$ will denote the open unit disk, and $\rho_\Delta$ will denote the Poincar{\'e} metric on $\Delta$.
\item For a point $z\in \Cb^d$ and $r > 0$, $B_r(z)$ will denote the open Euclidean ball with center $z$ and radius $r$.
\end{enumerate}
\subsection*{Acknowledgments}
Gautam Bharali is supported in part by a Swarnajayanti Fellowship (Grant No.~DST/SJF/MSA-02/2013-14) and by a UGC Centre for Advanced Study grant. Andrew Zimmer is partially supported by the National Science Foundation under Grant No.~NSF 1400919.
\section{Examples and corollaries}\label{sec:examples}
In this section we shall present certain broad classes of bounded domains\,---\,described in terms of rather explicit boundary properties\,---\,under which either Condition~1 or Condition~2 in the definition of a Goldilocks domain (i.e., Definition~\ref{def:good_domains}) is satisfied. Consequently, we shall see that Definition~\ref{def:good_domains} admits a truly wide range of bounded domains.
\subsection{Domains that satisfy Condition~2}\label{ssec:cond_2}
Lemma~\ref{lem:int_cone} below establishes that a simple property, which arises in several areas in analysis, ensures that any domain with this property satisfies Condition~2. We require a couple of definitions.
\begin{definition}
An \emph{open right circular cone with aperture $\theta$} is an open subset of $\Cb^d$ of the form
\begin{align*}
\{z\in \Cb^d : \Real[\,\langle z, v\rangle\,] > \cos(\theta/2)\norm{z}\}
=: \Gamma(v, \theta),
\end{align*}
where $v$ is some unit vector in $\Cb^d$, $\theta\in (0, \pi)$, and $\langle\boldsymbol{\cdot}\,,\,\boldsymbol{\cdot}\rangle$ is the standard
Hermitian inner product on $\Cb^d$. For any point $p\in \Cb^d$, the \emph{axis} of the (translated) cone $p+\Gamma(v, \theta)$ is the ray $\{p + tv: t > 0\}$.
\end{definition}
\begin{definition}\label{def:cone_cond}
Let $\Omega$ be a bounded domain in $\Cb^d$ and let $\theta\in (0, \pi)$. We say that $\Omega$ satisfies an \emph{interior-cone condition
with aperture $\theta$} if there exist a constant $r^0 > 0$ and a compact subset $K\subset \Omega$ such that for each $x\in \Omega\setminus K$, there exist a point $\xi_x\in \partial\Omega$ and a unit vector $v_x$ such that
\begin{itemize}
\item $x$ lies on the axis of the cone $\xi_x+\Gamma(v_x, \theta)$, and
\item $(\xi_x+\Gamma(v_x, \theta))\cap B_{r^0}(\xi_x) \subset \Omega$.
\end{itemize}
We say that $\Omega$ \emph{satisfies an interior-cone condition} if there exists a $\theta \in (0, \pi)$ so that $\Omega$ satisfies an interior-cone condition with aperture $\theta$.
\end{definition}
The proof of the following statement involves a mild adaptation of a technique used in \cite[Proposition~2.5]{FR1987} and in \cite[Proposition~2.3]{M1993}.
\begin{lemma}\label{lem:int_cone}
Let $\Omega$ be a bounded domain in $\Cb^d$ that satisfies an interior-cone condition with aperture
$\theta$. Then $\Omega$ satisfies Condition~2 in the definition of a Goldilocks domain.
\end{lemma}
\begin{proof}
For any $\beta > 1$, define the holomorphic map $\psi_\beta: \Delta\rightarrow \Cb$ by
\begin{align*}
\psi_\beta(\zeta) := (1+\zeta)^{1/\beta}.
\end{align*}
Given a unit vector $v\in \Cb^d$ and a number $r > 0$, define the holomorphic map
$\Psi(\boldsymbol{\cdot}\,;\,\beta, v, r): \Delta\rightarrow \Cb^d$ by
\begin{align*}
\Psi(\zeta; \beta, v, r) := r\psi_\beta(\zeta)v,
\end{align*}
and denote the image of $\Psi(\boldsymbol{\cdot}\,;\,\beta, v, r)$ by $\mathfrak{L}(\beta, v, r)$.
It is an elementary calculation that there exist constants $R > 0$ and $\alpha > 1$ such that
\begin{align*}
R\psi_\alpha(\Delta) \subset \{\zeta\in \Cb: \Real(\zeta) > \cos(\theta/2)|\zeta|\}\cap \{\zeta\in \Cb : |\zeta| < r^0\},
\end{align*}
where $\theta$ and $r^0$ are as given by Definition~\ref{def:cone_cond}. It follows from this, owing to our condition on $\Omega$, that:
\begin{itemize}
\item[$(\bullet)$] There exists a compact subset $K^\prime$ such that $K\subset K^\prime\subset \Omega$ and such that for each $x\in \Omega\setminus K^\prime$, there exist a point $\xi_x\in \partial\Omega$ and a unit vector $v_x$ so that
\begin{itemize}
\item[$(i)$] $\xi_x + \mathfrak{L}(\alpha, v_x, R)\subset \Omega$;
\item[$(ii)$] $x$ lies on the line segment joining $\xi_x$ to $\xi_x+\Psi(0; \alpha, v_x, R) =: q_x$; and
\item[$(iii)$] $q_x\in K^\prime$.
\end{itemize}
\end{itemize}
Then, for $x\in \Omega\setminus K^\prime$, there exists a unique number $t(x) > 0$ such that $\xi_x + t(x)v_x = x$. Clearly $\delta_\Omega(x)\leq
t(x)$. Also, the map $\zeta\mapsto \xi_x + \Psi(\zeta; \alpha, v_x, R)$ sends the point
\begin{align*}
\big((t(x)/R)^\alpha - 1\big) \in (-1, 0)
\end{align*}
to the point $x$, and sends $0$ to $q_x$.
Fix $x_0\in \Omega$. It suffices to establish the inequality that defines Condition~2 for $x\in \Omega\setminus K^\prime$. Set
$C_1 := \sup\{K_\Omega(z, x_0) : z\in K^\prime\}$. Then, by $(\bullet)$, if $x\in \Omega\setminus K^\prime$,
then
\begin{align*}
K_\Omega(x_0, x) \leq K_\Omega(x_0, q_x) + K_\Omega(q_x, x) &\leq C_1 + \rho_\Delta(0, (t(x)/R)^\alpha - 1) \\
&= C_1 + \rho_\Delta(0, 1- (t(x)/R)^\alpha ) \\
&\leq C_1 + \frac{1}{2}\log\left(\frac{2}{(t(x)/R)^\alpha}\right) \\
&\leq \big(C_1 + (1/2)\log(2R^\alpha)\big) + (\alpha/2)\log\frac{1}{\delta_\Omega(x)}.
\end{align*}
Hence, $\Omega$ satisfies Condition~2.
\end{proof}
This gives us the following:
\begin{corollary}
Let $\Omega_1$ and $\Omega_2$ be two convex Goldilocks domains in $\Cb^d$ having non-empty intersection. Then
$\Omega_1\cap \Omega_2$ is also a Goldilocks domain.
\end{corollary}
\begin{proof}
Write $D = \Omega_1\cap \Omega_2$. Since $D$ is a convex domain, it satisfies an interior-cone condition with aperture $\theta$ for some $\theta\in (0, \pi)$. Thus, by Lemma~\ref{lem:int_cone}, $D$ satisfies Condition~2.
Since $D\subset \Omega_j$, $j = 1, 2$, we have
\begin{equation}\label{eq:k_D_bigger}
k_D(x; v) \geq k_{\Omega_j}(x; v) \; \; \forall x\in D, \ \forall v: \|v\| = 1, \text{ and } j = 1, 2.
\end{equation}
Fix an $r > 0$. Then\vspace{-2mm}
\begin{multline*}
\{x\in D: \delta_D(x)\leq r\} \\
\subseteq
\{x\in D : \delta_{\Omega_1}(x)\leq r\}\cup \{x\in D : \delta_{\Omega_2}(x)\leq r\}\,\equiv\,\mathcal{S}(1, r)
\cup \mathcal{S}(2, r).
\end{multline*}
Thus, by \eqref{eq:k_D_bigger}, we can estimate:
\begin{align*}
M_D(r) &\leq \sup_{(\mathcal{S}(1, r)\cup \mathcal{S}(2, r))\times \{\norm{v} = 1\}}\frac{1}{k_D(x; v)} \\
&= \max\Big[\sup_{\mathcal{S}(1, r)\times \{\norm{v} = 1\}}
\frac{1}{k_D(x; v)}\,,\;\sup_{\mathcal{S}(2, r)\times \{\norm{v} = 1\}}\frac{1}{k_D(x; v)}\Big] \\
&\leq \max\big(M_{\Omega_1}(r), M_{\Omega_2}(r)\big).
\end{align*}
Now, $M_D$, being monotone increasing, is Lebesgue measurable. Since $M_{\Omega_1}$ and $M_{\Omega_2}$ satisfy the inequality that defines Condition~1, the above estimate ensures that $M_D$ does so too. Thus, $D$ satisfies Condition~1. Hence, $D$ is a Goldilocks domain.
\end{proof}
\subsection{Domains that satisfy Condition~1}\label{ssec:cond_1}
In looking for domains that satisfy Condition~1, we shall examine two classes of domains with very different degrees of boundary smoothness. Let us first examine a class of domains with $C^\infty$-smooth boundaries. In this connection, we need the following result.
\begin{result}[Cho, \cite{C1992}]
Let $\Omega$ be a bounded domain in $\Cb^d$, let $\partial\Omega\cap U$ be smooth and pseudoconvex, where $U$ is a neighborhood of a point $\xi_0\in \partial\Omega$, and let $\partial\Omega$ be of finite 1-type in the sense of D'Angelo at $\xi_0$. Then there exist a neighborhood $V\subset U$ of $\xi_0$ and constants $c, \epsilon > 0$ such that for every $z\in \Omega\cap V$ and for every $v\in \Cb^d$,
\begin{align*}
k_{\Omega}(z; v) \geq \frac{c\norm{v}}{\delta_{\Omega}(z)^{\epsilon}}.
\end{align*}
\end{result}
The following is now straightforward.
\begin{lemma}
Let $\Omega$ be a bounded pseudoconvex domain of finite type. Then $\Omega$ satisfies Condition~1 in the definition of a Goldilocks domain.
\end{lemma}
\begin{proof}
By the above result, and owing to our hypothesis, we can find finitely many connected open sets $V_1,\dots, V_N$ that cover $\partial\Omega$ and constants $\epsilon_1,\dots, \epsilon_N$ such that
\begin{align*}
k_{\Omega}(z; v) \geq c\delta_{\Omega}(z)^{-\epsilon_j}
\end{align*}
for every $z\in \Omega\cap V_j$ and for every unit vector $v$, where $c > 0$ is a suitable constant. Write
$s := \min(\epsilon_1,\dots, \epsilon_N)$. Then, for $r > 0$ so small that $r < 1$ and
\begin{align*}
\{z\in \Omega : \delta_{\Omega}(z)\leq r\} \subset V_1\cup\dots\cup V_N,
\end{align*}
we have $M_{\Omega}(r)\leq (1/c)r^{s}$, $s > 0$, whence Condition~1 is satisfied.
\end{proof}
The second family of domains that we shall consider will be bounded convex domains. As has been emphasized in Section~\ref{sec:intro}, we would like to consider domains $\Omega$ such that, at any smooth point $\xi\in \partial\Omega$, the boundary is allowed to osculate $H_\xi(\partial\Omega) := T_\xi(\partial\Omega)\cap iT_\xi(\partial\Omega)$ to infinite order and, yet, are not necessarily smoothly bounded. One needs a device to quantify how flat $\partial\Omega$ can get at smooth points. This is accomplished by the notion of the {\em support of $\Omega$ from the outside}, which was introduced in \cite{B2016}. The following definition has been adapted from \cite{B2016}\,---\,which focuses on domains with $C^1$-smooth boundary\,---\,to admit convex domains with non-smooth boundaries as well. (Augmenting our notation somewhat, we shall write $B^{k}_r(z)$ to denote the open Euclidean
ball in $\Cb^k$ with center $z$ and radius $r$.)
\begin{definition}\label{def:supp}
Let $\Omega$ be a bounded convex domain in $\Cb^d, \ d\geq 2$. Let $F: B^{d-1}_r(0)\rightarrow \Rb$ be a $C^1$-smooth convex function with $F(0)=0$ and $DF(0)=0$. We say that {\em $F$ supports $\Omega$ from the outside} if there exists a constant $R\in (0,r)$ such that, for each point $\xi\in \partial\Omega$, there exists a unitary transformation ${\sf U}_\xi$ so that
\begin{itemize}
\item the set $\big(\xi+{\sf U}_\xi^{-1}(\{v\in \Cb^d: v_d=0\})\big)$ is a supporting complex hyperplane of $\Omega$ at $\xi$, and
\item the line $\big(\xi+{\sf U}_\xi^{-1}({\rm span}_{\Rb}\{(0,\dots,0,i)\})\big)$ intersects $\Omega$,
\end{itemize}
and such that, denoting the $\Cb$-affine map $v\!\longmapsto\!{\sf U}_\xi(v-\xi)$ as ${\sf U}^\xi$, we have\vspace{-1mm}
\begin{equation*}
{\sf U}^\xi(\overline\Omega)\cap \big(B^{d-1}_R(0)\times\Delta\big)\,\subset\,\{z=(z^{\raisebox{-1pt}{$\scriptstyle {\prime}$}}, z_d)\in B^{d-1}_R(0)\times\Delta
: \Imaginary(z_d) \geq F(z^{\raisebox{-1pt}{$\scriptstyle {\prime}$}})\}.
\end{equation*}
\end{definition}
This notion allows us to describe another family of domains that satisfy Condition~1. However, to do so, we will need the following result.
\begin{result}[Graham, \cite{G1990, G1991}]\label{res:kob_bounds}
Let $\Omega$ be a bounded convex domain in $\Cb^d$. For each $z\in \Omega$
and $v\in \Cb^d\setminus\{0\}$, define
\begin{align*}
r_{\Omega}(z; v) := \sup\Big\{r > 0: \Big(z+ (r\Delta)\frac{v}{\norm{v}}\,\Big)\subset \Omega\Big\},
\end{align*}
Then:
\begin{equation}\label{eq:kob_bounds}
\frac{\norm{v}}{2r_{\Omega}(z; v)}\,\leq\,k_{\Omega}(z; v)\,\leq\,\frac{\norm{v}}{r_{\Omega}(z; v)} \quad
\forall z\in \Omega \text{ and } \forall v\in \Cb^d\setminus\{0\}.
\end{equation}
\end{result}
\noindent{The lower bound on $k_\Omega(z;v)$ is the non-trivial part of the result and a proof can also be found in \cite[Theorem 2.2]{F1991}.}
\begin{lemma}\label{lem:convex_cond_1}
Let $\Omega$ be a bounded convex domain in $\Cb^d$, $d\geq 2$. Let $\varPsi : [0, r)\rightarrow \Rb$ be a convex, strictly increasing $C^1$ function such that
\begin{align*}
\int\nolimits_0^{\epsilon}t^{-1}\varPsi^{-1}(t)\,dt < \infty
\end{align*}
(where $\epsilon > 0$ is small enough for $\varPsi^{-1}$ to be defined). Assume that $\Omega$ is supported from the outside by $F(z^{\raisebox{-1pt}{$\scriptstyle {\prime}$}}) := \varPsi(\,\norm{z^{\raisebox{-1pt}{$\scriptstyle {\prime}$}}}\,)$ (write $z = (z^{\raisebox{-1pt}{$\scriptstyle {\prime}$}}, z_d)$ for each $z\in \Cb^d$). Then $\Omega$ satisfies Condition~1 in the definition of a Goldilocks domain.
\end{lemma}
\begin{proof}
Let $R$ be as given by Definition~\ref{def:supp} with $F = \varPsi(\,\norm{\boldsymbol{\cdot}}\,)$. Let $C:= \sup_{t\in [0, R)}\varPsi(t)$ and define $t_0$ as follows (it is easy to argue that the set on the right is finite):
\begin{equation}\label{eq:cap}
t_0 := \min\left[\{C/2\}\cup \{t\in (0,C) : t = \varPsi^{-1}(t)\}\right].
\end{equation}
Let us define
\begin{align*}
\boldsymbol{{\sf M}} := \{(z^{\raisebox{-1pt}{$\scriptstyle {\prime}$}}, z_d)\in B^{d-1}_r(0)\times\Cb : \Imaginary(z_d) = \varPsi(\,\norm{z^{\raisebox{-1pt}{$\scriptstyle {\prime}$}}}\,)\}.
\end{align*}
Let $z\in \Omega$ and let $\xi(z)\in \partial\Omega$ be such that $\delta_{\Omega}(z) = d_{\Euc}(z, \xi(z))$. Clearly
\begin{align*}
\xi(z)\in \partial B_{\delta_{\Omega}(z)}(z), \; \; \; \text{and} \; \; \;
\partial\Omega\cap B_{\delta_{\Omega}(z)}(z) = \emptyset,
\end{align*}
whence $B_{\delta_{\Omega}(z)}(z)\subset \Omega$. Thus, for any $(d-1)$-dimensional complex subspace $E$ such that $E\neq H_{\xi(z)}(\partial B_{\delta_{\Omega}(z)}(z))$, the $\Cb$-affine subspace $(\xi(z)+E)$ intersects $B_{\delta_{\Omega}(z)}(z)$, and hence intersects $\Omega$. Therefore, at any point $\xi\in \partial\Omega$ that is of the form $\xi(z)$ for some $z\in \Omega$, there is a unique supporting complex hyperplane of $\Omega$ at $\xi$. So we can find a compact subset $K$ of $\Omega$ such that whenever $z\in \Omega\setminus K$,
\begin{itemize}
\item $\delta_{\Omega}(z) < \min(1, t_0)$; and
\item For any point $\xi(z)\in \partial\Omega$ that satisfies $\delta_{\Omega}(z) = d_{\Euc}(z, \xi(z))$, given
any vector $v\neq 0$ parallel to the supporting complex hyperplane of $\Omega$ at $\xi(z)$, the complex
line of the form $ {\sf U}^{\xi(z)}(z + \Cb{v})$ satisfies
\begin{align*}
{\sf U}^{\xi(z)}(z + \Cb{v}) = (0,\dots,0,i\boldsymbol{\cdot} \delta_{\Omega}(z)) + \Cb{{\sf U}_{\xi(z)}(v)}
\end{align*}
and intersects $\boldsymbol{{\sf M}}$ in a circle of radius $\varPsi^{-1}(\delta_{\Omega}(z))$.
\end{itemize}
Here ${\sf U}^{\xi(z)}$ is as described in Definition~\ref{def:supp}. From this point we can argue as in the proof of \cite[Lemma~3.2]{B2016}, \emph{mutatis mutandis}, to get
\begin{align*}
r_{\Omega}(z; v) \leq 2\varPsi^{-1}(\delta_{\Omega}(z))
\end{align*}
for each $z\in \Omega\setminus K$ and $v\in \Cb^d\setminus\{0\}$ (the purpose of $t_0$ given by \eqref{eq:cap} is to ensure that
$\delta_{\Omega}(z)$ is in the domain of $\varPsi^{-1}$ and that $\delta_{\Omega}(z)\leq \varPsi^{-1}(\delta_{\Omega}(z))$ for the aforementioned $z$).
Therefore, from \eqref{eq:kob_bounds}, we deduce that
\begin{align*}
\frac{1}{k_{\Omega}(z; v)} \leq 4\varPsi^{-1}(\delta_{\Omega}(z))
\end{align*}
for each $z\in \Omega\setminus K$ and $v\in \Cb^d$ such that $\norm{v} = 1$. Therefore, writing $\epsilon^* := \min_{z\in K}\delta_{\Omega}(z)$, we have
\begin{align*}
M_{\Omega}(t) \leq 4\varPsi^{-1}(t) \text{ for } t < \epsilon^*
\end{align*}
(by construction, $0<\epsilon^*\leq \epsilon$). Hence, by hypothesis, $\Omega$ satisfies Condition~1.
\end{proof}
\begin{remark}
The last lemma expresses quantitatively the claim that, for a convex domain $\Omega\Subset \Cb^d$ that satisfies Condition~1, $\partial\Omega$ is allowed to osculate $H_\xi(\partial\Omega)$ to infinite order at a smooth point $\xi\in \partial\Omega$. The function $\varPsi$ in Lemma~\ref{lem:convex_cond_1} also gives a sufficient condition on the extent to which the boundary of $\Omega$ must bend at a point $\xi\in \partial\Omega$ of infinite type for Condition~1 to hold. We illustrate all this via a familiar family of functions on $[0,\infty)$ that vanish to infinite order at $0$: these are the functions $\varPsi_s$, $s > 0$:
\begin{equation*}
\varPsi_s(t) := \begin{cases}
e^{-1/t^s}, &\text{if $t > 0$}, \\
0, &\text{if $t = 0$}.
\end{cases}
\end{equation*}
The description of the range of $s$ for which
\begin{align*}
\int_{0}^{1/2} t^{-1}\varPsi_s^{-1}(t)\,dt = \int_0^{1/2} t^{-1}\big(\log(1/t)\big)^{-1/s}\,dt < \infty
\end{align*}
is a standard result; $\varPsi_s$ (restricted to a suitably small interval) satisfies the conditions of the above lemma if and only if $0 < s < 1$.
\end{remark}
\subsection{Implications for holomorphic mappings.}\label{ssec:implications}
We shall now reformulate some of the results stated in Section~\ref{sec:intro} for the domains discussed in subsections~\ref{ssec:cond_2}~and~\ref{ssec:cond_1} and compare these results to the state of the art for certain problems of continuing interest.
Recall the works cited in subsection~\ref{ssec_WD} in connection with Wolff--Denjoy-type theorems in higher dimensions. All of the results in those works that concern the study of the iterates of a fixed-point-free holomorphic self-map on a domain, and the conclusion that this \emph{entire} sequence converges locally uniformly to a constant map, involve a domain $\Omega\Subset \Cb^d$ that:
\begin{itemize}
\item is topologically contractible; and
\item is such that $\partial\Omega$ satisfies some non-degeneracy condition: some form of strict convexity in \cite{A1988, B2012, A2014}; strong pseudoconvexity in \cite{H1984}.
\end{itemize}
Call the limit point (which lies in $\partial\Omega$) appearing in all of the Wolff--Denjoy-type results just cited a \emph{Wolff point}.
Now, it is not hard to see that an attempt to extend the dichotomy presented by the classical Wolff--Denjoy theorem (i.e., Result~\ref{res:WD} above) to higher dimensions will fail if the domain in question is not contractible. Nevertheless, it would be interesting if\,---\,making reasonable assumptions on the domain $\Omega$, but without assuming contractibility\,---\,there were to be a dichotomy wherein one of the possibilities is that the entire sequence of iterates of a holomorphic self-map of $\Omega$ converges locally uniformly to a Wolff point. In this circumstance, Theorem~\ref{thm:WD} presents the right dichotomy.
It would be even more interesting if the latter dichotomy could be exhibited for weakly pseudoconvex domains: almost none of the methods in \cite{H1984} is usable if the domain in question is a non-convex weakly pseudoconvex domain, \emph{even} if it is of finite type. A bounded pseudoconvex domain with $C^1$ boundary is taut; see~\cite{KR1981}. Thus, in view of Lemma~\ref{lem:int_cone} and the discussion in subsections~\ref{ssec:cond_2}~\&~\ref{ssec:cond_1}, Theorem~\ref{thm:WD} gives us:
\begin{corollary}\label{cor:WD_finite-type}
Let $\Omega\subset \Cb^d$ be a bounded pseudoconvex domain of finite type. If $f:\Omega\rightarrow \Omega$ is a holomorphic map then either:
\begin{enumerate}
\item for any $x \in \Omega$ the orbit $\{ f^n(x): n \in \Nb\}$ is relatively compact in $\Omega$; or
\item there exists $\xi \in \partial \Omega$ such that
\begin{equation*}
\lim_{n \rightarrow \infty} f^n(x) = \xi
\end{equation*}
for any $x \in \Omega$, this convergence being uniform on compact subsets of $\Omega$.
\end{enumerate}
\end{corollary}
\begin{remark} Karlsson gave a proof of the above Corollary with the additional assumption that $(\Omega, K_\Omega)$ is Cauchy complete~\cite[Theorem 3]{K2005b}. This assumption greatly simplifies the situation. \end{remark}
The discussion in subsection~\ref{ssec:cont_extn} allows us to improve upon what is currently known about the continuous extension of a complex geodesic $\varphi : \Delta\rightarrow \Omega$ up to $\partial\Delta$, where $\Omega$ is any domain that admits complex geodesics. By Royden--Wong \cite{RW1983}, for any pair of points of a bounded convex domain $\Omega\subset \Cb^d$, $d\geq 2$\,---\,with no constraint on the regularity of $\partial\Omega$\,---\,there exists a complex geodesic of $\Omega$ containing these two points. Lempert \cite{L1984} has shown an analogous result for strongly linearly convex domains in $\Cb^d$ with $C^\infty$-smooth boundary. The result has been proved in \cite{L1984} for domains with real-analytic boundary, but the arguments therein can be adapted to the smooth case; also see \cite{KW2013}. We refer the reader to \cite{L1984} or \cite{KW2013} for a definition of strong linear convexity. It follows from the discussion on smoothly-bounded Hartogs domains in \cite[Chapter~2]{APS2004} that strongly linearly convex domains need not necessarily be convex. Recently, Pflug and Zwonek \cite{PZ2012} provided explicit examples of strongly linearly convex domains that are not even biholomorphic to any convex domain. However, a strongly linearly convex domain is always strongly pseudoconvex; see \cite[Propositions~2.1.8 and 2.5.9]{APS2004}.
In the works cited in subsection~\ref{ssec:cont_extn} in connection with boundary regularity of complex geodesics, the domains considered were convex domains with boundaries having some degree of smoothness. Owing to Theorem~\ref{thm:qi_ext} we are able to extend those results to certain convex domains with non-smooth boundary. In \cite{L1984}, Lempert showed that in a strongly linearly convex domain with \emph{real-analytic} boundary, all complex geodesics extend real-analytically to $\partial\Delta$. However, this has a difficult and technical proof, and the proof of even $C^{1/2}$ extension is hard. The analogous result for strongly linearly convex domains with $C^\infty$ boundary is expected to have an even more technical proof. In contrast, in the $C^\infty$ setting our methods provide a rather ``soft'' proof of the continuous extension of complex geodesics up to $\partial\Delta$. To be more precise:
owing to Theorem~\ref{thm:qi_ext}, the discussion in subsections~\ref{ssec:cond_2}~\&~\ref{ssec:cond_1}, and the fact that $(\Delta, \rho_\Delta)$ is Gromov hyperbolic, the following corollary is immediate:
\begin{corollary}
Let $\Omega\subset \Cb^d$, $d\geq 2$, be a bounded domain having either one of the following properties:
\begin{enumerate}
\item $\Omega$ is a convex Goldilocks domain (for instance, $\Omega$ is a domain of finite type, or satisfies the conditions in Lemma~\ref{lem:convex_cond_1}); or
\item $\Omega$ is a smoothly bounded strongly linearly convex domain.
\end{enumerate}
Then every complex geodesic $\varphi : \Delta\rightarrow \Omega$ extends to a continuous map $\wt{\varphi}: \overline{\Delta}\rightarrow \overline{\Omega}$.
\end{corollary}
\subsection{Goldilocks domains are pseudoconvex.}\label{ssec:pseudoconvex}
All the classes of domains presented above were examples of pseudoconvex domains. This is no coincidence: as hinted at in Remark~\ref{rem:just_right}, Goldilocks domains are necessarily pseudoconvex. We shall now present a proof of this. To do so, we refer to a classical result:
\begin{result}\label{res:con_principle}
A domain $\Omega\subset \Cb^d$, $d\geq 2$, is pseudoconvex if and only if for any continuous family of analytic discs\,---\,i.e., any continuous map $\Phi : \overline{\Delta}\times [0,1]\rightarrow \Cb^d$ such that $\varphi_t := \left.\Phi(\boldsymbol{\cdot}, t)\right|_{\Delta}$ is holomorphic for each $t\in [0,1]$\,---\,that satisfies $\Phi(\Delta\times\{0\}\cup \partial\Delta\times[0,1])\subset \Omega$, it follows that $\varphi_t(\Delta)\subset \Omega$ for each $t\in [0,1]$.
\end{result}
It is known, and can be ascertained by working through each step of the proof of Result~\ref{res:con_principle} that, in this result, it suffices to consider \emph{special} continuous families of analytic discs for which:
\begin{itemize}
\item[$(a)$] each $\varphi_t$, $t\in [0,1]$, is a holomorphic immersion of $\Delta$ into $\Cb^d$; and
\item[$(b)$] there exists a constant $c > 0$ such that $\|\varphi_t^\prime(\zeta)\| > c$ for every $(\zeta, t)\in \Delta\times[0,1]$.
\end{itemize}
The above can be deduced, for instance, using an intermediate characterization for pseudoconvex domains involving the so-called Hartogs figures\,---\,see, for instance, Chapter~II, \S1 of the book \cite{FG2002} by Fritzsche--Grauert. Since we do not require pseudoconvexity of Goldilocks domains in any of our proofs, we shall not elaborate on the above point any further. With this we can give a proof.
\begin{proposition}
If $\Omega \subset \Cb^d$ is a Goldilocks domain, then $\Omega$ is pseudoconvex.
\end{proposition}
\begin{proof} Since planar domains are pseudoconvex, we consider the case $d\geq 2$.
Let $\Omega\subset \Cb^d$, $d\geq 2$, be a bounded domain. Suppose $\Omega$ is \emph{not} pseudoconvex.
There exists a continuous family of analytic discs $\Phi : \overline{\Delta}\times [0,1]\rightarrow \Cb^d$ satisfying the hypothesis of Result~\ref{res:con_principle} and conditions $(a)$ and $(b)$, but such that the conclusion of Result~\ref{res:con_principle} fails. Let
\[
\tau\,:=\,\inf\{t\in (0,1] : \varphi_t(\Delta)\not\subset \Omega\}.
\]
As $\Omega$ is not pseudoconvex, and as the condition $\varphi_t(\overline{\Delta})\subset \Omega$ is an open condition, $\tau\in (0,1]$. By definition, there exists a point $\xi\in \partial\Omega$ and a point $\zeta_0\in \Delta$ such that $\varphi_\tau(\zeta_0) = \xi$. For $\nu\in \Zb_+$ so large that $(\tau - 1/\nu)\in (0,\tau)$, write $z_\nu := \varphi_{\tau-(1/\nu)}(\zeta_0)$. By the continuity of $\Phi$,
\begin{equation}\label{eq:bdry_limit}
z_\nu\rightarrow \xi \; \; \text{as $\nu\to +\infty$}.
\end{equation}
Let $v_\nu$ be a unit vector such that $\varphi_{\tau-(1/\nu)}^\prime(\zeta_0)\in \Cb\!v_\nu$. By the definition of the infinitesimal Kobayashi metric (owing to which it is contracted by holomorphic maps), we have
\begin{align*}
k_{\Omega}(z_\nu; v_\nu)
= \frac{k_{\Omega}(\varphi_{\tau-(1/\nu)}(\zeta_0);\,\varphi_{\tau-(1/\nu)}^\prime(\zeta_0))}
{\|\varphi_{\tau-(1/\nu)}^\prime(\zeta_0)\|}
&\leq \frac{k_{\Delta}(\zeta_0; 1)}{\|\varphi_{\tau-(1/\nu)}^\prime(\zeta_0)\|} \\
&=\frac{1}{\|\varphi_{\tau-(1/\nu)}^\prime(\zeta_0)\|(1 - |\zeta_0|^2)} \\
&\leq \frac{1}{c (1 - |\zeta_0|^2)}.
\end{align*}
Owing to \eqref{eq:bdry_limit} we conclude that there is an $\epsilon > 0$ such that $M_{\Omega}(r) \geq c(1-|\zeta_0|^2)$ for any $r \in (0, \epsilon)$. But this implies that Condition~1 in Definition~\ref{def:good_domains} does not hold in $\Omega$. In particular, $\Omega$ is not a Goldilocks domain\,---\,which establishes the result.
\end{proof}
\section{Preliminaries}
\subsection{The Kobayashi distance and metric} Let $\Omega \subset \Cb^d$ be a domain. We assume that the reader is familiar with the definitions of the Kobayashi pseudo-distance $K_{\Omega}$ and the Kobayashi--Royden pseudo-metric $k_{\Omega}$ on $\Omega$. It turns out that $K_{\Omega}$ is the integrated form of $k_{\Omega}$, but this is a \emph{result} stemming from the definitions of $K_{\Omega}$ and $k_{\Omega}$; see Result~\ref{res:local_global} below. Since we shall require the original definition of $K_{\Omega}$ in a few arguments below, we give this definition. Given points $x, y\in \Omega$, we define
\begin{align*}
K_{\Omega}(x, y) := \inf\left\{\sum_{i = 1}^n\rho_{\Delta}(\zeta_{i-1}, \zeta_i) : (\phi_1,\dots, \phi_n; \zeta_0,\dots, \zeta_n)
\in \mathfrak{A}(x, y)\right\}
\end{align*}
where $\mathfrak{A}(x, y)$ is the set of all analytic chains in $\Omega$ joining $x$ to $y$. Here, $(\phi_1,\dots, \phi_n; \zeta_0,\dots, \zeta_n)$ is an analytic chain in $\Omega$ joining $x$ to $y$ if $\phi_i\in \Hol(\Delta, \Omega)$ for each $i$,
\begin{align*}
x =&\,\phi_1(\zeta_0), \; \; \; \; \; \phi_n(\zeta_n) = y, \; \; \text{and} \\
&\,\phi_i(\zeta_i) = \phi_{i+1}(\zeta_i)
\end{align*}
for $i = 1,\dots, n-1$.
Now suppose $\Omega \subset \Cb^d$ is a bounded domain. In that case,
the Kobayashi pseudo-distance is a true distance and the Kobayashi--Royden pseudo-metric is a metric. Royden~\cite[Proposition 3]{R1971} proved that the function $k_\Omega$ is upper-semicontinuous. So if a path $\sigma: [a,b] \rightarrow \Omega$ is absolutely continuous (as a map $[a,b] \rightarrow \Cb^d$) then the function $[a,b]\ni t \mapsto k_\Omega(\sigma(t); \sigma^\prime(t))$ is integrable and we can define the \emph{length of $\sigma$} to be
\begin{align*}
\ell_\Omega(\sigma)= \int_a^b k_\Omega(\sigma(t); \sigma^\prime(t)) dt.
\end{align*}
The Kobayashi metric has the following connections to the Kobayashi distance:
\begin{result}\label{res:local_global}
Let $\Omega \subset \Cb^d$ be a bounded domain.
\begin{enumerate}
\item \cite[Theorem 1(ii)]{NP2008} Suppose, for a point $x\in \Omega$, $k_\Omega(x;\,\boldsymbol{\cdot})$ is continuous and positive on $\Cb^d\setminus\{0\}$. Then
\[
k_\Omega(x;v) = \lim_{h \rightarrow 0} \frac{1}{\abs{h}} K_\Omega(x,x+hv).
\]
\item \cite[Theorem 1]{R1971} For any $x,y \in \Omega$ we have
\begin{multline*}
K_\Omega(x,y) = \inf \left\{\ell_\Omega(\sigma)\,:\,\sigma\!:\![a,b]
\rightarrow \Omega \text{ is piecewise } C^1,\right. \\
\left. \text{ with } \sigma(a)=x, \text{ and } \sigma(b)=y\right\}.
\end{multline*}
\item \cite[Theorem 3.1]{V1989} For any $x,y \in \Omega$ we have
\begin{multline*}
K_\Omega(x,y) = \inf \left\{\ell_\Omega(\sigma) : \sigma\!:\![a,b]
\rightarrow \Omega \text{ is absolutely continuous}, \right. \\
\left. \text{ with } \sigma(a)=x, \text{ and } \sigma(b)=y\right\}.
\end{multline*}
\end{enumerate}
\end{result}
\begin{remark}
The first result above is a weaker version\,---\,which suffices for our purposes\,---\,of a result by Nikolov and Pflug \cite{NP2008}. Among other things, their result holds true on complex manifolds.
\end{remark}
\subsection{The Hopf--Rinow theorem}
Given a metric space $(X,d)$, the length of a continuous curve $\sigma:[a,b] \rightarrow X$ is defined to be
\begin{align*}
L_d(\sigma) = \sup \left\{ \sum_{i=1}^n d(\sigma(t_{i-1}), \sigma(t_i) ): a = t_0 < t_1 < \dots < t_n=b\right\}.
\end{align*}
Then the induced metric $d_I$ on $X$ is defined to be
\begin{align*}
d_I(x,y) = \inf\left\{ L_d(\sigma) : \sigma\!:\![a,b] \rightarrow X \text{ is continuous}, \sigma(a)=x, \text{ and } \sigma(b)=y\right\}.
\end{align*}
When $d_I = d$, the metric space $(X,d)$ is called a \emph{length metric space}. When the Kobayashi pseudo-distance is actually a distance, then the metric space $(\Omega, K_\Omega)$ is a length metric space (by construction). For such metric spaces we have the following characterization of Cauchy completeness:
\begin{result}[Hopf--Rinow] Suppose $(X,d)$ is a length metric space. Then the following are equivalent:
\begin{enumerate}
\item $(X,d)$ is a proper metric space; that is, every bounded set is relatively compact.
\item $(X,d)$ is Cauchy complete and locally compact.
\end{enumerate}
\end{result}
For a proof see, for instance, Proposition~3.7 and Corollary~3.8 in Chapter~I of~\cite{BH1999}.
When $\Omega \subset \Cb^d$ is a bounded domain the Kobayashi distance generates the standard topology on $\Omega$ and so the metric space $(\Omega, K_\Omega)$ is locally compact. In particular we obtain:
\begin{result}\label{res:hopf_rinow}Suppose $\Omega \subset \Cb^d$ is a bounded domain. Then the following are equivalent:
\begin{enumerate}
\item $(\Omega, K_\Omega)$ is a proper metric space; that is, every bounded set is relatively compact.
\item $(\Omega,K_\Omega)$ is Cauchy complete.
\end{enumerate}
\end{result}
\subsection{Lipschitz continuity of the Kobayashi distance and metric}
We begin with a simple proposition. Since we shall re-use some aspects of the argument elsewhere in this paper, and since it is short, we include the proof.
\begin{proposition}\label{prop:lip}
Suppose $\Omega \subset \Cb^d$ is a bounded domain.
\begin{enumerate}
\item There exists $c_1 > 0$ so that
\begin{align*}
c_1 \norm{v} \leq k_\Omega(x;v)
\end{align*}
for all $x \in \Omega$ and $v\in\Cb^d$. In particular,
\begin{align*}
c_1 \norm{x-y} \leq K_\Omega(x,y)
\end{align*}
for all $x,y \in \Omega$.
\item For any compact set $K \subset \Omega$ there exists $C_1=C_1(K) > 0$ so that
\begin{align*}
k_\Omega(x;v) \leq C_1 \norm{v}
\end{align*}
for all $x \in K$ and $v \in \Cb^d$.
\item For any compact set $K \subset \Omega$ there exists $C_2=C_2(K) > 0$ so that
\begin{align*}
K_\Omega(x,y) \leq C_2 \norm{x-y}
\end{align*}
for $x,y \in K$.
\end{enumerate}
\end{proposition}
\begin{proof}
Fix $R > 0$ so that $\Omega$ is relatively compact in $B_R(0)$. Then
\begin{align*}
c_1:=\inf_{x \in \overline{\Omega}, \norm{v}=1} \frac{k_{B_R(0)}(x;v)}{\norm{v}} \leq \inf_{x \in \Omega, \norm{v}=1} \frac{k_{\Omega}(x;v)}{\norm{v}}.
\end{align*}
The continuity of
\begin{align*}
B_R(0)\times \Cb^d \ni (x,v) \mapsto k_{B_R(0)}(x;v)
\end{align*}
implies that $c_1 > 0$. Thus
\begin{align*}
k_\Omega(x;v) \geq c_1 \norm{v}
\end{align*}
for all $x \in \Omega$ and $v \in \Cb^d$. Then, it follows from part~(2) of Result~\ref{res:local_global} that
\begin{align*}
K_\Omega(x,y) \geq c_1 \norm{x-y}
\end{align*}
for all $x,y \in \Omega$. This establishes part~(1).
Next fix a compact set $K \subset \Omega$. Then there exists $r > 0$ so that $B_{2r}(x) \subset \Omega$ for all $x \in K$. By the distance-decreasing property of the Kobayashi--Royden pseudo-metric, $k_\Omega(x;v) \leq k_{B_{2r}(x)}(x;v) = \norm{v}/(2r)$ for such $x$; that is,
\begin{equation}\label{eq:k-metric_optimal}
k_\Omega(x;v) \leq \frac{1}{2r} \norm{v} \; \; \; \forall x \in K \text{ and } \forall v \in \Cb^d.
\end{equation}
So part~(2) is true. Now, since $B_{2r}(x) \subset \Omega$ for all $x \in K$, we see that
\begin{align*}
K_\Omega(x,y) \leq K_{B_{2r}(x)}(x, y) \leq \frac{1}{r} \norm{x-y}
\end{align*}
when $x \in K$ and $y \in B_{r}(x)$. Now let
\begin{align*}
M := \sup\{ K_\Omega(x,y) : x,y \in K\}.
\end{align*}
By the continuity of $K_\Omega$ we see that $M < \infty$. Then
\begin{align*}
K_\Omega(x,y) \leq \max\left\{ \frac{1}{r}, \frac{M}{r} \right\} \norm{x-y}
\end{align*}
for all $x, y \in K$. This establishes part~(3).
\end{proof}
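For example, when $\Omega = \Delta$ one has, with the standard normalization, $k_\Delta(x;v) = \abs{v}/(1-\abs{x}^2)$. So part~(1) holds with $c_1 = 1$, while the constant $C_1(K)$ in part~(2) is of the order of $\big(1-\max_{x \in K}\abs{x}^2\big)^{-1}$. In particular, neither $C_1$ nor $C_2$ can, in general, be chosen independently of the compact set $K$.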
\section{Length minimizing curves}\label{sec:curves}
Suppose $\Omega \subset \Cb^d$ is a bounded domain. If $I \subset \Rb$ is an interval, a map $\sigma: I \rightarrow \Omega$ is called a \emph{real geodesic} if
\begin{align*}
K_\Omega(\sigma(s),\sigma(t)) = \abs{t-s}
\end{align*}
for all $s,t \in I$. By Result~\ref{res:local_global}, for any two points $x, y \in \Omega$ there exists a sequence of curves joining $x$ and $y$
whose lengths approach $K_\Omega(x,y)$. However, for a general bounded domain the metric space $(\Omega, K_\Omega)$ may not be Cauchy complete and
thus there is no guarantee that this sequence of curves has a convergent subsequence. In particular, it is not clear if every two points in $\Omega$ are joined by a real geodesic. This possibility of non-existence motivates the next definition:
\begin{definition}\label{def:almost_geodesic}
Suppose $\Omega \subset \Cb^d$ is a bounded domain and $I \subset \Rb$ is an interval. For $\lambda \geq 1$ and $\kappa \geq 0$, a curve $\sigma:I \rightarrow \Omega$ is called a \emph{$(\lambda, \kappa)$-almost-geodesic} if
\begin{enumerate}
\item for all $s,t \in I$
\begin{align*}
\frac{1}{\lambda} \abs{t-s} - \kappa \leq K_\Omega(\sigma(s), \sigma(t)) \leq \lambda \abs{t-s} + \kappa;
\end{align*}
\item $\sigma$ is absolutely continuous (whence $\sigma^\prime(t)$ exists for almost every $t\in I$), and for almost every $t \in I$
\begin{align*}
k_\Omega(\sigma(t); \sigma^\prime(t)) \leq \lambda.
\end{align*}
\end{enumerate}
\end{definition}
In Proposition~\ref{prop:almost_geod_exist} below, we will show that, for any bounded domain $\Omega$ and any $\kappa > 0$, any two points $x,y \in \Omega$ can be joined by
a $(1,\kappa)$-almost-geodesic.
\begin{remark} For many domains inward-pointing normal lines can be parametrized as $(1,\kappa)$-almost-geodesics: for convex domains with $C^{1,\alpha}$ boundary this follows from~\cite[Proposition 4.3]{Z2015}, and for strongly pseudoconvex domains this follows from estimates in~\cite{FR1987}.
\end{remark}
\begin{proposition}\label{prop:ag_Lip}
Suppose $\Omega \subset \Cb^d$ is a bounded domain. For any $\lambda \geq 1$ there exists a $C = C(\lambda)>0$ so that any $(\lambda, \kappa)$-almost-geodesic $\sigma:I \rightarrow \Omega$ is $C$-Lipschitz (with respect to the Euclidean distance).
\end{proposition}
\begin{proof}
By Proposition~\ref{prop:lip} there exists $c_1 > 0$ so that
\begin{align*}
k_\Omega(x;v) \geq c_1 \norm{v}
\end{align*}
for all $x \in \Omega$ and $v \in \Cb^d$. We claim that every $(\lambda, \kappa)$-almost-geodesic is $\lambda/c_1$-Lipschitz (with respect to the Euclidean distance).
Suppose that $\sigma:I \rightarrow \Omega$ is a $(\lambda, \kappa)$-almost-geodesic. Then for almost every $t \in I$ we have
\begin{align*}
\norm{\sigma^\prime(t)} \leq \frac{1}{c_1} k_\Omega(\sigma(t);\sigma^\prime(t)) \leq \frac{\lambda}{c_1}.
\end{align*}
Since $\sigma$ is absolutely continuous we have
\begin{align*}
\sigma(t) = \sigma(s) + \int_s^{t} \sigma^\prime(r) dr.
\end{align*}
Thus
\begin{align*}
\norm{\sigma(t) - \sigma(s)} = \norm{ \int_s^t \sigma^\prime(r)dr } \leq \frac{\lambda}{c_1} \abs{t-s}
\end{align*}
and $\sigma$ is $\lambda/c_1$-Lipschitz.
\end{proof}
\begin{proposition}\label{prop:almost_geod_exist}
Suppose $\Omega \subset \Cb^d$ is a bounded domain. For any $\kappa > 0$ and $x,y \in \Omega$ there exists a $(1,\kappa)$-almost-geodesic $\sigma:[a,b] \rightarrow \Omega$ with $\sigma(a) =x$ and $\sigma(b)=y$.
\end{proposition}
We begin the proof with a simple lemma:
\begin{lemma}\label{lem:restricted_l}
Suppose $\Omega \subset \Cb^d$ is a bounded domain and $\sigma: [a,b] \rightarrow \Omega$ is an absolutely continuous curve. If
\begin{align*}
\ell_\Omega(\sigma) \leq K_\Omega(\sigma(a),\sigma(b)) + \epsilon
\end{align*}
then whenever $a \leq a^\prime \leq b^\prime \leq b$ we have
\begin{align*}
\ell_\Omega(\sigma|_{[a^\prime, b^\prime]}) \leq K_\Omega(\sigma(a^\prime),\sigma(b^\prime)) + \epsilon.
\end{align*}
\end{lemma}
\noindent{This lemma is an immediate consequence of the fact that
\begin{align*}
\ell_\Omega(\sigma|_{[a^\prime, b^\prime]})
&= \ell_\Omega(\sigma) - \ell_\Omega(\sigma|_{[a, a^\prime]}) - \ell_\Omega(\sigma|_{[b^\prime,b]}),
\end{align*}
and of the triangle inequality for $K_\Omega$.}
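Indeed, since the $\ell_\Omega$-length of an absolutely continuous curve dominates the Kobayashi distance between its endpoints (by part~(3) of Result~\ref{res:local_global}), the identity above gives
\begin{align*}
\ell_\Omega(\sigma|_{[a^\prime, b^\prime]})
&\leq K_\Omega(\sigma(a),\sigma(b)) + \epsilon - K_\Omega(\sigma(a),\sigma(a^\prime)) - K_\Omega(\sigma(b^\prime),\sigma(b)) \\
&\leq K_\Omega(\sigma(a^\prime),\sigma(b^\prime)) + \epsilon,
\end{align*}
the last step following from $K_\Omega(\sigma(a),\sigma(b)) \leq K_\Omega(\sigma(a),\sigma(a^\prime)) + K_\Omega(\sigma(a^\prime),\sigma(b^\prime)) + K_\Omega(\sigma(b^\prime),\sigma(b))$.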
\begin{proof}[The proof of Proposition~\ref{prop:almost_geod_exist}]
By part ~(2) of Result~\ref{res:local_global} there exists a piecewise $C^1$ curve $\sigma:[0,T] \rightarrow \Omega$ so that $\sigma(0) = x$, $\sigma(T) = y$, and
\begin{align*}
\ell_\Omega(\sigma) < K_\Omega(x,y)+\kappa.
\end{align*}
Since $k_\Omega$ is upper semi-continuous, we can perturb $\sigma$ and assume, in addition, that $\sigma$ is $C^1$-smooth, and that $\sigma^\prime(t) \neq 0$ for all $t \in [0,T]$.
Next consider the function
\begin{equation*}
f(t) = \int_0^t k_\Omega(\sigma(s); \sigma^\prime(s)) ds.
\end{equation*}
Since $\sigma([0,T])$ is compact, by Proposition~\ref{prop:lip} there exists $C \geq 1$ so that
\begin{equation}\label{eq:deriv_bounds}
\frac{1}{C} \norm{\sigma^\prime(t)} \leq k_\Omega(\sigma(t); \sigma^\prime(t)) \leq C \norm{\sigma^\prime(t)}
\; \; \; \text{for all } t \in [0,T].
\end{equation}
Thus, since $\sigma^\prime(t) \neq 0$ for all $t \in [0,T]$, we see that $f$ is a bi-Lipschitz strictly increasing function.
Next let $g:[0,\ell_\Omega(\sigma)] \rightarrow [0,T]$ be the inverse of $f$, that is, $f(g(t)) = t$ for all $t \in [0,\ell_\Omega(\sigma)]$. We claim that the curve $\sigma_0 := \sigma\circ g$ is a $(1,\kappa)$-almost-geodesic. Since $f$ is bi-Lipschitz, $g$ is Lipschitz, and so $\sigma_0$ is Lipschitz and hence absolutely continuous. Moreover, if $g^\prime(t)$ exists then
\begin{align*}
\sigma_0^\prime(t) = \sigma^\prime(g(t)) g^\prime(t).
\end{align*}
When $g^\prime(t)$ exists and $f^\prime(g(t))$ exists and is non-zero, we have
\begin{align*}
g^\prime(t) = \frac{1}{f^\prime(g(t))}.
\end{align*}
Now, by the Lebesgue differentiation theorem applied to $f$, there exists a set $E \subset [0,T]$ of full measure so that if $s \in E$ then $f^\prime(s)$ exists and
\begin{align*}
f^\prime(s) = k_\Omega(\sigma(s); \sigma^\prime(s)).
\end{align*}
Since $g$ is bi-Lipschitz, $g^{-1}(E) \subset [0,\ell_\Omega(\sigma)]$ has full measure. Hence, as $\sigma^\prime(t)\neq 0$ for all $t \in [0,T]$, we can write (in view of \eqref{eq:deriv_bounds} above)
\begin{align*}
g^\prime(t) = \frac{1}{k_\Omega(\sigma(g(t)); \sigma^\prime(g(t)))}
\end{align*}
for almost every $t \in [0,\ell_\Omega(\sigma)]$. So for almost every $t \in [0, \ell_\Omega(\sigma)]$
\begin{align*}
k_\Omega( \sigma_0(t); \sigma_0^\prime(t)) = k_\Omega\Big(\sigma(g(t)); \sigma^\prime(g(t))g^\prime(t)\Big)=1.
\end{align*}
Therefore
\begin{align*}
\ell_{\Omega}(\sigma_0) = \ell_\Omega(\sigma) \leq K_\Omega(x,y) + \kappa.
\end{align*}
So, by Lemma~\ref{lem:restricted_l}, whenever $0 \leq s \leq t \leq \ell_\Omega(\sigma)$ we have
\begin{align*}
\abs{t-s} = \ell_\Omega(\sigma_0|_{[s,t]}) \leq K_\Omega(\sigma_0(t),\sigma_0(s)) +\kappa.
\end{align*}
Since $\sigma_0$ is absolutely continuous, Result~\ref{res:local_global} implies that
\begin{align*}
K_\Omega(\sigma_0(t),\sigma_0(s)) \leq \ell_\Omega(\sigma_0|_{[s,t]}) = \abs{t-s}.
\end{align*}
So $\sigma_0$ is a $(1,\kappa)$-almost-geodesic.
\end{proof}
\subsection{Real geodesics}
In this subsection we show that, when $\Omega$ is a taut bounded domain, any real geodesic possesses a certain degree of extra regularity: namely, it is a $(1,0)$-almost-geodesic.
\begin{proposition}\label{prop:geodesics}
Suppose $\Omega \subset \Cb^d$ is a bounded domain. Then there exists $C_\Omega > 0$ so that any real geodesic $\sigma : I \rightarrow \Omega$ is $C_\Omega$-Lipschitz (with respect to the Euclidean distance). In particular
\begin{equation}\label{eq:ftCalc}
\sigma(t) = \sigma(t_0) + \int_{t_0}^t \sigma^\prime(s) ds
\end{equation}
for any $t,t_0 \in I$. Moreover, if $\Omega$ is taut then
\begin{align*}
k_\Omega(\sigma(t); \sigma^\prime(t)) = 1
\end{align*}
for almost every $t \in I$.
\end{proposition}
\begin{proof}
By Proposition~\ref{prop:lip} there exists $c >0$ so that
\begin{align*}
K_\Omega(x,y) \geq c \norm{x-y}
\end{align*}
for all $x,y \in \Omega$. Then
\begin{align*}
\norm{\sigma(t) - \sigma(s)} \leq \frac{1}{c} \abs{t-s}.
\end{align*}
Thus $\sigma$ is Lipschitz (and $1/c$ is the $C_{\Omega}$ mentioned above). In particular, $\sigma$ is absolutely continuous, from which \eqref{eq:ftCalc} follows.
Next suppose that $\Omega$ is taut. We now appeal to Theorem~1.2 in \cite{V1989}. Since $\Omega$ is a taut and bounded domain, $k_\Omega$ is continuous; see, for instance, \cite[Section~3.5]{JP1993}. Hence the conditions stated in part~(1) of Result~\ref{res:local_global} are satisfied. Thus, in our specific context, \cite[Theorem 1.2]{V1989} reads as
\begin{align*}
\int_{t}^{t+h} k_\Omega(\sigma(s); \sigma^\prime(s))ds
= \sup_{\mathcal{P}}\sum_{j=1}^{N(\mathcal{P})} K_\Omega(\sigma(s_{j-1}), \sigma(s_j))
\end{align*}
where the supremum above ranges over all partitions
\begin{align*}
\mathcal{P}\,:\,t=s_0 < s_1 < s_2 < \dots < s_{N(\mathcal{P})}=t+h
\end{align*}
of $[t, t+h]$, and where $h > 0$ is such that $t, t+h \in I$. As $\sigma$ is a real geodesic, we then have
\begin{align*}
\int_{t}^{t+h} k_\Omega(\sigma(s); \sigma^\prime(s))ds = h
\end{align*}
for all $h > 0$ such that $t, t+h \in I$. Then by the Lebesgue differentiation theorem
\begin{align*}
k_\Omega(\sigma(t); \sigma^\prime(t)) = 1
\end{align*}
for almost every $t \in I$.
\end{proof}
\subsection{Quasi-geodesics}
In this subsection, we show that any quasi-geodesic can, in a certain sense, be approximated by an almost-geodesic. This proposition will be needed in our proof of continuous extension of isometries.
\begin{definition} Suppose $\Omega \subset \Cb^d$ is a bounded domain and $I \subset \Rb$ is an interval. For $\lambda \geq 1$ and $\kappa \geq 0$, a map $\sigma:I \rightarrow \Omega$ is called a \emph{$(\lambda, \kappa)$-quasi-geodesic} if
\begin{align*}
\frac{1}{\lambda} \abs{t-s} -\kappa \leq K_\Omega(\sigma(s), \sigma(t)) \leq \lambda \abs{t-s} + \kappa
\end{align*}
for all $s,t \in I$.
\end{definition}
\begin{remark}
Note that a $(\lambda, \kappa)$-quasi-geodesic is not required to be continuous. It is in this sense that it differs from a $(\lambda, \kappa)$-\emph{almost}-geodesic, which must have greater regularity; see Definition~\ref{def:almost_geodesic}. We also remark that the proposition below makes no assertion about the existence of quasi-geodesics. Finally, while Proposition~\ref{prop:almost_geod_exist} asserts that any pair of points of a bounded domain $\Omega\subset \Cb^d$ can be joined by a $(1, \kappa)$-almost-geodesic\,---\,which is more regular than a $(1, \kappa)$-quasi-geodesic\,---\,this comes \emph{with the proviso that $\kappa > 0$}.
\end{remark}
\begin{proposition}\label{prop:approx_quasi_geod}
Suppose $\Omega \subset \Cb^d$ is a bounded domain. For any $\lambda \geq 1$, $\kappa \geq 0$ there exist constants
$R > 0$, $\lambda_0\geq 1$, and $\kappa_0\geq 0$, depending \emph{only} on the pair $(\lambda, \kappa)$,
that have the following property: for any $(\lambda, \kappa)$-quasi-geodesic $\sigma:[a,b] \rightarrow \Omega$ there exists a $(\lambda_0, \kappa_0)$-almost-geodesic $S : [0,T] \rightarrow \Omega$ with $S(0) = \sigma(a)$, $S(T) = \sigma(b)$, and such that
\begin{align*}
\max \left\{ \sup_{t \in [a,b]} K_\Omega(\sigma(t), S), \sup_{t \in [0,T]} K_\Omega(S(t), \sigma) \right\} \leq R.
\end{align*}
Here, given a set $E\subset \Omega$ and a point $o\in \Omega$, we write $K_{\Omega}(o, E) := \inf_{x\in E}K_{\Omega}(o, x)$.
\end{proposition}
\begin{proof}
The argument falls into two cases, depending on the magnitude of $\abs{b - a}$. The case that requires some work is when $\abs{b - a}$ is large.
\medskip
\noindent{{\bf Case 1:} First consider the case when $\abs{b - a} > 1/2$. Fix a partition
\begin{align*}
a=t_0 < t_1 < t_2 <\dots < t_{N} = b
\end{align*}
so that $1/2\leq \abs{t_{k} - t_{k-1}} \leq 1$. For $1 \leq k \leq N$, let $\gamma_k : [0,T_k] \rightarrow \Omega$ be a $(1,1)$-almost-geodesic with $\gamma_k(0) = \sigma(t_{k-1})$ and $\gamma_k(T_k)=\sigma(t_k)$; notice that such a curve exists by Proposition~\ref{prop:almost_geod_exist}.}
Now, by the properties of $\gamma_k$ we get
\begin{align*}
T_k -1 \leq K_\Omega(\gamma_k(0), \gamma_k(T_k)) = K_\Omega(\sigma(t_{k-1}), \sigma(t_k)) \leq \lambda \abs{t_k-t_{k-1}} +\kappa \leq \lambda + \kappa,
\end{align*}
whence $T_k \leq \lambda + \kappa+1$, $k = 1,\dots, N$. Therefore,
\begin{align*}
K_\Omega(\gamma_k(t), \sigma(t_{k-1}))
= K_\Omega(\gamma_k(t), \gamma_k(0)) &\leq \abs{t} +1 \\
& \leq T_k + 1 \leq \lambda + \kappa+2
\end{align*}
for any $t \in [0,T_k]$.
Let $S : [a,b] \rightarrow \Omega$ be the curve defined as follows:
\begin{align*}
S(t) = \gamma_k\left( \frac{T_k}{t_k-t_{k-1}}( t-t_{k-1})\right), \; \text{ if } \; t_{k-1} \leq t \leq t_k, \; k = 1,\dots, N.
\end{align*}
Then, using the estimate above, for $t \in [t_{k-1},t_k]$ we have
\begin{align*}
K_\Omega(S(t), \sigma(t))
& \leq K_\Omega(S(t), \sigma(t_{k-1})) + K_\Omega(\sigma(t_{k-1}), \sigma(t)) \\
& \leq \lambda + \kappa +2+ \lambda\abs{t_{k-1}-t} + \kappa \\
&\leq 2 \lambda + 2 \kappa+2.
\end{align*}
Write $R := 2 \lambda + 2 \kappa+2$. Then
\begin{align*}
\abs{K_\Omega(S(t), S(s)) - K_\Omega(\sigma(t), \sigma(s))} \leq K_\Omega(S(t), \sigma(t))
+ K_\Omega(S(s), \sigma(s)) \leq 2R
\end{align*}
and so
\begin{align*}
\frac{1}{\lambda}\abs{t-s} - \kappa - 2R \leq K_\Omega(S(t), S(s)) \leq \lambda\abs{t-s} +\kappa + 2R.
\end{align*}
Finally, since each $\gamma_k$ is a $(1,1)$-almost-geodesic, we see that
\begin{align*}
k_\Omega(S(t); S^\prime(t))\, \leq \max_{1 \leq k \leq N}\,\frac{T_k}{t_{k}-t_{k-1}} \leq
2\lambda + 2\kappa + 2
\end{align*}
for almost every $t \in [a,b]$.
Thus $S:[a,b] \rightarrow \Omega$ is a $(\lambda_0, \kappa_0)$-almost-geodesic where $\lambda_0 = 2\lambda+2\kappa+2$ and $\kappa_0 = \kappa+2R = 4\lambda+5\kappa+4$.
\medskip
\noindent{{\bf Case 2:} Now consider the case when $\abs{b - a}\leq 1/2$. Let $S : [0, T]\rightarrow \Omega$ be a $(1,1)$-almost-geodesic with $S(0) = \sigma(a)$ and $S(T)=\sigma(b)$. Arguing as before shows that
\begin{align*}
T \leq \frac{\lambda}{2} +\kappa+2.
\end{align*}
Now if $t \in [a,b]$ then
\begin{align*}
K_\Omega(\sigma(t), S(0))= K_\Omega(\sigma(t), \sigma(a)) \leq \frac{\lambda}{2} +\kappa
\end{align*}
and if $t \in [0,T]$ then
\begin{align*}
K_\Omega(S(t), \sigma(a))= K_\Omega(S(t), S(0)) \leq T +1 \leq \frac{\lambda}{2} +\kappa+3.
\end{align*}
Since $\lambda \geq 1$, both of the bounds above are at most $2\lambda+2\kappa+2 = R$; moreover, $S$, being a $(1,1)$-almost-geodesic, is in particular a $(\lambda_0, \kappa_0)$-almost-geodesic. This completes the proof.}
\end{proof}
\section{A visibility condition}\label{sec:visible}
This section is dedicated to proving Theorem~\ref{thm:visible}, which is a key result of the present work. What makes Theorem~\ref{thm:visible} so useful in the proofs in the later sections is that, if $\Omega$ is a Goldilocks domain, then $(\Omega, K_{\Omega})$ \emph{adequately} resembles a visibility space (in the sense of \cite{EO1973}, for instance), even though $(\Omega, K_{\Omega})$ is \emph{not} in general Gromov hyperbolic, nor is it known whether every pair of points can be joined by a geodesic.
We will need the following simple observation:
\begin{observation}
Suppose $f:\Rb_{\geq 0} \rightarrow \Rb_{\geq 0}$ is a bounded Lebesgue-measurable function such that
\begin{align*}
\int\nolimits_0^{\epsilon} \frac{1}{r} f(r) dr < \infty
\end{align*}
for some (and hence any) $\epsilon >0$. Then
\begin{align*}
\int\nolimits_0^{\infty} f(Ae^{-Bt}) dt < \infty
\end{align*}
for any $A,B > 0$.
\end{observation}
This is an immediate consequence of the change-of-variable formula, writing
$r = Ae^{-Bt}$ in the second integral above.
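Explicitly, the substitution $r = Ae^{-Bt}$, for which $dt = -\,dr/(Br)$, gives
\begin{align*}
\int\nolimits_0^{\infty} f(Ae^{-Bt})\, dt = \frac{1}{B}\int\nolimits_0^{A} \frac{1}{r}f(r)\, dr,
\end{align*}
and the right-hand side is finite because $f$ is bounded and $\int\nolimits_0^{\epsilon} \frac{1}{r}f(r)\, dr < \infty$.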
\begin{proof}[The proof of Theorem~\ref{thm:visible}] Suppose that there does not exist a compact set with the desired property. Then we can find a sequence $\sigma_n:[a_n,b_n] \rightarrow \Omega$ of $(\lambda, \kappa)$-almost-geodesics so that $\sigma_n(a_n) \in V_\xi$, $\sigma_n(b_n) \in V_\eta$, and
\begin{align*}
0 = \lim_{n \rightarrow \infty} \max\{ \delta_\Omega(\sigma_n(t)) : t \in [a_n,b_n]\}.
\end{align*}
By reparametrizing each $\sigma_n$ we can assume that
\begin{align*}
\delta_\Omega(\sigma_n(0)) = \max\{ \delta_\Omega(\sigma_n(t)) : t \in [a_n,b_n]\}.
\end{align*}
Then by passing to a subsequence we can assume $a_n \rightarrow a \in [-\infty,0]$, $b_n \rightarrow b \in [0,\infty]$, $\sigma_n(a_n) \rightarrow \xi^\prime$, and $\sigma_n(b_n) \rightarrow \eta^\prime$. By assumption $\xi^\prime \in \overline{V_\xi} \cap \partial \Omega$ and $\eta^\prime \in \overline{V_\eta} \cap \partial \Omega$. Notice that $\xi^\prime \neq \eta^\prime$ because $\overline{V_\xi} \cap \overline{V_\eta} = \emptyset$.
By Proposition~\ref{prop:ag_Lip}, there exists some $C>0$ so that each $\sigma_n$ is $C$-Lipschitz with respect to the Euclidean distance. Thus we can pass to another subsequence so that $\sigma_n$ converges locally uniformly on $(a,b)$ to a curve $\sigma:(a,b) \rightarrow \overline{\Omega}$ (we restrict to the open interval because $a$ could be $-\infty$ and $b$ could be $\infty$). Notice that $a \neq b$ because each $\sigma_n$ is $C$-Lipschitz and so
\begin{align*}
0 < \norm{\xi^\prime-\eta^\prime} \leq C \abs{b-a}.
\end{align*}
Since $\sigma_n$ is a $(\lambda, \kappa)$-almost-geodesic
\begin{equation*}
k_\Omega(\sigma_n(t); \sigma_n^\prime(t)) \leq \lambda
\end{equation*}
for almost every $t \in [a_n,b_n]$. We claim that:
\begin{equation}\label{eq:speed_estimate}
\norm{\sigma_n^\prime(t)} \leq \lambda M_\Omega(\delta_\Omega(\sigma_n(t)))
\; \; \; \text{for almost every } t \in [a_n,b_n].
\end{equation}
In the case when $\sigma_n^\prime(t) = 0$ this is immediate and if $\sigma_n^\prime(t) \neq 0$ we have
\begin{align*}
\norm{\sigma_n^\prime(t)} \leq \frac{\lambda}{k_\Omega\left(\sigma_n(t); \frac{1}{\norm{\sigma_n^\prime(t)}} \sigma_n^\prime(t) \right)} \leq \lambda M_\Omega(\delta_\Omega(\sigma_n(t))).
\end{align*}
\smallskip
\noindent \textbf{Claim 1:} $\sigma:(a,b) \rightarrow \overline{\Omega}$ is a constant map.
\noindent \emph{Proof.} Since
\begin{align*}
\delta_\Omega(\sigma_n(t)) \leq \delta_\Omega(\sigma_n(0))
\end{align*}
we see that
\begin{align*}
M_\Omega(\delta_\Omega(\sigma_n(t))) \leq M_\Omega(\delta_\Omega(\sigma_n(0))).
\end{align*}
Thus $M_\Omega(\delta_\Omega(\sigma_n(t))) \rightarrow 0$ uniformly. But then if $u \leq w$ and $u,w \in (a,b)$
\begin{align*}
\norm{\sigma(u)-\sigma(w)}
= \lim_{n \rightarrow \infty} \norm{\sigma_n(u)-\sigma_n(w)}
&\leq \limsup_{n \rightarrow \infty} \int_u^w \norm{\sigma_n^\prime(t)} dt\\
& \leq \lambda\limsup_{n \rightarrow \infty} \int_u^w\!\!M_\Omega(\delta_\Omega(\sigma_n(t))) dt = 0.
\end{align*}
Thus $\sigma$ is constant. \hfill $\blacktriangleleft$
\medskip
We will establish a contradiction by proving the following:
\medskip
\noindent \textbf{Claim 2:} $\sigma:(a,b) \rightarrow \overline{\Omega}$ is not a constant map.
\noindent \emph{Proof.} Fix $x_0 \in \Omega$. Then, since $\Omega$ is a Goldilocks domain, there exist $C,\alpha > 0$ so that
\begin{align*}
K_\Omega(x,x_0) \leq C + \alpha \log \frac{1}{\delta_\Omega(x)}
\end{align*}
for all $x \in \Omega$. Therefore
\begin{align*}
\frac{1}{\lambda}\abs{t} - \kappa \leq K_\Omega(\sigma_n(0), \sigma_n(t))
& \leq K_\Omega(\sigma_n(0),x_0) + K_\Omega(x_0, \sigma_n(t)) \\
& \leq 2C + \alpha \log \frac{1}{\delta_\Omega(\sigma_n(0))\delta_\Omega(\sigma_n(t))}.
\end{align*}
Thus
\begin{align*}
\delta_\Omega(\sigma_n(t)) \leq \sqrt{ \delta_\Omega(\sigma_n(0)) \delta_\Omega(\sigma_n(t)) } \leq A e^{-B\abs{t}}
\end{align*}
where $A = e^{(2C+\kappa)/(2\alpha)}$ and $B = 1/(2\alpha\lambda)$.
Thus, by the estimate \eqref{eq:speed_estimate}, for almost every $t \in [a_n,b_n]$ we have
\begin{align*}
\norm{\sigma_n^\prime(t)} \leq \lambda M_\Omega(\delta_\Omega(\sigma_n(t))) \leq \lambda M_\Omega( A e^{-B\abs{ t}})
\end{align*}
Now, since the Observation above guarantees that $t \mapsto M_\Omega(Ae^{-B\abs{t}})$ is integrable on $\Rb$, we can fix $a^\prime, b^\prime \in (a,b)$ so that
\begin{align*}
\norm{\xi^\prime-\eta^\prime} > \lambda \int_a^{a^\prime} M_\Omega(Ae^{-B\abs{t}}) dt +\lambda \int_{b^\prime}^b M_\Omega(Ae^{-B\abs{t}}) dt.
\end{align*}
Then
\begin{align*}
\norm{\sigma(b^\prime)-\sigma(a^\prime) }
&= \lim_{n \rightarrow \infty} \norm{\sigma_n(b^\prime)-\sigma_n(a^\prime) } \\
& \geq \lim_{n \rightarrow \infty} \big(\norm{\sigma_n(b_n)-\sigma_n(a_n) } -
\norm{\sigma_n(b_n)-\sigma_n(b^\prime) } - \norm{\sigma_n(a^\prime)-\sigma_n(a_n) }\big) \\
& \geq \norm{\xi^\prime-\eta^\prime} - \limsup_{n \rightarrow \infty} \int_{b^\prime}^{b_n} \norm{\sigma_n^\prime(t)} dt - \limsup_{n \rightarrow \infty} \int_{a_n}^{a^\prime} \norm{\sigma_n^\prime(t)} dt \\
& \geq \norm{\xi^\prime-\eta^\prime} - \limsup_{n \rightarrow \infty}\lambda \int_{b^\prime}^{b_n} M_\Omega(Ae^{-B \abs{t}})dt - \limsup_{n \rightarrow \infty} \lambda\int_{a_n}^{a^\prime}M_\Omega(Ae^{-B \abs{t}}) dt\\
& = \norm{\xi^\prime-\eta^\prime} - \lambda\int_{b^\prime}^b M_\Omega(Ae^{-B\abs{t}}) dt -\lambda \int_a^{a^\prime} M_\Omega(Ae^{-B\abs{t}}) dt >0.
\end{align*}
Thus $\sigma:(a,b) \rightarrow \overline{\Omega}$ is non-constant. \hfill $\blacktriangleleft$
\medskip
The above contradicts Claim~1. This establishes the existence of the compact $K$ with the stated property.
\end{proof}
\section{Extensions of quasi-isometries}\label{sec:gromov_prod}
\subsection{The Gromov boundary} Let $(X,d)$ be a metric space. Given three points $x,y,o \in X$, the \emph{Gromov product} is given by
\begin{align*}
(x|y)_o = \frac{1}{2} \left( d(x,o)+d(o,y)-d(x,y) \right).
\end{align*}
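For instance, if $o$ lies on a geodesic joining $x$ to $y$, then $d(x,y) = d(x,o) + d(o,y)$ and so $(x|y)_o = 0$. In general $(x|y)_o \geq 0$ by the triangle inequality and, in a Gromov hyperbolic space, the Gromov product equals, up to an additive error depending only on the hyperbolicity constant, the distance from $o$ to a geodesic joining $x$ and $y$.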
When $(X,d)$ is proper and Gromov hyperbolic, the Gromov product can be used to define an abstract boundary at infinity denoted $X(\infty)$ and called the \emph{Gromov boundary}. In particular, a sequence $(x_n)_{n \in \Nb} \subset X$ is said to \emph{converge at $\infty$} if
\begin{align*}
\liminf_{n,m \rightarrow \infty} (x_n|x_m)_o = \infty
\end{align*}
for some (and hence any) $o \in X$. Two sequences $(x_n)_{n \in \Nb}$ and $(y_n)_{n \in \Nb}$ in $X$ are equivalent if
\begin{align*}
\liminf_{n,m \rightarrow \infty} (x_n|y_m)_o = \infty
\end{align*}
for some (and hence any) $o \in X$. Finally, $X(\infty)$ is the set of equivalence classes of sequences converging at $\infty$. Moreover, $X \cup X(\infty)$ has a natural topology (see, for instance,~\cite[Part III.H.3]{BH1999}) that makes it a compactification of $X$.
\subsection{Continuous extensions of quasi-isometries}\label{ssec:all_about_qi}
Given a bounded domain $\Omega\subset \Cb^d$, it is, in general, very hard to determine whether $(\Omega, K_{\Omega})$ is Gromov hyperbolic. Furthermore, as we saw in subsection~\ref{ssec:cont_extn}, $K_{\Omega}$ fails to be Gromov hyperbolic for domains as regular as convex domains with $C^\infty$-smooth boundary if $\partial\Omega$ has points of infinite type; see \cite{Z2014}. This renders unusable a very natural approach, namely Result~\ref{res:gromov_qi_ext}, for studying the boundary behavior of continuous quasi-isometries (for the Kobayashi metric), even if they are holomorphic, on such domains. We therefore explore alternative notions of good compactifications\,---\,from the viewpoint of obtaining continuous extensions of quasi-isometries\,---\,of $(\Omega, K_{\Omega})$. To this end, we begin with a couple of very general definitions.
\begin{definition}
Let $(X,d)$ be a metric space. A pair $(\iota, X^*)$ is a \emph{compactification} of $X$ if $X^*$ is a sequentially compact Hausdorff topological space, $\iota:X \rightarrow X^*$ is a homeomorphism onto its image, and $\iota(X)$ is open and dense in $X^*$.
\end{definition}
\begin{definition}\label{def:gooc}
Suppose $(\iota,X^*)$ is a compactification of a geodesic metric space $(X,d)$. We say $(\iota,X^*)$ is a \emph{good compactification} if for all sequences $\sigma_n:[a_n,b_n] \rightarrow X$ of geodesics with the property
\begin{align*}
\lim_{n \rightarrow \infty} \iota(\sigma_n(a_n)) = \lim_{n \rightarrow \infty} \iota(\sigma_n(b_n)) \in X^* \setminus \iota(X)
\end{align*}
we have
\begin{align*}
\liminf_{n \rightarrow \infty} d(o, \sigma_n) = \infty
\end{align*}
for any $o \in X$.
\end{definition}
To clarify our notation: if $\sigma : [a, b]\rightarrow X$ is a map and $o\in X$, then
$d(o,\sigma) := \inf\{d(o,\sigma(s)) : s \in [a,b]\}$.
As the next observation shows, good compactifications only exist for proper metric spaces:
\begin{observation} Suppose $(\iota,X^*)$ is a good compactification of a metric space $(X,d)$. Then $(X,d)$ is a proper metric space.
\end{observation}
\begin{proof}
Fix $R > 0$ and $x_0 \in X$. We claim that the set $B:=\{ y \in X : d(y,x_0) \leq R\}$ is compact. To see this, fix a sequence $x_n \in B$. Since $X^*$ is sequentially compact we can assume, passing to a subsequence if necessary, that $\iota(x_n) \rightarrow \xi \in X^*$. If $\xi \in \iota(X)$, then $x_n \rightarrow \iota^{-1}(\xi)$ in $X$ and, since $B$ is closed, $\iota^{-1}(\xi) \in B$. If $\xi \in X^* \setminus \iota(X)$ then each curve $\sigma_n:[0,0] \rightarrow X$ given by $\sigma_n(0)=x_n$ is a geodesic. So, by the definition of a good compactification,
\begin{align*}
\infty = \liminf_{n \rightarrow \infty} d(x_0, \sigma_n) = \liminf_{n \rightarrow \infty} d(x_0, x_n) \leq R,
\end{align*}
which is a contradiction, whence $\xi\notin X^*\setminus \iota(X)$.
\end{proof}
\begin{remark}
We now discuss a few examples and look at some motivations underlying Definition~\ref{def:gooc}.
\begin{enumerate}
\item Let $X^* = \Rb^d \cup \{\infty\}$ be the one-point compactification of $(\Rb^d, d_{\Euc})$. Then $X^*$ is not a good compactification: for a fixed unit vector $v$, the geodesics $\sigma_n : [-n,n] \rightarrow \Rb^d$ given by $\sigma_n(t) = tv$ have both endpoints converging to $\infty$, yet each of them passes through the origin.
\item In view of Theorem~\ref{thm:quasi_isometry_ext} below, it would be desirable if the Gromov compactification $X\cup X(\infty)$, where $(X,d)$ is a proper geodesic Gromov hyperbolic space, were subsumed by Definition~\ref{def:gooc}. This is in fact the case by Result~\ref{res:gromov_qi_ext}.
\item Let $\Omega$ be a bounded convex domain with $C^{1,\alpha}$-smooth boundary and assume that for each $\xi\in \partial\Omega$, the affine set $\xi+H_\xi(\partial\Omega)$ (see subsection~\ref{ssec:cond_1} for the definition) intersects
$\overline\Omega$ precisely at $\xi$. It is a classical fact that $(\Omega, K_{\Omega})$ is Cauchy complete; see, for instance,~\cite[Proposition 2.3.45]{A1989}. It then follows, by the Hopf--Rinow theorem, that $(\Omega, K_{\Omega})$ is a geodesic metric space. If $\partial\Omega$ contains points that are not of finite type (in the sense of D'Angelo), then $(\Omega, K_{\Omega})$ is not Gromov hyperbolic; see \cite{Z2014}. Yet, irrespective of whether or not $(\Omega, K_{\Omega})$ is Gromov hyperbolic, it follows from \cite[Theorem~2.11]{Z2015} that $({\sf id}_{\Omega}, \overline\Omega)$ is a good compactification.
\end{enumerate}
\end{remark}
The next theorem could be stated for any geodesic metric space $(X, d)$ that admits a good compactification $(\iota, X^*)$ and any quasi-isometric embedding $F : (X, d) \rightarrow (\Omega, K_\Omega)$. However, it is unclear what the interest in such a general set-up could be. On the other hand, we have seen in the discussion in subsection~\ref{ssec:cont_extn} that quasi-isometric embeddings\,---\,relative to the Kobayashi metric\,---\,between \emph{domains} arise rather naturally, while existing tools for studying their boundary are no longer effective. These are the considerations behind the statement about quasi-isometries between two domains in Theorem~\ref{thm:quasi_isometry_ext}. Observe that Theorem~\ref{thm:qi_ext} is a special case of Theorem~\ref{thm:quasi_isometry_ext}.
\begin{theorem}\label{thm:quasi_isometry_ext}
Let $D$ be a bounded domain in $\Cb^k$ and suppose $(D, K_D)$ admits a good compactification $(\iota, D^*)$. Let $\Omega\subset \Cb^d$ be a Goldilocks domain. If $F : (D, K_D) \rightarrow (\Omega, K_\Omega)$ is a continuous quasi-isometric embedding, then $F$ extends to a continuous map from $D^*$ to $\overline{\Omega}$.
\end{theorem}
The following proposition is the key to proving Theorem~\ref{thm:quasi_isometry_ext}. Its proof follows immediately from
Theorem~\ref{thm:visible} and Proposition~\ref{prop:approx_quasi_geod}.
\begin{proposition}\label{prop:qg_visibility}
Suppose $\Omega \subset \Cb^d$ is a Goldilocks domain and $\lambda \geq1$, $\kappa \geq 0$. If $\xi,\eta \in \partial\Omega$ and $V_\xi, V_\eta$ are neighborhoods of $\xi,\eta$ in $\overline{\Omega}$ so that $\overline{V_\xi} \cap \overline{V_\eta} = \emptyset$, then for each $x_0 \in \Omega$ there exists $R > 0$ with the following property: if $\sigma: [a,b] \rightarrow \Omega$ is a $(\lambda, \kappa)$-quasi-geodesic with $\sigma(a) \in V_\xi$ and $\sigma(b) \in V_\eta$ then
\begin{align*}
K_\Omega( x_0, \sigma) \leq R.
\end{align*}
\end{proposition}
\begin{remark} If $(\Omega, K_\Omega)$ is Cauchy complete then the conclusions of Theorem~\ref{thm:visible} and Proposition~\ref{prop:qg_visibility} are equivalent (by Result~\ref{res:hopf_rinow}). But in general, Proposition~\ref{prop:qg_visibility} is weaker, as the following hypothetical example illustrates: suppose there exist two sequences $x_n \rightarrow \xi \in \partial \Omega$ and $y_n \rightarrow \eta \in \partial \Omega$ so that
\begin{align*}
\sup_{n \in \Nb} K_\Omega(x_n, x_0) = R_1 < \infty \text{ and } \sup_{n \in \Nb} K_\Omega(y_n, x_0) = R_2 < \infty.
\end{align*}
Then the maps $\sigma_n :[0,1] \rightarrow \Omega$ given by
\begin{align*}
\sigma_n(t) = \left\{ \begin{array}{ll} x_n & \text{ if } 0 \leq t < 1/2 \\
y_n & \text{ if } 1/2 \leq t \leq 1.
\end{array}
\right.
\end{align*}
are all $(1,R_1+R_2+1)$-quasi-geodesics. But
\begin{align*}
\lim_{n \rightarrow \infty} \left(\max_{t \in [0,1]} \delta_\Omega(\sigma_n(t))\right)=0.
\end{align*}
\end{remark}
Theorem~\ref{thm:quasi_isometry_ext} is an application of Theorem~\ref{thm:visible}, with Proposition~\ref{prop:qg_visibility} serving as a visibility theorem for quasi-geodesics. In fact, visibility may be seen as a tool for controlling the oscillation of $F$ ($F$ as in Theorem~\ref{thm:quasi_isometry_ext}) along various sequences $(x_n)_{n \in \Nb} \subset D$ as $\iota(x_n)$ approaches some chosen point in $\xi\in D^*\setminus \iota(D)$. The idea of the proof is as follows. If the cluster set of values as one approaches $\xi$ were non-trivial, we would have two sequences $(x_n)_{n \in \Nb}$ and $(y_n)_{n \in \Nb}$ as above such that $(F(x_n))_{n \in \Nb}$ and $(F(y_n))_{n \in \Nb}$ approach two \emph{different} points in $\partial\Omega$. Let $\sigma_n$ be a geodesic joining $x_n$ to $y_n$. Then $F\circ\sigma_n$ are quasi-geodesics, whence, by Proposition~\ref{prop:qg_visibility}, these curves must be within some finite Kobayashi distance from any chosen point in $\Omega$. But then the curves $\sigma_n$ would have the analogous property in $D$, which is ruled out by the geometry of $D$.
\begin{proof}[The proof of Theorem~\ref{thm:quasi_isometry_ext}]
Fix some $\xi \in D^* \setminus \iota(D)$. We claim that
$\lim_{\iota(x) \rightarrow \xi} F(x)$
exists and is in $\partial \Omega$. Fix a sequence $(x_n)_{n\in \Nb}\subset D$ so that $\iota(x_n) \rightarrow \xi$. Since $\overline{\Omega}$ is compact we can assume, passing to a subsequence if necessary, that $F(x_n)$ converges to some $\eta \in \overline{\Omega}$. Fix a point $x_0\in D$. Since $\iota(x_n) \rightarrow \xi \in D^* \setminus \iota(D)$ and $(D, K_D)$ is proper we see that
\begin{align*}
\lim_{n \rightarrow \infty} K_D(x_n, x_0) = \infty.
\end{align*}
Then, since $F$ is a quasi-isometric embedding,
\begin{align*}
\lim_{n \rightarrow \infty} K_\Omega(F(x_n), F(x_0)) = \infty.
\end{align*}
Thus $\eta \in \partial \Omega$. Now we claim that
\begin{align*}
\lim_{\iota(x) \rightarrow \xi} F(x) = \eta.
\end{align*}
If not, then we would have a sequence $y_n \in D$ so that $\iota(y_n) \rightarrow \xi$, $F(y_n) \rightarrow \eta^\prime$, and $\eta \neq \eta^\prime$. Let $\sigma_n:[0,T_n] \rightarrow D$ be a geodesic with $\sigma_n(0)=x_n$ and $\sigma_n(T_n) = y_n$. Then $(F \circ \sigma_n) : [0,T_n] \rightarrow \Omega$ is a quasi-geodesic and since $\eta \neq \eta^\prime$, Proposition~\ref{prop:qg_visibility} implies that
\begin{align*}
\max_{n \in \Nb} K_\Omega(F(x_0), F \circ \sigma_n) < \infty.
\end{align*}
But since $F$ is a quasi-isometric embedding this implies that
\begin{align*}
\max_{n \in \Nb} K_D(x_0, \sigma_n) < \infty.
\end{align*}
This contradicts the fact that $(\iota, D^*)$ is a good compactification. Thus for any $\xi \in D^* \setminus \iota(D)$
\begin{align*}
\lim_{\iota(x) \rightarrow \xi} F(x)
\end{align*}
exists and is in $\partial\Omega$.
Next define the map $\wt{F}: D^* \rightarrow \overline{\Omega}$ by
\begin{align*}
\wt{F}(\xi) = \begin{cases}
F(\iota^{-1}(\xi)), &\text{if $\xi \in \iota(D)$}, \\
\lim_{\iota(x) \rightarrow \xi} F(x), &\text{if $\xi \in D^* \setminus \iota(D)$}.
\end{cases}
\end{align*}
We claim that $\wt{F}$ is continuous on $D^*$. Since $F$ is continuous on $D$ and $\iota(D) \subset D^*$ is open, it is enough to show that $\wt{F}$ is continuous at each $\xi \in D^* \setminus \iota(D)$. So fix some $\xi \in D^*\setminus \iota(D)$. Since $\overline{\Omega}$ is compact it is enough to show the following: if $\xi_n \rightarrow \xi$ and $\wt{F}(\xi_n) \rightarrow \eta$ then $\eta = \wt{F}(\xi) $. Now for each $n$ pick $x_n \in D$ sufficiently close to $\xi_n$ (in the topology of $D^*$) so that $\iota(x_n) \rightarrow \xi$ and
\begin{align*}
\|F(x_n) - \wt{F}(\xi_n)\| < 1/n.
\end{align*}
Then
\begin{align*}
\eta = \lim_{n \rightarrow \infty} \wt{F}(\xi_n) =\lim_{n \rightarrow \infty} F(x_n)
\end{align*}
but since $\iota(x_n) \rightarrow \xi$, from the discussion in the preceding paragraph, we get
\begin{align*}
\lim_{n \rightarrow \infty} F(x_n) = \wt{F}(\xi).
\end{align*}
Hence $\wt{F}$ is continuous.
\end{proof}
\subsection{The behavior of the Gromov product on Goldilocks domains}\label{ssec:Gromov_product}
Returning to the discussion at the start of this section: if $(X,d)$ is a proper Gromov hyperbolic metric space and $x_n, y_m$ are two sequences in $X$ converging to distinct points in $X(\infty)$ then (by definition)
\begin{align*}
\limsup_{n,m \rightarrow \infty} (x_n | y_m)_{o} < \infty
\end{align*}
for any $o \in X$. We will now show that the Kobayashi distance on a Goldilocks domain has similar behavior. If $\Omega \subset \Cb^d$ is a domain and $x,y,o \in \Omega$, we shall denote the Gromov product for $(\Omega, K_{\Omega})$ by $(x|y)_o^{\Omega}$.
\begin{proposition}\label{prop:gromov_prod}
Suppose $\Omega \subset \Cb^d$ is a Goldilocks domain. If $x_n, y_n \in \Omega$, $x_n \rightarrow \xi \in \partial \Omega$, $y_n \rightarrow \eta \in \partial \Omega$, and $\xi \neq \eta$ then
\begin{align*}
\limsup_{n,m \rightarrow \infty} (x_n|y_m)_o^{\Omega} < \infty
\end{align*}
for any $o \in \Omega$.
\end{proposition}
This proposition follows immediately from the next lemma, Proposition~\ref{prop:almost_geod_exist}, and Theorem~\ref{thm:visible}.
\begin{lemma}
Suppose $\Omega \subset \Cb^d$ is a domain and $x,y,o \in \Omega$. If $\sigma:[0,T] \rightarrow \Omega$ is a $(1,\kappa)$-almost-geodesic with $\sigma(0)=x$ and $\sigma(T)=y$ then
\begin{align*}
(x|y)_o^{\Omega} \leq \frac{3}{2} \kappa + K_\Omega(o,\sigma).
\end{align*}
\end{lemma}
\begin{proof}
Suppose $s\in[0,T]$. Then
\begin{align}
K_\Omega(x,y)
&\geq \abs{T-0} - \kappa = \abs{T-s}+\abs{s-0} - \kappa \notag \\
& \geq K_\Omega(x,\sigma(s)) + K_\Omega(\sigma(s),y) - 3\kappa \label{eq:inv_trngl}
\end{align}
so
\begin{align*}
(x|y)_o^{\Omega}
&\leq \frac{3}{2} \kappa + \frac{1}{2} \left( K_\Omega(x,o) + K_\Omega(o,y) - K_\Omega(x,\sigma(s))-K_\Omega(\sigma(s),y)\right)\\
&\leq \frac{3}{2} \kappa+ K_\Omega(o, \sigma(s))
\end{align*}
by the reverse triangle inequality. Taking the infimum over $s \in [0,T]$ gives the desired bound.
\end{proof}
\section{Proper holomorphic maps}\label{sec:proper}
The main result of this section once more highlights the point\,---\,but in a manner different from that illustrated by subsection~\ref{ssec:all_about_qi}\,---\,that the conditions defining a Goldilocks domain $\Omega\Subset \Cb^d$ impose adequate control on the oscillation of the values of a proper map into $\Omega$ along suitably chosen sequences approaching the boundary.
Since proper holomorphic maps are, in general, rather far from quasi-isometries of the Kobayashi distance, the methods in this section differ from those in Section~\ref{sec:gromov_prod}. This is also the reason that the statement of Theorem~\ref{thm:proper} addresses a subclass of the class of Goldilocks domains.
We will need the following results.
\begin{result}[a paraphrasing of Theorem~1 of \cite{DF1977}]\label{res:diederich_fornaess}
Let $\Omega\subset \Cb^d$ be a bounded pseudoconvex domain with $C^2$-smooth boundary. Then there is a defining function $\rho$ of class $C^2$ and a number $\eta_0\in (0, 1)$ such that for each $\eta$, $0< \eta\leq \eta_0$, the function $\wh{\rho} := -(-\rho)^\eta$ is a bounded strictly plurisubharmonic exhaustion function on $\Omega$.
\end{result}
The next result is a version of a Hopf lemma for subharmonic functions. This version is Proposition~1.4 of \cite{M1993b}.
\begin{result}\label{res:Hopf_lemma}
Let $\Omega\subset \Cb^d$ be a bounded domain that satisfies an interior-cone condition with aperture $\theta$. Let $\psi : \Omega\rightarrow [-\infty, 0)$ be a plurisubharmonic function. Then, there exist constants $c > 0$ and $\alpha > 1$ ($\alpha = \pi/\theta$) such that
\begin{align*}
\psi(z)\leq -c(\delta_{\Omega}(z))^\alpha
\end{align*}
for all $z\in \Omega$.
\end{result}
The idea of using the Kobayashi metric to study the boundary behavior of proper holomorphic maps goes back to Diederich and Forn{\ae}ss; see \cite{DF1979}. We adapt their idea to maps for which the target space may have non-smooth boundary.
\begin{proof}[The proof of Theorem~\ref{thm:proper}]
By Result~\ref{res:diederich_fornaess}, we can find a $C^2$-smooth defining function $\rho$ of $D$ and an $\eta\in (0, 1)$ such that $\varphi(z) := -(-\rho(z))^\eta$, $z\in D$, is strictly plurisubharmonic on $D$. Define
\begin{align*}
\psi(w) := \max\left\{\varphi(z) : F(z) = w\right\}
\end{align*}
for each $w\in \Omega$. The function $\psi$ is plurisubharmonic away from the image under $F$ of the branch locus of $F$: near any point not in that image, $\psi$ is locally the maximum of finitely many plurisubharmonic functions. As $\psi$ is continuous on $\Omega$, it follows from classical facts\,---\,see, for instance, \cite[Appendix~PSH]{JP1993}\,---\,that $\psi$ is a strictly negative plurisubharmonic function on all of $\Omega$. As $\Omega$ satisfies an interior-cone condition, there exist, by Result~\ref{res:Hopf_lemma}, constants $c > 0$ and $\alpha > 1$ such that
\begin{align*}
\psi(w)\leq -c(\delta_{\Omega}(w))^\alpha
\end{align*}
for all $w\in \Omega$. Hence
\begin{align}
(\delta_{\Omega}(F(z)))^\alpha\leq \frac{1}{c}|\psi(F(z))|\leq \frac{1}{c}|\varphi(z)|
&= \frac{1}{c}|\rho(z)|^\eta \notag \\
&\leq C\delta_D(z)^\eta \; \; \; \text{for all } z\in D, \label{eq:dist_comparison}
\end{align}
for some $C > 0$, where the last inequality follows from the fact that $\rho$ is a defining function.
It follows from the \emph{proof} of part~(2) of Proposition~\ref{prop:lip}\,---\,see the inequality \eqref{eq:k-metric_optimal}\,---\,that
\begin{align*}
k_{D}(z; v) \leq \frac{\|v\|}{\delta_D(z)}
\end{align*}
for all $z\in D$ and $v\in \Cb^d$. Fix a vector $v$ such that $\|v\| = 1$. Then,
\begin{align*}
k_{\Omega}\left(F(z); F^\prime(z)v\right)
\leq k_D(z; v) \leq \frac{1}{\delta_D(z)}
\end{align*}
for all $z\in D$. It follows from this and \eqref{eq:dist_comparison} that
\begin{align}
\|F^\prime(z)v\| \leq \frac{(\delta_D(z))^{-1}}{k_{\Omega}\!\left(F(z);
\tfrac{F^\prime(z)v}{\|F^\prime(z)v\|}\right)} \leq \
&\delta_D(z)^{-1}M_{\Omega}(C(\delta_D(z))^{\eta/\alpha}) \notag \\
&\forall z\in D \text{ and } \forall v\notin {\rm Ker}(F^\prime(z)) : \|v\| = 1, \label{eq:deriv_bd}
\end{align}
and, clearly, the bound on $\|F^\prime(z)v\|$ extends trivially to all unit vectors in ${\rm Ker}(F^\prime(z))$.
As $D$ is bounded and has $C^2$ boundary, there exists an $R > 0$ such that
\begin{align*}
\{z\in D : \delta_D(z)\leq R\}\cup \partial{D} = \bigsqcup_{\xi\in \partial{D}}\{\xi+t\boldsymbol{\nu}(\xi) : 0\leq t\leq R\},
\end{align*}
where $\boldsymbol{\nu}(\xi)$ is the inward unit normal vector to $\partial{D}$ at $\xi$. By construction, for each $r\in (0, R]$, we have homeomorphisms $\pi_r : \partial{D}\rightarrow \{z\in D : \delta_D(z) = r\} =: \partial{D}_r$ defined as
\begin{align*}
\pi_r(\xi) :=&\ \text{the unique $z\in \partial{D}_r$ such that $\delta_D(z) = d_{{\rm Euc}}(\xi, z)$} \\
=&\ \xi + r\boldsymbol{\nu}(\xi).
\end{align*}
Pick and fix an $r\in (0, R)$. Write $F = (F_1,\dots, F_d)$ and fix a $j : 1\leq j\leq d$. If $\xi\in \partial{D}$ and $0 < t < r$, then
\begin{align*}
F_j(\xi + t\boldsymbol{\nu}(\xi)) =
F_j(\pi_r(\xi)) - \int_t^r\left[\sum_{l=1}^{d}\partial_lF_j(\xi + s\boldsymbol{\nu}(\xi))\boldsymbol{\nu}(\xi)_l\right]ds,
\end{align*}
where $\partial_l$ denotes the complex differential operator $\partial/\partial z_l$. By \eqref{eq:deriv_bd} and the sentence following it, we have
\begin{align*}
\int_t^r\left|\sum_{l=1}^{d}\partial_lF_j(\xi + s\boldsymbol{\nu}(\xi))\boldsymbol{\nu}(\xi)_l\right|ds
&\leq \int_t^r\frac{M_{\Omega}(Cs^{\eta/\alpha})}{s}ds \\
&= \frac{\alpha}{\eta}\int_{Ct^{\eta/\alpha}}^{Cr^{\eta/\alpha}}\frac{M_{\Omega}(u)}{u}du.
\end{align*}
Thus, given that $u\longmapsto M_{\Omega}(u)/u$ is integrable on $[0, R]$, the limit
\begin{align*}
\bv{F}_j(\xi)\,:=\,F_j(\pi_r(\xi)) -
\lim_{t\to 0^+}\int_t^r\left[\sum_{l=1}^{d}\partial_lF_j(\xi + s\boldsymbol{\nu}(\xi))\boldsymbol{\nu}(\xi)_l\right]ds
\end{align*}
exists for every $\xi\in \partial{D}$.
We shall now use an aspect of a Hardy--Littlewood-type argument to complete the proof. Pick an $\epsilon > 0$. The preceding argument shows that
\begin{equation}\label{eq:unif_small}
|\bv{F}_j(\xi) - F_j(\pi_r(\xi))| \leq
\frac{\alpha}{\eta}\int_{0}^{Cr^{\eta/\alpha}}\frac{M_{\Omega}(u)}{u}du \; \; \;
\forall \xi\in \partial{D} \text{ and } \forall r\in (0, R).
\end{equation}
Hence, as $u\longmapsto M_{\Omega}(u)/u$ is integrable on $[0, R]$, given $\xi_1, \xi_2\in \partial{D}$, we can find a constant $r(\epsilon) > 0$ sufficiently small that
\begin{equation}\label{eq:radial_est}
|\bv{F}_j(\xi_i) - F_j(\pi_{r(\epsilon)}(\xi_i))| < \epsilon/3, \; \; \; i = 1, 2.
\end{equation}
Now, as $\left(\left.F_j\right|_{\partial{D}_{r(\epsilon)}}\right)\circ \pi_{r(\epsilon)}$ is uniformly continuous, $\exists\delta > 0$ such that
\begin{align*}
|F_j(\pi_{r(\epsilon)}(\xi_1)) - F_j(\pi_{r(\epsilon)}(\xi_2))| < \epsilon/3\,\text{ whenever }
d_{{\rm Euc}}(\xi_1, \xi_2) < \delta.
\end{align*}
From this and \eqref{eq:radial_est}, we deduce that $\bv{F}_j$ is continuous.
Now write
\begin{align*}
\wt{F}(z) = (\wt{F}_1,\dots, \wt{F}_d)(z)
= \begin{cases}
F(z), &\text{if $z\in D$}, \\
\bv{F}(z), &\text{if $z\in \partial{D}$}.
\end{cases}
\end{align*}
To prove that $\wt{F}$ is continuous on $\conj{D}$, it suffices to show that given a $\xi\in \partial{D}$ and any sequence $\{z_n\}\subset \conj{D}\setminus\{\xi\}$ that converges to $\xi$, $\wt{F}_j(z_n)\to \bv{F}_j(\xi)$ for each $j = 1,\dots, d$. We will construct an auxiliary sequence in $\conj{D}\setminus\{\xi\}$. To this end, consider the continuous map ${\sf p} : (\{z\in D : \delta_D(z)\leq R\}\cup \partial{D})\rightarrow \partial{D}$, defined as
\begin{equation*}
{\sf p}(z) = \pi^{-1}_r(z) \; \; \text{if $z\in \partial{D}_r$ for some $r\in (0,R]$}, \quad \text{and} \quad {\sf p}(z) = z \; \; \text{if $z\in \partial{D}$}.
\end{equation*}
For all {\em sufficiently large} $n$, we can define
\begin{align*}
Z_n
:= \begin{cases}
z_n, &\text{if $z_n\in \partial{D}$}, \\
z_n, &\text{if $z_n\in \{\xi + t\boldsymbol{\nu}(\xi) : 0 < t\leq R\}$}, \\
{\sf p}(z_n), &\text{otherwise}.
\end{cases}
\end{align*}
By continuity of ${\sf p}$, $Z_n\to \xi$. Using integrability of $u\longmapsto M_{\Omega}(u)/u$ once again, it follows from \eqref{eq:unif_small} that $(\wt{F}_j(z_n) - \wt{F}_j(Z_n))\to 0$. However, it follows from the previous two paragraphs that $\wt{F}_j(Z_n)\to \bv{F}_j(\xi)$ for each $j = 1,\dots, d$. Hence, by the preceding discussion, we infer that $\wt{F}$ is continuous.
\end{proof}
\section{Wolff--Denjoy theorems}\label{sec:WD}
Before proving the Wolff--Denjoy theorems stated in the introduction, let us explain the main idea. Suppose that $\Omega \subset \Cb^d$ is a Goldilocks domain and $f:\Omega \rightarrow \Omega$ is 1-Lipschitz with respect to the Kobayashi metric. The difficult case to rule out is when there exist two sequences $m_i, n_j \rightarrow \infty$ so that $f^{m_i}(o) \rightarrow \xi \in \partial \Omega$, $f^{n_j}(o) \rightarrow \eta \in \partial \Omega$, and $\xi \neq \eta$. In this case we will obtain a contradiction by considering $K_\Omega(f^{m_i}(o), f^{n_j}(o))$. If we assume that $m_i > n_j$ then
\begin{align*}
K_\Omega(f^{m_i}(o), f^{n_j}(o)) \leq K_\Omega(f^{m_i - n_j}(o), o).
\end{align*}
Now, if $i \gg j$ then $f^{m_i - n_j}(o)$ should be close to $\xi$. In particular, $ K_\Omega(f^{m_i}(o), f^{n_j}(o))$ is bounded by the ``distance'' from $o$ to $\xi$. On the other hand the visibility condition tells us that any length minimizing curve joining $f^{m_i}(o)$ to $f^{n_j}(o)$ has to pass close to $o$ and so
\begin{align*}
K_\Omega(f^{m_i}(o), f^{n_j}(o)) \approx K_\Omega(f^{m_i}(o), o)+K_\Omega(o, f^{n_j}(o))
\end{align*}
which for $i, j \gg 0$ is roughly the sum of the ``distance'' from $o$ to $\xi$ and the ``distance'' from $o$ to $\eta$. Combining these two observations gives a contradiction.
To obtain the second estimate we will use the following observation:
\begin{lemma}\label{lem:middle_pt}
Suppose $\Omega \subset \Cb^d$ is a bounded domain. If $\sigma:[a,b] \rightarrow \Omega$ is a $(1,\kappa)$-quasi-geodesic then for all $t \in [a,b]$ we have
\begin{align*}
K_\Omega(\sigma(a),\sigma(b)) \leq K_\Omega(\sigma(a),\sigma(t)) + K_\Omega(\sigma(t), \sigma(b)) \leq K_\Omega(\sigma(a),\sigma(b)) + 3\kappa.
\end{align*}
\end{lemma}
\begin{proof}
The first inequality is just the triangle inequality. For the second, since $\sigma$ is a $(1,\kappa)$-quasi-geodesic,
\begin{align*}
K_\Omega(\sigma(a),\sigma(t)) + K_\Omega(\sigma(t), \sigma(b)) \leq (t-a+\kappa) + (b-t+\kappa) = (b-a) + 2\kappa \leq K_\Omega(\sigma(a),\sigma(b)) + 3\kappa.
\end{align*}
\end{proof}
\subsection{The metric case}
In this section, we give the proof of Theorem~\ref{thm:m_WD}. This theorem is a consequence of Theorem~\ref{thm:metric_WD}, which we now prove. The proof of Theorem~\ref{thm:metric_WD} uses our visibility result and an argument of Karlsson~\cite[Theorem 3.4]{K2001} about the iterates of 1-Lipschitz maps on general metric spaces.
\begin{theorem}\label{thm:metric_WD}
Suppose $\Omega \subset \Cb^d$ is a Goldilocks domain. If $f:\Omega \rightarrow \Omega$ is 1-Lipschitz with respect to the Kobayashi distance and
\begin{equation*}
\lim_{n \rightarrow \infty} K_\Omega( f^n(o), o) = \infty
\end{equation*}
for some (hence any) $o \in \Omega$, then there exists a $\xi \in \partial \Omega$ such that
\begin{align*}
\lim_{k \rightarrow \infty} f^{k}(x) =\xi
\end{align*}
for all $x \in \Omega$.
\end{theorem}
\begin{proof}
Fix $o \in \Omega$ and pick a subsequence $m_i \rightarrow \infty$ so that
\begin{align*}
K_{\Omega}(f^{m_i}(o), o) \geq K_{\Omega}(f^{n}(o), o)
\end{align*}
for all $n \leq m_i$. By passing to another subsequence we may suppose that $f^{m_i}(o) \rightarrow \xi \in \partial \Omega$.
Suppose that $f^{n_j}(x) \rightarrow \eta$ for some $x \in \Omega$ and some sequence $n_j\rightarrow \infty$. We claim that $\eta=\xi$. First observe that
\begin{align*}
K_\Omega(f^{n_j}(x), o) \geq K_\Omega(f^{n_j}(o), o) - K_\Omega(f^{n_j}(o), f^{n_j}(x)) \geq K_\Omega(f^{n_j}(o), o) - K_\Omega(o,x) \rightarrow \infty;
\end{align*}
since $K_\Omega$ is bounded on compact subsets of $\Omega \times \Omega$, this forces $\eta \in \partial\Omega$. Suppose, for a contradiction, that $\eta \neq \xi$. Pick a sequence $i_j \rightarrow \infty$ with $m_{i_j} > n_j$. Now let $\sigma_j :[0,T_j] \rightarrow \Omega$ be a $(1,1)$-almost-geodesic with $\sigma_j(0) = f^{m_{i_j}}(o)$ and $\sigma_j(T_j) = f^{n_j}(x)$. Since $f^{m_i}(o) \rightarrow \xi$, $f^{n_j}(x) \rightarrow \eta$, and $\xi \neq \eta$, Theorem~\ref{thm:visible} implies the existence of some $R > 0$ so that
\begin{align*}
\max_{j \in \Nb} K_\Omega(o, \sigma_j) \leq R.
\end{align*}
So pick some $t_j \in [0,T_j]$ with
\begin{align*}
K_\Omega(o, \sigma_j(t_j)) \leq R.
\end{align*}
Then by Lemma~\ref{lem:middle_pt} we have
\begin{align*}
K_{\Omega}(f^{m_{i_j}}(o), f^{n_j}(x))
& \geq K_{\Omega}(f^{m_{i_j}}(o),\sigma_j(t_j)) + K_\Omega(\sigma_j(t_j), f^{n_j}(x)) -3 \\
& \geq K_\Omega(f^{m_{i_j}}(o), o) + K_\Omega(o, f^{n_j}(x))- 3-2R
\end{align*}
On the other hand
\begin{align*}
K_{\Omega}(f^{m_{i_j}}(o), f^{n_j}(x)) \leq K_{\Omega}(f^{m_{i_j}-n_j}(o), o)+K_\Omega(o,x) \leq K_\Omega(f^{m_{i_j}}(o), o)+K_\Omega(o,x).
\end{align*}
So
\begin{align*}
K_\Omega(o, f^{n_j}(x)) \leq 3+2R+K_\Omega(o,x)
\end{align*}
which contradicts the fact, established above, that $K_\Omega(o, f^{n_j}(x)) \rightarrow \infty$. Hence $\eta = \xi$. Finally, since $\overline{\Omega}$ is compact and every convergent subsequence of $(f^{k}(x))_{k \in \Nb}$ has limit $\xi$, we conclude that $f^{k}(x) \rightarrow \xi$ for every $x \in \Omega$.
\end{proof}
Finally, we prove Theorem~\ref{thm:m_WD}.
\begin{proof}[The proof of Theorem~\ref{thm:m_WD}]
Since $(\Omega, K_\Omega)$ is Cauchy complete, a result of Ca{\l}ka~\cite[Theorem 5.6]{C1984b} implies that
either
\begin{align*}
\lim_{n \rightarrow \infty} K_{\Omega}(f^n(x), x) = \infty
\end{align*}
for any $x \in \Omega$ or
\begin{align*}
\sup_{n \geq 0} K_{\Omega}(f^n(x), x) < \infty
\end{align*}
for any $x \in \Omega$. In the first case, Theorem~\ref{thm:metric_WD} implies that there exists $\xi \in \partial \Omega$ so that
\begin{equation*}
\lim_{n \rightarrow \infty} f^n(x) = \xi
\end{equation*}
for any $x \in \Omega$. In the second case, Result~\ref{res:hopf_rinow} implies that the orbit $\{ f^n(x): n \in \Nb\}$ is relatively compact in $\Omega$ for any $x \in \Omega$.
\end{proof}
\subsection{The holomorphic case}
We shall now give a proof of Theorem~\ref{thm:WD}.
\begin{lemma}\label{lem:limits}
Let $\Omega \subset \Cb^d$ be a Goldilocks domain. Suppose $f:\Omega \rightarrow \Omega$ is a holomorphic map. If $f^{n_i}$ converges locally uniformly to some $F: \Omega \rightarrow \partial \Omega$ then $F \equiv \xi$ for some $\xi \in \partial \Omega$.
\end{lemma}
\begin{proof} Fix some $x \in \Omega$. Then $\lim_{i \rightarrow \infty} d(f^{n_i})_x=dF_x$. And if $v \in \Cb^d$ then
\begin{align*}
k_\Omega(f^{n_i}(x); d(f^{n_i})_x v) \leq k_\Omega(x; v).
\end{align*}
Let $\tau = \max \{ k_\Omega(x; v) : \norm{v} =1\}$. We claim that
\begin{align*}
\norm{ d(f^{n_i})_x v} \leq \tau M_\Omega(\delta_\Omega(f^{n_i}(x))) \text{ when } \norm{v}=1.
\end{align*}
It clearly suffices to consider the case when $d(f^{n_i})_x v \neq 0$. In this case
\begin{align*}
1 \leq \frac{k_\Omega(x; v)}{k_\Omega(f^{n_i}(x); d(f^{n_i})_x v)} \leq \frac{\tau}{k_\Omega(f^{n_i}(x); d(f^{n_i})_x v)}.
\end{align*}
Then
\begin{align*}
\norm{ d(f^{n_i})_x v}
& \leq \frac{\tau\norm{ d(f^{n_i})_x v}}{k_\Omega(f^{n_i}(x); d(f^{n_i})_x v)} = \frac{\tau}{k_\Omega\left(f^{n_i}(x); \frac{d(f^{n_i})_x v}{\norm{d(f^{n_i})_x v}}\right)} \\
& \leq \tau M_\Omega(\delta_\Omega(f^{n_i}(x))).
\end{align*}
Then since $\delta_\Omega(f^{n_i}(x)) \rightarrow 0$ and $\lim_{i \rightarrow \infty} d(f^{n_i})_x=dF_x$ we see that $dF_x=0$. Since $x \in \Omega$ was arbitrary we see that $dF=0$ and hence $F$ is constant.
\end{proof}
\begin{proof}[The proof of Theorem~\ref{thm:WD}]
Since $\Omega$ is taut, \cite[Theorem 2.4.3]{A1989} implies that either
\begin{enumerate}
\item for any $x \in \Omega$, the orbit $\{ f^n(x): n \in \Nb\}$ is relatively compact in $\Omega$; or
\item for any $x \in \Omega$,
\begin{equation*}
\lim_{n \rightarrow \infty} d_{\Euc}(f^n(x), \partial \Omega) = 0.
\end{equation*}
\end{enumerate}
Suppose that the second condition holds. Montel's theorem tells us that every subsequence of $(f^n)_{n \in \Nb}$ has a further subsequence that converges locally uniformly; by the second condition, the limit functions take values in $\partial\Omega$. By Lemma~\ref{lem:limits}, these limit functions are constant. Thus, we will identify the set
\begin{align*}
\Gamma:=\overline{ \{f^n : n \in \Nb\}}^{{\rm compact-open}}\setminus \{f^n : n \in \Nb\}
\end{align*}
as a set of points in $\partial\Omega$. Our goal is to show that $\Gamma$ is a single point.
Assume for a contradiction that $\Gamma$ is not a single point.
\medskip
\noindent \textbf{Case 1:} Suppose that for some (hence any) $o \in \Omega$ we have
\begin{equation*}
\limsup_{n \rightarrow \infty} K_\Omega(f^n(o), o) = \infty.
\end{equation*}
Then we can find a subsequence $m_i \rightarrow \infty$ so that
\begin{align*}
K_\Omega(f^{m_i}(o), o) \geq &\;K_\Omega(f^k(o), o) \; \; \text{for all } k\leq m_i.
\end{align*}
By passing to a subsequence we can assume that $f^{m_i} \rightarrow \xi \in \partial \Omega$. Now by assumption, there exists a subsequence $n_j \rightarrow \infty$ so that $f^{n_j} \rightarrow \eta \in \partial \Omega$ and $\eta \neq \xi$.
\medskip
\noindent \textbf{Case 1(a):} First consider the case in which
\begin{equation*}
\limsup_{j \rightarrow \infty} K_\Omega(f^{n_j}(o), o) = \infty.
\end{equation*}
In this case we can repeat the proof of Theorem~\ref{thm:metric_WD} essentially verbatim: Pick $i_j \rightarrow \infty$ so that $m_{i_j} > n_j$. Now let $\sigma_j :[0,T_j] \rightarrow \Omega$ be a $(1,1)$-almost-geodesic with $\sigma_j(0) = f^{m_{i_j}}(o)$ and $\sigma_j(T_j) = f^{n_j}(o)$. Since $f^{m_i} \rightarrow \xi$, $f^{n_j} \rightarrow \eta$, and $\xi \neq \eta$, Theorem~\ref{thm:visible} implies the existence of some $R > 0$ so that
\begin{align*}
\max_{j \in \Nb} K_\Omega(o, \sigma_j) \leq R.
\end{align*}
So pick some $t_j \in [0,T_j]$ with
\begin{align*}
K_\Omega(o, \sigma_j(t_j)) \leq R.
\end{align*}
Then by Lemma~\ref{lem:middle_pt} we have
\begin{align*}
K_{\Omega}(f^{m_{i_j}}(o), f^{n_j}(o))
& \geq K_{\Omega}(f^{m_{i_j}}(o),\sigma_j(t_j)) + K_\Omega(\sigma_j(t_j), f^{n_j}(o)) -3 \\
& \geq K_\Omega(f^{m_{i_j}}(o), o) + K_\Omega(o, f^{n_j}(o))- 3-2R
\end{align*}
On the other hand
\begin{align*}
K_{\Omega}(f^{m_{i_j}}(o), f^{n_j}(o)) \leq K_{\Omega}(f^{m_{i_j}-n_j}(o), o) \leq K_\Omega(f^{m_{i_j}}(o), o).
\end{align*}
So
\begin{align*}
K_\Omega(o, f^{n_j}(o)) \leq 3+2R
\end{align*}
which contradicts the assumption of Case 1(a) that $\limsup_{j \rightarrow \infty} K_\Omega(f^{n_j}(o), o) = \infty$.
\medskip
\noindent \textbf{Case 1(b):} Next consider the case in which
\begin{equation*}
\limsup_{j \rightarrow \infty} K_\Omega(f^{n_j}(o), o) < \infty.
\end{equation*}
Fix $\ell \in \Nb$. As at the beginning of the proof, any subsequence of $(f^{n_j-\ell})_{j \in \Nb}$ has a further subsequence converging locally uniformly to a constant $\eta^\prime \in \partial\Omega$ (by Montel's theorem and Lemma~\ref{lem:limits}); evaluating at the point $f^{\ell}(o)$ along this sub-subsequence gives $\eta^\prime = \lim_{j \rightarrow \infty} f^{n_j}(o) = \eta$. Hence, for any $\ell \in \Nb$ we have
\begin{align*}
\lim_{j \rightarrow \infty} f^{n_j-\ell}(o) = \eta.
\end{align*}
Let
\begin{align*}
M_{\ell}:= \limsup_{j \rightarrow \infty} K_\Omega(f^{n_j-\ell}(o), o).
\end{align*}
We claim that
\begin{align*}
\limsup_{\ell \rightarrow \infty} M_{\ell} < \infty.
\end{align*}
Suppose not; then we find $\ell_k \rightarrow \infty$ so that $M_{\ell_k} > k$,
$k = 1, 2, 3,\dots$ Then we pick $j_k \rightarrow \infty$ so that
\begin{align*}
K_\Omega(f^{n_{j_k}-\ell_k}(o), o) > k, \; \; \text{and} \; \; d_{\Euc}(f^{n_{j_k}-\ell_k}(o), \eta) < 1/k.
\end{align*}
But then $f^{n_{j_k}-\ell_k}(o) \rightarrow \eta$ and
\begin{align*}
\lim_{k \rightarrow \infty} K_\Omega(f^{n_{j_k}-\ell_k}(o), o) =\infty
\end{align*}
which is impossible by Case 1(a). So we see that
\begin{align*}
\limsup_{\ell \rightarrow \infty} M_{\ell} < \infty.
\end{align*}
Then
\begin{align*}
\limsup_{i \rightarrow \infty} \limsup_{j \rightarrow \infty} K_\Omega(f^{m_i} (o), f^{n_j} (o)) &\leq \limsup_{i \rightarrow \infty} \limsup_{j \rightarrow \infty} K_\Omega( o, f^{n_j-m_i} (o))\\
&= \limsup_{i \rightarrow \infty} M_{m_i} < \infty,
\end{align*}
and
\begin{align*}
\limsup_{i \rightarrow \infty} \limsup_{j \rightarrow \infty} & K_\Omega(f^{m_i} (o), f^{n_j}( o))\\
& \geq \limsup_{i \rightarrow \infty} \limsup_{j \rightarrow \infty}\Big(K_\Omega(f^{m_i}(o), o) - K_\Omega(o, f^{n_j}(o))\Big) \\
& \geq \limsup_{i \rightarrow \infty} \Big(K_\Omega(f^{m_i} (o),o) - M_{0} \Big)=\infty.
\end{align*}
So we again have a contradiction.
\medskip
\noindent \textbf{Case 2:} Suppose that for some (hence any) $o \in \Omega$ we have
\begin{equation*}
\limsup_{n \rightarrow \infty} K_\Omega(f^n(o), o) <\infty.
\end{equation*}
Suppose that $\xi, \eta \in \Gamma$ are two distinct points. Fix neighborhoods $V_\xi$ of $\xi$ and $V_\eta$ of $\eta$
so that $\overline{V_\xi} \cap \overline{V_\eta} = \emptyset$. By Theorem~\ref{thm:visible} there exists a
compact set $K \subset \Omega$ with the following property: if $\sigma:[0,T] \rightarrow \Omega$ is any $(1,2)$-almost-geodesic satisfying $\sigma(0) \in V_\xi$ and $\sigma(T) \in V_\eta$ then $\sigma([0,T]) \cap K \neq \emptyset$.
\vspace{1mm}
Next, for $\delta > 0$ define the function $G_\delta: K \times K \rightarrow \Rb$ by
\begin{align*}
G_\delta(k_1, k_2) := \inf\{ K_\Omega(f^m (k_1), k_2) : m \in \Nb, \; d_{\Euc}(f^m(k_1),\xi) < \delta\}.
\end{align*}
By the assumptions for Case~2,
\begin{align*}
\sup\{ G_\delta(k_1, k_2) : \delta >0 , k_1, k_2 \in K\} < \infty
\end{align*}
and if $\delta_1 < \delta_2$ then $G_{\delta_1} \geq G_{\delta_2}$. So the function
\begin{align*}
G(k_1, k_2) := \lim_{\delta \rightarrow 0} G_\delta(k_1, k_2)
\end{align*}
is well defined.
Next, let
\begin{align*}
\epsilon: = \liminf_{z \rightarrow \eta} \inf_{k \in K} K_\Omega(k, z).
\end{align*}
By Proposition~\ref{prop:lip}, $\epsilon > 0$. Now pick $q_1, q_2 \in K$ so that
\begin{align*}
G(q_1, q_2) < \epsilon + \inf\{ G(k_1, k_2) : k_1, k_2 \in K\}.
\end{align*}
Fix a sequence of integers $n_j \rightarrow \infty$ so that $f^{n_j} \rightarrow \eta$. Suppose $\mu_i \rightarrow \infty$ is any sequence of integers such that $f^{\mu_i} \rightarrow \xi$. Then by Lemma~\ref{lem:limits}
\begin{align*}
\lim_{i \rightarrow \infty} f^{\mu_i+n_j}(o) = \lim_{i \rightarrow \infty} f^{\mu_i}(f^{n_j}(o)) = \xi.
\end{align*}
So we can find a subsequence $\{\mu_{i_j}\}\subset \{\mu_i\}$ so that $f^{\mu_{i_j} + n_j} \rightarrow \xi$. Therefore, we can find a sequence of integers $m_j \rightarrow \infty$ such that
\begin{align*}
f^{m_j} &\rightarrow \xi, \\
f^{m_j+n_j} &\rightarrow \xi, \\
\lim_{j \rightarrow \infty} K_\Omega( f^{m_j}(q_1), q_2) &= G(q_1, q_2).
\end{align*}
Finally, fix a sequence $\kappa_j \searrow 0$ with $\kappa_j \leq 2$. By Proposition~\ref{prop:almost_geod_exist}, there exists a $(1,\kappa_j)$-almost-geodesic $\sigma_{j}: [0,T_j] \rightarrow \Omega$ with $\sigma_j(0)=f^{m_j + n_j}(q_1)$ and $\sigma_j(T_j) =f^{n_j}(q_2)$. For $j$ sufficiently large, $\sigma_j(0) \in V_\xi$ and $\sigma_j(T_j) \in V_\eta$. Since each $\sigma_j$ is a $(1,2)$-almost-geodesic, by the construction of $K$ there exists, for each $j$ sufficiently large, a point $k_j \in K \cap \sigma_j([0,T_j])$. Then, by Lemma~\ref{lem:middle_pt}, we have
\begin{align*}
K_\Omega(f^{m_j + n_j}(q_1), f^{n_j}(q_2))
\geq K_\Omega(f^{m_j + n_j}(q_1), k_j) + K_\Omega(k_j, f^{n_j}(q_2)) - 3 \kappa_j.
\end{align*}
Now by our definition of $\epsilon$ we have
\begin{align*}
\liminf_{j \rightarrow \infty} K_\Omega(k_j, f^{n_j}(q_2)) \geq \epsilon.
\end{align*}
After passing to a subsequence we can suppose that $k_j \rightarrow k \in K$. Then since $f^{m_j + n_j}(q_1) \rightarrow \xi$ we see that
\begin{align*}
\liminf_{j \rightarrow \infty} & K_\Omega(f^{m_j + n_j}(q_1), k_j) \geq
\liminf_{j \rightarrow \infty} \Big( K_\Omega(f^{m_j + n_j}(q_1), k)- K_\Omega(k,k_j) \Big) \\
& = \liminf_{j \rightarrow \infty} K_\Omega(f^{m_j + n_j}(q_1), k) \geq G(q_1, k).
\end{align*}
Since $ \kappa_j \rightarrow 0$, from the last three estimates, we get
\begin{align*}
\liminf_{j \rightarrow \infty} K_\Omega(f^{m_j + n_j}(q_1), f^{n_j}(q_2)) \geq G(q_1, k) + \epsilon.
\end{align*}
On the other hand,
\begin{align*}
\limsup_{j \rightarrow \infty} K_\Omega(f^{m_j + n_j}(q_1), f^{n_j}(q_2))
\leq \limsup_{j \rightarrow \infty}K_\Omega(f^{m_j}(q_1), q_2) = G(q_1, q_2).
\end{align*}
So we have
\begin{align*}
G(q_1,q_2) \geq G(q_1,k)+\epsilon
\end{align*}
which contradicts our choice of $q_1, q_2 \in K$.
In both Cases~1 and 2 we obtain a contradiction. Hence $\Gamma$ consists of a single point.
\end{proof}
\bibliographystyle{alpha}
\section{Acknowledgements\label{sec:acknowl}}
This work presents results from the European Space Agency (ESA) space mission \textit{Gaia}. \textit{Gaia}\ data are being processed by the \textit{Gaia}\ Data Processing and Analysis Consortium (DPAC). Funding for the DPAC is provided by national institutions, in particular the institutions participating in the \textit{Gaia}\ MultiLateral Agreement (MLA). The \textit{Gaia}\ mission website is \url{https://www.cosmos.esa.int/gaia}. The \textit{Gaia}\ archive website is \url{https://archives.esac.esa.int/gaia}.
The \textit{Gaia}\ mission and data processing have financially been supported by, in alphabetical order by country:
\begin{itemize}
\item the Algerian Centre de Recherche en Astronomie, Astrophysique et G\'{e}ophysique of Bouzareah Observatory;
\item the Austrian Fonds zur F\"{o}rderung der wissenschaftlichen Forschung (FWF) Hertha Firnberg Programme through grants T359, P20046, and P23737;
\item the BELgian federal Science Policy Office (BELSPO) through various PROgramme de D\'eveloppement d'Exp\'eriences scientifiques (PRODEX) grants and the Polish Academy of Sciences - Fonds Wetenschappelijk Onderzoek through grant VS.091.16N, and the Fonds de la Recherche Scientifique (FNRS);
\item the Brazil-France exchange programmes Funda\c{c}\~{a}o de Amparo \`{a} Pesquisa do Estado de S\~{a}o Paulo (FAPESP) and Coordena\c{c}\~{a}o de Aperfeicoamento de Pessoal de N\'{\i}vel Superior (CAPES) - Comit\'{e} Fran\c{c}ais d'Evaluation de la Coop\'{e}ration Universitaire et Scientifique avec le Br\'{e}sil (COFECUB);
\item the National Science Foundation of China (NSFC) through grants 11573054 and 11703065 and the China Scholarship Council through grant 201806040200;
\item the Tenure Track Pilot Programme of the Croatian Science Foundation and the \'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne and the project TTP-2018-07-1171 'Mining the Variable Sky', with the funds of the Croatian-Swiss Research Programme;
\item the Czech-Republic Ministry of Education, Youth, and Sports through grant LG 15010 and INTER-EXCELLENCE grant LTAUSA18093, and the Czech Space Office through ESA PECS contract 98058;
\item the Danish Ministry of Science;
\item the Estonian Ministry of Education and Research through grant IUT40-1;
\item the European Commission’s Sixth Framework Programme through the European Leadership in Space Astrometry (\href{https://www.cosmos.esa.int/web/gaia/elsa-rtn-programme}{ELSA}) Marie Curie Research Training Network (MRTN-CT-2006-033481), through Marie Curie project PIOF-GA-2009-255267 (Space AsteroSeismology \& RR Lyrae stars, SAS-RRL), and through a Marie Curie Transfer-of-Knowledge (ToK) fellowship (MTKD-CT-2004-014188); the European Commission's Seventh Framework Programme through grant FP7-606740 (FP7-SPACE-2013-1) for the \textit{Gaia}\ European Network for Improved data User Services (\href{https://gaia.ub.edu/twiki/do/view/GENIUS/}{GENIUS}) and through grant 264895 for the \textit{Gaia}\ Research for European Astronomy Training (\href{https://www.cosmos.esa.int/web/gaia/great-programme}{GREAT-ITN}) network;
\item the European Research Council (ERC) through grants 320360 and 647208 and through the European Union’s Horizon 2020 research and innovation and excellent science programmes through Marie Sk{\l}odowska-Curie grant 745617 as well as grants 670519 (Mixing and Angular Momentum tranSport of massIvE stars -- MAMSIE), 687378 (Small Bodies: Near and Far), 682115 (Using the Magellanic Clouds to Understand the Interaction of Galaxies), and 695099 (A sub-percent distance scale from binaries and Cepheids -- CepBin);
\item the European Science Foundation (ESF), in the framework of the \textit{Gaia}\ Research for European Astronomy Training Research Network Programme (\href{https://www.cosmos.esa.int/web/gaia/great-programme}{GREAT-ESF});
\item the European Space Agency (ESA) in the framework of the \textit{Gaia}\ project, through the Plan for European Cooperating States (PECS) programme through grants for Slovenia, through contracts C98090 and 4000106398/12/NL/KML for Hungary, and through contract 4000115263/15/NL/IB for Germany;
\item the Academy of Finland and the Magnus Ehrnrooth Foundation;
\item the French Centre National d’Etudes Spatiales (CNES), the Agence Nationale de la Recherche (ANR) through grant ANR-10-IDEX-0001-02 for the 'Investissements d'avenir' programme, through grant ANR-15-CE31-0007 for project 'Modelling the Milky Way in the Gaia era' (MOD4Gaia), through grant ANR-14-CE33-0014-01 for project 'The Milky Way disc formation in the Gaia era' (ARCHEOGAL), and through grant ANR-15-CE31-0012-01 for project 'Unlocking the potential of Cepheids as primary distance calibrators' (UnlockCepheids), the Centre National de la Recherche Scientifique (CNRS) and its SNO Gaia of the Institut des Sciences de l’Univers (INSU), the 'Action F\'{e}d\'{e}ratrice Gaia' of the Observatoire de Paris, the R\'{e}gion de Franche-Comt\'{e}, and the Programme National de Gravitation, R\'{e}f\'{e}rences, Astronomie, et M\'{e}trologie (GRAM) of CNRS/INSU with the Institut National Polytechnique (INP) and the Institut National de Physique nucléaire et de Physique des Particules (IN2P3) co-funded by CNES;
\item the German Aerospace Agency (Deutsches Zentrum f\"{u}r Luft- und Raumfahrt e.V., DLR) through grants 50QG0501, 50QG0601, 50QG0602, 50QG0701, 50QG0901, 50QG1001, 50QG1101, 50QG1401, 50QG1402, 50QG1403, 50QG1404, and 50QG1904 and the Centre for Information Services and High Performance Computing (ZIH) at the Technische Universit\"{a}t (TU) Dresden for generous allocations of computer time;
\item the Hungarian Academy of Sciences through the Lend\"{u}let Programme grants LP2014-17 and LP2018-7 and through the Premium Postdoctoral Research Programme (L.~Moln\'{a}r), and the Hungarian National Research, Development, and Innovation Office (NKFIH) through grant KH\_18-130405;
\item the Science Foundation Ireland (SFI) through a Royal Society - SFI University Research Fellowship (M.~Fraser);
\item the Israel Science Foundation (ISF) through grant 848/16;
\item the Agenzia Spaziale Italiana (ASI) through contracts I/037/08/0, I/058/10/0, 2014-025-R.0, 2014-025-R.1.2015, and 2018-24-HH.0 to the Italian Istituto Nazionale di Astrofisica (INAF), contract 2014-049-R.0/1/2 to INAF for the Space Science Data Centre (SSDC, formerly known as the ASI Science Data Center, ASDC), contracts I/008/10/0, 2013/030/I.0, 2013-030-I.0.1-2015, and 2016-17-I.0 to the Aerospace Logistics Technology Engineering Company (ALTEC S.p.A.), INAF, and the Italian Ministry of Education, University, and Research (Ministero dell'Istruzione, dell'Universit\`{a} e della Ricerca) through the Premiale project 'MIning The Cosmos Big Data and Innovative Italian Technology for Frontier Astrophysics and Cosmology' (MITiC);
\item the Netherlands Organisation for Scientific Research (NWO) through grant NWO-M-614.061.414, through a VICI grant (A.~Helmi), and through a Spinoza prize (A.~Helmi), and the Netherlands Research School for Astronomy (NOVA);
\item the Polish National Science Centre through HARMONIA grant 2018/06/M/ST9/00311, DAINA grant 2017/27/L/ST9/03221, and PRELUDIUM grant 2017/25/N/ST9/01253, and the Ministry of Science and Higher Education (MNiSW) through grant DIR/WK/2018/12;
\item the Portuguese Funda\c{c}\~ao para a Ci\^{e}ncia e a Tecnologia (FCT) through grants SFRH/BPD/74697/2010 and SFRH/BD/128840/2017 and the Strategic Programme UID/FIS/00099/2019 for CENTRA;
\item the Slovenian Research Agency through grant P1-0188;
\item the Spanish Ministry of Economy (MINECO/FEDER, UE) through grants ESP2016-80079-C2-1-R, ESP2016-80079-C2-2-R, RTI2018-095076-B-C21, RTI2018-095076-B-C22, BES-2016-078499, and BES-2017-083126 and the Juan de la Cierva formaci\'{o}n 2015 grant FJCI-2015-2671, the Spanish Ministry of Education, Culture, and Sports through grant FPU16/03827, the Spanish Ministry of Science and Innovation (MICINN) through grant AYA2017-89841P for project 'Estudio de las propiedades de los f\'{o}siles estelares en el entorno del Grupo Local' and through grant TIN2015-65316-P for project 'Computaci\'{o}n de Altas Prestaciones VII', the Severo Ochoa Centre of Excellence Programme of the Spanish Government through grant SEV2015-0493, the Institute of Cosmos Sciences University of Barcelona (ICCUB, Unidad de Excelencia ’Mar\'{\i}a de Maeztu’) through grants MDM-2014-0369 and CEX2019-000918-M, the University of Barcelona's official doctoral programme for the development of an R+D+i project through an Ajuts de Personal Investigador en Formaci\'{o} (APIF) grant, the Spanish Virtual Observatory through project AyA2017-84089, the Galician Regional Government, Xunta de Galicia, through grants ED431B-2018/42 and ED481A-2019/155, support received from the Centro de Investigaci\'{o}n en Tecnolog\'{\i}as de la Informaci\'{o}n y las Comunicaciones (CITIC) funded by the Xunta de Galicia, the Xunta de Galicia and the Centros Singulares de Investigaci\'{o}n de Galicia for the period 2016-2019 through CITIC, the European Union through the European Regional Development Fund (ERDF) / Fondo Europeo de Desenvolvemento Rexional (FEDER) for the Galicia 2014-2020 Programme through grant ED431G-2019/01, the Red Espa\~{n}ola de Supercomputaci\'{o}n (RES) computer resources at MareNostrum, the Barcelona Supercomputing Centre - Centro Nacional de Supercomputaci\'{o}n (BSC-CNS) through activities AECT-2016-1-0006, AECT-2016-2-0013, AECT-2016-3-0011, and AECT-2017-1-0020, the Departament d'Innovaci\'{o}, Universitats i Empresa de la Generalitat de Catalunya through grant 2014-SGR-1051 for project 'Models de Programaci\'{o} i Entorns d'Execuci\'{o} Parallels' (MPEXPAR), and Ramon y Cajal Fellowship RYC2018-025968-I;
\item the Swedish National Space Agency (SNSA/Rymdstyrelsen);
\item the Swiss State Secretariat for Education, Research, and Innovation through
the Mesures d’Accompagnement, the Swiss Activit\'es Nationales Compl\'ementaires, and the Swiss National Science Foundation;
\item the United Kingdom Particle Physics and Astronomy Research Council (PPARC), the United Kingdom Science and Technology Facilities Council (STFC), and the United Kingdom Space Agency (UKSA) through the following grants to the University of Bristol, the University of Cambridge, the University of Edinburgh, the University of Leicester, the Mullard Space Sciences Laboratory of University College London, and the United Kingdom Rutherford Appleton Laboratory (RAL): PP/D006511/1, PP/D006546/1, PP/D006570/1, ST/I000852/1, ST/J005045/1, ST/K00056X/1, ST/K000209/1, ST/K000756/1, ST/L006561/1, ST/N000595/1, ST/N000641/1, ST/N000978/1, ST/N001117/1, ST/S000089/1, ST/S000976/1, ST/S001123/1, ST/S001948/1, ST/S002103/1, and ST/V000969/1.
\end{itemize}
\newcommand{\comment}[1]{}
\comment{
The \textit{Gaia}\ project and data processing have made use of:
\begin{itemize}
\item the Set of Identifications, Measurements, and Bibliography for Astronomical Data \citep[SIMBAD,][]{2000A&AS..143....9W}, the 'Aladin sky atlas' \citep{2000A&AS..143...33B,2014ASPC..485..277B}, and the VizieR catalogue access tool \citep{2000A&AS..143...23O}, all operated at the Centre de Donn\'ees astronomiques de Strasbourg (\href{http://cds.u-strasbg.fr/}{CDS});
\item the National Aeronautics and Space Administration (NASA) Astrophysics Data System (\href{http://adsabs.harvard.edu/abstract_service.html}{ADS});
\item the SPace ENVironment Information System (SPENVIS), initiated by the Space Environment and Effects Section (TEC-EES) of ESA and developed by the Belgian Institute for Space Aeronomy (BIRA-IASB) under ESA contract through ESA’s General Support Technologies Programme (GSTP), administered by the BELgian federal Science Policy Office (BELSPO);
\item the software products \href{http://www.starlink.ac.uk/topcat/}{TOPCAT}, \href{http://www.starlink.ac.uk/stil}{STIL}, and \href{http://www.starlink.ac.uk/stilts}{STILTS} \citep{2005ASPC..347...29T,2006ASPC..351..666T};
\item Matplotlib \citep{Hunter:2007};
\item IPython \citep{PER-GRA:2007};
\item Astropy, a community-developed core Python package for Astronomy \citep{2018AJ....156..123A};
\item R \citep{RManual};
\item Vaex \citep{2018A&A...618A..13B};
\item the \hip-2 catalogue \citep{2007A&A...474..653V}. The \hip and \tyc catalogues were constructed under the responsibility of large scientific teams collaborating with ESA. The Consortia Leaders were Lennart Lindegren (Lund, Sweden: NDAC) and Jean Kovalevsky (Grasse, France: FAST), together responsible for the \hip Catalogue; Erik H{\o}g (Copenhagen, Denmark: TDAC) responsible for the \tyc Catalogue; and Catherine Turon (Meudon, France: INCA) responsible for the \hip Input Catalogue (HIC);
\item the \tyctwo catalogue \citep{2000A&A...355L..27H}, the construction of which was supported by the Velux Foundation of 1981 and the Danish Space Board;
\item The Tycho double star catalogue \citep[TDSC,][]{2002A&A...384..180F}, based on observations made with the ESA \hip astrometry satellite, as supported by the Danish Space Board and the United States Naval Observatory through their double-star programme;
\item data products from the Two Micron All Sky Survey \citep[2MASS,][]{2006AJ....131.1163S}, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center (IPAC) / California Institute of Technology, funded by the National Aeronautics and Space Administration (NASA) and the National Science Foundation (NSF) of the USA;
\item the ninth data release of the AAVSO Photometric All-Sky Survey (\href{https://www.aavso.org/apass}{APASS}, \citealt{apass9}), funded by the Robert Martin Ayers Sciences Fund;
\item the first data release of the Pan-STARRS survey \citep{panstarrs1,panstarrs1b,panstarrs1c,panstarrs1d,panstarrs1e,panstarrs1f}. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration (NASA) through grant NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation through grant AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation;
\item the second release of the Guide Star Catalogue \citep[GSC2.3,][]{2008AJ....136..735L}. The Guide Star Catalogue II is a joint project of the Space Telescope Science Institute (STScI) and the Osservatorio Astrofisico di Torino (OATo). STScI is operated by the Association of Universities for Research in Astronomy (AURA), for the National Aeronautics and Space Administration (NASA) under contract NAS5-26555. OATo is operated by the Italian National Institute for Astrophysics (INAF). Additional support was provided by the European Southern Observatory (ESO), the Space Telescope European Coordinating Facility (STECF), the International GEMINI project, and the European Space Agency (ESA) Astrophysics Division (nowadays SCI-S);
\item the eXtended, Large (XL) version of the catalogue of Positions and Proper Motions \citep[PPM-XL,][]{2010AJ....139.2440R};
\item data products from the Wide-field Infrared Survey Explorer (WISE), which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, and NEOWISE, which is a project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration (NASA);
\item the first data release of the United States Naval Observatory (USNO) Robotic Astrometric Telescope \citep[URAT-1,][]{urat1};
\item the fourth data release of the United States Naval Observatory (USNO) CCD Astrograph Catalogue \citep[UCAC-4,][]{2013AJ....145...44Z};
\item the fifth data release of the Radial Velocity Experiment \citep[RAVE DR5,][]{rave5}. Funding for RAVE has been provided by the Australian Astronomical Observatory, the Leibniz-Institut f\"ur Astrophysik Potsdam (AIP), the Australian National University, the Australian Research Council, the French National Research Agency, the German Research Foundation (SPP 1177 and SFB 881), the European Research Council (ERC-StG 240271 Galactica), the Istituto Nazionale di Astrofisica at Padova, The Johns Hopkins University, the National Science Foundation of the USA (AST-0908326), the W. M. Keck foundation, the Macquarie University, the Netherlands Research School for Astronomy, the Natural Sciences and Engineering Research Council of Canada, the Slovenian Research Agency, the Swiss National Science Foundation, the Science \& Technology Facilities Council of the UK, Opticon, Strasbourg Observatory, and the Universities of Groningen, Heidelberg, and Sydney. The RAVE website is at \url{https://www.rave-survey.org/};
\item the first data release of the Large sky Area Multi-Object Fibre Spectroscopic Telescope \citep[LAMOST DR1,][]{LamostDR1};
\item the K2 Ecliptic Plane Input Catalogue \citep[EPIC,][]{epic-2016ApJS..224....2H};
\item the ninth data release of the Sloan Digitial Sky Survey \citep[SDSS DR9,][]{SDSS9}. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the United States Department of Energy Office of Science. The SDSS-III website is \url{http://www.sdss3.org/}. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrof\'{\i}sica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University;
\item the thirteenth release of the Sloan Digital Sky Survey \citep[SDSS DR13,][]{2017ApJS..233...25A}. Funding for SDSS-IV has been provided by the Alfred P. Sloan Foundation, the United States Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is \url{https://www.sdss.org/}. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University;
\item the second release of the SkyMapper catalogue \citep[SkyMapper DR2,][Digital Object Identifier 10.25914/5ce60d31ce759]{2019PASA...36...33O}. The national facility capability for SkyMapper has been funded through grant LE130100104 from the Australian Research Council (ARC) Linkage Infrastructure, Equipment, and Facilities (LIEF) programme, awarded to the University of Sydney, the Australian National University, Swinburne University of Technology, the University of Queensland, the University of Western Australia, the University of Melbourne, Curtin University of Technology, Monash University, and the Australian Astronomical Observatory. SkyMapper is owned and operated by The Australian National University's Research School of Astronomy and Astrophysics. The survey data were processed and provided by the SkyMapper Team at the the Australian National University. The SkyMapper node of the All-Sky Virtual Observatory (ASVO) is hosted at the National Computational Infrastructure (NCI). Development and support the SkyMapper node of the ASVO has been funded in part by Astronomy Australia Limited (AAL) and the Australian Government through the Commonwealth's Education Investment Fund (EIF) and National Collaborative Research Infrastructure Strategy (NCRIS), particularly the National eResearch Collaboration Tools and Resources (NeCTAR) and the Australian National Data Service Projects (ANDS).
\end{itemize}
The GBOT programme (\secref{ssec:cu3ast_prop_gbot}) uses observations collected at (i) the European Organisation for Astronomical Research in the Southern Hemisphere (ESO) with the VLT Survey Telescope (VST), under ESO programmes
092.B-0165,
093.B-0236,
094.B-0181,
095.B-0046,
096.B-0162,
097.B-0304,
098.B-0030,
099.B-0034,
0100.B-0131,
0101.B-0156,
0102.B-0174, and
0103.B-0165;
and (ii) the Liverpool Telescope, which is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'{\i}sica de Canarias with financial support from the United Kingdom Science and Technology Facilities Council, and (iii) telescopes of the Las Cumbres Observatory Global Telescope Network.
In addition to the currently active DPAC (and ESA science) authors of the peer-reviewed papers accompanying \egdr{3}, there are large numbers of former DPAC members who made significant contributions to the (preparations of the) data processing. Among those are, in alphabetical order:
Christopher Agard,
Juan Jos\'{e} Aguado,
Alexandra Alecu,
Peter Allan,
France Allard,
Walter Allasia,
Carlos Allende Prieto,
Antonio Amorim,
Kader Amsif,
Guillem Anglada-Escud\'{e},
Erika Antiche,
Sonia Ant\'{o}n,
Bernardino Arcay,
Borja Arroyo Galende,
Vladan Arsenijevic,
Tri Astraatmadja,
Rajesh Kumar Bachchan,
Angelique Barbier,
Paul Barklem,
Mickael Batailler,
Duncan Bates,
Mathias Beck,
Luigi Bedin,
Antonio Bello Garc\'{\i}a,
Vasily Belokurov,
Philippe Bendjoya,
Angel Berihuete,
Hans Bernstein$^\dagger$,
Stefano Bertone,
Olivier Bienaym\'{e},
Lionel Bigot,
Albert Bijaoui,
Fran\c{c}oise Billebaud,
Nadejda Blagorodnova,
Thierry Bloch,
Klaas de Boer,
Marco Bonfigli,
Giuseppe Bono,
Simon Borgniet,
Raul Borrachero-Sanchez,
Fran\c{c}ois Bouchy,
Steve Boudreault,
Geraldine Bourda,
Guy Boutonnet,
Pascal Branet,
Maarten Breddels,
Scott Brown,
Pierre-Marie Brunet,
Thomas Br\"{u}semeister,
Peter Bunclark$^\dagger$,
Roberto Buonanno,
Robert Butorafuchs,
Joan Cambras,
Heather Campbell,
Hector Canovas,
Christophe Carret,
Manuel Carrillo,
C\'{e}sar Carri\'{o}n,
Laia Casamiquela,
Jonathan Charnas,
Fabien Ch\'{e}reau,
Nick Chornay,
Marcial Clotet,
Gabriele Cocozza,
Ross Collins,
Gabriele Contursi,
Leonardo Corcione,
Gr\'{a}inne Costigan,
Alessandro Crisafi,
Nick Cross,
Jan Cuypers$^\dagger$,
Jean-Charles Damery,
Eric Darmigny,
Jonas Debosscher,
Peter De Cat,
Hector Delgado-Urena,
C\'{e}line Delle Luche,
Maria Del Mar Nunez Campos,
Domitilla De Martino,
Markus Demleitner,
Thavisha Dharmawardena,
S\'{e}kou Diakite,
Carla Domingues,
Sandra Dos Anjos,
Laurent Douchy,
Petros Drazinos,
Pierre Dubath,
Yifat Dzigan,
Sebastian Els,
Arjen van Elteren,
Kjell Eriksson,
Carolina von Essen,
Wyn Evans,
Guillaume Eynard Bontemps,
Antonio Falc\~{a}o,
Mart\'{\i} Farr\`{a}s Casas,
Luciana Federici,
Fernando de Felice,
Krzysztof Findeisen,
Florin Fodor,
Yori Fournier,
Benoit Frezouls,
Aidan Fries,
Jan Fuchs,
Flavio Fusi Pecci,
Diego Fustes,
Duncan Fyfe,
Eva Gallardo,
Silvia Galleti,
Fernando Garcia,
Daniele Gardiol,
Nora Garralda,
Alvin Gavel,
Emilien Gaudin,
Marwan Gebran,
Yoann G\'{e}rard,
Nathalie Gerbier,
Joris Gerssen,
Andreja Gomboc,
Miguel Gomes,
Anita G\'{o}mez,
Ana Gonz\'{a}lez-Marcos,
Eva Grebel,
Michel Grenon,
Eric Grux,
Alain Gueguen,
Pierre Guillout,
Andres G\'{u}rpide,
Despina Hatzidimitriou,
Julien Heu,
Albert Heyrovsky,
Wilfried Hofmann,
Erik H{\o}g,
Andrew Holland,
Gordon Hopkinson$^\dagger$,
Claude Huc,
Jason Hunt,
Brigitte Huynh,
Arkadiusz Hypki,
Giacinto Iannicola,
Laura Inno,
Mike Irwin,
Yago Isasi Parache,
Thierry Jacq,
Laurent Jean-Rigaud,
Isabelle J{\'e}gouzo-Giroux,
Asif Jan,
Anne-Marie Janotto,
Fran\c{c}ois Jocteur-Monrozier,
Paula Jofr\'{e},
Anthony Jonckheere,
Antoine Jorissen,
Francesc Julbe Lopez,
Ralf Keil,
Adam Kewley,
Dae-Won Kim,
Peter Klagyivik,
Jochen Klar,
Jonas Kl\"{u}ter,
Jens Knude,
Oleg Kochukhov,
Katrien Kolenberg,
Indrek Kolka,
Pavel Koubsky,
Janez Kos,
Irina Kovalenko,
Maria Kudryashova,
Ilya Kull,
Alex Kutka,
Fr\'{e}d\'{e}ric Lacoste-Seris,
Val\'{e}ry Lainey,
Antoni Latorre,
Felix Lauwaert,
Claudia Lavalley,
David LeBouquin,
Vassili Lemaitre,
Helmut Lenhardt,
Christophe Le Poncin-Lafitte,
Thierry Levoir,
Chao Liu,
Davide Loreggia,
Denise Lorenz,
Ian MacDonald,
Marc Madaule,
Tiago Magalh\~{a}es Fernandes,
Valeri Makarov,
Jean-Christophe Malapert,
Herv\'{e} Manche,
Gregory Mantelet,
Daniel Mar\'{\i}n Pina,
Gabor Marschalko,
Mathieu Marseille,
Christophe Martayan,
Oscar Martinez-Rubi,
Paul Marty,
Benjamin Massart,
Emmanuel Mercier,
Fr\'{e}d\'{e}ric Meynadier,
Shan Mignot,
Bruno Miranda,
Marco Molinaro,
Marc Moniez,
Alain Montmory,
Stephan Morgenthaler,
Ulisse Munari,
J\'{e}r\^{o}me Narbonne,
Gijs Nelemans,
Anne-Th\'{e}r\`{e}se Nguyen,
Luciano Nicastro,
Thomas Nordlander,
Markus Nullmeier,
Derek O'Callaghan,
Pierre Ocvirk,
Alex Ogden,
Joaqu\'{\i}n Ordieres-Mer\'{e},
Diego Ordonez,
Patricio Ortiz,
Jose Osorio,
Dagmara Oszkiewicz,
Alex Ouzounis,
Hugo Palacin,
Max Palmer,
Peregrine Park,
Ester Pasquato,
Xavier Passot,
Marco Pecoraro,
Roselyne Pedrosa,
Christian Peltzer,
Hanna Pentik\"{a}inen,
Jordi Peralta,
Fabien P\'{e}turaud,
Bernard Pichon,
Tuomo Pieniluoma,
Enrico Pigozzi,
Bertrand Plez,
Joel Poels$^\dagger$,
Ennio Poretti Merate,
Arnaud Poulain,
Guylaine Prat,
Thibaut Prod'homme,
Adrien Raffy,
Serena Rago,
Piero Ranalli,
Gregor Rauw,
Andrew Read,
Jos\'{e} Rebordao,
Philippe Redon,
Rita Ribeiro,
Ariadna Ribes Metidieri,
Pascal Richard,
Daniel Risquez,
Adrien Rivard,
Brigitte Rocca-Volmerange,
Nicolas de Roll,
Siv Ros\'{e}n,
Stefano Rubele,
Laura Ruiz Dern,
Idoia Ruiz-Fuertes,
Federico Russo,
Toni Santana,
Helder Savietto,
Mathias Schultheis,
Damien Segransan,
I-Chun Shih,
Arnaud Siebert,
Andr\'{e} Silva,
Helder Silva,
Dimitris Sinachopoulos,
Eric Slezak,
Riccardo Smareglia,
Kester Smith,
Michael Soffel,
Rosanna Sordo,
Danuta Sosnowska,
Maxime Spano,
Ulrike Stampa,
Hristo Stoev,
Vytautas Strai\v{z}ys,
Frank Suess,
Dirk Terrell,
David Terrett,
Pierre Teyssandier,
Stephan Theil,
Carola Tiede,
Brandon Tingley,
Anastasia Titarenko,
Scott Trager,
Licia Troisi,
Paraskevi Tsalmantza,
David Tur,
Mattia Vaccari,
Fr\'{e}d\'{e}ric Vachier,
Emmanouil Vachlas,
Gaetano Valentini,
Pau Vall\`{e}s,
Veronique Valette,
Walter Van Hamme,
Eric Van Hemelryck,
Mihaly Varadi,
Marco Vaschetto,
Jovan Veljanoski,
Lionel Veltz,
Sjoert van Velzen,
Teresa Via,
Jenni Virtanen,
Antonio Volpicelli,
Holger Voss,
Viktor Votruba,
Jean-Marie Wallut,
Gavin Walmsley,
Rainer Wichmann,
Mark Wilkinson,
Patrick Yvard,
Petar Ze\v{c}evi\'{c},
Tim de Zeeuw,
Maruska Zerjal,
Houri Ziaeepour, and
Sven Zschocke.
In addition to the DPAC consortium, past and present, there are numerous people, mostly in ESA and in industry, who have made or continue to make essential contributions to \textit{Gaia}, for instance those employed in science and mission operations or in the design, manufacturing, integration, and testing of the spacecraft and its modules, subsystems, and units. Many of those will remain unnamed yet spent countless hours, occasionally during nights, weekends, and public holidays, in cold offices and dark clean rooms. At the risk of being incomplete, we specifically acknowledge, in alphabetical order,
from Airbus DS (Toulouse):
Alexandre Affre,
Marie-Th\'er\`ese Aim\'e,
Audrey Albert,
Aur\'elien Albert-Aguilar,
Hania Arsalane,
Arnaud Aurousseau,
Denis Bassi,
Franck Bayle,
Pierre-Luc Bazin,
Emmanuelle Benninger,
Philippe Bertrand,
Jean-Bernard Biau,
Fran\c{c}ois Binter,
C\'edric Blanc,
Eric Blonde,
Patrick Bonzom,
Bernard Bories,
Jean-Jacques Bouisset,
Jo\"el Boyadjian,
Isabelle Brault,
Corinne Buge,
Bertrand Calvel,
Jean-Michel Camus,
France Canton,
Lionel Carminati,
Michel Carrie,
Didier Castel,
Philippe Charvet,
Fran\c{c}ois Chassat,
Fabrice Cherouat,
Ludovic Chirouze,
Michel Choquet,
Claude Coatantiec,
Emmanuel Collados,
Philippe Corberand,
Christelle Dauga,
Robert Davancens,
Catherine Deblock,
Eric Decourbey,
Charles Dekhtiar,
Michel Delannoy,
Michel Delgado,
Damien Delmas,
Emilie Demange,
Victor Depeyre,
Isabelle Desenclos,
Christian Dio,
Kevin Downes,
Marie-Ange Duro,
Eric Ecale,
Omar Emam,
Elizabeth Estrada,
Coralie Falgayrac,
Benjamin Farcot,
Claude Faubert,
Fr\'ed\'eric Faye,
S\'ebastien Finana,
Gr\'egory Flandin,
Loic Floury,
Gilles Fongy,
Michel Fruit,
Florence Fusero,
Christophe Gabilan,
J\'er\'emie Gaboriaud,
Cyril Gallard,
Damien Galy,
Benjamin Gandon,
Patrick Gareth,
Eric Gelis,
Andr\'e Gellon,
Laurent Georges,
Philippe-Marie Gomez,
Jos\'e Goncalves,
Fr\'ed\'eric Guedes,
Vincent Guillemier,
Thomas Guilpain,
St\'ephane Halbout,
Marie Hanne,
Gr\'egory Hazera,
Daniel Herbin,
Tommy Hercher,
Claude Hoarau le Papillon,
Matthias Holz,
Philippe Humbert,
Sophie Jallade,
Gr\'egory Jonniaux,
Fr\'ed\'eric Juillard,
Philippe Jung,
Charles Koeck,
Marc Labaysse,
R\'en\'e Laborde,
Anouk Laborie,
J\'er\^{o}me Lacoste-Barutel,
Baptiste Laynet,
Virginie Le Gall,
Julien L'Hermitte,
Marc Le Roy,
Christian Lebranchu,
Didier Lebreton,
Patrick Lelong,
Jean-Luc Leon,
Stephan Leppke,
Franck Levallois,
Philippe Lingot,
Laurant Lobo,
C\'eline Lopez,
Jean-Michel Loupias,
Carlos Luque,
S\'ebastien Maes,
Bruno Mamdy,
Denis Marchais,
Alexandre Marson,
Benjamin Massart,
R\'emi Mauriac,
Philippe Mayo,
Caroline Meisse,
Herv\'e Mercereau,
Olivier Michel,
Florent Minaire,
Xavier Moisson,
David Monteiro,
Denis Montperrus,
Boris Niel,
C\'edric Papot,
Jean-Fran\c{c}ois Pasquier,
Gareth Patrick,
Pascal Paulet,
Martin Peccia,
Sylvie Peden,
Sonia Penalva,
Michel Pendaries,
Philippe Peres,
Gr\'egory Personne,
Dominique Pierot,
Jean-Marc Pillot,
Lydie Pinel,
Fabien Piquemal,
Vincent Poinsignon,
Maxime Pomelec,
Andr\'e Porras,
Pierre Pouny,
Severin Provost,
S\'ebastien Ramos,
Fabienne Raux,
Florian Reuscher,
Nicolas Riguet,
Mickael Roche,
Gilles Rougier,
Bruno Rouzier,
Stephane Roy,
Jean-Paul Ruffie,
Fr\'ed\'eric Safa,
Heloise Scheer,
Claudie Serris,
Andr\'e Sobeczko,
Jean-Fran\c{c}ois Soucaille,
Philippe Tatry,
Th\'eo Thomas,
Pierre Thoral,
Dominique Torcheux,
Vincent Tortel,
Stephane Touzeau,
Didier Trantoul,
Cyril V\'etel,
Jean-Axel Vatinel,
Jean-Paul Vormus, and
Marc Zanoni;
from Airbus DS (Friedrichshafen):
Jan Beck,
Frank Blender,
Volker Hashagen,
Armin Hauser,
Bastian Hell,
Cosmas Heller,
Matthias Holz,
Heinz-Dieter Junginger,
Klaus-Peter Koeble,
Karin Pietroboni,
Ulrich Rauscher,
Rebekka Reichle,
Florian Reuscher,
Ariane Stephan,
Christian Stierle,
Riccardo Vascotto,
Christian Hehr,
Markus Schelkle,
Rudi Kerner,
Udo Schuhmacher,
Peter Moeller,
Rene Stritter,
J\"{u}rgen Frank,
Wolfram Beckert,
Evelyn Walser,
Steffen Roetzer,
Fritz Vogel, and
Friedbert Zilly;
from Airbus DS (Stevenage):
Mohammed Ali,
David Bibby,
Leisha Carratt,
Veronica Carroll,
Clive Catley,
Patrick Chapman,
Chris Chetwood,
Tom Colegrove,
Andrew Davies,
Denis Di Filippantonio,
Andy Dyne,
Alex Elliot,
Omar Emam,
Colin Farmer,
Steve Farrington,
Nick Francis,
Albert Gilchrist,
Brian Grainger,
Yann Le Hiress,
Vicky Hodges,
Jonathan Holroyd,
Haroon Hussain,
Roger Jarvis,
Lewis Jenner,
Steve King,
Chris Lloyd,
Neil Kimbrey,
Alessandro Martis,
Bal Matharu,
Karen May,
Florent Minaire,
Katherine Mills,
James Myatt,
Chris Nicholas,
Paul Norridge,
David Perkins,
Michael Pieri,
Matthew Pigg,
Angelo Povoleri,
Robert Purvinskis,
Phil Robson,
Julien Saliege,
Satti Sangha,
Paramijt Singh,
John Standing,
Dongyao Tan,
Keith Thomas,
Rosalind Warren,
Andy Whitehouse,
Robert Wilson,
Hazel Wood,
Steven Danes,
Scott Englefield,
Juan Flores-Watson,
Chris Lord,
Allan Parry,
Juliet Morris,
Nick Gregory, and
Ian Mansell.
From ESA, in alphabetical order:
Ricard Abello,
Asier Abreu,
Ivan Aksenov,
Matthew Allen,
Salim Ansari,
Philippe Armbruster,
Alessandro Atzei,
Liesse Ayache,
Samy Azaz,
Nana Bach,
Jean-Pierre Balley,
Paul Balm,
Manuela Baroni,
Rainer Bauske,
Thomas Beck,
Gabriele Bellei,
Carlos Bielsa,
Gerhard Billig,
Carmen Blasco,
Andreas Boosz,
Bruno Bras,
Julia Braun,
Thierry Bru,
Frank Budnik,
Joe Bush,
Marco Butkovic,
Jacques Cande\'e,
David Cano,
Carlos Casas,
Francesco Castellini,
David Chapmann,
Nebil Cinar,
Mark Clements,
Giovanni Colangelo,
Peter Collins,
Ana Colorado McEvoy,
Gabriele Comoretto,
Vincente Companys,
Federico Cordero,
Sylvain Damiani,
Fabienne Delhaise,
Gianpiero Di Girolamo,
Yannis Diamantidis,
John Dodsworth,
Ernesto D\"olling,
Jane Douglas,
Jean Doutreleau,
Dominic Doyle,
Mark Drapes,
Frank Dreger,
Peter Droll,
Gerhard Drolshagen,
Bret Durrett,
Christina Eilers,
Yannick Enginger,
Alessandro Ercolani,
Matthias Erdmann,
Orcun Ergincan,
Robert Ernst,
Daniel Escolar,
Maria Espina,
Hugh Evans,
Fabio Favata,
Stefano Ferreri,
Daniel Firre,
Michael Flegel,
Melanie Flentge,
Alan Flowers,
Steve Foley,
Jens Freih\"ofer,
Rob Furnell,
Julio Gallegos,
Philippe Gar\'{e},
Wahida Gasti,
Jos\'e Gavira,
Frank Geerling,
Franck Germes,
Gottlob Gienger,
B\'en\'edicte Girouart,
Bernard Godard,
Nick Godfrey,
C\'esar G\'omez Hern\'andez,
Roy Gouka,
Cosimo Greco,
Robert Guilanya,
Kester Habermann,
Manfred Hadwiger,
Ian Harrison,
Angela Head,
Martin Hechler,
Kjeld Hjortnaes,
John Hoar,
Jacolien Hoek,
Frank Hoffmann,
Justin Howard,
Arjan Hulsbosch,
Christopher Hunter,
Premysl Janik,
Jos\'e Jim\'enez,
Emmanuel Joliet,
Helma van de Kamp-Glasbergen,
Simon Kellett,
Andrea Kerruish,
Kevin Kewin,
Oliver Kiddle,
Sabine Kielbassa,
Volker Kirschner,
Kees van 't Klooster,
Ralf Kohley,
Jan Kolmas,
Oliver El Korashy,
Arek Kowalczyk,
Holger Krag,
Beno\^{\i}t Lain\'e,
Markus Landgraf,
Sven Landstr\"om,
Mathias Lauer,
Robert Launer,
Laurence Tu-Mai Levan,
Mark ter Linden,
Santiago Llorente,
Tim Lock,
Alejandro Lopez-Lozano,
Guillermo Lorenzo,
Tiago Loureiro,
James Madison,
Juan Manuel Garcia,
Federico di Marco,
Jonas Marie,
Filip Marinic,
Pier Mario Besso,
Arturo Mart\'{\i}n Polegre,
Ander Mart\'{\i}nez,
Monica Mart\'{\i}nez Fern\'{a}ndez,
Marco Massaro,
Paolo de Meo,
Ana Mestre,
Luca Michienzi,
David Milligan,
Ali Mohammadzadeh,
David Monteiro,
Richard Morgan-Owen,
Trevor Morley,
Prisca M\"uhlmann,
Jana Mulacova,
Michael M\"uller,
Pablo Munoz,
Petteri Nieminen,
Alfred Nillies,
Wilfried Nzoubou,
Alistair O'Connell,
Karen O'Flaherty,
Alfonso Olias Sanz,
William O'Mullane,
Jos\'{e} Osinde,
Oscar Pace,
Mohini Parameswaran,
Ramon Pardo,
Taniya Parikh,
Paul Parsons,
Panos Partheniou,
Torgeir Paulsen,
Dario Pellegrinetti,
Jos\'e-Louis Pellon-Bailon,
Joe Pereira,
Michael Perryman,
Christian Philippe,
Alex Popescu,
Fr\'{e}d\'{e}ric Raison,
Riccardo Rampini,
Florian Renk,
Alfonso Rivero,
Andrew Robson,
Gerd R\"ossling,
Martina Rossmann,
Markus R\"uckert,
Andreas Rudolph,
Fr\'ed\'eric Safa,
Johannes Sahlmann,
Eugenio Salguero,
Jamie Salt,
Giovanni Santin,
Fabio de Santis,
Rui Santos,
Giuseppe Sarri,
Stefano Scaglioni,
Melanie Schabe,
Dominic Sch\"afer,
Micha Schmidt,
Rudolf Schmidt,
Ared Schnorhk,
Klaus-J\"urgen Schulz,
Jean Sch\"utz,
Julia Schwartz,
Andreas Scior,
J\"org Seifert,
Christopher Semprimoschnig$^\dagger$,
Ed Serpell,
I\~{n}aki Serraller Vizcaino,
Gunther Sessler,
Felicity Sheasby,
Alex Short,
Hassan Siddiqui,
Heike Sillack,
Swamy Siram,
Christopher Smith,
Claudio Sollazzo,
Steven Straw,
Daniel Tapiador,
Pilar de Teodoro,
Mark Thompson,
Giulio Tonelloto,
Felice Torelli,
Raffaele Tosellini,
Cecil Tranquille,
Irren Tsu-Silva,
Livio Tucci,
Aileen Urwin,
Jean-Baptiste Valet,
Martin Vannier,
Enrico Vassallo,
David Verrier,
Sam Verstaen,
R\"udiger Vetter,
Jos\'e Villalvilla,
Raffaele Vitulli,
Mildred V\"ogele,
Sandra Vogt,
Sergio Volont\'e,
Catherine Watson,
Karsten Weber,
Daniel Werner,
Gary Whitehead$^\dagger$,
Gavin Williams,
Alistair Winton,
Michael Witting,
Peter Wright,
Karlie Yeung,
Marco Zambianchi, and
Igor Zayer,
and finally Vincenzo~Innocente from the Conseil Europ\'een pour la Recherche Nucl\'eaire (CERN).
In case of errors or omissions, please contact the \href{https://www.cosmos.esa.int/web/gaia/gaia-helpdesk}{\textit{Gaia}\ Helpdesk}.
}
\section{Analysis}
\label{sec:analysis}
The results for the three components of the glide vector are shown in
Fig.~\ref{fig:acceleration-lmax10}. They have been obtained by fitting the VSH
expansion in Eq.~(\ref{Vexpandreal}) for different $l_{\rm max}$ to the proper motions of the
1\,215\,942 \textit{Gaia}-CRF3\ sources with five-parameter solutions. The corresponding spheroidal VSH parameters with
$l=1$ were transformed into the Cartesian components of the glide
using Eq.~(\ref{VSH-to-acceleration}). Figure~\ref{fig:acceleration-lmax10}
displays both the equatorial components $(g_x,\,g_y,\,g_z)$
and the galactic components $(g_X,\,g_Y,\,g_Z)$ of the glide vector.
The equatorial components were derived
directly using the equatorial proper motions published in the
\textit{Gaia}\ Archive. The galactic components can be derived either by
transforming the equatorial components of the glide and their covariance matrix to
galactic coordinates, or from a direct VSH fit using the proper motions
and covariances in galactic coordinates. We have verified that the two
procedures give strictly identical results.
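To illustrate the first of these two procedures, the following minimal sketch (in Python, using the standard equatorial-to-galactic rotation matrix of the \textit{Hipparcos} convention) propagates a glide vector and its covariance from equatorial to galactic components. The numerical input values are taken from Table~\ref{tab:results} for illustration only, and the diagonal covariance is a simplification that ignores the (small) correlations.
\begin{verbatim}
import numpy as np

# Rows of A_G are the galactic x, y, z axes expressed in equatorial (ICRS)
# coordinates (standard Hipparcos/Gaia convention).
A_G = np.array([[-0.0548755604, -0.8734370902, -0.4838350155],
                [+0.4941094279, -0.4448296300, +0.7469822445],
                [-0.8676661490, -0.1980763734, +0.4559837762]])

g_eq   = np.array([-0.07, -4.30, -2.64])    # glide, equatorial [muas/yr]
cov_eq = np.diag([0.41, 0.35, 0.36])**2     # simplified diagonal covariance

g_gal   = A_G @ g_eq                        # galactic components [muas/yr]
cov_gal = A_G @ cov_eq @ A_G.T              # propagated covariance matrix

lon = np.degrees(np.arctan2(g_gal[1], g_gal[0])) % 360.0
lat = np.degrees(np.arcsin(g_gal[2] / np.linalg.norm(g_gal)))
# g_gal ~ (+5.04, -0.10, -0.29) muas/yr and (lon, lat) ~ (359, -3.3) deg,
# consistent with the galactic values quoted in the results table.
\end{verbatim}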
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\hsize]{Figures/equatorial-acceleration-lmax10.pdf}
\includegraphics[width=1\hsize]{Figures/galactic-acceleration-lmax10.pdf}
\caption{Equatorial (upper panel) and galactic (lower panel) components of the solar system
acceleration for fits with different maximal
VSH order $l_{\rm max}$ (`alone' means that the three glide components
were fitted with no other VSH terms). The error bars represent $\pm 1\sigma$
uncertainties.}
\label{fig:acceleration-lmax10}
\end{center}
\end{figure}
One can see that starting from $l_{\rm max}=3$ the estimates are
stable and generally deviate from each other by less than the corresponding
uncertainties.
The deviation of the results for $l_{\rm max}<3$ from those of higher $l_{\rm max}$
shows that the higher-order systematics in the data need to be
taken into account, although their effect on the glide is relatively mild.
We conclude that it is reasonable
to use the results for $l_{\rm max}=3$ as the best estimates of the
acceleration components.
The unit weight error (square root of the reduced chi-square)
of all these fits, and of all those described below, is about 1.048.
The unit weight error calculated with all VSH terms set to zero is
also 1.048 (after applying the same outlier rejection procedure as for
the fits), which merely reflects the fact that the fitted VSH terms are much
smaller than the uncertainties of the individual proper motions.
The unit weight error is routinely used to
scale up the uncertainties of the fit. However, a more robust method
of bootstrap resampling was used to estimate the uncertainties (see
below).
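For completeness, the following is a minimal sketch of the bootstrap loop that we have in mind; \texttt{fit\_glide} is a placeholder for the full weighted VSH fit (including the outlier rejection) and is not part of any published \textit{Gaia} software.
\begin{verbatim}
import numpy as np

def bootstrap_uncertainty(fit_glide, sources, n_boot=1000, seed=0):
    """Bootstrap estimate of the mean and covariance of the fitted glide.

    fit_glide : callable returning the three glide components for an
                array of source records (a stand-in for the VSH fit)
    sources   : array of source records (positions, proper motions and
                their covariances) for the QSO-like sample
    """
    rng = np.random.default_rng(seed)
    n = len(sources)
    samples = np.empty((n_boot, 3))
    for k in range(n_boot):
        resample = sources[rng.integers(0, n, size=n)]  # draw n sources with replacement
        samples[k] = fit_glide(resample)                # refit on the resampled catalogue
    return samples.mean(axis=0), np.cov(samples, rowvar=False)
\end{verbatim}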
To further investigate the influence of various aspects of the
data and estimation procedure, the following
tests were done.
\begin{itemize}
\item Fits including VSH components of degree up to $l_{\rm max}=40$
were made. They show that the variations of the estimated
acceleration components remain at the level of a fraction of the
corresponding uncertainties, which agrees with random variations
expected for the fits with high $l_{\rm max}$.
\item The fits in Fig.~\ref{fig:acceleration-lmax10} used the clip limit
  $\kappa=3$, which rejected about 3800 of the 1\,215\,942 sources
  as outliers (the exact number depends on
  $l_{\rm max}$); a schematic sketch of this clipping procedure is given
  after this list. Fits with different clip limits
  $\kappa$ (including fits without outlier rejection, corresponding
  to $\kappa=\infty$) were tried, showing that the result for the
  acceleration depends on $\kappa$ only at a level of a quarter of the
  uncertainties.
\item The use of the correlations $\rho_\mu$ between the proper
motion components for each source in the weight matrix of the fit
influences the acceleration estimates at a level of $\sim 0.1$ of
the uncertainties. This should be expected since the correlations
$\rho_\mu$ for the 1\,215\,942 \textit{Gaia}-CRF3\ sources are relatively
small (the distribution of $\rho_\mu$ is reasonably close to normal with
zero mean and standard deviation 0.28).
\end{itemize}
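The exact rejection criterion is described in Sect.~\ref{sec:method}; the sketch below only illustrates the generic structure of an iterative weighted least-squares fit with $\kappa$-clipping of normalised residuals. The function and variable names are ours, and the per-source two-dimensional residual statistic is simplified to a single scalar per observation.
\begin{verbatim}
import numpy as np

def clipped_fit(design, pm, weight, kappa=3.0, max_iter=20):
    """Weighted least-squares VSH fit with iterative kappa-clipping.

    design : (n, p) design matrix of the VSH expansion
    pm     : (n,)   observed proper-motion components
    weight : (n,)   inverse variances of the observations
    """
    keep = np.ones(len(pm), dtype=bool)
    for _ in range(max_iter):
        w = np.sqrt(weight * keep)               # rejected sources get zero weight
        coef, *_ = np.linalg.lstsq(design * w[:, None], pm * w, rcond=None)
        resid = (pm - design @ coef) * np.sqrt(weight)
        new_keep = np.abs(resid) <= kappa        # clip at kappa normalised residuals
        if np.array_equal(new_keep, keep):
            break                                # set of outliers no longer changes
        keep = new_keep
    return coef, keep
\end{verbatim}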
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\hsize]{Figures/equatorial-acceleration-gMagSelection.pdf}
\caption{Equatorial components of the acceleration and their
uncertainties for four intervals of $G$ magnitude: $G\le18$ mag
(29\,200 sources), $18<G\le19$ mag (146\,614 sources), $19<G\le20$ mag
(490\,161 sources), and $G>20$ mag (549\,967 sources). The horizontal
colour bands visualize the values and uncertainties (the height
corresponds to twice the uncertainty) of the corresponding
components computed from the whole data set.}
\label{fig:acceleration-gMag-selections}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\hsize]{Figures/equatorial-acceleration-nuEffSelection.pdf}
\caption{Equatorial components of the acceleration and their
uncertainties for four intervals of the colour represented by the
effective wavenumber $\nu_{\rm eff}$ used in
\gdr{3}\ astrometry. The quartiles of the $\nu_{\rm eff}$
distribution for the sources considered in this study are used as
the boundaries of the $\nu_{\rm eff}$ intervals so that each
interval contains about 304\,000 sources. The horizontal colour
bands visualize the values and uncertainties (the height
corresponds to twice the uncertainty) of the corresponding
components computed from the whole data set.}
\label{fig:acceleration-colour-selections}
\end{center}
\end{figure}
Analysis of the \gdr{3}\ astrometry has revealed systematic errors
depending on the magnitude and colour of the sources
\citep{DPACP-128,DPACP-132}. To check how these factors
influence the estimates, fits using $l_{\rm max}=3$ were
made for sources split by magnitude and colour:
\begin{itemize}
\item Figure~\ref{fig:acceleration-gMag-selections} shows the acceleration
components estimated for subsets of different mean $G$ magnitude. The
variation of the components with $G$ is mild and the
estimates are compatible with the estimates from the full data set
(shown as horizontal colour bands) within their uncertainties.
\item Figure~\ref{fig:acceleration-colour-selections} is a corresponding
plot for the split by colour, as represented by the
effective wavenumber $\nu_{\rm eff}$. Again one can
conclude that the estimates from the data selections in colour agree
with those from the full data set within their corresponding
uncertainties.
\end{itemize}
It should be noted that the magnitude and colour selections are not
completely independent since the bluer QSO-like sources tend to be fainter
than the redder ones. Moreover, the magnitude and colour selections are less
homogeneous on the sky than the full set of sources (for example owing to the
Galactic extinction and reddening). However, we conclude that the
biases in the acceleration estimates, due to magnitude-
and colour-dependent effects in the \gdr{3}\ astrometry, are below the
formal uncertainties for the full sample.
Another possible cause of biases in the \textit{Gaia} data is charge transfer
inefficiency (CTI) in the CCDs (e.g.\ \citeads{2016A&A...595A...6C}).
A detailed simulation of plausible CTI effects unaccounted for in the
\textit{Gaia}\ data processing for \gdrthree\ showed that the estimated glide
is remarkably resilient to the CTI and may be
affected only at a level below $0.1\ensuremath{\,\mu\text{as\,yr}^{-1}}$ -- at most a quarter of
the quoted uncertainty.
Our selection of \textit{Gaia}\ sources cannot be absolutely free
from stellar contaminants. As discussed in Sect.~\ref{sec:stars},
stars in our Galaxy have very large glide components in the vector field of their proper
motions. This means that even a small stellar contamination could bias
our estimate of the solar system acceleration. One can hope that the
mechanism of outlier elimination used in the VSH fit in this work (see
Sect.~\ref{sec:method}) helps to eliminate at least some of the most
disturbing stellar-contamination sources. It is, however, worth
investigating the possible biases by direct simulation. By construction,
the stellar contaminants in our list of QSO-like sources must have
five-parameter solutions in \gdr{3}\ that satisfy the selection criteria
discussed in Sect.~\ref{sec:qso-like} and in \citet{DPACP-133}.
It is therefore of interest to investigate the sample of sources obtained
by making exactly the same selection of \gdr{3}\ sources, but without
the cross-match to the external QSO/AGN catalogues.
There are a total of 23.6~million such sources in \gdr{3}, of which
1.2~million (5.2\%) are included in
\textit{Gaia}-CRF3. Most of them are stars in our Galaxy, but one also sees
stars in nearby dwarf galaxies, globular clusters, and bright stars in
other galaxies. Applying the VSH method to this sample
gives a glide of about $360\ensuremath{\,\mu\text{as\,yr}^{-1}}$ in a
direction within a few degrees of $(l,b)=(270\degr,\,0\degr)$,
that is roughly opposite to the direction of motion of the Sun
in the Galaxy. This glide has obviously nothing to do
with the acceleration of the solar system (see Sect.~\ref{sec:stars})
and its precise value is irrelevant. However, it is very relevant that
it is practically perpendicular to the glide obtained from the
QSO-like sample, for it means that a (small) stellar contamination
will not significantly alter the magnitude of the glide $|\vec{g}|$.
It could however bias the direction of the observed glide towards
$(l,b)=(270\degr,\,0\degr)$, that is, mainly in galactic longitude.
We do not see a clear sign of this in our estimates (the estimated
direction is within one $\sigma$ from the Galactic centre) and we
therefore conclude that the effect of a possible stellar contamination
in \textit{Gaia}-CRF3\ is negligible for the claimed estimate of the solar
system acceleration.
\begin{figure}
\begin{center}
\includegraphics[width=1\hsize]{Figures/galactic-error-ellipse550000.pdf}
\caption{Visualization of the error ellipse of the estimated direction of the
acceleration in galactic coordinates. The plot is a density map
of the directions from 550\,000 bootstrap resampling experiments.
The colour scale is logarithmic.}
\label{fig:galactic-error-ellipse}
\end{center}
\end{figure}
Finally, it should be remembered that systematic errors in the
\textit{Gaia}\ ephemeris may also bias the estimate of the solar
system acceleration. The standard astrometric parameters in the
\textit{Gaia}\ astrometric solution are defined for a fictitious
observer located in the `solar system barycentre'. The latter is
effectively defined by the \textit{Gaia}\ ephemeris in the Barycentric
Celestial Reference Frame (BCRS; \citeads{2003AJ....126.2687S};
\citeads{2003AJ....125.1580K}) that is used in the data processing. In
particular, \textit{Gaia}'s barycentric velocity is used to
transform the observations from the proper frame of \textit{Gaia} to
the reference frame at rest with respect to the solar system
barycentre \citepads{2004PhRvD..69l4001K}. Systematic errors in the
\textit{Gaia}\ ephemeris may result in systematic errors in the
astrometric parameters. In particular, a systematic error in the
\textit{Gaia} velocity, corresponding to a non-zero average
acceleration error over the time interval of the observations (about
33~months for \gedr{3}), will produce the same systematic error in the
measured solar system acceleration.
The barycentric ephemeris of \textit{Gaia} is obtained by combining
the geocentric orbit determination, made by the Mission Operations
Centre at ESOC (Darmstadt, Germany) using various Doppler and ranging
techniques, with a barycentric ephemeris of the Earth\footnote{The
`geocentric' orbit of \textit{Gaia}\ is also defined in the BCRS
and represents the difference of the BCRS coordinates of
\textit{Gaia}\ and those of the geocentre.}. For the latter, the
INPOP10e planetary ephemerides \citepads{2016NSTIM.104.....F} were used
in \gedr{3}. The errors in the geocentric orbit have very different
characteristics from those of the planetary ephemerides, and the two
contributions need to be considered separately. For the geocentric
part, one can rule out an acceleration bias greater than about
$2\times 10^{-13}\,\text{m}\,\text{s}^{-2}$ persisting over the
33~months, because it would produce an offset in the position of
\textit{Gaia} of the order of a km, well above the accuracy obtained
by the ranging. For the barycentric ephemeris of the Earth, we can
obtain an order-of-magnitude estimate of possible systematics by
comparing the INPOP10e ephemerides with the latest version, INPOP19a
\citepads{2019NSTIM.109.....F}, which will be used for
\gdr{4}. Averaged over 33~months, the difference in the acceleration
of the Earth between the two versions is of the order of
$10^{-12}\,\text{m}\,\text{s}^{-2}$, that is about 0.5\% of the
observed (and expected) acceleration of the solar system
barycentre. These differences in the Earth ephemeris come from the
improvements in the dynamical modelling of the solar system and the
new observational data allowing for more accurate determination of the
parameters of the solar system bodies. One can expect that the
process of improvement will continue and involve, in particular, more
objects in the outer solar system that can potentially influence the
definition of the solar system barycentre. For example, the
hypothetical Planet Nine would have an effect of at most
$5\times 10^{-13}\,\text{m}\,\text{s}^{-2}$
\citepads{2020A&A...640A...6F}. Taking all these aspects into account,
we conclude that plausible systematic errors in the barycentric
ephemeris of \textit{Gaia}\ are too small, by at least two orders of
magnitude, to invalidate our result. Nevertheless, special care should
be taken for this source of systematic errors when considerably more
accurate measurements of the solar system acceleration -- e.g. from a
combination of the \textit{Gaia} and \textit{GaiaNIR} data
\citepads{2016arXiv160907325H} -- become available.
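As a rough consistency check of these numbers (assuming, for simplicity, a
constant acceleration bias acting over the full $t\simeq33$~months
$\approx8.7\times10^{7}$~s), a geocentric acceleration error $\delta a$ would
displace \textit{Gaia}\ by about
\begin{equation*}
\tfrac{1}{2}\,\delta a\,t^{2}\simeq
\tfrac{1}{2}\left(2\times10^{-13}\,\text{m}\,\text{s}^{-2}\right)
\left(8.7\times10^{7}\,\text{s}\right)^{2}\approx0.8\,\text{km}\,,
\end{equation*}
indeed of the order of a kilometre, while the quoted Earth-ephemeris
difference corresponds to roughly
$10^{-12}/(2.3\times10^{-10})\approx0.4\%$ of the measured acceleration.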
The various tests and arguments reported above strengthen our confidence in the
final results, which are summarised in Table~\ref{tab:results}.
Both the equatorial and galactic components are given with their
uncertainties and correlations. The uncertainties were estimated by bootstrap
resampling \citep{efron1994introduction}, which in our case
increased the uncertainties from the fit (already
inflated by the unit weight error) by factors of 1.05 to 1.08.
As shown already in Fig.~\ref{fig:acceleration-lmax10}, the
direction of the measured acceleration is very close to the Galactic
centre. This is also illustrated in Fig.~\ref{fig:galactic-error-ellipse},
which shows the directions obtained in the bootstrap resampling.
\begin{table}
\caption{Principal results of this work: equatorial and galactic components
of the estimated acceleration of the solar system, with uncertainties and correlations.
\label{tab:results}}
\footnotesize\setlength{\tabcolsep}{4pt}
\begin{center}
\begin{tabular}{lrr}
\hline\hline
\noalign{\smallskip}
quantity & value & uncertainty \\[1pt]
\hline
\noalign{\smallskip}
\multicolumn{3}{c}{equatorial components} \\[2pt]
$g_x$ [$\ensuremath{\,\mu\text{as\,yr}^{-1}}$] & $-0.07$ & 0.41 \\[2pt]
$g_y$ [$\ensuremath{\,\mu\text{as\,yr}^{-1}}$] & $-4.30$ & 0.35 \\[2pt]
$g_z$ [$\ensuremath{\,\mu\text{as\,yr}^{-1}}$] & $-2.64$ & 0.36 \\[4pt]
$\alpha$ & $269.1\degr$ & $5.4\degr$ \\
$\delta$ & $-31.6\degr$ & $4.1\degr$ \\[3pt]
\multicolumn{3}{c}{correlations} \\[1pt]
$\rho_{g_x,g_y}$ & $+0.001$ &\\
$\rho_{g_x,g_z}$ & $-0.094$ &\\
$\rho_{g_y,g_z}$ & $-0.025$ &\\[3pt]
$\rho_{\alpha,\delta}$ & $-0.081$ &\\[5pt]
\hline
\noalign{\smallskip}
\multicolumn{3}{c}{galactic components} \\
$g_X$ [$\ensuremath{\,\mu\text{as\,yr}^{-1}}$] & $+5.04$ & 0.35 \\
$a_X$ [$\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$] & $+7.32$ & 0.51 \\[2pt]
$g_Y$ [$\ensuremath{\,\mu\text{as\,yr}^{-1}}$] & $-0.10$ & 0.36 \\
$a_Y$ [$\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$] & $-0.14$ & 0.52 \\[2pt]
$g_Z$ [$\ensuremath{\,\mu\text{as\,yr}^{-1}}$] & $-0.29$ & 0.41 \\
$a_Z$ [$\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$] & $-0.43$ & 0.60 \\[3pt]
$l$ & $358.9\degr$ & $4.1\degr$ \\
$b$ & $-3.3\degr$ & $4.6\degr$ \\[3pt]
\multicolumn{3}{c}{correlations} \\[1pt]
$\rho_{g_X,g_Y}$ & $+0.036$ &\\
$\rho_{g_X,g_Z}$ & $-0.014$ &\\
$\rho_{g_Y,g_Z}$ & $-0.079$ &\\[3pt]
$\rho_{l,b}$ & $-0.078$ &\\[5pt]
\hline
\noalign{\smallskip}
$|\,\vec{g}\,|$ [$\ensuremath{\,\mu\text{as\,yr}^{-1}}$] & 5.05 & 0.35 \\
$|\,\vec{a}\,|$ [$\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$] & 7.33 & 0.51 \\
\phantom{$|\,\vec{a}\,|$} [$10^{-10}\ensuremath{\,\mathrm{m\,s^{-2}}}$] & 2.32 & 0.16 \\[3pt]
\hline
\end{tabular}
\end{center}
\tablefoot{All uncertainties are $\pm1\sigma$ estimates obtained using bootstrap
resampling. The absolute values of the acceleration are computed as
the Euclidean norm of the estimated vector, and may be biased as discussed
in Appendix~\ref{sec:unbiased}.}
\end{table}
\section{Conclusions and prospects}
\label{sec:summary}
The exquisite quality of the \gdrthree\ astrometry together with a
careful selection of the \textit{Gaia}-CRF3\ sources (Sect.~\ref{sec:qso-like})
have allowed us to detect the acceleration of the
solar system with respect to the rest-frame of the remote
extragalactic sources, with a relative precision better than 10\%.
The stability of the derived estimates was extensively checked by
numerous experiments as discussed in Sect.~\ref{sec:analysis}.
The consistency of the results
supports the overall claim of a significant detection. We
note that our estimate of the solar system acceleration agrees with the
theoretical expectations from galactic dynamics (Sect.~\ref{sec:expectation})
within the corresponding uncertainties.
We stress that the detection of the solar system acceleration in the
Gaia astrometry does not require any dedicated astrometric
solution. The astrometric data used in this work to detect the acceleration
and analyse its properties are those of the
astrometric solution published in \gedr{3}.
Although the relative accuracy obtained in the estimate
is very satisfactory for this data release, it is at
this stage impossible to tell whether there are acceleration
contributions from components other than the motion of the solar
system in the Milky Way. As discussed in
Sect.~\ref{sec:expectation}, even this contribution is complex
and cannot be modelled with sufficient certainty to disentangle the
different contributions.
We can ask ourselves what should be expected from \textit{Gaia}\ in the
future. The astrometric results in \gedr{3}\ are based only on
33~months of data, while the future \gdr{4}\ will be based on about
66~months of data and the final \gdr{5}\ may use up to 120~months of
data. Since the effect of the acceleration is equivalent to proper
motions, the random uncertainty of its measurement improves with
observational time $T$ as $T^{-3/2}$. Therefore, we can expect that
the random errors of the acceleration estimated in \gdr{4}\ and
\gdr{5}\ could go down by factors of about $0.35$ and $0.15$,
respectively.
But random error is just one side of the story. What has made this
solution possible with \gedr{3}, while it was not possible with the
\gdr2 data, is the spectacular decrease of the systematic errors in
the astrometry. To illustrate this point, the glide determined from
the \textit{Gaia}-CRF2\ data (Sect.~3.3 in \citeads{2018A&A...616A..14G}) was at
the level of $10\ensuremath{\,\mu\text{as\,yr}^{-1}}$ per component, much higher than a solution
strictly limited by random errors. With the \gedr{3} we have a random
error on each proper motion of about $400\ensuremath{\,\mu\text{as\,yr}^{-1}}$ and just over
1~million sources. So one could hope to reach $0.4\ensuremath{\,\mu\text{as\,yr}^{-1}}$ in the
formal uncertainty of the glide components, essentially what is now
achieved. In future releases, improvement for the solar system
acceleration will come both from the better random errors and the
reduced systematic errors, although only the random part can be
quantified with some certainty. In the transition from \gdr2 to
\gedr{3} a major part of the gain came from the diminishing of
systematic effects.
The number of QSO-like sources that can become available in
future \textit{Gaia}\ data releases is another interesting aspect. In
general, a reliable answer is not known. Two attempts
(\citeads{2019MNRAS.489.4741S};\citeads{2019MNRAS.490.5615B}) to find QSO-like
sources in \gdr{2}\ data ended up with about 2.7~million sources each
(and even more together). Although an important part of those
catalogues did not show the level of reliability we require for \textit{Gaia}-CRF3,
one can hope that the number of QSO-like sources with
\textit{Gaia}\ astrometry will be doubled in the future compared to
\gdr{3}. Taking all these aspects into account, it is reasonable to
expect the uncertainty of the acceleration to reach a level well
below $0.1\ensuremath{\,\mu\text{as\,yr}^{-1}}$ in future \textit{Gaia}\ releases.
Considering the expected accuracy, an interesting question here is whether we
could think of any other effects that would give systematic patterns
in the proper motions of QSO-like sources at the level of expected
accuracy. Such effects are indeed known (a good overview of these
effects can be found e.g. in \citeads{2016A&A...589A..71B}). One such
effect is the `cosmological proper motion' \citepads{1986SvA....30..501K},
or `secular extragalactic parallax' \citepads{2020ApJ...890..146P},
caused by the motion of the solar system with respect to the rest frame of the CMB
at a speed of $370\,\ensuremath{\text{km\,s}^{-1}} \approx 78 {\,\mathrm{au\,{yr}^{-1}}}$
towards the point with galactic
coordinates $l=264.02\degr, b=48.25\degr$ (\citeads{2020A&A...641A...3P};
see also Sect.~\ref{sec:effect}). This gives a reflex proper motion of
$78\ensuremath{\,\mu\text{as\,yr}^{-1}}\,\times \left(1\,\text{Mpc}\,/\,d\right)\,\sin\beta$, where $d$ is the distance
to the object and $\beta$ is the angle between the object and the direction of motion \citepads{2016A&A...589A..71B}.
The effect is analogous to the systematic proper motions of nearby stars caused by
the apex motion of the Sun (Sect.~\ref{sec:stars}), and like it decreases with the
inverse distance to the sources. At a redshift of 0.2 the systematic proper
motion should be about $0.1\ensuremath{\,\mu\text{as\,yr}^{-1}}$ at right angle to the solar motion.
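For orientation (taking, for this rough estimate, a low-redshift Hubble-law
distance $d\approx cz/H_0$ with $H_0\approx70\,\ensuremath{\text{km\,s}^{-1}}\,\text{Mpc}^{-1}$, so
$d\approx860$~Mpc at $z=0.2$), the formula above gives
\begin{equation*}
\mu\approx78\ensuremath{\,\mu\text{as\,yr}^{-1}}\times\frac{1\,\text{Mpc}}{860\,\text{Mpc}}\approx0.09\ensuremath{\,\mu\text{as\,yr}^{-1}}
\end{equation*}
for $\sin\beta=1$.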
However, only a few thousand QSO-like objects can be expected at such small
redshifts, and, as discussed e.g. by \citetads{2020ApJ...890..146P}, the effect is
muddled by the peculiar velocities of the objects and deviations of their bulk
motions from the Hubble flow due to the gravitational interactions with large-scale
structures. It therefore remains questionable whether this systematic proper motion will become
accessible to \textit{Gaia}\ in the future.
Another secular shift of the positions of extragalactic sources
comes from the light bending in the gravitational field of the
Galaxy, which depends (among other things) on the angle between the source
and the Galactic centre. The motion of the solar system in the Galaxy results
in a slow variation of this angle, which causes a variation of the light bending.
This will be seen as a proper motion of the extragalactic source.
The effect is independent of the distance to the source (as long as it is far
away from the Milky Way), but depends on its position on the sky according
to the details of the Galactic potential. The VSH technique used in this work
seems to be very well suited for disentangling this effect from that of the
solar system acceleration.
\section{The astrometric effect of an acceleration}
\label{sec:effect}
In the Introduction we described aberration as an effect changing the
`apparent position' of a source. More accurately, it should be described in terms of
the `proper direction' to the source: this is the direction
from which photons are seen to arrive, as measured in a
physically adequate proper reference system of the observer
(see, e.g. \citeads{2004PhRvD..69l4001K}; \citeyearads{2012aamm.book...47K}). The proper direction,
which we designate with the unit vector $\vec{u}$, is what an astrometric instrument
in space ideally measures.
The aberration of light is the displacement $\delta\vec{u}$ obtained
when comparing the proper directions to the same source, as measured
by two co-located observers moving with velocity $\vec{v}$ relative to
each other. According to the theory of relativity (both special and
general), the proper directions as seen by the two observers are
related by a Lorentz transformation depending on the velocity
$\vec{v}$ of one observer as measured by the other. If
$\delta\vec{u}$ is relatively large, as for the annual aberration, a
rigorous approach to the computation is needed and also used, for
example in the \textit{Gaia} data processing
\citepads{2003AJ....125.1580K}. Here we are however concerned with
small differential effects, for which first-order formulae
(equivalent to first-order classical aberration) are sufficient.
To first order in $|\vec{v}|/c$, where $c$ is the speed of
light, the aberrational effect is linear in $\vec{v}$,
\begin{equation}\label{eq:galaberr}
\delta\vec{u} = \frac{\vec{v}}{c}-\frac{\vec{v}\cdot\vec{u}}{c}\,\vec{u}\, .
\end{equation}
Equation~(\ref{eq:galaberr}) is accurate to ${<\,}0.001~\mu$as for $|\vec{v}|< 0.02\,\ensuremath{\text{km\,s}^{-1}}$, and
to ${<\,}1\arcsec$ for $|\vec{v}|< 600\,\ensuremath{\text{km\,s}^{-1}}$ (see, however, below).
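The second bound can be understood from a rough order-of-magnitude argument:
the terms neglected in Eq.~(\ref{eq:galaberr}) are of order $(|\vec{v}|/c)^{2}$,
and
\begin{equation*}
\tfrac{1}{2}\left(600\,\ensuremath{\text{km\,s}^{-1}}/c\right)^{2}\approx2\times10^{-6}\,\text{rad}\approx0.4\arcsec\,.
\end{equation*}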
If $\vec{v}$ is changing with time, there is a corresponding
time-dependent variation of $\delta\vec{u}$, which affects all sources
on the sky in a particular systematic way. A familiar example is the
annual aberration, where the apparent positions seen from the Earth
are compared with those of a hypothetical observer at the same
location, but at rest with respect to the solar system barycentre. The
annual variation of $\vec{v}/c$ results in the aberrational effect
that outlines a curve that is close to an ellipse with semi-major axis
about $20\arcsec$ (the curve is not exactly an ellipse since the barycentric orbit
of the Earth is not exactly Keplerian).
The motion with respect to the solar system barycentre is not the only
conceivable source of aberrational effects. It is well known that the
whole solar system (that is, its barycentre) is in motion in the
Galaxy with a velocity of about $248\,\ensuremath{\text{km\,s}^{-1}}$
\citepads{2020ApJ...892...39R}, and that its velocity with respect to
the Cosmic Microwave Background Radiation (CMBR) is about $370\,\ensuremath{\text{km\,s}^{-1}}$
\citepads{2020A&A...641A...3P}. Therefore, if one compares the
apparent positions of the celestial sources as seen by an observer at
the barycentre of the solar system with those seen by another observer
at rest with respect to the Galaxy or the CMBR, one would see
aberrational differences up to ${\sim}171\arcsec$ or
${\sim}255\arcsec$, respectively -- effects that are so big that they
could be recognized by the naked eye (see
Fig.~\ref{fig:galactic-aberration} for an illustration of this
effect). The first of these effects is sometimes called secular
aberration. In most applications, however, there is no reason to
consider an observer that is `even more at rest' than the solar system
barycentre. The reason is that this large velocity -- for the purpose
of astrometric observations and for their accuracies -- can usually be
considered as constant; and if the velocity is constant in size and
direction, the principle of relativity imposes that the aberrational
shift cannot be detected. In other words, without knowledge of the
`true' positions of the sources, one cannot reveal the constant
aberrational effect on their positions.
However, the velocity of the solar system is not exactly constant.
The motion of the solar system follows a curved orbit in the Galaxy,
so its velocity vector is slowly changing with time. The secular
aberration is therefore also slowly changing with time. Considering sources that do not
participate in the galactic rotation (such as distant extragalactic sources),
we will see their apparent motions tracing out aberration `ellipses' whose period
is the galactic `year' of
${\sim}213$~million
years -- they are of course not ellipses owing to the epicyclic orbit
of the solar system (see Fig.~\ref{fig:galactic-aberration}).
Over a few years, and even thousands of years, the tiny
arcs described by the sources
cannot be distinguished from the tangent of the aberration
ellipse, and for the observer this is seen as a proper motion that can
be called additional, apparent, or spurious:
\begin{equation}\label{eq:accel}
\frac{\text{d}(\delta \vec{u})}{\text{d}t} = \frac{\vec{a}}{c} -
\frac{\vec{a}\cdot\vec{u}}{c}\,\vec{u}\,.
\end{equation}
Here $\vec{a}=\text{d}\vec{v}/\text{d}t$ is the acceleration of the solar
system barycentre with respect to the extragalactic sources.
For a given source, this slow drift of the observed position is
indistinguishable from its true proper motion. However, the apparent proper motion
as given by Eq.~(\ref{eq:accel}) has a global dipolar structure
with axial symmetry along the acceleration: it is maximal for sources in
the direction perpendicular to the acceleration and zero for
directions along the acceleration. This pattern is shown as a vector field in
Fig.~\ref{fig:pm_galaccel} in the case of the centripetal acceleration
directed towards the galactic centre.
Because only the change in aberration can be observed, not the aberration
itself, the underlying reference frame in Eq.~(\ref{eq:galaberr}) is irrelevant
for the discussion. One could have considered another reference for the
velocity, leading to a smaller or larger aberration, but the aberration drift
would be the same and given by Eq.~(\ref{eq:accel}). Although this equation
was derived by reference to the galactic motion of the solar system, it is fully
general and tells us that any accelerated motion of the solar system
with respect to the distant sources translates into a systematic
proper-motion pattern of those sources, when the astrometric
parameters are referenced to the solar system barycentre, as
is the case for \textit{Gaia}. Using a rough estimate of the
centripetal acceleration of the solar system in its motion around the
galactic centre, one gets the approximate amplitude of the spurious
proper motions to be $\sim 5\,\ensuremath{\,\mu\text{as\,yr}^{-1}}$. A detailed discussion of
the expected acceleration is given in Sect.~\ref{sec:expectation}.
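This rough estimate follows, for instance, from a simple circular-orbit
approximation, taking $v\simeq248\,\ensuremath{\text{km\,s}^{-1}}$ and a galactocentric distance
$R\simeq8.2$~kpc (the round values quoted elsewhere in this paper):
\begin{equation*}
|\vec{a}|\simeq\frac{v^{2}}{R}\approx2.4\times10^{-10}\ensuremath{\,\mathrm{m\,s^{-2}}}\,,\qquad
|\vec{g}|=\frac{|\vec{a}|}{c}\approx8\times10^{-19}\,\text{rad\,s}^{-1}\approx5\ensuremath{\,\mu\text{as\,yr}^{-1}}\,.
\end{equation*}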
It is important to realize that the discussion in this form is
possible only when the first-order approximation given by
Eq. (\ref{eq:galaberr}) is used. It is the linearity of
Eq. (\ref{eq:galaberr}) in $\vec{v}$ that allows one, in this
approximation, to decompose the velocity $\vec{v}$ in various parts
and simply add individual aberrational effects from those components
(e.g. annual and diurnal aberration in classical astrometry or also a
constant part and a linear variation). In the general case of a
complete relativistic description of aberration via Lorentz
transformations, the second-order aberrational effects depend also on
the velocity with respect to the underlying reference frame and can
become large. However, when the astrometric parameters are referenced
to the solar system barycentre, the underlying reference frame is at
rest with respect to the barycentre and Eq. (\ref{eq:accel}) is
correct to a fractional accuracy of about
${|\vec{v}_\text{obs}|/c}\sim10^{-4}$, where $\vec{v}_\text{obs}$ is
the barycentric velocity of the observer. While this is fully
sufficient for the present and anticipated future determinations with
\textit{Gaia}, more sophisticated modelling would be needed if a
determination of the acceleration to better than $\sim0.01\%$ were
attempted in the future.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\hsize]{Figures/galacticAberrationEllipse.pdf}
\caption{Galactic aberration over 500~Myr for an observer looking
towards Galactic north. The curve shows the apparent path of a
hypothetical quasar, currently located exactly at the north
galactic pole, as seen from the Sun (or solar system
barycentre). The points along the path show the apparent positions
after 0,~50, 100,~\dots Myr due to the changing velocity of
the Sun in its epicyclic orbit around the galactic centre. The
point labelled GC is the position of the quasar as seen by an
observer at rest with respect to the galactic centre. The point
labelled CMB is the position as seen by an observer at rest with
respect to the cosmic microwave background. The Sun's orbit was
computed using the potential model by
\citetads{2017MNRAS.465...76M} (see also Sect.~\ref{sec:expectation}),
with current velocity components
derived from the references in Sect.~\ref{sec:centripetal}. The Sun's velocity with
respect to the CMB is taken from \citetads{2020A&A...641A...3P}.}
\label{fig:galactic-aberration}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.0\hsize]{Figures/PM_Field_gal_frame.pdf}
\caption{The proper motion field of QSO-like objects induced by the centripetal
galactic acceleration: there is no effect in the directions of the
galactic centre and anti-centre, and a maximum in the plane passing
through the galactic poles with nodes at 90--270\,\degr\ in galactic
longitudes. The plot is in galactic coordinates with the solar
system at the centre of the sphere, and the vector field seen from
the exterior of the sphere. Orthographic projection with viewpoint
at $l = 30\degr, b = 30\degr$ and an arbitrary scale for the
vectors. See also an \href{http://www.aanda.org/XXX/olm}{online movie}.
}
\label{fig:pm_galaccel}
\end{center}
\end{figure}
An alternative form of Eq.~(\ref{eq:accel}) is
\begin{equation}\label{eq:accel1}
\vec{\mu} = \vec{g} - \left(\vec{g}\cdot\vec{u}\right)\,\vec{u}\,,
\end{equation}
where $\vec{\mu}=\text{d}(\delta\vec{u})/\text{d}t$ is the proper motion vector due to the
aberration drift and $\vec{g}=\vec{a}/c$ may be expressed
in proper motion units, for example $\mu$as~yr$^{-1}$.
Both vectors $\vec{a}$ and $\vec{g}$ are called `acceleration' in the context of this study.
Depending on the context, the acceleration may be given in different units, for
example \ensuremath{\,\mathrm{m\,s^{-2}}}, \ensuremath{\,\mu\text{as\,yr}^{-1}}, or $\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$\, (1\,\ensuremath{\,\mu\text{as\,yr}^{-1}}\ corresponds to
$1.45343\ensuremath{\text{\kms\,\text{Myr}}^{-1}}=4.60566\times10^{-11}\ensuremath{\,\mathrm{m\,s^{-2}}}$).
Equation~(\ref{eq:accel1}) can be written in component form, using
Cartesian coordinates in any suitable reference system and
the associated spherical angles.
For example, in the equatorial (ICRS) reference system $(x,y,z)$
the associated angles are right ascension and declination $(\alpha,\delta)$.
The components of the proper motion,
$\mu_{\alpha*}\equiv\mu_\alpha\cos\delta$ and $\mu_\delta$,
are obtained by projecting $\vec{\mu}$ on the unit vectors $\vec{e}_\alpha$ and
$\vec{e}_\delta$ in the directions of increasing $\alpha$ and $\delta$ at the position
of the source (see \citeads{2012A&A...547A..59M}, Fig.~1 and their Eqs.~64 and 65).
The result is
\begin{align}\label{eq:accel_components}
\begin{aligned}
\mu_{\alpha*} &= -g_x\sin\alpha + g_y\cos\alpha\,, \\
\mu_\delta &=-g_x\sin\delta\cos\alpha - g_y\sin\delta\sin\alpha +g_z\cos\delta\,,
\end{aligned}
\end{align}
where $(g_x,\,g_y,\,g_z)$ are the corresponding components of
$\vec{g}$. A corresponding representation is valid in an arbitrary coordinate
system. In this work, we will use either equatorial
(ICRS) coordinates $(x,y,z)$ or galactic coordinates $(X,Y,Z)$ and the
corresponding associated angles $(\alpha,\delta)$ and $(l,b)$,
respectively (see Sect.~\ref{sec:galactic-coordinates}).
Effects of the form in Eq.~(\ref{eq:accel_components}) are
often dubbed `glide' for the reasons explained in
Sect.~\ref{sec:method}.
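For concreteness, the following minimal Python sketch (purely illustrative,
not part of the \textit{Gaia}\ processing; the numerical glide components used in the
example are those of Table~\ref{tab:results}) evaluates
Eq.~(\ref{eq:accel_components}) for a given glide vector and checks the unit
conversion quoted above:
\begin{verbatim}
import numpy as np

C = 299792458.0                         # speed of light [m/s]
MUAS_PER_RAD = 180.0 / np.pi * 3600e6   # microarcseconds per radian
YEAR = 365.25 * 86400.0                 # Julian year [s]

# 1 muas/yr expressed as an acceleration a = g*c: ~4.6057e-11 m/s^2
print(C / (MUAS_PER_RAD * YEAR))

def glide_pm(alpha, delta, g):
    """Proper motion (mu_alpha*, mu_delta) induced by a glide vector
    g = (g_x, g_y, g_z) in equatorial components, Eq. (accel_components)."""
    gx, gy, gz = g
    mu_a = -gx * np.sin(alpha) + gy * np.cos(alpha)
    mu_d = (-gx * np.sin(delta) * np.cos(alpha)
            - gy * np.sin(delta) * np.sin(alpha)
            + gz * np.cos(delta))
    return mu_a, mu_d

# At the apex of the measured glide the pattern vanishes:
g_est = (-0.07, -4.30, -2.64)           # muas/yr
print(glide_pm(np.radians(269.1), np.radians(-31.6), g_est))  # ~ (0, 0)
\end{verbatim}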
\section{Theoretical expectations for the acceleration}
\label{sec:expectation}
This Section is devoted to a detailed discussion of the expected
gravitational acceleration of the solar system. We stress, however,
that the measurement of the solar system acceleration as outlined
above and further discussed in subsequent sections is absolutely
independent of the nature of the acceleration and the estimates given
here.
As briefly mentioned in Sect.~\ref{sec:effect}, the acceleration of
the solar system can, to first order, be approximated as the
centripetal acceleration towards the Galactic centre which keeps the
solar system on its not-quite circular orbit around the Galaxy. In
this section we quantify this acceleration and other likely sources of
significant acceleration. The three additional parts which we consider
are: (i) acceleration from the most significant non-axisymmetric
components of the Milky Way, specifically the Galactic bar and
spirals; (ii) the acceleration towards the Galactic plane, because the
Milky Way is a flattened system and the solar system lies slightly
above the Galactic plane; and (iii) acceleration from specific objects,
be they nearby galaxy clusters, local group galaxies, giant molecular
clouds or nearby stars.
For components of the acceleration associated with the bulk properties
of the Galaxy we describe the acceleration in galactocentric
cylindrical coordinates $(R',\,\phi',\,z')$, where $z'=0$ for the Galactic
plane, and the Sun is at $z'>0$. These are the natural model
coordinates, and we convert into an acceleration
in standard galactic coordinates $(a_X,\,a_Y,\,a_Z)$ as a final step.
\subsection{Centripetal acceleration}
\label{sec:centripetal}
The distance and proper motion of Sagittarius A* -- the super-massive
black hole at the Galactic centre -- have been measured with exquisite
precision in recent years. Since this is expected to be very close to
being at rest in the Galactic centre, the proper motion is almost
entirely a reflex of the motion of the Sun around the Galactic
centre. Its distance \citepads{2019A&A...625L..10G} is
\begin{equation*}
d_{\odot-GC}=8.178\pm0.013\; \text{(statistical)}\pm 0.022 \;\text{(systematic)}\,\text{kpc} ,
\end{equation*}
and its proper motion along the Galactic plane is $-6.411\pm
0.008 \ensuremath{\,\text{mas\,yr}^{-1}}$ \citepads{2020ApJ...892...39R}. The Sun is not on a circular orbit,
so we cannot directly translate the corresponding velocity into a centripetal acceleration. To
compensate for this, we can correct the velocity to the `local standard of rest' -- the
velocity that a circular orbit at $d_{\odot-GC}$ would have. This
correction is $12.24\pm2\,\ensuremath{\text{km\,s}^{-1}}$ \citepads{2010MNRAS.403.1829S}, in the sense
that the Sun is moving faster than a circular orbit at its
position. Considered together this gives an acceleration of
$-6.98\pm0.12\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$ in the $R'$ direction.
This corresponds to a centripetal acceleration of $4.80\pm0.08\ensuremath{\,\mu\text{as\,yr}^{-1}}$,
which is compatible with the values based on measurements of Galactic rotation, discussed for example by
\citetads{2014ApJ...783..130R} and \citetads{2014jsrs.conf...44M}.
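The arithmetic behind these numbers can be summarised by the following short
sketch (illustrative Python only; uncertainties are not propagated here):
\begin{verbatim}
import numpy as np

AU_YR_IN_KM_S = 4.74047        # 1 au/yr in km/s (Julian year)
KPC_IN_KM = 3.0857e16          # 1 kpc in km
MYR_IN_S = 3.15576e13          # 1 Myr in s
YR_IN_S = 3.15576e7            # 1 Julian year in s
C_KM_S = 299792.458            # speed of light [km/s]
MUAS_PER_RAD = 180.0 / np.pi * 3600e6

d_kpc = 8.178                  # Sun -- Galactic centre distance
mu_sgrA = 6.411                # Sgr A* proper motion along the plane [mas/yr]
v_pec = 12.24                  # solar motion w.r.t. the LSR [km/s]

v_sun = AU_YR_IN_KM_S * mu_sgrA * d_kpc   # ~248.5 km/s, solar tangential speed
v_circ = v_sun - v_pec                    # ~236.3 km/s, circular speed at the Sun

a = v_circ**2 / (d_kpc * KPC_IN_KM)       # centripetal acceleration [km/s^2]
print(a * MYR_IN_S)                       # ~6.98 km/s/Myr
print(a / C_KM_S * MUAS_PER_RAD * YR_IN_S)  # g = a/c ~ 4.80 muas/yr
\end{verbatim}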
\subsection{Acceleration from non-axisymmetric components}
\label{sec:nonaxi}
The Milky Way is a barred spiral galaxy. The gravitational force from
the bar and spiral have important effects on the velocities of stars
in the Milky Way, as has been seen in numerous studies using \gdrtwo\
data (e.g.\ \citeads{2018A&A...616A..11G}). We separately consider
acceleration from the bar and the spiral. Table 1 in \citetads{2019MNRAS.490.1026H}
summarises models for the bar potential taken from the
literature. From this, assuming that the Sun lies $30\degr$ away from
the major axis of the bar \citepads{2015MNRAS.450.4050W}, most models give an
acceleration in the negative $\phi'$ direction of $0.04\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$, with
one differing model attributed to \citetads{2017ApJ...840L...2P} which has a
$\phi'$ acceleration of $0.09\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$. The \citetads{2017MNRAS.465.1621P} bar
model, the potential from which is illustrated in Figure 2
of \citetads{2019A&A...626A..41M}, is not included in the
\citetads{2019MNRAS.490.1026H} table,
but is consistent with the lower value.
The recent study by \citetads{2020ApJ...900..186E} found an acceleration from the
spiral structure in the $\phi'$ direction of $0.10\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$ in the
opposite sense to the acceleration from the bar. Statistical
uncertainties on this value are small, with systematic errors relating
to the modelling choices dominating. This spiral strength is within
the broad range considered by \citetads{2016MNRAS.461.3835M}, and we estimate the
systematic uncertainty to be of order $\pm0.05\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$.
\subsection{Acceleration towards the Galactic plane}
The baryonic component of the Milky Way is flattened, with a stellar
disc which has an axis ratio of $\sim$1:10 and a gas disc, with
both \ion{H}{I} and H$_2$ components, which is even flatter. The Sun
is slightly above the Galactic plane, with estimates of the height
above the plane typically of the order
$z'_\odot=25\pm5\,\mathrm{pc}$ \citepads{2016ARA&A..54..529B}.
We use the Milky Way gravitational potential from \citetads{2017MNRAS.465...76M},
which has stellar discs and gas discs based on literature results, to
estimate this component of acceleration. We find an acceleration of
$0.15\pm0.03\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$ in the negative $z'$ direction, i.e. towards the
Galactic plane. This uncertainty is found using only the uncertainty
in $d_{\odot-GC}$ and $z'_\odot$. We can estimate the systematic
uncertainty by comparison to the model from \citetads{2011MNRAS.414.2446M},
which, among other differences, has no gas discs. In this case we find
an acceleration of $0.13\pm0.02\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$, suggesting that the
uncertainty associated with the potential is comparable to that from
the distance to the Galactic plane. For reference, if the acceleration
were directed exactly at the Galactic centre we would expect an
acceleration in the negative $z'$ direction of ${\sim}0.02\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$ due
to the mentioned elevation of the Sun above the plane by
25\,pc, see next subsection.
Combined, this converts into an acceleration of
$(-6.98\pm0.12,\, +0.06\pm0.05,\, -0.15\pm0.03) \ensuremath{\text{\kms\,\text{Myr}}^{-1}}$
in the $(R',\phi',z')$ directions.
\subsection{Transformation to standard galactic coordinates}
\label{sec:galactic-coordinates}
For the comparison of this model expectation with the \gedr{3}\ observations
we have to convert both into standard galactic coordinates $(X,Y,Z)$
associated with galactic longitude and latitude $(l,b)$.
The standard galactic coordinates are defined by
the transformation between the equatorial (ICRS) and galactic coordinates given
in Sect.~1.5.3, Vol.~1 of \citet{ESA1997} using three
angles to be taken as exact quantities. In particular, the equatorial plane of the galactic
coordinates is defined by its pole at ICRS coordinates
$(\alpha=192.85948^\circ,\delta=+27.12825^\circ)$, and the origin of
galactic longitude is defined by the galactic longitude of the
ascending node of the equatorial plane of the galactic coordinates on
the ICRS equator, which is taken to be $l_\Omega =
32.93192^\circ$. This means that the point with galactic coordinates
$(l=0,b=0)$, that is the direction to the centre, is at
$(\alpha\approx266.40499^\circ, \delta\approx-28.93617^\circ)$.
The conversion of the model expectation takes into account the
above-mentioned elevation of the Sun, leading to a rotation of the $Z$
axis with respect to the $z'$ axis by $(10.5\pm 2)$\,arcmin, plus two sign
flips of the axes' directions. This leaves us with the final predicted
value of $(a_X,\,a_Y,\,a_Z) = (+6.98\pm0.12,\, -0.06\pm0.05,\,
-0.13\pm0.03)\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$. Note that the rotation of the vertical axis is
uncertain by about 2\arcmin, due to the uncertain values of
$d_{\odot-GC}$ and $z'_\odot$. This, however, gives an uncertainty of
only 0.004$\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$ in the predicted $a_Z$.
We should emphasize that these transformations are purely formal
ones. They should not be considered as strict in the sense that they
refer the two vectors to the true attractive centre of the real
galaxy. On the one hand, they assume that the standard galactic
coordinates $(X,Y,Z)$ represent perfect knowledge of the true
orientation of the Galactic plane and the true location of the
Galactic barycentre. On the other hand, they assume that the disk is
completely flat, and that the inner part of the Galactic potential is
symmetric (apart from the effects of the bar and local spiral
structure discussed above). Both assumptions can easily be violated by
a few arcmin. This can easily be illustrated by the position of the
central black hole, Sgr~A*. It undoubtedly sits very close to the
bottom of the Galactic potential trough, by dynamical necessity. But
that bottom need not coincide with the barycentre of the Milky Way,
nor with the precise direction of the inner galaxy's force on the
Sun. In fact, the position of Sgr~A* is off galactic longitude zero by
$-3.3$\arcmin, and off galactic latitude zero by
$-2.7$\arcmin.%
\footnote{To take the solar system as an illustrative
analogue: the bottom of the potential trough is always very close to the centre
of the Sun, but the barycentre can be off by more than one solar
radius, i.e. the attraction felt by a Kuiper belt object at, say,
30\,au can be off by more than 0.5\arcmin.}
This latitude offset is only about a quarter of the 10.5\arcmin\ correction
derived from the Sun's altitude above the plane.
Given the present uncertainty of the measured acceleration vector by a
few degrees (see Table \ref{tab:results}), these considerations about a few arcmin are
irrelevant for the present paper. We mention them here as a matter of
principle, to be taken into account in case the measured vector would
ever attain a precision at the arcminute level.
\subsection{Specific objects}
\citetads{2016A&A...589A..71B} provide in their Table~2 an estimate of the acceleration due to
various extragalactic objects. We can use this table as an initial
guide to which objects are likely to be important; however, mass
estimates of some of these objects (particularly the Large Magellanic
Cloud) have changed significantly from the values quoted there.
We note first that individual objects in the Milky Way have a
negligible effect. The acceleration from $\alpha$~Cen~AB is
${\sim}0.004\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$, and that from any nearby giant molecular clouds is
comparable or smaller.
In the local group, the largest effect is from
the Large Magellanic Cloud (LMC). A number of lines of evidence now
suggest that it has a mass of
$(1{-}2.5)\times10^{11}\,M_\odot$ (see \citeads{2019MNRAS.487.2685E}
and references therein), which at a distance of
$49.5\pm0.5\,\text{kpc}$ \citepads{2019Natur.567..200P} gives an acceleration of
0.18 to $0.45\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$ with components $(a_X,a_Y,a_Z)$ between
$(+0.025,\,-0.148,\, -0.098)$ and $(+0.063,\, -0.371,\, -0.244)\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$.
We note therefore that the acceleration from the LMC is significantly larger
than that from either the Galactic plane or non-axisymmetric
structure.
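For reference, treating the LMC as a point mass at the quoted distance (a
crude but sufficient approximation for this estimate), the range follows from
\begin{equation*}
a_{\rm LMC}=\frac{GM_{\rm LMC}}{d_{\rm LMC}^{2}}
\approx0.18\left(\frac{M_{\rm LMC}}{10^{11}\,M_\odot}\right)
\left(\frac{49.5\,\text{kpc}}{d_{\rm LMC}}\right)^{2}\ensuremath{\text{\kms\,\text{Myr}}^{-1}}\,.
\end{equation*}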
The Small Magellanic Cloud is slightly more
distant ($62.8\pm2.5\,\text{kpc}$; \citeads{2000A&A...359..601C}), and significantly less
massive. It is thought that it has been significantly tidally stripped
by the LMC (e.g.\ \citeads{2020MNRAS.495...98D}), so its mass is likely to be
substantially lower than its estimated peak mass of
${\sim}7\times10^{10}\,M_\odot$ (e.g.\ \citeads{2019MNRAS.487.5799R}), but it is hard to
determine based on dynamical modelling. We follow \citetads{2020ApJ...893..121P} and
consider the range of possible masses $(0.5{-}3)\times10^{10}\,M_\odot$,
which gives an acceleration of 0.005 to $0.037\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$. Other local
group galaxies have a negligible effect. M31, at a distance of
$752\pm27\,\text{kpc}$ \citepads{2012ApJ...745..156R}, with mass estimates in the range
$(0.7{-}2)\times10^{12}\,M_\odot$ \citepads{2013MNRAS.434.2779F} imparts an
acceleration of 0.005 to $0.016\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$. The Sagittarius dwarf galaxy is
relatively nearby, and was once relatively massive, but has been
dramatically tidally stripped to a mass
$\lesssim4\times10^8\,M_\odot$ (\citeads{2020MNRAS.497.4162V}; \citeads{2010ApJ...714..229L}), so
provides an acceleration $\lesssim0.003\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$.
We note that this discussion only includes the direct acceleration that these local group bodies
apply to the Solar system. They are expected to deform the Milky Way's dark matter halo
in a way that may also apply an acceleration (e.g., \citeads{2020arXiv201000816G}).
We can, like \citetads{2016A&A...589A..71B}, estimate the acceleration due
to nearby galaxy clusters from their estimated masses and
distances. The Virgo cluster at a distance
$16.5\,\mathrm{Mpc}$ \citepads{2007ApJ...655..144M} and a mass
$(1.4{-}6.3)\times10^{14}\,M_\odot$ (\citeads{2012ApJS..200....4F}; \citeads{2020A&A...635A.135K})
is the most significant single influence
(0.002 to $0.010\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$). However, we recognise that the peculiar
velocity of the Sun with respect to the Hubble flow has a component
away from the Local Void, one towards the centre of the Laniakea
supercluster, and others on larger scales that are not yet
mapped (\citeads{2008ApJ...676..184T}; \citeads{2014Natur.513...71T}), and that this is
probably reflected in the acceleration felt on the solar system
barycentre from large scale structure.
For simplicity we only add the effect of the LMC to the value given at
the end of Sect.~\ref{sec:nonaxi} to give an overall estimate of the
expected acceleration, adding our estimated $1\sigma$ uncertainties from
the Galactic models to our full range of possible accelerations from
the LMC. This gives $(a_X,\,a_Y,\,a_Z)$ in the range $(+6.89,\,-0.20,\,-0.20)$ to
$(+7.17,\,-0.48,\,-0.40)\ensuremath{\text{\kms\,\text{Myr}}^{-1}}$.
\section{Introduction}
\label{sec:intro}
It is well known that the velocity of an observer causes the apparent
positions of all celestial bodies to be displaced in the direction of the
velocity, an effect referred to as the aberration of light.
If the velocity is changing with time, that is if the observer is accelerated,
the displacements are also changing, giving the impression of a pattern of
proper motions in the direction of the acceleration. We exploit this effect
to detect the imprint in the \textit{Gaia}\ data of the
acceleration of the solar system with respect to the rest-frame of
remote extragalactic sources.
\subsection{Historical considerations}
\label{sec:intro:historical}
In 1833 John Pond, the Astronomer Royal at that time, sent to print
the \textsl{Catalogue of 1112 stars, reduced from observations made at
the Royal Observatory at Greenwich} \citepads{1833RGAO...18P...1P}, the
happy conclusion of a standard and tedious observatory work, and a
catalogue much praised for its accuracy
\citepads{1852hopa.book.....G}. At the end of his short introduction he
added a note discussing \textsl{Causes of Disturbance of the proper
Motion of Stars}, in which he considered the secular aberration resulting
from the motion of the solar system in free space, stating that,
\begin{quotation}
\textsl{So long as the motion of the Sun continues uniform
and rectilinear, this aberration or distortion from their true places
will be constant: it will not affect our observations; nor am I
aware that we possess any means of determining whether it exist or
not. If the motion of the Sun be uniformly accelerated, or uniformly retarded,
$[\ldots]$ [t]he effects of either of these suppositions would be, to
produce uniform motion in every star according to its
position, and might in time be discoverable by our observations, if
the stars had no proper motions of their own $[\ldots]$ But it is
needless to enter into further speculation on questions that appear
at present not likely to lead to the least practical utility, though it
may become a subject of interest to future ages.}
\end{quotation}
This was a simple, but clever, realisation of the consequences of aberration, really
new at that time and totally outside the technical capabilities of the
time. The idea gained more visibility through the successful textbooks
of the renowned English astronomer John Herschel, first in his
\textsl{Treatise of Astronomy} (\citeads{1833tras.book.....H}, \S612) and later
in the expanded version \textsl{Outlines of Astronomy}
(\citeads{1849oast.book.....H}, \S862), both of which went through numerous editions. In
the former he referred directly to John Pond as the original source of
this `\textsl{very ingenious idea}', whereas in the latter the reference to Pond was
dropped and the description of the effect looks unpromising:
\begin{quotation}
\textsl{
This displacement, however, is permanent, and therefore
unrecognizable by any ph{\ae}nomenon, so long as the solar motion
remains invariable ; but should it, in the course of ages, alter its
direction and velocity, both the direction and amount of the
displacement in question would alter with it. The change, however,
would become mixed up with other changes in the apparent proper
motions of the stars, and it would seem hopeless to attempt
disentangling them.}
\end{quotation}
John Pond in 1833 wrote that the idea came to him `\textsl{many years
ago}' but did not hint at borrowing it from someone else.
For such an idea to emerge, at least three devices had to be present
in the tool kit of a practising astronomer: a deep
understanding of aberration, well known since James Bradley's discovery in 1728;
the secure proof that stars have proper motion, provided by the
Catalogue of Tobias Mayer in 1760; and the notion of the secular motion of the Sun
towards the apex, established by William Herschel in 1783.
Therefore Pond was probably the first, to our knowledge, who
combined the aberration and the free motion of the Sun among the
stars to draw the important observable consequence in terms of
systematic proper motions. We have found no earlier mention, and had it
been commonly known by astronomers much earlier we would have
found a mention in
Lalande's \textsl{Astronomie} \citep{Lalande1792}, the most
encyclopaedic treatise on the subject at the time.
References to the constant aberration due to the secular motion of
the solar system as a whole appear over the course of years in some
astronomical textbooks (e.g. \citeads{1908tsa..book.....B}), but
not always with the hint that only a change in the apex would make it
visible in the form of a proper motion. While the bold foresight of
these forerunners was by necessity limited by their conception of
the Milky Way and the Universe as a whole, both Pond and Herschel
recognised that even with a curved motion of the solar
system, the effect on the stars from the change in aberration would be
very difficult to separate from other sources of
proper motion. This would remain true today if the stars
of the Milky Way had been our only means to study the effect.
However, our current view of the hierarchical structure of the Universe
puts the issue in a different and more favourable guise. The whole
solar system is in motion within the Milky Way and there are
star-like sources, very far away from us, that do not share this motion. For
them the only source of apparent proper motion could be precisely that
resulting from the change in the secular aberration. We are happily
back to the world without proper motions contemplated by Pond, and
we show in this paper that \textit{Gaia}'s observations of extragalactic sources enable
us to discern, for the first time in the optical domain, the signature of this
systematic proper motion.
\subsection{Recent works}
Coming to the modern era, the earliest mention we have found of the
effect on extragalactic sources is by \citetads{1983jpl..rept.8339F} in
the description of the JPL software package MASTERFIT for reducing
Very Long Baseline Interferometry (VLBI)
observations. There is a passing remark that the change in the
apparent position of the sources from the solar system motion would be
that of a proper motion of 6~{\ensuremath{\,\mu\text{as\,yr}^{-1}}}, nearly two orders of magnitude
smaller than the effect of source structure, but systematic. There is
no detailed modelling of the effect, but at least this was clearly
shown to be a consequence of the change in the direction of the solar system velocity vector in
the aberration factor, worthy of further consideration.
The description of the effect is given in later descriptions of MASTERFIT and also in some other publications
of the JPL VLBI group (e.g. \citeads{1996jpl..rept.8339S}; \citeads{1998RvMP...70.1393S}).
\citetads{1995IAUS..166..283E} have a contribution in IAU Symposium 166
with the title \textsl{Secular motions of the extragalactic
radio-sources and the stability of the radio
reference frame}. This contains the first claim of seeing statistically significant
proper motions in many sources at the level of 30~{\ensuremath{\,\mu\text{as\,yr}^{-1}}}, about an
order of magnitude
larger than expected. This was unfortunately
limited to an abstract, but the idea behind it was to search for
the effect discussed here. Proper motions of quasars were also
investigated by \citetads{1997ApJ...485...87G} in the context of a search
for low-frequency gravitational waves. The technique relied heavily on a
decomposition on vector spherical harmonics (VSH), very similar to
what is reported in the core of this paper.
\citetads{1995ESASP.379...99B} rediscovered the effect in the context of the
\textit{Gaia}\ mission as it was planned at the time. He describes the effect
as a variable aberration and stated clearly how it could be measured
with {\textit{Gaia}} using 60 bright quasars, with the unambiguous
conclusion that `it seems quite possible that GAIA can
significantly measure the galactocentric acceleration of the solar
system'. This was then included as an important science objective of
{\textit{Gaia}} in the mission proposal submitted to ESA in 2000 and in most
early presentations of the mission and its expected science results
(\citeads{2001A&A...369..339P}; \citeads{2002EAS.....2..327M}). Several theoretical
discussions followed in relation to VLBI or space astrometry
(\citeads{1998RvMP...70.1393S}; \citeads{2006AJ....131.1471K}). \citetads{2003A&A...404..743K}
considered the effect on the observed motions of stars in our Galaxy, while
\citetads{2012A&A...547A..59M} showed how the systematic use of the VSH
on a large data sample like \textit{Gaia} would permit a blind search of
the acceleration without ad~hoc model fitting. They also stressed
the importance of solving simultaneously for the acceleration and the
spin to avoid signal leakage from correlations.
With the VLBI data gradually covering longer periods of
time, detection of the systematic patterns in the proper motions of quasars
became a definite possibility, and in the last decade there have been several works
claiming positive detections at different levels of significance.
But even with 20 years of data, the systematic displacement of the
best-placed quasars is only $\simeq 0.1$ mas, not much larger than
the noise floor of individual VLBI positions until very
recently. So the actual detection was, and remains, challenging.
The first published solution by \citetads{1997ApJ...485...87G}, based on
323 sources, resulted in an acceleration estimate of
$(g_x, g_y, g_z) = (1.9 \pm 6.1,~5.4\pm 6.2,~7.5\pm 5.6)~\ensuremath{\,\mu\text{as\,yr}^{-1}}$,
not really above the noise level.%
\footnote{Here, and in the following, the acceleration is expressed as
a proper motion through division by $c$, the speed of light; see
Eq.~(\ref{eq:accel_components}). $(g_x, g_y, g_z)$ are the
components of the effect in the ICRS (equatorial) system.}
Then a first detection
claim was by \citetads{2011A&A...529A..91T}, using 555 sources and 20~years
of VLBI data. From the proper motions of these sources they found
$|\vec{g}| = g = 6.4 \pm 1.5$~{\ensuremath{\,\mu\text{as\,yr}^{-1}}} for the amplitude of the systematic
signal, compatible with the expected magnitude and direction. Two
years later they published an improved solution from 34~years of VLBI
data, yielding $g = 6.4 \pm 1.1$~{\ensuremath{\,\mu\text{as\,yr}^{-1}}}
(\citeads{2013A&A...559A..95T}). A new solution by
\citetads{2018A&A...610A..36T} with a global fit of the dipole on more
than 4000 sources and 36~years of VLBI delays yielded $g = 5.2 \pm
0.2$~{\ensuremath{\,\mu\text{as\,yr}^{-1}}}, the best formal error so far, and a direction a few
degrees off the Galactic centre.
\citetads{2012A&A...544A.135X} also made a direct fit of the acceleration
vector as a global parameter to the VLBI delay observations, and
found a modulus of $g = 5.82 \pm 0.32$~{\ensuremath{\,\mu\text{as\,yr}^{-1}}} but
with a strong component perpendicular to the Galactic plane.
The most recent review by \citetads{2019A&A...630A..93M} is a report
of the Working Group on Galactic Aberration of the International VLBI
Service (IVS). This group was established to incorporate the effect of
the galactocentric aberration into the VLBI analysis with a unique
recommended value. They make a clear distinction between the
galactocentric component that may be estimated from Galactic
kinematics, and the additional contributions due to the accelerated
motion of the Milky Way in the intergalactic space or the peculiar
acceleration of the solar system in the Galaxy. They use the term
`aberration drift' for the total effect. Clearly the observations
cannot separate the different contributions, neither in VLBI nor in
the optical domain with \textit{Gaia}. Based on their considerations,
the working group's recommendation is to use $g=5.8$~{\ensuremath{\,\mu\text{as\,yr}^{-1}}} for
the galactocentric component of the aberration drift. This value,
estimated directly in a global solution of the ICRF3 data
set, is slightly larger than the value deduced from Galactic
astronomy. This recommendation has been finally adopted in the ICRF3
catalogue, although an additional dedicated
analysis of almost 40 years of VLBI observations gave the acceleration
$g=5.83\pm0.23$~{\ensuremath{\,\mu\text{as\,yr}^{-1}}} towards $\alpha = 270.2\degr\pm2.3\degr$,
$\delta = -20.2\degr\pm3.6\degr$ \citep{2020A&A...ICRF3}.
To conclude this overview of related works, a totally different
approach by \citetads{2020ApJ...902L..28C} was recently put forward,
relying on highly accurate spectroscopy. With the performances
of spectrographs reached in the search for extra-solar planets, on the level of 10~cm\,s$^{-1}$, it is
conceivable to detect the variation of the line-of-sight velocity of
stars over a time baseline of at least ten years. This would be a
direct detection of the Galactic acceleration and a way to probe the
gravitational potential at $\sim$ kpc distances. Such a result would
be totally independent of the acceleration derived from the
aberration drift of the extragalactic sources and of great interest.
Here we report on the first determination of the solar system
acceleration in the optical domain, from \textit{Gaia} observations.
The paper is organised as follows. Section~\ref{sec:effect}
summarises the astrometric signatures of an acceleration of the solar
system barycentre with respect to the rest frame of extragalactic
sources. Theoretical expectations of the acceleration of the solar
system are presented in Sect.~\ref{sec:expectation}. The selection of
\textit{Gaia}\ sources for the determination of the effect is
discussed in Sect.~\ref{sec:selection}. Section~\ref{sec:method}
presents the method, and the analysis of the data and a discussion of
random and systematic errors are given in
Sect.~\ref{sec:analysis}. Conclusions of this study as well as the
perspectives for the future determination with \textit{Gaia}\ astrometry are
presented in Sect.~\ref{sec:summary}. In Appendix~\ref{sec:unbiased}
we discuss the general problem of estimating the length of a
vector from the estimates of its Cartesian components.
\section{Method}
\label{sec:method}
One can think of a number of ways to estimate the acceleration from a
set of observed proper motions. For example, one could directly
estimate the components of the acceleration vector by a least-squares fit
to the proper motion components using Eq.~(\ref{eq:accel_components}).
However, if there are other large-scale patterns present in the proper motions,
such as from a global rotation, these other effects could bias the acceleration
estimate, because they are in general not orthogonal to the acceleration effect
for the actual weight distribution on the sky
(Fig.~\ref{fig:sky-distribution-5p-information}).
We prefer to use a more general and more flexible mathematical
approach with Vector Spherical Harmonics (VSH). For a given set of
sources, the use of VSH allows us to mitigate the biases produced by various
large-scale patterns, thus giving
reasonable control over the systematic errors. The theory of VSH
expansions of arbitrary vector fields on the sphere and its
applications to the analysis of astrometric data were discussed in
detail by \citetads{2012A&A...547A..59M}. We use the
notations and definitions given in that work. In particular, to the vector
field of proper motions
$\vec{\mu}(\alpha,\delta)=\mu_{\alpha*}\,\vec{e}_\alpha+\mu_\delta\,\vec{e}_\delta$
(where $\vec{e}_\alpha$ and $\vec{e}_\delta$ are unit vectors in the local triad as in
Fig.~1 of \citeads{2012A&A...547A..59M})
we fit the following VSH representation:
\begin{equation}\label{Vexpandreal}
\begin{split}
\vec{\mu}(\alpha,\delta) &= \sum_{l=1}^{l_{\rm max}}\,\Biggl(
t_{l0} \vec{T}_{l0} + s_{l0} \vec{S}_{l0}\\
&\quad
+ 2\sum_{m=1}^{l}\, \left(t^{\Re}_{lm} \vec{T}^{\Re}_{lm} - t^{\Im}_{lm} \vec{T}^{\Im}_{lm}
+s^{\Re}_{lm} \vec{S}^{\Re}_{lm} - s^{\Im}_{lm} \vec{S}^{\Im}_{lm}
\right)\Biggr)\,.
\end{split}
\end{equation}
\noindent
Here $\vec{T}_{lm}(\alpha,\delta)$ and $\vec{S}_{lm}(\alpha,\delta)$ are
the toroidal and spheroidal vector spherical harmonics of degree $l$
and order $m$, $t_{lm}$ and $s_{lm}$ are the corresponding
coefficients of the expansion (to be fitted to the data), and the
superscripts $\Re$ and $\Im$ denote the real and imaginary parts of
the corresponding complex quantities, respectively. In general, the VSHs
are defined as complex functions and can represent complex-valued vector
fields, but the field of proper motions is real-valued and the expansion
in Eq.~(\ref{Vexpandreal}) readily uses the symmetry properties of the
expansion, so that all quantities in Eq.~(\ref{Vexpandreal}) are real.
The definitions and various properties of $\vec{T}_{lm}(\alpha,\delta)$
and $\vec{S}_{lm}(\alpha,\delta)$, as well as an efficient algorithm for their computation,
can be found in \citetads{2012A&A...547A..59M}.
The main goal of this work is to estimate the solar system acceleration described by
Eq.~(\ref{eq:accel_components}). As explained in \citetads{2012A&A...547A..59M},
a nice property of the VSH expansion is that the first-order harmonics with $l=1$ represent
a global rotation (the toroidal harmonics $\vec{T}_{1m}$) and an effect called `glide'
(the spheroidal harmonics $\vec{S}_{1m}$). Glide has the same
mathematical form as the effect of acceleration given by Eq. (\ref{eq:accel_components}).
One can demonstrate (Sect.~4.2 in \citeads{2012A&A...547A..59M}) that
\begin{align}\label{VSH-to-acceleration}
\begin{aligned}
s_{10} &= \phantom{-}\sqrt{\frac{8\pi}{3}}\,g_z\,, \\
s_{11}^\Re &= -\sqrt{\frac{4\pi}{3}}\,g_x\,,\\
s_{11}^\Im &= \phantom{-}\sqrt{\frac{4\pi}{3}}\,g_y\,.
\end{aligned}
\end{align}
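The inversion of these relations is elementary; a minimal illustrative sketch
(in Python, not the actual pipeline code) is:
\begin{verbatim}
import numpy as np

def glide_to_acceleration(s10, s11_re, s11_im):
    """Convert the fitted l=1 spheroidal (glide) coefficients into the
    equatorial acceleration components (g_x, g_y, g_z), inverting
    s10 = sqrt(8*pi/3) g_z, s11_re = -sqrt(4*pi/3) g_x,
    s11_im = sqrt(4*pi/3) g_y. Units follow those of the input."""
    g_z = s10 * np.sqrt(3.0 / (8.0 * np.pi))
    g_x = -s11_re * np.sqrt(3.0 / (4.0 * np.pi))
    g_y = s11_im * np.sqrt(3.0 / (4.0 * np.pi))
    return g_x, g_y, g_z
\end{verbatim}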
In principle, therefore, one could restrict the model to
$l=1$. However, as already mentioned, the higher-order VSHs help to
handle the effects of other systematic signals. The parameter
$l_{\rm max}$ in (\ref{Vexpandreal}) is the maximal order of the VSHs
that are taken into account in the model and is an important
instrument for analysing systematic signals in the data: by calculating
a series of solutions for increasing values of
$l_{\rm max}$, one probes how much the lower-order terms
(and in particular the glide terms) are affected by higher-order systematics.
With the $L^2$ norm, the VSHs $\vec{T}_{lm}(\alpha,\delta)$ and $\vec{S}_{lm}(\alpha,\delta)$
form an orthonormal set of basis functions for a vector field on a
sphere. It is also known that the infinite set of these basis functions is complete on $S^2$.
The VSHs can therefore represent arbitrary vector fields. Just as in the case of
scalar spherical harmonics, the VSHs with increasing order $l$
represent signals of higher spatial frequency on the sphere. VSHs of
different orders and degrees are orthogonal only if one has an infinite
number of data points homogeneously distributed over the sphere. For
a finite number of points and/or an inhomogeneous distribution the
VSHs are not strictly orthogonal and have a non-zero projection onto each other.
This means that the coefficients $t_{lm}^\Re$,
$t_{lm}^\Im$, $s_{lm}^\Re$ and $s_{lm}^\Im$ are correlated when
working with observational data. The level of correlation depends on
the distribution of the statistical weight of the data over the sphere, which is illustrated by
Fig.~\ref{fig:sky-distribution-5p-information} for the source
selection used in this study. For a given weight distribution there is an
upper limit on the $l_{\rm max}$ that can be profitably used in practical
calculations. Beyond that limit the correlations between
the parameters become too high and the fit becomes useless. Numerical tests
show that for our data selection it is reasonable to have $l_{\rm max}\lesssim 10$,
for which correlations are less than about 0.6 in absolute values.
Projecting Eq.~(\ref{Vexpandreal}) on the vectors $\vec{e}_\alpha$ and $\vec{e}_\delta$ of
the local triad one gets two scalar equations for each celestial
source with proper motions $\mu_{\alpha*}$ and $\mu_\delta$. For $k$
sources this gives $2k$ observation equations for
$2l_\text{max}(l_\text{max}+2)$
unknowns to be solved for using a standard least-squares
estimator. The equations should be weighted using the uncertainties
of the proper motions $\sigma_{\mu_{\alpha*}}$ and
$\sigma_{\mu_\delta}$. It is also advantageous to take into account,
in the weight matrix of the least-squares estimator, the correlation $\rho_\mu$
between $\mu_{\alpha*}$ and $\mu_\delta$ of a source. This
correlation comes from the \textit{Gaia}\ astrometric solution and is
published in the \textit{Gaia}\ catalogue for each source. The correlations
between astrometric parameters of different sources are not exactly
known and no attempt to account for these inter-source correlations
was undertaken in this study.
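As an illustration of this weighting scheme, the block-diagonal structure of the
weight matrix allows the normal equations to be accumulated source by source. The
sketch below is schematic and uses our own naming conventions; it is not the actual
implementation used for the solution:
\begin{verbatim}
import numpy as np

def solve_weighted_vsh(A, mu, sig_a, sig_d, rho):
    """Weighted least squares with per-source 2x2 covariances.
    A   : (2k, n) design matrix (rows: mu_alpha*, mu_delta per source)
    mu  : (2k,) observed proper motions
    sig_a, sig_d, rho : (k,) uncertainties and correlation per source
    Returns the coefficient vector and its covariance matrix."""
    n = A.shape[1]
    N = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(len(sig_a)):
        cov = np.array([[sig_a[i]**2, rho[i]*sig_a[i]*sig_d[i]],
                        [rho[i]*sig_a[i]*sig_d[i], sig_d[i]**2]])
        W = np.linalg.inv(cov)          # weight matrix of source i
        Ai = A[2*i:2*i+2, :]
        N += Ai.T @ W @ Ai
        b += Ai.T @ W @ mu[2*i:2*i+2]
    C = np.linalg.inv(N)
    return C @ b, C
\end{verbatim}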
It is important that the fit is robust against outliers, that is, sources
whose proper motions deviate significantly
from the model in Eq.~(\ref{Vexpandreal}). Peculiar proper
motions can be caused by time-dependent structure variation of certain
sources (some but not all such sources have been rejected by the
astrometric tests at the selection level). Outlier elimination also
makes the estimates robust against potentially bad, systematically
biased astrometric solutions of some sources. The outlier detection
is implemented \citep{GAIA-LL-127} as an iterative elimination of all
{\sl sources} for which a measure of the post-fit residuals of the
corresponding two equations exceeds $\kappa$ times the median value of that
measure computed for all sources, where $\kappa\ge 1$ is a chosen factor
called the clip limit. As the measure $X$ of the weighted residuals for a
source we use the post-fit residuals $\Delta\mu_{\alpha*}$ and $\Delta\mu_\delta$
of its two equations for $\mu_{\alpha*}$ and
$\mu_\delta$, weighted by the full covariance matrix
of the proper motion components:
\begin{eqnarray}
X^2&=&
\begin{bmatrix}\Delta\mu_{\alpha*} & \Delta\mu_\delta\end{bmatrix}
\begin{bmatrix}\sigma_{\mu_{\alpha*}}^2 & \rho_\mu\sigma_{\mu_{\alpha*}}\sigma_{\mu_\delta}\\
\rho_\mu\sigma_{\mu_{\alpha*}}\sigma_{\mu_\delta} & \sigma_{\mu_\delta}^2\end{bmatrix}^{-1}
\begin{bmatrix}\Delta\mu_{\alpha*}\\ \Delta\mu_\delta\end{bmatrix}\nonumber\\[6pt]
&=&\frac{1}{1-\rho_\mu^2}\left[\left(\frac{\Delta\mu_{\alpha*}}{\sigma_{\mu_{\alpha*}}}\right)^2
-2\rho_\mu\left(\frac{\Delta\mu_{\alpha*}}{\sigma_{\mu_{\alpha*}}}\right)\left(\frac{\Delta\mu_\delta}{\sigma_{\mu_\delta}}\right)
+\left(\frac{\Delta\mu_\delta}{\sigma_{\mu_\delta}}\right)^2\right]\,.
\end{eqnarray}
At each iteration the least-squares fit is computed using only the sources that
were not detected as outliers in the previous iterations; the median of $X$ is
however always computed over the whole set of sources. Iteration stops
when the set of sources identified as outliers is stable.%
\footnote{More precisely, the procedure stops the first time the set of outliers is
the same as in an earlier iteration (not necessarily the previous one).}
Identification of a whole source as an outlier and not just a single
component of its proper motion (for example, accepting $\mu_{\alpha*}$ and
rejecting $\mu_\delta$) makes more sense from the
physical point of view and also makes the procedure independent
of the coordinate system.
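Schematically, and with hypothetical helper functions standing in for the actual
fitting code, the clipping loop described above can be written as follows (note that
the median of $X$ is always taken over the full sample, and that the loop stops as
soon as a previously encountered outlier set recurs):
\begin{verbatim}
import numpy as np

def iterate_clipping(fit, measure_X, n_sources, kappa):
    """fit(mask) performs the weighted VSH fit on accepted sources;
    measure_X(coeffs) returns the residual measure X for all sources."""
    accepted = np.ones(n_sources, dtype=bool)
    seen = set()
    while True:
        coeffs = fit(accepted)
        X = measure_X(coeffs)                  # X for all sources
        outliers = X > kappa * np.median(X)    # median over the full set
        key = outliers.tobytes()
        if key in seen:                        # outlier set recurred: stop
            return coeffs, outliers
        seen.add(key)
        accepted = ~outliers
\end{verbatim}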
It is worth recording here that the angular covariance function $V_\mu(\theta)$,
defined by Eq.~(17) of \citetads{2018A&A...616A...2L}, also contains information
on the glide, albeit only on its magnitude $|\vec{g}|$, not the direction.
$V_\mu(\theta)$ quantifies the covariance of the proper motion vectors $\vec{\mu}$
as a function of the angular separation $\theta$ on the sky.
Figure~14 of \citet{DPACP-128} shows this function for \gedr{3}, computed
using the same sample of QSO-like sources with five-parameter solutions as used
in the present study (but without weighting the data according to their uncertainties).
Analogous to the case of scalar fields on a sphere (see Sect.~5.5 of \citeads{DPACP-128}),
$V_\mu(\theta)$ is related to the VSH expansion of the vector field $\vec{\mu}(\alpha,\delta)$.
In particular, the glide vector $\vec{g}$ gives a contribution of the form
\begin{equation}\label{V-glide}
V^{\rm glide}_\mu(\theta)=|\vec{g}|^2\,{1\over 6}\,\left(\cos^2\theta+1\right) \, .
\end{equation}
Using this expression and the $V_\mu(\theta)$ of \gedr{3} we obtain an estimate
of $|\vec{g}|$ in reasonable agreement with the results from the VSH fit discussed
in the next section. However, it is obvious from the plot in \citet{DPACP-128}
that the angular covariance function contains other large-scale
components that could bias this estimate as they are not included in the fit.
This reinforces the argument made earlier in this section, namely that the
estimation of the glide components from the proper motion data should not be
done in isolation, but simultaneously with the estimation of other large-scale
patterns. This is exactly what is achieved by means of the VSH expansion.
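For completeness, a one-parameter fit of Eq.~(\ref{V-glide}) to a tabulated angular
covariance function might look as follows. This is a rough sketch of our own; as
stressed above, ignoring the other large-scale terms makes such an estimate of
$|\vec{g}|$ potentially biased:
\begin{verbatim}
import numpy as np

def glide_amplitude_from_vmu(theta, V_mu):
    """theta in radians, V_mu in (proper-motion unit)^2;
    returns |g| in the corresponding proper-motion unit."""
    basis = (np.cos(theta)**2 + 1.0) / 6.0
    g2 = basis @ V_mu / (basis @ basis)   # linear LSQ for |g|^2
    return np.sqrt(max(g2, 0.0))
\end{verbatim}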
\section{Selection of Gaia sources}
\label{sec:selection}
\subsection{QSO-like sources}
\label{sec:qso-like}
\textit{Gaia} Early Data Release 3 (EDR3; \citealt{DPACP-130})
provides high-accuracy astrometry for over 1.5~billion sources, mainly
galactic stars. However, there are good reasons to believe that a few
million sources are QSOs and other extragalactic sources
that are compact enough for \textit{Gaia}\ to obtain good astrometric
solutions. These sources are hereafter referred to as `QSO-like sources'.
As explained in Sect.~\ref{sec:stars} it is only the QSO-like sources
that can be used to estimate the acceleration of the solar system.
Eventually, in later releases of \textit{Gaia}\ data, we will be able
to provide astrophysical classification of the sources and thus find
all QSO-like sources based only on \textit{Gaia}'s own data. EDR3 may be the last
\textit{Gaia}\ data release that needs to rely on external information to
identify the QSO-like sources in the main catalogue of the release. To this
end, a cross-match of the full EDR3 catalogue was performed with 17 external QSO and
AGN catalogues. The matched sources were then further filtered
to select astrometric solutions of sufficient quality in EDR3 and
to have parallaxes and proper motions compatible with zero within five
times the respective uncertainty. In this way, the contamination
of the sample by stars is reduced, even though it may also exclude some genuine
QSOs. It is important to recognise that the rejection based on significant
proper motions does not interfere with the systematic proper motions
expected from the acceleration, the latter being about two orders of
magnitude smaller than the former. Various additional tests were
performed to avoid stellar contamination as much as possible. As a
result, EDR3 includes $1\,614\,173$ sources that were identified as
QSO-like; these are available in the \textit{Gaia}\ Archive as the table
{\tt agn\_cross\_id}. The full details of the selection procedure, together
with a detailed description of the resulting \textit{Gaia}-CRF3, will be
published elsewhere \citep{DPACP-133}.
In \textit{Gaia}\ EDR3 the astrometric solutions for the individual sources are
of three different types \citep{DPACP-128}:
\begin{itemize}
\item two-parameter solutions, for which only a mean position is provided;
\item five-parameter solutions, for which the position (two coordinates),
parallax, and proper motion (two components) are provided;
\item six-parameter solutions, for which an astrometric estimate (the `pseudocolour') of the effective wavenumber
\footnote{The effective wavenumber $\nu_\text{eff}$ is the mean value
of the inverse wavelength $\lambda^{-1}$, weighted by the detected photon flux
in the \textit{Gaia} passband $G$. This quantity is extensively used to model
colour-dependent image shifts in the astrometric instrument of \textit{Gaia}.
An approximate relation between $\nu_\text{eff}$ and the colour index
$G_\text{BP}-G_\text{RP}$ is given in \citet{DPACP-128}. The values
$\nu_\text{eff}=1.3$, 1.6, and 1.9 roughly correspond to, respectively,
$G_\text{BP}-G_\text{RP}= 2.4$, 0.6, and $-0.5$.\label{footnote-pseudocolor}}
is provided together with the five astrometric parameters.
\end{itemize}
Because of the astrometric filtering mentioned above, the
\textit{Gaia}-CRF3\ sources only belong to the last two types of
solutions: more precisely the selection comprises $1\,215\,942$
sources with five-parameter solutions and $398\,231$ sources with
six-parameter solutions. Table~\ref{tab:gaiacrf3-characteristics}
gives the main characteristics of these sources. The
\textit{Gaia}-CRF3\ sources with six-parameter solutions are typically
fainter, redder, and have somewhat lower astrometric quality
(as measured by the re-normalised unit weight error, RUWE) than those
with five-parameter solutions.%
\footnote{The RUWE \citep{DPACP-128} is a measure
of the goodness-of-fit of the five- or six-parameter model to the observations
of the source. The expected value for a good fit is 1.0. A higher value
could indicate that the source is not point-like at the optical resolution of \textit{Gaia}
($\simeq 0.1''$), or has a time-variable structure.\label{fn:RUWE}}
Moreover, various studies of the astrometric quality of EDR3
\citep[e.g.][]{DPACP-126,DPACP-128,DPACP-132}
have demonstrated
that the five-parameter solutions generally have smaller systematic errors,
at least for $G>16$, that is for most QSO-like sources. In the following
analysis we include only the $1\,215\,942$ \textit{Gaia}-CRF3\ sources with
five-parameter solutions.
\begin{table*}
\caption{Characteristics of the \textit{Gaia}-CRF3\ sources.
\label{tab:gaiacrf3-characteristics}}
\footnotesize\setlength{\tabcolsep}{6pt}
\begin{center}
\begin{tabular}{crcccccc}
\hline\hline
\noalign{\smallskip}
\multicolumn{1}{c}{type} & \multicolumn{1}{c}{number} & \multicolumn{1}{c}{$G$} &\multicolumn{1}{c}{BP$-$RP} & \multicolumn{1}{c}{$\nu_{\rm eff}$} &\multicolumn{1}{c}{RUWE} & \multicolumn{1}{c}{$\sigma_{\mu_{\alpha*}}$} & \multicolumn{1}{c}{$\sigma_{\mu_{\delta}}$}\\
\multicolumn{1}{c}{of solution} & \multicolumn{1}{c}{of sources} & \multicolumn{1}{c}{[mag]} &\multicolumn{1}{c}{[mag]} & \multicolumn{1}{c}{[$\mu{\rm m}^{-1}$]} &\multicolumn{1}{c}{} & \multicolumn{1}{c}{[$\ensuremath{\,\mu\text{as\,yr}^{-1}}$]} & \multicolumn{1}{c}{[$\ensuremath{\,\mu\text{as\,yr}^{-1}}$] }\\
\noalign{\smallskip}\hline\noalign{\smallskip}
five-parameter & 1\,215\,942 & 19.92 & 0.64 & 1.589 & 1.013 & 457 & 423 \\
six-parameter & 398\,231 & 20.46 & 0.92 & -- & 1.044 & 892 & 832 \\
all & 1\,614\,173 & 20.06 & 0.68 & -- & 1.019 & 531 & 493 \\
\noalign{\smallskip}
\hline
\end{tabular}
\tablefoot{Columns~3--8 give median values
of the $G$ magnitude, the BP$-$RP colour index, the effective
wavenumber $\nu_{\rm eff}$ (see footnote~\ref{footnote-pseudocolor}; only available for the five-parameter
solutions), the astrometric quality indicator RUWE (see footnote~\ref{fn:RUWE}),
and the uncertainties of the equatorial proper
motion components in $\alpha$ and $\delta$.
The last line (`all') is for the whole set of \textit{Gaia}-CRF3\ sources.
In this study only the sources with five-parameter solutions are used.
}
\end{center}
\end{table*}
Important features of these sources are displayed in
Figs.~\ref{fig:sky-distribution-5p} and \ref{fig:histograms-5p}. The
distribution of the sources is not homogeneous on the sky, with densities
ranging from 0 in the galactic plane to 85 sources per square
degree, and an average density of 30~deg$^{-2}$. The distribution of
\textit{Gaia}-CRF3\ sources primarily reflects the sky inhomogeneities of the
external QSO/AGN catalogues used to select the sources.
In addition, to reduce the risk of source confusion in crowded areas,
the only cross-matching made in the galactic zone ($\left|\sin
b\right|<0.1$, with $b$ the galactic latitude) was with the VLBI
quasars, for which the risk of confusion is negligible thanks to their
accurate VLBI positions. One can hope that future releases of the
\textit{Gaia}-CRF\ will substantially improve the homogeneity and remove this
selection bias (although a reduced source density near the galactic
plane may persist due to extinction).
As discussed below, our method for estimating
the solar system acceleration from proper motions of the
\textit{Gaia}-CRF3\ sources involves an expansion of the vector field of
proper motions in a set of functions that are orthogonal on the sphere.
It is then advantageous if the data points
are distributed homogeneously on the sky. However, as shown in
Sect.~7.3 of \citeads{2012A&A...547A..59M}, what is important is not
the `kinematical homogeneity' of the sources on the sky (how many
per unit area), but the `dynamical homogeneity': the distribution of the
statistical weight of the data points over the sky (how much weight
per unit area). This distribution is shown in
Fig.~\ref{fig:sky-distribution-5p-information}.
For a reliable measurement of the solar system acceleration it is
important to have the cleanest possible set of QSO-like sources. A
significant stellar contamination may result in a
systematic bias in the estimated acceleration (see Sect.~\ref{sec:stars}).
In this context the histograms of the normalised parallaxes and proper motions
in Fig.~\ref{fig:pm-5p-histograms} are a useful diagnostic.
For a clean sample of extragalactic
QSO-like sources one expects that the distributions of the
normalised parallaxes and proper motions are normal distributions with
(almost) zero mean and standard deviation (almost) unity. Considering
the typical uncertainties of the proper motions of over $400\ensuremath{\,\mu\text{as\,yr}^{-1}}$
as given in Table~\ref{tab:gaiacrf3-characteristics} it is clear that
the small effect of the solar system acceleration can be ignored in
this discussion. The best-fit normal distributions for the normalised
parallaxes and proper motions, shown by the red lines in
Fig.~\ref{fig:pm-5p-histograms}, indeed agree remarkably well with the
actual distribution of the data. The best-fit Gaussian distributions
have standard deviations of 1.052, 1.055, and 1.063, respectively, for
the parallaxes ($\varpi$), proper motions in right ascension
($\mu_{\alpha*}$), and proper motions in declination ($\mu_\delta$). Small
deviations from normal distributions (note the logarithmic scale of
the histograms) can result both from statistical fluctuations
in the sample and from some stellar contamination. One can conclude that
the level of contamination is probably very low.
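The corresponding diagnostic is straightforward to reproduce. The sketch below is
ours and assumes the standard \textit{Gaia}\ archive column names; it simply checks
that the normalised quantities are close to a standard normal distribution:
\begin{verbatim}
import numpy as np

def normalised_dispersions(tab):
    """tab: table with Gaia columns parallax, pmra, pmdec and the
    corresponding *_error columns.  For a clean QSO-like sample the
    returned means and standard deviations should be close to 0 and 1."""
    stats = {}
    for q in ('parallax', 'pmra', 'pmdec'):
        z = np.asarray(tab[q]) / np.asarray(tab[q + '_error'])
        stats[q] = (np.mean(z), np.std(z))
    return stats
\end{verbatim}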
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1\hsize]{Figures/agis32-reliable31-density-skymap-galactic.pdf}
\caption{Distribution of the \textit{Gaia}-CRF3\ sources with five-parameter
solutions. The plot shows the density of sources per
square degree computed from the source counts per pixel using
HEALPix of level 6 (pixel size $\sim0.84$\,deg$^2$). This and
following full-sky maps use a Hammer–Aitoff projection in galactic
coordinates with $l = b = 0$ at the centre, north up, and $l$
increasing from right to left.}
\label{fig:sky-distribution-5p}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.00\hsize]{Figures/agis32-reliable31-properMotionInformation-skymap-galactic.pdf}
\caption{Distribution of the statistical weights of the proper motions of
the \textit{Gaia}-CRF3\ sources with five-parameter
solutions. Statistical weight is calculated as the sum of
$\sigma_{\mu_{\alpha*}}^{-2}+\sigma_{\mu_{\delta}}^{-2}$
in pixels at HEALPix level~6.}
\label{fig:sky-distribution-5p-information}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.00\hsize]{Figures/agis32-reliable31-gMag-hist.pdf}
\includegraphics[width=1.00\hsize]{Figures/agis32-reliable31-nuEffUsedInAstrometry-hist.pdf}
\includegraphics[width=1.00\hsize]{Figures/agis32-reliable31-ruwe-hist.pdf}
\caption{Histograms of some important characteristics of the \textit{Gaia}-CRF3\ sources
with five-parameter solutions. From top to bottom: $G$ magnitudes,
colours represented by the effective wavenumber $\nu_{\rm eff}$ (see footnote~\ref{footnote-pseudocolor}),
and the astrometric quality indicator RUWE (see footnote~\ref{fn:RUWE}).
}
\label{fig:histograms-5p}
\end{center}
\end{figure}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=1.00\hsize]{Figures/agis32-reliable31-normalizedParallax-hist.pdf}
\includegraphics[width=1.00\hsize]{Figures/agis32-reliable31-normalizedMuAlphaStar-hist.pdf}
\includegraphics[width=1.00\hsize]{Figures/agis32-reliable31-normalizedMuDelta-hist.pdf}
\caption{Distributions of the normalised parallaxes $\varpi/\sigma_\varpi$ (upper panel),
proper motions in right ascension $\mu_{\alpha*}/\sigma_{\mu_{\alpha*}}$
(middle panel), and proper motions in declination $\mu_{\delta}/\sigma_{\mu_{\delta}}$ (lower panel)
for the \textit{Gaia}-CRF3\ sources with five-parameter solutions. The red lines show the corresponding best-fit Gaussian distributions.
}
\label{fig:pm-5p-histograms}
\end{center}
\end{figure}
\subsection{Stars of our Galaxy}
\label{sec:stars}
The acceleration of the solar system also affects the observed proper motions
of stars, albeit in a more complicated way than for the distant extragalactic sources.%
\footnote{For the proper motion of a star it is only the differential (tidal) acceleration
between the solar system and the star that matters.}
Here, however, it is masked by other, much larger effects, and this
section explains why it is not useful
to look for the effect in the motions of galactic objects.
The expected size of the galactocentric acceleration term is of the
order of $5\ensuremath{\,\mu\text{as\,yr}^{-1}}$ (Sect.~\ref{sec:expectation}). The
galactic rotation and shear effects are of the order of 5--$10\ensuremath{\,\text{mas\,yr}^{-1}}$,
i.e.\ over a thousand times bigger. In the Oort approximation they do
not contain a glide-like component, but any systematic difference
between the solar motion and the bulk motion of some stellar
population produces a glide-like proper-motion pattern over the
whole sky. Examples of this are the solar apex motion (pointing away
from the apex direction in Hercules, $\alpha\simeq 270\degr$,
$\delta\simeq 30\degr$) and the asymmetric drift of old stars (pointing
away from the direction of rotation in Cygnus, $\alpha\simeq 318\degr$,
$\delta\simeq 48\degr$). Since these two directions -- by pure chance
-- are only $\sim40\degr$ apart on the sky, the sum of their effects
will be in the same general direction.
But both are distance dependent, i.e.\ the size of the glide strongly
depends on the stellar sample used. The asymmetric drift is, in
addition, age dependent. Both effects attain the same order of
magnitude as the Oort terms at a distance of the order of 1~kpc. That
is, like the Oort terms they are of the order of a thousand times bigger than
the acceleration glide. Because of this huge difference in size, and the
strong dependence on the stellar sample, it is in practice impossible to
separate the tiny acceleration effect from the kinematic patterns.
Some post-Oort terms in the global galactic kinematics (e.g.\ a
non-zero second derivative of the rotation curve) can produce a big
glide component, too. And, more importantly, any asymmetries of the
galactic kinematics at the level of 0.1\% can create glides in more or less
random directions and with sizes far above the acceleration
term. Examples are halo streams in the solar vicinity, the tip of the
long galactic bar, the motion of the disk stars through a spiral wave
crest, and so on.
For all these reasons it is quite obvious that there is no hope of
discerning an effect of $5\ensuremath{\,\mu\text{as\,yr}^{-1}}$ amongst chaotic structures of the
order of $10\ensuremath{\,\text{mas\,yr}^{-1}}$ in stellar kinematics.
In other words, we cannot use galactic objects
to determine the glide due to the acceleration of the solar system.
As a side remark we mention that there is a very big
($\simeq 6\ensuremath{\,\text{mas\,yr}^{-1}}$) direct effect of the galactocentric acceleration in the
proper-motion pattern of stars on the galactic scale: it is not a
glide but the global rotation which is represented by the minima in
the well-known textbook double wave of the proper motions $\mu_{l*}$ in
galactic longitude $l$ as a function of $l$. But this is of no relevance
in connection with the present study.
\section{Spherical coordinates and transformation bias}
\label{sec:unbiased}
In Sect.~\ref{sec:analysis} the solar system acceleration vector was estimated in
the equatorial and galactic reference systems. The main result was given in the form
of the three Cartesian components of the vector and their covariance matrix. We also gave
the result in the form of the modulus (length) of the acceleration vector and the
spherical coordinates $(\alpha,\,\delta)$ or $(l,\,b)$ of its direction, the latter to facilitate
a direct comparison with the expected pointing roughly towards the Galactic centre.
While the least-squares solution for the Cartesian components of the vector naturally
yields unbiased estimates, it does not automatically imply that transformed estimates,
such as the modulus and spherical coordinates, are unbiased. If the transformation
is non-linear, as is clearly the case here, the transformed quantities are in general biased.
Because the discussion has more general applications than the specific problem in this
paper, we use generic notations in the following.
Consider the multivariate distribution of a vector $\vec{x}$ in $\mathbb{R}^n$
with modulus $r=(\vec{x}^\intercal\vec{x})^{1/2}$. We use $\vec{x}_0=\text{E}(\vec{x})$
for the true value of the vector, and $r_0 =(\vec{x}_0^\intercal\vec{x}_0)^{1/2}$
for the true value of its modulus. The covariance matrix of $\vec{x}$ is
$\vec{C} =\text{E}(\vec{\xi}\vec{\xi}^\intercal)$, where $\vec{\xi}=\vec{x}-\vec{x}_0$ is
the deviation from the true vector.
We take $\vec{x}$ to represent our (unbiased) estimate of $\vec{x}_0$ and assume
that $\vec{C}$ is exactly known. Making the arbitrary transformation $y=f(\vec{x})$
of the estimate, the bias in $y$ can be understood as
$\text{E}(f(\vec{x}))-f(\text{E}(\vec{x}))=\text{E}(y)-f(\vec{x}_0)$.
This is zero if $f$ is linear, but in general non-zero for non-linear $f$. It should
be noted that the bias in general depends on the true vector $\vec{x}_0$, and
therefore may not be (exactly) computable in terms of the known quantities
$\vec{x}$ and $\vec{C}$.
Let us first consider the square of the modulus, that is $r^2=\vec{x}^\intercal\vec{x}$.
Putting $\vec{x}=\vec{x}_0+\vec{\xi}$ we have
\begin{equation}\label{eq:Ex2}
\begin{split}
\text{E}\bigl(r^2\bigr) &= \text{E}\bigl[(\vec{x}_0+\vec{\xi})^\intercal(\vec{x}_0+\vec{\xi})\bigr]\\
&= \text{E}\bigl[\vec{x}_0^\intercal\vec{x}_0+\vec{x}_0^\intercal\vec{\xi}+
\vec{\xi}^\intercal\vec{x}_0+\vec{\xi}^\intercal\vec{\xi}\bigr]\\
&= r_0^2 + \text{tr}\bigl(\vec{C}\bigr)\,,
\end{split}
\end{equation}
since $\text{E}\bigl(\vec{\xi}\bigr)=\vec{0}$ and
$\text{E}\bigl(\vec{\xi}^\intercal\vec{\xi}\bigr)=\text{tr}\bigl(\vec{C}\bigr)$.
In this case the bias is exactly computable: an unbiased estimate of $r_0^2$ is given
by $r^2-\text{tr}\bigl(\vec{C}\bigr)$. Note, however, that this estimate will sometimes
be negative: not always a convenient result!
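Equation~(\ref{eq:Ex2}) is easy to verify by simulation. A minimal Monte Carlo
check (ours, purely illustrative, with arbitrary numbers):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
x0 = np.array([3.0, 1.0, 2.0])
C = np.diag([0.5, 0.2, 0.1])
x = rng.multivariate_normal(x0, C, size=200000)
r2 = np.einsum('ij,ij->i', x, x)
print(r2.mean())               # close to 14.8
print(x0 @ x0 + np.trace(C))   # exactly 14.8
\end{verbatim}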
Considering now the modulus $r=(\vec{x}^\intercal\vec{x})^{1/2}$, we have to
second order in the deviations $\vec{\xi}$,
\begin{equation}\label{eq:modulus}
\begin{split}
r&=\bigl(\vec{x}_0^\intercal\vec{x}_0+\vec{x}_0^\intercal\vec{\xi}+
\vec{\xi}^\intercal\vec{x}_0+\vec{\xi}^\intercal\vec{\xi}\bigr)^{1/2}\\
&= r_0
+\frac{1}{2}\frac{(\vec{x}_0^\intercal\vec{\xi}+\vec{\xi}^\intercal\vec{x}_0)}{r_0}
+\frac{1}{2}\frac{\vec{\xi}^\intercal\vec{\xi}}{r_0}
-\frac{1}{8}\frac{(\vec{x}_0^\intercal\vec{\xi}+\vec{\xi}^\intercal\vec{x}_0)^2}{r_0^3}
+ \mathcal{O}(\xi^3)\\
&= r_0
+\frac{\vec{x}_0^\intercal\vec{\xi}}{r_0}
+\frac{1}{2}\frac{\vec{\xi}^\intercal\vec{\xi}}{r_0}
-\frac{1}{2}\frac{\vec{x}_0^\intercal\bigl(\vec{\xi}\vec{\xi}^\intercal\bigr)\vec{x}_0}{r_0^3}
+ \mathcal{O}(\xi^3)\,,
\end{split}
\end{equation}
where in the last equality we used the general properties of scalar products,
$\vec{v}^\intercal\vec{w}=\vec{w}^\intercal\vec{v}$ and
$(\vec{v}^\intercal\vec{w})^2=\vec{v}^\intercal\bigl(\vec{w}\vec{w}^\intercal\bigr)\vec{v}
=\vec{w}^\intercal\bigl(\vec{v}\vec{v}^\intercal\bigr)\vec{w}$.
Taking now the expectation of Eq.~(\ref{eq:modulus}) gives
\begin{equation}\label{eq:expand3}
\text{E}(r) = r_0 + \frac{1}{2}\frac{ \text{tr}(\vec{C})}{r_0}
-\frac{1}{2} \frac{\vec{x}_0^\intercal\vec{C}\vec{x}_0}{r_0^3} + \mathcal{O}(\xi^3)\,.
\end{equation}
In contrast to Eq.~(\ref{eq:Ex2}), the truncated expression in Eq.~(\ref{eq:expand3})
is only approximate, and moreover depends on the unknown quantities $r_0$ and $\vec{x}_0$.
A useful correction for the bias may nevertheless be computed by inserting
the estimated quantities $r$ and $\vec{x}$ for $r_0$ and $\vec{x}_0$; thus
\begin{equation}\label{eq:biasCorrection}
r_0 \simeq r - \frac{1}{2}\frac{ \text{tr}(\vec{C})}{r}
+\frac{1}{2} \frac{\vec{x}^\intercal\vec{C}\vec{x}}{r^3}\,.
\end{equation}
We can assume that this formula may be useful as long as the bias correction is
small in comparison with $r$.
Equation~(\ref{eq:biasCorrection}) can be made more explicit in terms of the
Cartesian components. In the three-dimensional case of interest here we have
\begin{equation}\label{eq:biasCorrection3D}
\begin{split}
r_0 \simeq r
&-\frac{r^2-x^2}{r^3}\frac{\sigma^2_x}{2}-\frac{r^2-y^2}{r^3}\frac{\sigma^2_y}{2} -
\frac{r^2-z^2}{r^3}\frac{\sigma^2_z}{2}\\[3pt]
&+\frac{xy}{r^3}C_{xy}+\frac{yz}{r^3}C_{yz}+\frac{zx}{r^3}C_{zx} \, .
\end{split}
\end{equation}
In the simplest case of isotropic errors, $\sigma^2_x =\sigma^2_y =\sigma^2_z =\sigma^2$
and $C_{xy}=C_{yz}=C_{zx}=0$, this gives
\begin{equation}
r_0 \simeq r - \frac{\sigma^2}{r} \, .
\end{equation}
Interestingly, this correction is approximately $2/3$ of the correction obtained
by taking the square root of the unbiased estimate of $r_0^2$:
$\sqrt{r^2-\text{tr}\bigl(\vec{C}\bigr)}\simeq r - 3\sigma^2/2r$.
One can note that all the expressions derived thus far are invariant under a rotation of the
reference frame, since the trace of $\vec{C}$ is invariant, and the quadratic form
$\vec{x}^\intercal\vec{C}\vec{x}$ is also invariant when both $\vec{x}$ and $\vec{C}$
are expressed in the new frame.
Applied to the results of Table~\ref{tab:results}, where $|\vec{g}|=5.05\ensuremath{\,\mu\text{as\,yr}^{-1}}$
and the errors are nearly isotropic with $\sigma\simeq 0.35\ensuremath{\,\mu\text{as\,yr}^{-1}}$, we find an estimated
bias of about $+0.024\ensuremath{\,\mu\text{as\,yr}^{-1}}$. That is, our estimate of the amplitude of the glide is statistically
too large by about 0.5\%, an amount much smaller than the random uncertainty of the
amplitude. Although the bias is small in this case, it is important to draw attention to the
potential impact that non-linear transformations can have on the estimates.
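The bias correction of Eq.~(\ref{eq:biasCorrection}) is simple enough to implement
directly. The following sketch is ours; it also reproduces the
$+0.024\ensuremath{\,\mu\text{as\,yr}^{-1}}$ figure quoted above in the isotropic approximation:
\begin{verbatim}
import numpy as np

def debiased_modulus(x, C):
    """Second-order bias correction of Eq. (eq:biasCorrection):
    r0 ~ r - tr(C)/(2r) + x^T C x / (2 r^3)."""
    r = np.linalg.norm(x)
    return r - np.trace(C) / (2.0 * r) + x @ C @ x / (2.0 * r**3)

# isotropic check: bias ~ sigma^2 / r = 0.35^2 / 5.05 ~ 0.024 uas/yr
print(0.35**2 / 5.05)
\end{verbatim}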
It is possible to apply the same mathematical methodology to the estimation of potential
biases in the spherical coordinates $(\alpha,\,\delta)$ or $(l,\,b)$ representing the direction
of the vector $\vec{x}$. However, this would be a purely academic exercise, for it is not
clear what is meant by a bias in estimated angles such as $\alpha$ or $\delta$.
We refrain from giving the corresponding formulae, lest they should be used improperly.
For one thing, they are not invariant to a rotation of the reference frame, so the
`corrected' spherical coordinates in the equatorial and galactic systems give
slightly different positions on the sky. What is needed to complement the (unbiased)
estimate of the modulus of the vector is an unbiased estimate of its direction,
which cannot reasonably depend on the chosen reference frame. We believe that
the unbiased direction is most simply given by the unit vector $\vec{x}/r$, expressed
in its Cartesian components or spherical coordinates.
For a trivariate Gaussian error distribution, this direction has the appealing property
that any plane containing the direction bisects the distribution in two equal parts;
in other words, there is an equal probability that the true direction is on either side
of the plane.
\subsection{Ideal Liouville domains in their own right}
\begin{definition}[Ideal Liouville Domains] \label{d:ild}
An \emph{ideal Liouville domain} $(F,\omega)$ is a domain $F$ endowed with an
ideal Liouville structure $\omega$. This \emph{ideal Liouville structure} is an
exact symplectic form on $\Int F$ admitting a primitive $\lambda$ such that: For
some (and then any) function $u \map F \to \R_{\ge0}$ with regular level set
$\del F = \{u=0\}$, the product $u \lambda$ extends to a smooth $1$-form on $F$
which induces a contact form on $\del F$.
A $1$-form $\lambda$ as above is called an \emph{ideal Liouville form}. Its dual
vector field $\dvf\lambda$ is an \emph{ideal Liouville field}.
\end{definition}
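For orientation, here is a minimal example (ours, added for illustration). On the
closed unit disk $F = \{x^2+y^2 \le 1\} \subset \R^2$, take $u = 1-x^2-y^2$ and
$$ \lambda = \frac{x\, dy - y\, dx}{2\,(1-x^2-y^2)}\,, \qquad
\omega = d\lambda = \frac{dx \wedge dy}{(1-x^2-y^2)^2}\,. $$
Then $u\lambda = \frac{1}{2}(x\, dy - y\, dx)$ extends smoothly over $F$ and restricts
to the nowhere vanishing (hence contact) form $\frac{1}{2}\, d\theta$ on $\del F$, so
$(F,\omega)$ is an ideal Liouville domain even though $\omega$ itself blows up along
the boundary.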
Liouville forms in ideal Liouville domains are analogous to contact forms on
contact manifolds: They exist, and it is sometimes useful to choose one, but
this choice is most often unimportant. In this analogy, ideal Liouville fields
correspond to Reeb fields though their dynamics are very different: The latter
are Hamiltonian while the former expand the symplectic form ---~and hence the
volume~--- exponentially. Speaking of contact manifolds, one of the striking
features of ideal Liouville domains is:
\begin{proposition}[The Boundary Contact Structure] \label{p:contact}
Let $(F,\omega)$ be an ideal Liouville domain. Then the boundary $K := \del F$
has a positive contact structure $\xi$, uniquely determined by $\omega$, which
is left invariant by any diffeomorphism of $F$ preserving $\omega$. Moreover,
the positive equations of $\xi$ are in one-to-one correspondence with the
negative sections of the conormal bundle of~$K$.
\end{proposition}
As a consequence of the latter claim, every symplectic diffeomorphism $\phi$ of
$(F,\omega)$ (meaning a diffeomorphism of $F$ whose restriction to the interior
preserves $\omega$) which is relative to the boundary (in the sense that its
restriction to $\del F$ is the identity) is actually tangent to the identity at
all points of $\del F$. Indeed, $\phi$ preserves the equations of the boundary
contact structure and hence acts trivially on the conormal bundle of $\del F$.
\begin{proof}
Let $\lambda$ be a Liouville form and $u \map F \to \R_{\ge0}$ a function with
regular level set $K = \{u=0\}$. By assumption, $u \lambda$ extends to a smooth
$1$-form $\beta$ on $F$ which induces a contact form on $K$. Write
$$ \omega = d\lambda = d(\beta/u)
= u^{-2} (u\, d\beta + \beta \wedge du). $$
This formula demonstrates that $u^2\omega$ extends to a smooth $2$-form $\gamma$
on $F$ which depends on $u$ only up to a conformal factor. Now, along $K$, the
form $\gamma$ has rank $2$, and its kernel is the contact structure $\xi$ on $K$
defined by $\beta$. It follows that $\xi$ is independent of the choice of
$\lambda$ and~$u$. Moreover, the identity $\gamma = \beta \wedge du$ along $K$ shows that
$\beta \rst K$ is also independent of $\lambda$ and is uniquely (and pointwise
linearly) determined by $du$ viewed as a section of the conormal bundle of $K$.
\end{proof}
Now recall that the symplectization of a contact manifold $(K,\xi)$ is the
symplectic submanifold $SK$ of $T^*K$ consisting of non-zero covectors $\beta_p
\in T^*_pK$, $p \in K$, whose cooriented kernel is $\xi_p$ (contact structures
are cooriented in this text). We denote by $\lambda_\xi$ the $1$-form induced on
$SK$ by the canonical Liouville form of $T^*K$. We also define the ``projective
completion'' of $SK$ as the quotient
$$ \ol{SK} := (SK \times \R_{\ge0}) \big/ \R_{>0} $$
where $\R_{>0}$ acts (freely, properly and) diagonally by multiplication. Thus,
$\ol{SK}$ is a smooth manifold with boundary obtained by attaching a copy of
$K = SK/\R_{>0}$ to $SK = (SK \times \R_{>0}) / \R_{>0}$.
\begin{proposition}[Ideal Liouville Fields] \label{p:lioufields}
Let $(F,\omega)$ be an ideal Liouville domain, $(K,\xi)$ its contact boundary,
and $\lambda$ an ideal Liouville form in $\Int F$.
\itemup{a)}
The Liouville field $\dvf\lambda$ is complete and the singular foliation spanned
by $\dvf\lambda$ extends to a foliation of $F$ which is non-singular along $K$
and transverse to $K$. We denote by $U$ the open collar neighborhood of $K$
consisting of all extended leaves reaching $K$.
\itemup{b)}
There exists a unique embedding $\iota = \iota_\lambda \map \ol{SK} \to F$ such
that $\iota \rst K = \id$ and $\iota^*\lambda = \lambda_\xi$; its image is the
open collar neighborhood $U$.
\end{proposition}
\begin{proof}
Let $u \map F \to \R_{\ge0}$ be a function with regular level set $K = \{u=0\}$
and $\beta$ the form extending $u \lambda$ over~$F$. For $n = \frac12 \dim F$,
\begin{multline*}
\omega^n = \bigl(d (\beta / u)\bigr)^n
= u^{-2n} (u\, d\beta + \beta \wedge du)^n \\
= u^{-n-1} (u\, d\beta + n \beta \wedge du) \wedge (d\beta)^{n-1}
= u^{-n-1} \mu
\end{multline*}
where $\mu := (u\, d\beta + n\beta \wedge du) \wedge (d\beta)^{n-1}$ is a volume
form on~$F$.
Let $\nu$ denote the vector field on $F$ given by $\nu \hook \mu = n\beta \wedge
(d\beta)^{n-1}$. Since $\beta$ induces a positive contact form on the boundary,
$\nu$ is non-singular along $K$ and points transversely outwards (specifically,
$\nu \cdot u = -1$ by the very definition of things). On the other hand, in the
interior of $F$,
$$ n \beta \wedge (d\beta)^{n-1}
= n u^n \lambda \wedge (d\lambda)^{n-1}
= u^n \dvf\lambda \hook \omega^n
= u^{-1}\, (\dvf\lambda \hook \mu). $$
Comparing this relation with the definition of $\nu$, we get $\dvf\lambda = u
\nu$. Since $u$ vanishes along $K$, the vector field $\dvf\lambda$ is complete
and the foliation it defines extends to the foliation spanned by $\nu$. This
proves Part a).
As for Part b), first note that if the embedding $\iota$ exists then it maps the
standard Liouville field $\dvf{\lambda_\xi}$ of $SK$ to $\dvf\lambda$, and so its
image has to be $U$. Now observe that the holonomy of the foliation spanned by
$\nu$ yields a projection $U \to K$ and, for any point $p \in U-K$ projecting to
$q \in K$, identifies $\lambda_p \in T_p^*U$ with a covector in $T_q^*K$ whose
cooriented kernel equals $\xi_q$ (just because the holonomy preserves the kernel
of $\beta = u\lambda$). Thus, we have a smooth map $U \to \ol{SK}$ which is the
identity on $K$. The expansion properties of the flow of $\dvf\lambda$ imply
that this map is a diffeomorphism, and we define $\iota$ to be the inverse map.
The relation $\iota^*\lambda = \lambda_\xi$ follows from the very definition of
$\lambda_\xi$, and $\iota$ is unique because the identity of $K$ lifts to a
unique diffeomorphism of $SK$ preserving $\lambda_\xi$.
\end{proof}
\begin{corollary}[Ideal Liouville Forms] \label{c:liouforms}
On any ideal Liouville domain $(F,\omega)$, ideal Liouville forms constitute an
affine space. Given a function $u \map F \to \R_{\ge0}$ with regular level set
$\del F = \{u=0\}$, the underlying vector space can be described as consisting
of all closed $1$-forms $\kappa$ on $\Int F$ satisfying the following equivalent
conditions:
\begin{itemize}
\itemup{(i)}
The form $u\kappa$ extends to a smooth form on $F$.
\itemup{(ii)}
The vector field $\dvf\kappa/u$ extends to a smooth vector field on $F$
\ur(which is automatically tangent to $K := \del F$\ur).
\itemup{(iii)}
There exists a function $f \map F \to \R$ such that $\kappa - d (f \log u)$ is
the restriction of a closed $1$-form on $F$.
\end{itemize}
\end{corollary}
As a result, a Lagrangian submanifold $L \subset \Int F$ is exact for some ideal
Liouville form if and only if its Liouville class (with respect to an arbitrary
given ideal Liouville form) lies in the image of the natural map $H^1(F,\R) \to
H^1(L,\R)$.
\begin{proof}
The only (maybe) non-trivial claim which is not a straightforward consequence of
Propositions \ref{p:contact} and \ref{p:lioufields} is that (i) implies (iii).
So assume that $u \kappa$ extends to a smooth form $\gamma$ on $F$. In $\Int F$,
$$ 0 = d\kappa = d(\gamma/u) = u^{-2} (u\, d\gamma + \gamma \wedge du). $$
By continuity, $u\, d\gamma + \gamma \wedge du$ is identically zero on $F$, and
hence $\gamma \wedge du = 0$ along $K = \{u=0\}$. Thus, there exists a function
$f \map K \to \R$ such that $\gamma = f\, du$ along $K$. Extend $f$ (keeping its
name) to a function on $F$ and observe that the form
$$ \gamma - u\, d (f \log u) = \gamma - u \log u \, df - f\, du $$
extends to a $1$-form $\gamma'$ on $F$ which vanishes identically along $K$. It
follows that $\gamma' = u \kappa'$ where $\kappa'$ is a closed $1$-form on $F$.
\end{proof}
Another corollary is the following avatar of a standard lemma (see Lemma 1.1 and
the subsequent remark in \cite{BEE}):
\begin{corollary}[Exact Isotopies] \label{c:exact}
Let $(F,\omega)$ be an ideal Liouville domain and $\lambda_t$ $(t \in [0,1])$ a
path of ideal Liouville forms in $\Int F$. Then there is a symplectic isotopy
$\psi_t$ $(t \in [0,1])$ of $F$, relative to the boundary, such that $\psi_0 =
\id$ and, for every $t \in [0,1]$, the form $\psi_t^*\lambda_t - \lambda_0$ is
the differential of a function with compact support in $\Int F$.
\end{corollary}
Here the path $\lambda_t$ is assumed to be smooth in the sense that $\lambda_t =
\beta_t/u$ where $\beta_t$ $(t \in [0,1])$ is a smooth path of $1$-forms on $F$,
\emph{i.e.}\ a smooth $1$-form on $[0,1] \times F$ whose contraction with $\del_t$
is zero ($u$, as usual, is a non-negative function on $F$ with regular level set
$K := \del F = \{u=0\}$).
\begin{proof}
For $t \in [0,1]$, let $\iota_t \map \ol{SK} \to F$ be the unique embedding such
that $\iota_t \rst K = \id$ and $\iota_t^*\lambda_t = \lambda_\xi$, where $\xi$
is the boundary contact structure (cf.~Proposition \ref{p:lioufields}). Setting
$U_t := \iota_t (\ol{SK})$, we have an isotopy of embeddings
$$ \psi_t^0 := \iota_t \circ \iota_0^{-1} \map U := U_0 \to F $$
with the following properties:
\begin{itemize}
\item
$\psi_0^0 = \id$ and $\psi_t^0 \rst K = \id$ for all $t \in [0,1]$,
\item
$(\psi_t^0)^*\lambda_t = \lambda_0$ on $U-K$ for all $t \in [0,1]$.
\end{itemize}
Therefore, the time-dependent vector field $\eta_t^0$ on $U_t$ generating the
isotopy $\psi_t^0$ satisfies, for all $t \in [0,1]$,
\begin{itemize}
\item
$\eta_t^0 = 0$ along $K$, and
\item
$(\eta_t^0 \cdot \lambda_t) + \dot\lambda_t = 0$ on $U_t - K$.
\end{itemize}
Let $f_t^0 := \lambda_t(\eta_t^0)$ and denote by $\eta_t^1$ the time-dependent
locally Hamiltonian vector field on $\Int F$ given by $\eta_t^1 \hook \omega =
-\dot\lambda_t$. In $U_t-K$,
$$ (\eta_t^1 - \eta_t^0) \hook \omega = df_t^0 . $$
Now take a time-dependent function $f_t$ on $F$ equal to $f_t^0$ near $K$, and
consider the locally Hamiltonian vector field $\eta_t$ on $\Int F$ such that
$(\eta_t^1 - \eta_t) \hook \omega = df_t$. Since $\eta_t = \eta_t^0$ close to
the boundary, $\eta_t$ extends smoothly to a vector field on $F$ which vanishes
identically along $K$. On the other hand, in $\Int F$,
$$ (\eta_t \cdot \lambda_t) + \dot\lambda_t
= d \bigl(\lambda_t(\eta_t) - f_t\bigr), $$
and the function $\lambda_t(\eta_t) - f_t$ is zero on the neighborhood of $K$
where $f_t = f_t^0$ (and $\eta_t = \eta_t^0$). The desired isotopy $\psi_t$ is
obtained by integrating the vector field $\eta_t$.
\end{proof}
Ideal Liouville domains are stable in the following sense:
\begin{lemma}[Stability] \label{l:moser} \label{l:stability}
Let $F$ be a domain and $(\omega_t)$ $(t \in [0,1])$ a path of ideal Liouville
structures on $F$. Then there exists an isotopy $\phi_t$ $(t \in [0,1])$ of $F$
such that $\phi_0=\id$ and $\phi_t^*\omega_t = \omega_0$ for all $t \in [0,1]$.
Moreover, we can choose this isotopy relative to $K = \del F$ if ---~and clearly
only if~--- all forms $\omega_t$ induce the same boundary contact structure.
\end{lemma}
Here again, the required smoothness of the path $\omega_t$ is that there is a
smooth path $\beta_t$ of $1$-forms on $F$ such that $\omega_t = d(\beta_t/u)$.
\begin{proof}[Sketch of proof]
Due to the smoothness of the path $\omega_t$, the induced contact structures on
$K$ vary smoothly with $t$. Then, by Gray's stability theorem (and the obvious
fact that any isotopy of $K$ extends to an isotopy of $F$), it suffices to treat
the case when all forms $\omega_t$ induce the same boundary contact structure
$\xi$. Using Proposition \ref{p:lioufields}, we can further arrange that the
forms $\omega_t$ coincide near $K$ and, more specifically, have smoothly varying
ideal Liouville forms $\lambda_t$ which all agree in a neighborhood of $K$. Then
we conclude with Moser's standard argument.
\end{proof}
The next proposition is another expected and straightforward result relating the
symplectic geometry of (the interior of) an ideal Liouville domain $(F,\omega)$
with the contact geometry of its boundary $(K,\xi)$. The notations are as
follows:
\begin{itemize}
\item
$\DD (F, \omega)$ is the group of diffeomorphisms of $F$ preserving $\omega$,
\item
$\DDc (F,\omega) \subset \DD (F,\omega)$ is the subgroup of diffeomorphisms
fixing $K := \del F$ pointwise, and
\item
$\DD (K,\xi)$ is the group of diffeomorphisms of $K$ preserving $\xi$.
\end{itemize}
\begin{proposition}[Relations between Automorphism Groups] \label{p:fibration}
Let $(F,\omega)$ be an ideal Liouville domain with contact boundary $(K,\xi)$.
The restriction homomorphism
$$ \DD(F,\omega) \to \DD(K,\xi) $$
is a Serre fibration, with associated long exact sequence of homotopy groups
$$ \cdots \to \pi_k \DDc (F,\omega) \to \pi_k \DD (F,\omega) \to
\pi_k \DD (K,\xi) \to \pi_{k-1} \DDc (F,\omega) \to \cdots\,. $$
\end{proposition}
The homomorphism $\pi_1 \DD(K,\xi) \to \pi_0 \DDc(F,\omega)$ can be used to
define natural semigroups in the symplectic mapping class group $\MCG (F,\omega)
:= \pi_0 \DDc(F,\omega)$: An element in there is positive (resp.\ non-negative)
if it is the image of a positive (resp.\ non-negative) loop in $\DD (K,\xi)$.
When $(K,\xi)$ is a ``contact circle bundle'' (meaning that some Reeb flow
generates a free circle action), the image of the corresponding loop is the
mapping class of a ``fibered Dehn twist''.
\begin{proof}[Sketch of proof]
We merely explain how to lift paths. Let $\phi_0 \in \DD(F,\omega)$ and take a
path $\ch\phi_t \in \DD(K,\xi)$ $(t \in [0,1])$ starting with $\ch\phi_0 =
\phi_0 \rst K$. The contact isotopy $\ch\phi_t$ lifts to a Hamiltonian isotopy
$S\ch\phi_t$ in the symplectization $SK$. Pick an arbitrary ideal Liouville form
$\lambda$ and identify $\ol{SK}$ with the collar neighborhood $U = \iota_\lambda
(\ol{SK})$ of Proposition \ref{p:lioufields}. The path $S\ch\phi_t$ can then be
viewed as a Hamiltonian isotopy of $U$ extending $\ch\phi_t$. We obtain the path
$\phi_t$ by cutting off the corresponding Hamiltonian functions away from $K$
inside $U$, and by integrating the new Hamiltonian functions with $\phi_0$ as
the initial condition.
\end{proof}
We now describe the two main examples of ideal Liouville domains.
\begin{example}[\textit{in vivo}: Convex Hypersurfaces in Contact Manifolds]
\label{x:ild1}
Let $(V,\xi)$ be a contact manifold and $S$ a hypersurface in $V$ which is \emph
{$\xi$-convex}, meaning that $S$ is transverse to some contact vector field
$\nu$. Consider the ``dividing set''
$$ \Gamma := \{p \in S \with \nu_p \in \xi_p\} \subset S. $$
Then the closure of every relatively compact connected component of $S - \Gamma$
is naturally an ideal Liouville domain (see \cite[I.3-C]{Gi1}).
To see this, pick an equation $\alpha$ of $\xi$, set $u := \alpha(\nu)$ and note
that $\Gamma$ is the zero-set of $u \rst S$. We claim that $u \rst S$ vanishes
transversely. Indeed, the identity $du \rst \xi = -(\nu \hook d\alpha) \rst \xi$
(drawn from the Cartan formula for the Lie derivative) implies that $du \rst{
\xi \cap TS } \ne 0$ along $\Gamma$, and that $\Gamma$ is actually a contact
submanifold of $(V,\xi)$. Moreover, $\nu$ restricted to the open set $\{u\ne0\}
\subset V$ is the Reeb vector field of the contact form $\alpha/u$. Since $\nu$
is transverse to $S$, the differential $d(\alpha/u)$ induces a symplectic form
on $S - \Gamma$.
\end{example}
\begin{example}[\textit{in vitro}: Ideal Completion of a Liouville Domain]
\label{x:ild2}
Let $(F,\lambda)$ be a Liouville domain in the sense of Definition~\ref{d:ld},
and let $u \map F \to \R_{\ge0}$ be a function with the following properties:
\begin{itemize}
\item
$u$ admits $K := \del F$ as its regular level set $\{u=0\}$,
\item
$\dvf\lambda \cdot \log u < 1$ at every point in $\Int F$.
\end{itemize}
Then a simple calculation (already resorted to in the introduction) shows that
$\omega := d(\lambda/u)$ is a symplectic form on $\Int F$, and so $(F,\omega)$
is an ideal Liouville domain. Moreover, since the two conditions above define a convex
cone of functions $u$, it follows from Lemma \ref{l:moser} that, up to isotopy
relative to the boundary, the geometry of $(F,\omega)$ is independent of~$u$.
Taking $u$ non-increasing along the orbits of $\dvf\lambda$ and equal to $1$
outside the collar neighborhood of $K$ associated with $\lambda$ (by Proposition
\ref{p:lioufields}), we see that $(\Int F, \omega)$ is symplectically isomorphic
to the completion of $(F,\lambda)$. For this reason, $(F,\omega)$ is called the
\emph{ideal completion} of $(F,\lambda)$. It can be alternatively obtained by
gluing $K$ to the usual completion $(\wh F, \wh\lambda)$ in exactly the same way
as $\ol{SK}$ was constructed from $SK$.
\end{example}
To conclude this general discussion of ideal Liouville domains, here is the
product construction alluded to in the introduction:
\begin{proposition}[Product of Ideal Liouville Domains]
Let $(F_1,\omega_1)$ and $(F_2,\omega_2)$ be two ideal Liouville domains. Up to
isomorphism, there exists a unique ideal Liouville domain $(F,\omega)$ admitting
a diffeomorphism $\phi \map \Int F \to \Int (F_1 \times F_2)$ such that $\omega
= \phi^* (\omega_1 \oplus \omega_2)$ and, for any Liouville forms $\lambda_1$
and $\lambda_2$ on $F_1$ and $F_2$, respectively, $\phi^* (\lambda_1 \oplus
\lambda_2)$ is a Liouville form on $F$.
\end{proposition}
\begin{proof}
Clearly, $(\Int (F_1 \times F_2), \lambda_1 \oplus \lambda_2)$ is the (usual)
completion of some Liouville domain. The desired product is the ideal completion
of this domain. Uniqueness follows from the convexity of the sets of ideal
Liouville forms on $F_1$ and $F_2$.
\end{proof}
\begin{remark}[Generalizations] \label{r:gen}
I presented the notion of ideal Liouville domains in a talk at ETH (Zurich) in
November 2010 (for Eddi Zehnder's 70th birthday), and the first published paper
where they explicitly appear is \cite{MNW}. The concept was further generalized
in \cite{Co2} where Courte defined ideal Liouville cobordisms. A cobordism is an
oriented domain $F$ whose boundary components are given prescribed orientations;
$\del_+F$ (resp.~$\del_-F$) denotes the union of the boundary components endowed
with the boundary orientation (resp.\ with the reversed orientation). An \emph
{ideal Liouville cobordism} is a cobordism $F$ together with an exact symplectic
form $\omega$ on $\Int F$ which admits a primitive $\lambda$ such that:
\begin{itemize}
\item
For some/any function $u \map F \to \R_{\ge0}$ with regular level set $\del_+F
= \{u=0\}$, the product $u \lambda$ extends to a smooth $1$-form on $\Int F \cup
\del_+F$ which induces a contact form on $\del_+F$.
\item
For some/any function $u \map F \to \R_{\ge0}$ with regular level set $\del_-F
= \{u=0\}$, the quotient $\lambda / u$ extends to a smooth $1$-form on $\Int F
\cup \del_-F$ which induces a contact form on $\del_-F$.
\end{itemize}
Thus, an ideal Liouville domain $(F,\omega)$ is an ideal Liouville cobordism for
which $\del_-F$ is empty.
All the results discussed above readily extend to ideal Liouville cobordisms. In
particular, both $\del_-F$ and $\del_+F$ inherit canonical contact structures
which are positive for their prescribed orientations. In other words, $\del_-F$
is concave while $\del_+F$ is convex.
Finally, the global exactness condition on the symplectic form can be relaxed
since exactness is needed only near the boundary. This leads to the definition
of \emph{ideal symplectic domains/cobordisms}.
\end{remark}
\subsection{Ideal Liouville domains in contact geometry}
We will now explain how the notion of ideal Liouville domain can help in the
study of the relationships between contact structures and open books. We begin
with a few basic definitions and constructions.
\medskip
An \emph{open book} in a closed manifold $V$ is a pair $(K,\theta)$, where:
\begin{itemize}
\item
$K \subset V$ is a submanifold of codimension $2$ with trivial normal bundle.
\item
$\theta \map V-K \to \S^1 = \R/2\pi\Z$ is a smooth locally trivial fibration
which, in some neighborhood $\D^2 \times K$ of $K = \{0\} \times K$, is simply
the (pullback of the) angular coordinate in $\D^2 - \{0\}$.
\end{itemize}
The submanifold $K$ is called the \emph{binding} of the open book while the
closures of the fibers of $\theta$ are the \emph{pages}. The binding and the
pages inherit coorientations from the canonical orientation of $\S^1$. Hence, if
$V$ is oriented, they are automatically oriented (and the binding is oriented as
the boundary of every page).
In practice, most often open books arise from (smooth) complex-valued maps. If a
map $h \map V \to \C$ vanishes transversely, with zero-set $K := \{h=0\}$, and
if the argument function $\theta := h/|h| \map V-K \to \S^1$ has no critical
points, then the pair $(K,\theta)$ is an open book. Obviously, every open book
$(K,\theta)$ can be obtained in this way, and the defining map $h$ is unique up
to multiplication by a positive function.
\medskip
An open book $(K,\theta)$ in a closed manifold $V$ is characterized by its
monodromy, which is a diffeomorphism of the $0$-page $F := K \cup \{\theta=0\}$
relative to the boundary $K$ and defined only up to isotopy. More precisely,
consider the affine space of \emph{spinning vector fields}, namely vector fields
$\nu$ on $V$ satisfying the following properties:
\begin{itemize}
\item
$\nu = 0$ along $K$ and $\nu \cdot \theta = 2\pi$ in $V-K$;
\item
$\nu$ is \emph{weakly smooth} in the sense that it lifts to a smooth vector
field on the manifold with boundary obtained from $V$ by a real oriented blowup
along $K$ (see Remark \ref{r:smooth} for comments on this condition).
\end{itemize}
For any such vector field $\nu$, the time~$1$ map of its flow, restricted to
$F$, is a diffeomorphism $\phi$ of $F$ relative to $K$. Moreover, as $\nu$ runs
over its affine space, $\phi$ sweeps out an entire mapping class in $\MCG(F) :=
\pi_0 \DDc(F)$ (cf.~Remark \ref{r:smooth}). This mapping class ---~and sometimes
also, by extension, any of its representatives~--- is the \emph{monodromy} of
the open book $(K,\theta)$.
Conversely, given a domain $F$ with non-empty boundary and a diffeomorphism
$\phi$ of $F$ relative to $K := \del F$, one can construct a closed manifold
$\OB(F,\phi)$ endowed with an obvious open book whose $0$-page is parametrized
by $F$ and whose monodromy is represented by $\phi$. There are two steps in the
construction.
\Step1)
We consider the mapping torus of $\phi$, namely the quotient
$$ \MT(F,\phi) := (\R \times F) \big/ {\sim} \quad
\text{where} \quad (t,p) \sim \bigl(t-1, \phi(p)\bigr). $$
This is a compact manifold (with boundary) which has an obvious fibration
$$ \wh\theta \map \MT(F,\phi) \to \S^1 = \R/2\pi\Z $$
coming from the projection $\R \times F \to \R$ multiplied by $2\pi$. All fibers
are diffeomorphic to $F$ and we use the projection $\R \times F \to \MT(F,\phi)$
restricted to $\{0\} \times F$ as a special parametrization of the $0$-fiber
$\{\wh\theta=0\}$ by $F$. We notice that, since $\phi$ induces the identity on
$K = \del F$, the boundary of $\MT(F,\phi)$ is canonically diffeomorphic to
$\S^1 \times K$, the restriction of $\wh\theta$ to $\del\MT(F,\phi)$ being given
by the projection $\S^1 \times K \to \S^1$.
An important point about the manifold $\MT(F,\phi)$ is that it depends only on
the mapping class of $\phi$ in the following sense: If $\phi_0$ and $\phi_1$ are
diffeomorphisms of $F$ relative to $K$ and representing the same mapping class
in $\MCG(F)$, then there is a diffeomorphism $\MT(F,\phi_0) \to \MT(F,\phi_1)$
which respects the fibrations over $\S^1$ and the special parametrizations of
the $0$-fibers.
\Step2)
We construct the closed manifold $\OB(F,\phi)$ from $\MT(F,\phi)$ by collapsing
every circle $\S^1 \times \{.\} \subset \S^1 \times K = \del\MT(F,\phi)$ to a
point. Thus, $\OB(F,\phi)$ is the union of $\Int \MT(F,\phi)$ and $K = (\S^1
\times K) / \S^1$. We denote by
$$ \theta \map \OB(F,\phi) - K = \Int \MT(F,\phi) \to \S^1 $$
the restriction of the fibration $\wh\theta$.
To see that $(K,\theta)$ is indeed an open book in $\OB(F,\phi)$, we need to
specify the smooth structure near $K$. In short, we blow down $\del\MT(F,\phi)$,
the points of $\del\MT(F,\phi) = \S^1 \times K$ corresponding to oriented lines
in the (trivial) normal bundle of $K$ in $\OB(F,\phi)$. Concretely, we fix a
collar neighborhood $\wh N$ of $\del\MT(F,\phi)$ whose fibers are intervals
contained in the fibers of $\wh\theta$ and we declare that, for every $p \in K$,
the union of all intervals ending on $\S^1 \times \{p\} \subset \del\MT(F,\phi)$
projects to a smooth disk $D_p$ in $\OB(F,\phi)$ transverse to $K$ at $p$. More
specifically, we choose a function $\wh r \map \MT(F,\phi) \to \R_{\ge0}$ with
regular level set $\del\MT(F,\phi) = \{\wh r = 0\}$, and we take the induced
function $r$ on $\OB(F,\phi)$, together with $\theta$, as polar coordinates near
$p$ on the disk $D_p$.
It is not hard to check that a different choice of collar neighborhood $\wh N$
and function $\wh r$ leads to an equivalent smooth structure. Actually, the two
structures are conjugated by a homeomorphism of $\OB(F,\phi)$ which preserves
$\theta$ and induces the identity on the page $F_0 := K \cup \{\theta=0\}$. As a
result, $(K,\theta)$ is an open book in $\OB(F,\phi)$, its $0$-page $F_0$ has a
(special) parametrization by $F$, and its monodromy is represented by $\phi$
(note that the vector field $\del_t$ on $\R \times F$ descends to a smooth
vector field on $\MT(F,\phi)$, so its image in $\OB(F,\phi)$ is tautologically
weakly smooth).
\begin{remark}[Smoothly Generated Monodromy Diffeomorphisms] \label{r:smooth}
Given an open book $(K,\theta)$ in $V$, one can easily find spinning vector
fields $\nu$ on $V$ that are smooth, not just weakly smooth. Thus, the monodromy
of $(K,\theta)$ has representatives which are \emph{smoothly generated}, meaning
that they can be obtained by integrating smooth spinning vector fields $\nu$ on
$V$. In particular, one can check that any representative of the monodromy which
is the identity on a neighborhood of $K$ is smoothly generated (see Lemma
\ref{l:rel} for the symplectic version of this assertion). However, not every
representative of the monodromy is smoothly generated. The following simple
example was pointed out to me by Roussarie \cite{Ro}.
Consider in $\R^2$ a smooth vector field $\nu = 2\pi (\del_\theta + rf \del_r)$,
where $f \map \R^2 \to \R$ is a smooth function. Let $\psi, \phi \map \R \to \R$
denote the diffeomorphisms of the $x$-axis induced by the flow of $\nu$ at times
$1/2$ and $1$, respectively. Then $\phi$ and $\psi$ commute, and $\psi$ reverses
orientation while $\phi$ preserves it. These properties restrict the behavior of
$\phi$. For instance, the germ of $\phi$ at $0$ cannot have the shape $\phi(x)
= x+x^2+{}$ higher order terms. More generally, here is Roussarie's observation:
If $\phi-\id$ is not infinitely flat at $0$ then the first non-zero term in its
Taylor expansion has odd degree. Indeed, if $\phi-\id$ is not flat, it has a
fixed sign on $(0,\eps]$ for $\eps>0$ sufficiently small. Suppose that $\phi(x)
> x$ for all $x \in (0,\eps]$. Since $\psi$ is decreasing and commutes with
$\phi$,
$$ \phi \circ \psi (x) = \psi \circ \phi (x) < \psi(x) \quad
\text{for all $x \in (0,\eps]$.} $$
Hence, $\phi(x) < x$ for all $x \in [\psi(\eps),0)$, and this proves the claim.
In contrast, if the vector field $\nu = 2\pi (\del_\theta + rf \del_r)$ is only
assumed to be ``weakly smooth'' (namely, if $\nu$ lifts to a smooth vector field
on the blownup plane), then its return map $\phi$ on $\R_{\ge0}$ remains smooth
and can freely vary in its mapping class. Indeed, the hypothesis means that $f$
is smooth not as a function on $\R^2$ but as a function of the polar coordinates
$(r,\theta) \in \R_{\ge0} \times \S^1$. Note also that, since $f$ and $df$ are
bounded near $\{0\} \times \S^1$, the vector field $\nu$ is Lipschitz. These
remarks equally apply to weakly smooth spinning vector fields in any dimension.
\end{remark}
The following definition was introduced in \cite{Gi2} to establish formal links
between open books and contact structures:
\begin{definition}[Open Books and Contact Structures] \label{d:sob}
A contact structure $\xi$ on a closed manifold $V$ is \emph{supported} by an
open book $(K,\theta)$ in $V$ if it admits a Pfaff equation $\alpha$ which is
\emph{adapted to $(K,\theta)$} in the sense that:
\begin{itemize}
\item
$\alpha$ induces a positive contact form on $K$, and
\item
$d\alpha$ induces a positive symplectic form on the fibers of $\theta$.
\end{itemize}
Orientations here come from the orientation of $V$ defined by $\xi$.
\end{definition}
We will show below that an open book supporting a contact structure has some
specific geometric structure that we now describe:
\begin{definition}[Liouville Open Books] \label{d:lob}
A \emph{Liouville open book} $(K,\theta,\omega_t)$ in a closed manifold $V$ is
an open book $(K,\theta)$ whose pages $F_t := K \cup \{\theta = 2\pi t\}$ are
equipped with ideal Liouville structures $\omega_t$ $(2\pi t \in \S^1)$ having
primitives $\lambda_t$ such that: For some/any map $h \map V \to \C$ defining
$(K,\theta)$, the products $|h|\, \lambda_t$ are the restrictions to the fibers
$F_t-K$ of a global (smooth) $1$-form $\beta$ on $V$. Such a $1$-form $\beta$ is
referred to as a \emph{binding $1$-form} (associated with $h$), as it indeed
ties the forms $\omega_t$ about~$K$.
\end{definition}
In this context, we consider the affine space of weakly smooth spinning vector
fields $\nu$ on $V$ satisfying the additional condition that $\nu$ preserves the
ideal Liouville structures of the pages. This means that the flow of $\nu$, which
rotates the open book $(K,\theta)$, preserves the family of forms $\omega_t$.
Equivalently, $\nu$ spans the kernel of a closed $2$-form on $V-K$ which induces
$\omega_t$ on each page $F_t$.
For such a symplectically spinning vector field $\nu$, the time~$1$ map of its
flow restricted to the ideal Liouville page $(F,\omega) := (F_0,\omega_0)$ is a
symplectic diffeomorphism $\phi$ relative to $K = \del F$. Moreover, as $\nu$
runs over its affine space, $\phi$ sweeps out a full symplectic mapping class in
$\MCG(F,\omega) := \pi_0 \DDc(F,\omega)$. This mapping class is the symplectic
monodromy of the Liouville open book.
The next lemma shows that the symplectic monodromy of a Liouville open book has
representatives which are generated by smooth symplectically spinning vector
fields and can be further assumed to be the identity on a neighborhood of $K$.
As in the usual (non-Liouville) case, however, not every representative of the
symplectic monodromy can be generated in this way.
\begin{lemma}[Binding Forms and Monodromy] \label{l:smooth}
Let $(K,\theta,\omega_t)$ be a Liouville open book in a closed manifold $V$,
and $h \map V \to \C$ a map defining $(K,\theta)$. For every binding $1$-form
$\beta$, the vector field $\nu$ on $V-K$ spanning the kernel of $d(\beta/|h|)$
and satisfying $\nu \cdot \theta = 2\pi$ extends to a smooth vector field on $V$
which is zero along $K$. Furthermore, $\beta$ can be chosen so that $\nu$ is
$1$-periodic near $K$.
\end{lemma}
Note that binding forms associated with any fixed defining map $h$ constitute an
affine space. Another thing to be mentioned here is that, among symplectically
spinning vector fields, those associated with binding $1$-forms generate exact
symplectic diffeomorphisms (see our comment following Proposition \ref{p:tw}).
\begin{proof}
First observe that $\beta \rst K$ is a contact form and defines the (common)
boundary contact structure of all ideal Liouville pages. We fix a small $\eps>0$
such that $\beta$ induces a contact form on every fiber $K_w := \{h=w\}$, $|w|
\le \eps$. We set $\alpha_w := \beta \rst{ K_w }$ and $N := \{|h| \le \eps\}$.
The hyperplane field $\tau := \Ker (\beta \rst N)$ splits as a direct sum $\tau
= \xi \oplus \xi^\bot$, where $\xi$ is the subdistribution consisting of the
contact structures $\xi_w := \Ker \alpha_w$, $|w| \le \eps$, and $\xi^\bot$ is
the $d\beta$-orthogonal complement of $\xi$ in $\tau$ (and determines a contact
connection over $\eps \D^2$). Now consider the following vector fields on $N$:
\begin{itemize}
\item
$\del_\alpha$ is the vector field in $\Ker dh$ whose restriction to each fiber
$K_w$ of $h$ is the Reeb field $\del_{\alpha_w}$ of $\alpha_w$.
\item
$\wt\del_\theta$ and $\wt\del_r$ are the vector fields in $\xi^\bot$ projecting
to $\del_\theta$ and $\del_r$, respectively, where $(r,\theta)$ denote polar
coordinates in $\eps \D^2$.
\end{itemize}
A routine calculation shows that the vector field $\nu$ on $V-K$ spanning the
kernel of $d(\beta/|h|)$ and satisfying $\nu \cdot \theta = 2\pi$ is given in
$N$ by
$$ \nu = 2\pi (\wt\del_\theta + a r \wt\del_r + b \del_\alpha), $$
where $r = r \circ h = |h|$ while
$$ a := \frac{ d\beta (\wt\del_\theta, \del_\alpha) }
{ 1 + d\beta (\del_\alpha, r \wt\del_r) } \quad \text{and} \quad
b := \frac{ d\beta (r \wt\del_r, \wt\del_\theta) }
{ 1 + d\beta (\del_\alpha, r \wt\del_r) }. $$
Clearly, $a$ and $b$ are smooth functions on $N$ and vanish identically along
$K$, so $\nu$ has the desired smooth extension on $V$.
We will now modify $\beta$ to obtain a binding form $\beta'$ such that the
spinning vector field $\nu'$ spanning the kernel of $d(\beta'/|h|)$ in $V-K$ is
$1$-periodic near $K$. First, we trivialize $N$ as a product $N = \eps \D^2
\times K$ so that, in the corresponding cylindrical coordinates $(r,\theta,q)$,
the vector field $\del_r$ lies in $\xi^\bot$. In other words, $\del_r$ equals
$\wt\del_r$ and, along $K = \{0\} \times K$, the $2$-plane field $\xi^\bot$ is
horizontal (namely, tangent to the disks $\D^2 \times \{q\}$, $q \in K$). It is
then an exercise to check that $\beta(\del_\theta)\, d\theta$ is a smooth form
in $N$. Now pick a function $\rho \map V \to [0,1]$ compactly supported in $N$
and equal to $1$ near $K$, and let
$$ \beta' := \beta - \rho\, \beta(\del_\theta)\, d\theta. $$
This smooth $1$-form on $V$ coincides with $\beta$ on every page, so it is a
binding form. Moreover, near $K$,
$$ \beta' = \beta - \beta(\del_\theta)\, d\theta = f\, \pi^*\alpha_0 $$
where $f$ is a positive function, $\pi$ the projection $N = \eps \D^2 \times K
\to K$ and $\alpha_0$ the restriction of $\beta$ to $K$. It follows that the
spinning vector field $\nu'$ spanning the kernel of $d(\beta'/|h|)$ is horizontal
(in the product structure of $N$) and tangent to the level sets of the function
$f/|h|$. Therefore, $\nu'$ is $1$-periodic.
\end{proof}
A practical consequence of this lemma is:
\begin{lemma}[Criterion for Smooth Generation] \label{l:rel}
Any representative of the symplectic monodromy of a Liouville open book which is
the identity near the boundary is generated by a smooth symplectically spinning
vector field.
\end{lemma}
\begin{proof}
This follows from the last assertion of Lemma \ref{l:smooth} and the fact that,
if two symplectic diffeomorphisms of an ideal Liouville domain $(F,\omega)$
coincide with the identity near $K := \del F$ and represent the same class in
$\MCG(F,\omega)$, then they are connected by a symplectic isotopy relative to a
neighborhood of $K$ (an easy way to construct such an isotopy is to use the
embeddings of Proposition \ref{p:lioufields} as in the proof of Corollary \ref
{c:exact}).
\end{proof}
The next proposition is a variation on a well-known construction first introduced
by Thurston--Winkelnkemper in three dimensions \cite{TW} and extended to higher
dimensions in \cite{Gi2}:
\begin{proposition}[Construction of Liouville Open Books] \label{p:tw}
Consider an ideal Liouville domain $(F,\omega)$ and a symplectic diffeomorphism
$\phi \map F \to F$ relative to $K := \del F$. The open book in $\OB(F,\phi)$ is
a Liouville open book for which the parametrization of its $0$-page by $F$ is a
symplectomorphism.
\end{proposition}
The proof below actually shows that, if the symplectic diffeomorphism $\phi$ is
the identity near $K$ and is exact (meaning that there exists an ideal Liouville
form $\lambda$ such that $\phi^*\lambda - \lambda$ is the differential of a
function with compact support in $\Int F$), then $\phi$ is (smoothly) generated
by the spinning vector field of a binding $1$-form.
\begin{proof}
Let $\lambda_t$ be a path of ideal Liouville forms on $(F,\omega)$ joining an
arbitrary $\lambda_0$ to $\lambda_1 := \phi^*\lambda_0$. According to Corollary
\ref{c:exact}, there is a symplectic isotopy $\psi_t$ of $F$, relative to $K$,
such that $\psi_0 = \id$ and $\psi_t^*\lambda_t - \lambda_0 = df_t$ for all $t
\in [0,1]$, where the functions $f_t$ have compact supports in $\Int F$. Then
the symplectic isotopy $\phi_t := \phi \circ \psi_t$ is relative to $K$ and
connects $\phi = \phi_0$ to a symplectic diffeomorphism $\phi_1$ which is exact
and coincides with the identity near $K$. Since $\OB(F,\phi)$ depends only on
the (smooth) mapping class of $\phi$, we assume from now on that $\phi$ is exact
and is the identity on a neighborhood of $K$.
We now pick an ideal Liouville form $\lambda$ such that $\phi^*\lambda = \lambda
+ df_1$, where $f_1$ is a function with compact support in $\Int F$, and we
choose a path of functions $f_t$ ---~all with compact supports in $\Int F$~---
joining $f_0 := 0$ to $f_1$. Then the $1$-form $\lambda + df_t + \dot f_tdt$ on
$[0,1] \times \Int F$ is a primitive of (the pullback of) $\omega$ and descends
to a $1$-form $\wh\beta$ on $\Int \MT(F,\phi)$ (to ignore smoothing issues, take
the path $f_t$ to be constant near its endpoints).
The next step is to fix cylindrical coordinates near $K$ in $\OB(F,\phi)$ and a
map $h \map \OB(F,\phi) \to \C$ defining the obvious open book. We pick a non-negative
function $u$ on $F$, with regular level set $K = \{u=0\}$, such that:
\begin{itemize}
\item
$\dvf\lambda \cdot \log u = -1$ in a neighborhood of $K$ (equivalently, the Lie
derivative $\dvf\lambda \cdot (u\lambda)$ is zero), and
\item
$u \circ \phi = u$ (this property is typically satisfied if $u$ is constant on
the support of $\phi$).
\end{itemize}
Then the map
$$ (t,p) \in [0,1] \times F \mapsto u(p) e^{2i\pi t} \in \C $$
provides the required defining map $h \map \OB(F,\phi) \to \C$. Furthermore, the
function $u$ and the collar neighborhood of $K$ associated with $\lambda$ (cf.\
Proposition \ref{p:lioufields}) provide cylindrical coordinates near $K$. More
precisely, let $G := \{u \le \eps\} \subset F$ with $\eps$ small enough that
$\dvf\lambda \cdot \log u = -1$ on $G$ and $G$ is disjoint from the supports of
$\phi$ and of all functions $f_t$ $(t \in [0,1])$. Then the function $u$ and the
foliation spanned by $\dvf\lambda$ identify $G$ with $[0,\eps] \times K$. In the
same way, $N := \{|h| \le \eps\}$ is identified with $\eps \D^2 \times K$.
It remains to see that the form $|h|\,\wh\beta$ on $\OB(F,\phi) - K$ extends to a
smooth (binding) form on $\OB(F,\phi)$. In fact, the form $u\lambda \rst{ G-K }$
is invariant under the flow of $\dvf\lambda$, so it is the pullback on $(0,\eps]
\times K$ of a $1$-form $\alpha$ on $K$. Similarly, the form $|h|\,\wh\beta \rst
{ N-K }$ is the pullback on $(\eps\D^2 - \{0\}) \times K$ of the same $1$-form
$\alpha$ on $K$. Hence, it extends smoothly across $K$.
\end{proof}
Now the most obvious relationship between supporting and Liouville open books
is:
\begin{proposition}[Supporting Open Books are Liouville] \label{p:supliouv}
Let $(V,\xi)$ be a closed contact manifold, and $(K,\theta)$ a supporting open
book with defining map $h \map V \to \C$. Then the equations $\alpha$ of $\xi$
such that $d(\alpha/|h|)$ induces an ideal Liouville structure on each page form
a non-empty convex cone.
\end{proposition}
\begin{proof}
This follows readily from uniqueness of ideal completions of (usual) Liouville
domains (see Example \ref{x:ild2}).
\end{proof}
An equation $\alpha$ of $\xi$ as in the above proposition yields ideal Liouville
structures $\omega_t$ on the fibers of $\theta$, and $(K,\theta,\omega_t)$ is a
Liouville open book. By Lemma \ref{l:smooth}, the kernel of $d(\alpha/|h|)$ is
spanned by a smooth symplectically spinning vector field $\nu$, but it is easy
to verify that $\nu$ is never $1$-periodic near $K$. Though it may create some
psychological discomfort, this inconvenience is not a problem. It could in fact
be remedied by replacing Liouville open books with open books whose pages are
given ``degenerate ideal Liouville structures'' (to define those objects, take
Definition \ref{d:ild} and simply substitute $u\lambda$ with $u^2\lambda$ in the
extension condition). In short, the key observation here is that, if we consider
for instance the contact form $\alpha := dz + r^2 d\theta$ in $3$-space (with
cylindrical coordinates $(r,\theta,z)$) then, away from the $z$-axis, the Reeb
field of $\alpha/r^2$ is $\del_\theta$ while the Reeb field of $\alpha/r$ is
proportional to the vector field $\del_\theta + r^2 \del_z$.
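For the reader's convenience, here is the short computation behind this
observation (a direct check in the given cylindrical coordinates). Since
$d(\alpha/r^2) = -2 r^{-3}\, dr \wedge dz$, the field $\del_\theta$ spans the
kernel of $d(\alpha/r^2)$ and satisfies $(\alpha/r^2)(\del_\theta) = 1$, so it is
the Reeb field of $\alpha/r^2$. On the other hand,
$$ d(\alpha/r) = dr \wedge \bigl( d\theta - r^{-2}\, dz \bigr), $$
and the field $\del_\theta + r^2 \del_z$ lies in the kernel of this $2$-form
while $(\alpha/r)(\del_\theta + r^2 \del_z) = 2r$; hence the Reeb field of
$\alpha/r$ is $(2r)^{-1} (\del_\theta + r^2 \del_z)$.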
Proposition \ref{p:supliouv} leads to a new definition:
\begin{definition}[Liouville Open Books and Contact Structures]
Let $(K,\theta,\omega_t)$ be a Liouville open book on a closed manifold $V$, and
$h \map V \to \C$ a map defining $(K,\theta)$. A contact structure on $V$ is
(\emph{symplectically}) \emph{supported} by $(K,\theta,\omega_t)$ if it admits
a binding equation on $V$, that is, an equation $\alpha$ such that $\alpha/|h|$
induces an ideal Liouville form on each ideal Liouville page $(F_t,\omega_t)$
$(2\pi t \in \S^1)$.
\end{definition}
\begin{remark}[Uniqueness of the Binding Equation] \label{r:unique}
If it exists, the above equation $\alpha$ is unique (the defining map $h$ being
fixed). The underlying more general assertion is that, given an ideal Liouville
domain $(F,\omega)$ and an ideal Liouville form $\lambda$, the constant function
$1$ is the only function $f$ on $\Int F$ such that $d(f\lambda) = \omega$. For
$\dim F \ge 4$, the reason is purely algebraic: $f\omega$ and $\omega$ must
agree on the kernel of $\lambda$, which contains an $\omega$-symplectic space;
hence $f$ has to equal $1$. If $\dim F = 2$, non-constant solutions $f$ exist
locally, so we need a more global argument. Since
$$ d(f\lambda) = f\omega + df \wedge \lambda
= (f + \dvf\lambda \cdot f) \omega, $$
the condition $d(f\lambda) = \omega$ reads $g + \dvf\lambda \cdot g = 0$, where
$g := f-1$. Now any non-zero solution $g$ of this equation has to be unbounded
on every complete non-trivial orbit of $\dvf\lambda$. The claim then follows
from $\dvf\lambda$ being complete (Proposition \ref{p:lioufields}).
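Explicitly, if $\Phi^s$ denotes the flow of $\dvf\lambda$ (the notation is ours),
the equation $g + \dvf\lambda \cdot g = 0$ integrates along orbits to
$$ g \bigl( \Phi^s(p) \bigr) = e^{-s}\, g(p)
\quad \text{for all $s \in \R$,} $$
so $|g|$ indeed blows up as $s \to -\infty$ along any complete orbit through a
point $p$ with $g(p) \neq 0$.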
\end{remark}
If a contact structure is (symplectically) supported by a Liouville open book
then it is supported by the underlying smooth open book: To obtain an adapted
equation in the sense of Definition \ref{d:sob}, simply replace $\alpha/|h|$ by
$\alpha/u(|h|)$ where $u \map \R_{\ge0} \to \R_{>0}$ is an increasing function
such that $u(x) = x$ for $x \ge \eps$ and $u(x) = x^2+\eps^2$ for $x \le \eps/2$
(with $\eps$ sufficiently small).
\medskip
We now conclude this paper by showing that the inclusion of the space of contact
structures supported by a Liouville open book into the affine space of binding
forms is a weak homotopy equivalence:
\begin{proposition}[Existence and Uniqueness of Supported Contact Structures]
On a closed manifold, contact structures supported by a given Liouville open
book form a non-empty and weakly contractible subset in the space of all contact
structures. In particular, they lie in a unique isotopy class.
\end{proposition}
Note that the symplectic orientation of the pages, together with their natural
coorientation, determines an orientation of the ambient manifold. It is implicit
in this statement that the supported contact structures are positive for this
orientation.
\begin{proof}
Let $V$ be the ambient closed $(2n+1)$-manifold, $(K,\theta,\omega_t)$ a Liouville
open book in $V$ and $h \map V \to \C$ a map defining $(K,\theta)$. For any
binding form $\beta$ on $V$ (associated with $h$), we can find an $\eps>0$ such
that $\beta$ induces a contact form on every fiber $K_w := \{h=w\}$ with $|w|
\le \eps$. We fix a non-decreasing function $f \map \R_{\ge0} \to \R$ such that
$f(x) = x$ for $x \le \eps/2$ and $f(x) = 1$ for $x \ge \eps$. Then, for $c \ge
0$, we define
$$ \beta_c := \beta + c\, |h|\, f(|h|)\, d\theta. $$
Clearly, $\beta_c/|h|$ coincides with $\beta/|h|$ on every page. Therefore, if
$\beta_c$ is a contact form, the contact structure it defines is symplectically
supported by our Liouville open book. We claim that $\beta_c$ is a contact form
for all sufficiently large $c$, and in fact for all $c \ge 0$ if $\beta$ itself
is already a contact form. To see this, we set $r := |h|$ and we write
\begin{multline*}
\beta_c \wedge (d\beta_c)^n
= nc rf'(r)\, dr \wedge d\theta \wedge \beta \wedge (d\beta)^{n-1} \\
+ cf(r)\, d\theta \wedge (r\, d\beta + n \beta \wedge dr) \wedge (d\beta)^{n-1}
+ \beta \wedge (d\beta)^n.
\end{multline*}
Since $\beta$ induces a contact form on each fiber $K_w$ of $h$ with $|w| \le
\eps$, the term $rf'(r)\, dr \wedge d\theta \wedge \beta \wedge (d\beta)^{n-1}$
is a positive volume form provided $f'(r) \ne 0$. On the other hand, for all
$r>0$,
$$ f(r)\, d\theta \wedge (r\, d\beta + n \beta \wedge dr) \wedge (d\beta)^{n-1}
= r^{n+1} f(r)\, d\theta \wedge \bigl(d(\beta/r)\bigr)^n $$
is also a positive volume form. The claim follows.
Now consider a $k$-sphere $\xi_s$, $s \in \S^k$, of contact structures supported
by the Liouville open book $(K,\theta,\omega_t)$. By Remark \ref{r:unique},
every contact structure $\xi_s$ has a unique binding equation $\alpha_s$, and
the forms $\alpha_s$, $s \in \S^k$, depend continuously on $s$. Since binding
forms (associated with a fixed $h$) constitute an affine space, we can find a
$(k+1)$-disk of binding forms $\beta_s$, $s \in \D^{k+1}$, such that $\beta_s =
\alpha_s$ for all $s \in \S^k = \del\D^{k+1}$. We choose $\eps>0$ small enough
that each $\beta_s$, $s \in \D^{k+1}$, induces a contact form on all fibers
$K_w$ with $|w| \le \eps$, and we apply our claim twice:
\begin{itemize}
\item
For some $c_0$ sufficiently large, the forms
$$ \beta_{s,c_0} := \beta_s + c_0 |h|\, f(|h|)\, d\theta, \quad
s \in \D^{k+1}, $$
constitute a $(k+1)$-disk of contact forms.
\item
For the same value $c_0$, the forms
$$ \alpha_{s,c} := \alpha_s + c\, |h|\, f(|h|)\, d\theta, \quad
s \in \S^k, \ c \in [0,c_0], $$
constitute a homotopy of $k$-spheres of contact forms between the original
$k$-sphere $\alpha_s = \alpha_{s,0}$, $s \in \S^k$, and the $k$-sphere
$\alpha_{s,c_0} = \beta_{s,c_0}$, $s \in \S^k$, which bounds a $(k+1)$-disk of
contact forms.
\end{itemize}
Since all these contact forms are binding forms, all the contact structures they
define are supported by the Liouville open book $(K,\theta,\omega_t)$, and so
our argument shows that the $k$-sphere $\xi_s$, $s \in \S^k$, is null-homotopic in
the space of contact structures supported by $(K,\theta,\omega_t)$.
\end{proof}
\section{Introduction}
The largest class of high-temperature superconductors consists of materials formed of intercalated two-dimensional (2D) structures, such as YBa$_2$Cu$_3$O$_7$ and the iron-based superconductors. The simplest such structure is MgB$_{2}$, a layered material of flat hexagonal sheets intercalated by a single element, which was discovered in 2001\cite{MgB2Review} to have a high $T_{c}$ of 39 K. In 2005, another class of intercalated 2D materials was found to superconduct: graphite intercalated with Ca and Yb.\cite{10} This class of structures, known as graphite intercalation compounds (GICs), is fundamentally different from MgB$_{2}$, because it is formed of graphene layers that can exist individually as a stable crystal. Since 2005, the list of superconducting GICs has hardly grown; to date, the only known superconducting GICs are those with intercalatants K, Ca, Li, Yb, Sr and Ba. This discovery created immense research interest aimed at understanding the mechanism that underlies the superconductivity in this class of materials.
In spite of the structural similarity of GICs to MgB$_{2}$, the superconducting temperature of MgB$_{2}$ is far higher than all of the observed $T_c$'s of GICs, the maximum on record being 11.5 K for CaC$_6$.\cite{10} Moreover, it was observed that a superconducting gap appears on the Fermi surface associated with the intercalatant atom, but not in the $\pi^{*}$ orbital of the graphitic layers,\cite{key-2} confirming the theoretical prediction that there is an occupied interlayer state in all superconducting compounds of this class of materials.\cite{key-3} While MgB$_2$ possesses an interlayer state, it is unoccupied. Another key difference is the fact that MgB$_2$ is a two-gap superconductor,\cite{MgB2Review} and the two gaps were theoretically shown\cite{EPW} to arise from the $\sigma$ and $\pi$ bands. However, graphite intercalated compounds are single gap superconductors.
Nevertheless, graphite intercalation compounds and MgB$_2$ have a number of features in common. The hexagonal 2D layers in both are perfectly flat; that is, there is no buckling. In fact, buckling was thought to destroy the superconducting state in layered Li$_x$BC because it induces strong mixing of the $\sigma$ and $\pi$ bands.\cite{LixBC} The calculated electron-phonon coupling (EPC) strength, $\lambda$, of bulk CaC$_6$ is 0.83,\cite{key-4} and that of MgB$_2$ is 0.748 \cite{EPW}, so the EPC strength in both compounds are quite close.
It is therefore interesting to ask what the superconducting properties of an intercalated vdW material that is intrinsically buckled would be. An example of a buckled 2D material is germanene, which can be viewed as the cleaved (111) layer of the $Fd3m$ phase of bulk germanium and is predicted to be a stable 2D Dirac material.\cite{Germanene} Here we explore the potential superconductivity of potassium-intercalated germanene, KGe$_2$, a hypothetical compound that resembles the well-known structure of CaGe$_2$.\cite{CaGe2_1} The interesting feature of KGe$_2$ is that the germanene layers preserve their Dirac cones; that is, KGe$_2$ is a truly intercalated 2D germanene material. K intercalation was also observed to enhance the superconductivity of FeSe,\cite{key-7} and K intercalation of MoS$_2$ leads to the emergence of several superconducting phases.\cite{4} By solving the anisotropic Eliashberg equations based on density-functional theory (DFT) and time-dependent perturbation theory,\cite{EPW} we find that the superconducting critical temperature is $\sim 11$ K, which is close to that observed in CaC$_6$.
\section{Computational details}
The DFT calculations are performed using the local density approximation (LDA)\cite{LDA} and norm-conserving pseudopotentials\cite{TM} with QUANTUM ESPRESSO\cite{espresso}. The valence electronic wave functions are expanded in a plane-wave basis set with a kinetic energy cutoff of 40 Ry. We use a $12\times 12\times 12$ \textbf{k}-point mesh for KGe$_2$ and $12\times 12\times 1$ for monolayer germanene, and a Methfessel-Paxton smearing\cite{MP} of 0.10 eV. The dynamical matrices and the linear variation of the self-consistent potential are calculated within density-functional perturbation theory\cite{LR} on the irreducible set of a regular $12\times 12\times 12$ \textbf{k}-point mesh and a $4\times 4\times 4$ \textbf{q}-point mesh for KGe$_2$, and a $12\times 12\times 1$ \textbf{k}-point mesh and a $4\times 4\times 1$ \textbf{q}-point mesh for germanene. In order to solve the Eliashberg equations, we evaluate electron energies, phonon frequencies, and electron-phonon matrix elements on fine grids with an $N_k=20\times 20 \times 20$ Monkhorst-Pack mesh and an $N_q=20\times 20 \times 20$ mesh for KGe$_2$, and an $N_k=20\times 20 \times 1$ Monkhorst-Pack mesh and an $N_q=20\times 20 \times 1$ mesh for germanene; these grids were chosen based on convergence of the EPC. The calculations are performed using smearing parameters in the Dirac $\delta$ functions corresponding to 100 and 0.5 meV for electrons and phonons, respectively.
\def{\bf d}{{\bf d}}
\def{\bf k}{{\bf k}}
\def{\bf q}{{\bf q}}
\def{\bf G}{{\bf G}}
\def{\bf R}{{\bf R}}
The EPC, $\lambda$, is calculated according to the equation
\begin{equation}
\lambda = 2 \int {\alpha^2F(\omega) \over \omega} d\omega.
\end{equation}
\noindent where the Eliashberg spectral function $\alpha^2F(\omega)$ is defined as
\begin{equation}
\alpha^2F(\omega) = {1\over 2\pi N(e_F)}\sum_{{\bf q}\nu}
\delta(\omega-\omega_{{\bf q}\nu})
{\gamma_{{\bf q}\nu}\over\hbar\omega_{{\bf q}\nu}}\quad,
\end{equation}
\noindent where $\omega$ is the phonon frequency, ${\bf q}$ and $\nu$ are the phonon momentum and mode, respectively, $N(e_F)$ is the electronic density of states at the Fermi level, and $\gamma_{{\bf q}\nu}$ is the phonon linewidth arising from the electron-phonon interaction for
a specific phonon mode $\nu$ and momentum ${\bf q}$.
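In practice, the integral defining $\lambda$ above is evaluated numerically from the computed $\alpha^2F(\omega)$. The short script below is a generic post-processing sketch of this step (it is not the workflow used in this work; the file name and two-column layout are assumptions made purely for illustration):
\begin{verbatim}
# Generic sketch (not the actual post-processing of this work): evaluate
# lambda = 2 * int alpha^2F(omega)/omega domega from a tabulated Eliashberg
# function. The file name and column layout are assumptions.
import numpy as np

omega, a2F = np.loadtxt("a2F.dat", unpack=True)  # omega (meV), alpha^2F (dimensionless)
mask = omega > 1e-6                              # avoid the 1/omega singularity at omega = 0
lam = 2.0 * np.trapz(a2F[mask] / omega[mask], omega[mask])
print("lambda =", lam)
\end{verbatim}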
\section{Results and discussions}
In all our calculations of the layered KGe$_2$ we have used the $AA$ stacking of the germanene layers as displayed in Fig. \ref{fig_Structure}. The $AA$ stacking was found to be more favorable than the $AB$ stacking based on comparing the stabilization energies of the two configurations, $E_{S}$, which is defined as $E_{S}=E_{KGe_2}-E_{Ge_2}-E_{K}$, where $E_{KGe_2}$ is the total energy of KGe$_2$, $E_{Ge_2}$ is the total energy of the germanene unit cell and $E_{K}$ is the total energy of an isolated K atom. The $E_{S}$ of $AA$ was calculated to be $-2.46$ eV, which is 126 meV lower than that of $AB$. Finding the optimal stacking sequence for intercalated layered compounds is a daunting task; we choose here to only study the $AA$ stacking based on the following considerations: This unit cell is the smallest of all KGe$_2$ structures, which offers computational efficiency. Even though other stacking sequences are possible, the $AA$ stacking is a good representative model of the layered KGe$_2$ for the purpose of calculating the superconducting properties, because the small horizontal shifts in the position of the K atoms across the layers are expected to have little effect on its superconducting properties \cite{27}. The calculated lattice parameters of the $AA$ KGe$_2$ structure are $a=3.987$~\AA{} and $c= 4.596$~\AA, while that of monolayer germanene is $a=3.921$~\AA. The buckling height of the germanene layers in KGe$_2$ (the $z$-axis distance between the two Ge atoms) is 0.827~\AA, while that of monolayer germanene is 0.620~\AA, in agreement with published results.\cite{Germanene} With such a crystal, the K-K bond length becomes 3.987~\AA, which is longer than the K-K bond length (3.577~\AA) in the \textit{Im3m} K polymorph at 12 GPa.\cite{K12GPa}
\begin{figure}[h]
\includegraphics[width=80mm]{fig_Structure}
\caption{The (a) top and (b) side view of the KGe$_2$ structure with the $AA$ stacking. The unit cell is indicated with associated lattice vectors. The K atom is positioned in the center of the germanene hole.}
\label{fig_Structure}
\end{figure}
In order to examine the impact of buckling on the EPC and the superconducting properties, we applied a planar strain on KGe$_2$. Specifically, we tested the strains along the $a$ axis of $\pm 5$\% and $\pm 10$\%. We found that $- 5$\% and $- 10$\% tensile strains result in dynamically unstable structures (that is, the phonon dispersion has imaginary frequencies). In the case of positive strains, a 5\% strain (with the lattice parameter $a=4.212$ \AA) maintains the dynamical stability of the structure, whereas 10\% strain yields a dynamically unstable structure. Therefore, we focus here on the structure with 5\% positive strain. The buckling height in this structure is 0.790 \AA, which is 4.5\% less than that of the unstrained structure.
With the buckling of the germanene layers in KGe$_2$, the coupling between the $\pi$ band and the K band is expected to be much stronger than in MgB$_2$ and CaC$_6$. The band structure of KGe$_2$ and the projected density of states are presented in Fig. \ref{fig_bands}(a,b), and the band structure of monolayer germanene is displayed in Fig. \ref{fig_bands}(c). In the unit cell, according to Bader charge analysis, the K atom donates 1 $\lvert e\rvert$ to the germanene layer, which results in shifting the Dirac point of germanene in Fig. \ref{fig_bands}(b) downwards. This full charge transfer is analogous to the nearly-full charge transfer that takes place in KC$_8$ \cite{KC8}.
\begin{figure}[h]
\includegraphics[width=80mm]{fig_bands_v2}
\caption{(a) The partial electronic density of states (PDOS) of KGe$_2$, (b) the KGe$_2$ band structure, indicating the orbitals with an interlayer nature as filled circles, and (c) the band structure of monolayer germanene. The three plots are aligned at the Fermi level, which is the energy zero.}
\label{fig_bands}
\end{figure}
As the Dirac cone is shifted below the Fermi level, there is one Ge $4p$ band, slightly hybridized with K $3p$, crossing the Fermi energy. Compared to the CaC$_6$ band structure, the Dirac cone in KGe$_2$ stays intact, while in CaC$_6$ a gap of $\sim 0.5$ eV opens at the $\Gamma$ point. The Dirac cones here do not experience a momentum shift, unlike the situation in CaSi$_2$ (which is non-superconducting), where the Dirac cone shifts slightly away from the high symmetry points of the first Brillouin zone due to the electron transfer between the adjacent silicene layers \cite{CaSi2}. The situation of KGe$_2$ is akin to that of KC$_8$ \cite{KC8} (a superconductor) and of electrostatically doped graphene, where only minor differences related to the intercalation states are present close to the Fermi energy at the $\Gamma$ point.
It is of interest to determine whether KGe$_2$ exhibits an interlayer state like the intercalated graphite superconductors. The presence of such a state in superconducting GICs was first recognized as a striking ``coincidence'' in Ref. \citenum{key-3} and is characterized as a hybridized band with significant charge density in the interlayer region. In the case of KGe$_2$, as shown by the small filled circles in Fig. \ref{fig_bands}(b), there is no occupied band that is dominated by an interlayer nature. Instead, the occupied band in Fig. \ref{fig_bands}(b) has only a few momentum points with interlayer character, as determined by inspecting the wave functions.
Figure \ref{fig_ProjPHDOS}(a) displays the isotropic Eliashberg spectral function $\alpha^2 F(\omega)$ for KGe$_2$. $\alpha^2 F(\omega)$ displays a large dominant peak centered around 50 meV, a second weaker peak centered around 120 meV, and a third weaker peak centered at 240 meV. The corresponding isotropic electron-phonon coupling strength is $\lambda = 1.90$. In order to understand the vibrational origin of these peaks, we display the atom-projected phonon density of states (PHDOS) in Fig. \ref{fig_ProjPHDOS}(b,c). The $\alpha^2 F(\omega)$ peak centered at 50 meV originates from Ge vibrations, mainly from the Ge out-of-plane modes. The peak centered at 120 meV originates from K and Ge modes. Regarding K, the main contributing mode is the out-of-plane component, followed by the planar modes in both the $x$ and $y$ directions. For the Ge contributing modes, again the modes in all directions contribute, with the out-of-plane modes contributing more than the planar modes. Finally, the peak centered at 240 meV does not have any K contribution. It is driven by Ge $z$ and $y$ modes. That third peak does not have an influence on the value of $\lambda$, as can be seen in the flattening of the $\alpha^2 F(\omega)$ curve beyond 200 meV.
\begin{figure}[h]
\includegraphics[width=80mm]{fig_ProjPHDOS}
\caption{(a) The Eliashberg function $\alpha^2 F(\omega)$ of KGe$_2$, the atom-projected phonon density of states (PHDOS) for the (b) K and the (c) Ge atoms, and the (d) phonon dispersion along the high symmetry points of the Brillouin zone.}
\label{fig_ProjPHDOS}
\end{figure}
The $\alpha^2 F(\omega)$ of KGe$_2$ is different from that of CaC$_6$,\cite{key-4} which has three primary peaks: the low-energy peak is contributed mainly by Ca planar modes, the second peak by C out-of-plane modes, and the third peak by C planar modes. First, the planar modes in both directions contribute equally in CaC$_6$ due to the lattice symmetry, unlike the case of KGe$_2$, where the positions of the two Ge atoms with respect to the K atom within the unit cell are not symmetric. Second, given that the K atom is lighter than the Ge atom, the first Ge peak has a lower energy than the K peak, which is opposite to the case of CaC$_6$, where the first C peak has a higher energy than the Ca peak. Third, the lowest-energy peak in CaC$_6$ is almost purely Ca dominated, mostly of planar modes, whereas the K peak has a mixture of K and Ge modes, the majority of which are out-of-plane modes.
\begin{figure}[h]
\includegraphics[width=80mm]{fig_Delta}
\caption{The superconducting gap function $\Delta(\omega)$ at various values for $T$ for the (a) equilibrium and (b) 5\% tensile-strained KGe$_2$. The curved lines show the trend as the values of $\Delta(\omega)$ converge towards zero.}
\label{fig_Delta}
\end{figure}
The calculated electron-phonon coupling is much larger than the values reported for MgB$_2$ and the intercalated graphite compounds, and is closer to the range of values of strong-coupling superconductors such as Pb \cite{EPW}. We display a list of these values for 2D intercalated compounds in Table~\ref{tab:comparison}. The reason for this difference is the large buckling of the germanene layer, which leads to the enhanced EPC. The situation in KGe$_2$ is in stark contrast to the CaGe$_2$ compound, which has a very small EPC of 0.19 and does not superconduct.
Within the anisotropic Eliashberg formalism \cite{EPW}, the value of $T_c$ is obtained by examining the gap function $\Delta$ as the temperature parameter $T$ is changed. The temperature at which $\Delta$ vanishes marks the disappearance of the superconducting state and is identified with $T_c$. This identification is performed by plotting the $\Delta(\omega)$ function at various values of $T$ and inspecting the trend of the function as its peaks converge to zero, as discussed in \cite{EPW} and displayed in Fig. \ref{fig_Delta}. This figure shows that $\Delta(\omega)$ converges to zero as $T$ approaches $T_c \sim 11$ K. The convergence trend is indicated by the lines drawn through the various $\Delta(\omega)$ curves.
In the isotropic limit, the Eliashberg formalism reduces to the Allen-Dynes formulation \cite{Allen}, in which $T_c$ is given by
\begin{equation}
T_c = {\omega_{log}\over 1.2} \mbox{exp} \left [
{-1.04(1+\lambda)\over \lambda(1-0.62\mu^*)-\mu^*}\right ]
\label{allendynes}
\end{equation}
\noindent where $\mu^*$ is the Coulomb pseudopotential, for which we use $\mu^*=0.16$, and the logarithmic average of the phonon frequencies, $\omega_{log}$, is given by
\begin{equation}
\omega_{log} = \mbox{exp} \left [ {2\over\lambda} \int {d\omega\over\omega}
\alpha^2F(\omega) \mbox{log}\omega \right ]
\end{equation}
\noindent The value of $T_c$ calculated for KGe$_2$ using the isotropic Allen-Dynes formalism (Eq. \ref{allendynes}) is 5.8 K, which is almost half of the value predicted by solving the full Eliashberg equation. This is because of the significance of the momentum anisotropy in KGe$_2$, which is also the case in MgB$_2$.
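For reference, the arithmetic behind this estimate is straightforward to reproduce. The sketch below evaluates Eq.~(\ref{allendynes}) for given $\lambda$, $\mu^*$, and $\omega_{log}$; since $\omega_{log}$ is not quoted explicitly above, the value in the example is only a placeholder and should be replaced by the logarithmic average computed from $\alpha^2F(\omega)$:
\begin{verbatim}
# Sketch of the Allen-Dynes estimate from the text. lambda and mu* follow
# the values used above; omega_log (in kelvin) is a placeholder, as its
# value is not quoted in the text.
import numpy as np

def allen_dynes_tc(lam, mu_star, omega_log):
    """Allen-Dynes T_c in kelvin, with omega_log given in kelvin."""
    expo = -1.04 * (1.0 + lam) / (lam * (1.0 - 0.62 * mu_star) - mu_star)
    return omega_log / 1.2 * np.exp(expo)

T_c = allen_dynes_tc(lam=1.9, mu_star=0.16, omega_log=50.0)  # placeholder omega_log
\end{verbatim}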
\begin{table}
\caption{The electron-phonon coupling strength $\lambda$ and the predicted and experimental superconducting critical temperatures $T_c$ (in K) for a number of 2D-intercalated compounds.}
\label{tab:comparison}
\begin{tabular}{|c|c|c|c|c|}
\hline
Structure & $\lambda$ & $T_c$ (K) & $T_c$ exp (K) & Ref. \\
\hline
KGe$_2$ & 1.9 & 11 & & Present work \\
MgB$_2$ & 0.748 & 50 & 39 & \citenum{EPW} \\
CaC$_6$ & 0.83 & 11 & 11.5 &\citenum{key-4} \\
CaC$_2$ [95 GPa] & 0.564 & 9.8 & & \citenum{CaC2}\\
LiB & 0.62 & 10-15 & & \citenum{LiB} \\
\hline
\end{tabular}
\end{table}
Another model for the superconductivity of intercalated compounds is that proposed by Al-Jishi \cite{Jishi} for GICs. This is a simplified, BCS-based, purely electronic model in which the graphite $\pi$ and intercalatant $s$ states are coupled via a coupling parameter. We can easily extend this model to KGe$_2$, where the Hamiltonian couples the germanene $\pi$ and the K $4s$ states, and we obtain the equation
\begin{equation}
kT_c \sim \hbar \omega_c \exp\left(-\frac{1}{\arrowvert \lambda \arrowvert\sqrt{N_{\pi}(0)N_{4s}(0)}}\right),
\label{jishi}
\end{equation}
\noindent where $\omega_c$ is the Debye frequency, $N_{\pi}(0)$ and $N_{4s}(0)$ are the density of states of the $\pi$ and the K $4s$ states at the Fermi level, respectively. The critical feature in this equation is that $T_c$ would become zero when one of $N_{\pi}(0)$ and $N_{4s}(0)$ is zero. In our KGe$_2$, full charge transfer occurs from the K atom to the germanene layer, which should lead to a $T_c\sim 0$. The reason is that, with increasing charge gain in the graphite layers in GICs, the 2D electrons screen the polar coupling between the intercalatant atoms \cite{Takada}. This, however, is not the case in KGe$_2$, owing to the large electron-phonon coupling contribution arising from the electron-doped germanene layers; that is, superconductivity here is $\pi$-driven (most of $\lambda$ is contributed by the germanene layers), in contrast to the case of GICs where superconductivity is interlayer-driven (most of $\lambda$ is contributed by the Ca atoms) \cite{key-4}.
\section{Conclusions}
We predict a superconducting temperature of $\sim 11$ K in a novel buckled intercalated compound, KGe$_2$. The compound has a large electron-phonon coupling of 1.9, which decreases by 11\% when a positive planar tensile strain of 5\% is applied. This is accompanied by a slight increase in $T_c$ to $\sim 12$ K. That is, the strong electron-phonon coupling results from the buckled structure of the germanene layers. Despite being an intercalated van der Waals material like the intercalated graphite superconductors, KGe$_2$ does not possess an occupied interlayer state.
This research was funded by the Australian Government through the
Australian Research Council (ARC DP160101301). Theoretical
calculations were undertaken with resources provided by the National
Computational Infrastructure (NCI) supported by the Australian
Government and by the Pawsey Supercomputing Centre funded by the
Australian Government and the Government of Western Australia.
\section*{Introduction}
We assume that algebras are finite dimensional over a field $K$.
For $n \geq 1$, an $A$-module $M$ is called an \emph{$n$-cluster tilting module} if it satisfies:
\begin{align*}
\operatorname{\mathrm{add}}(M) &= \{X \in \mod-A \mid \operatorname{Ext}_A^i(M,X)=0 \ \text{for} \ 1 \leq i \leq n-1 \} \\
&= \{X \in \mod-A \mid \operatorname{Ext}_A^i(X,M)=0 \ \text{for} \ 1 \leq i \leq n-1 \}.
\end{align*}
We remark that in some references, such as \cite{EH}, an $n$-cluster tilting module is called a maximal $(n-1)$-orthogonal module.
The concept of $n$-cluster tilting modules was introduced by Iyama in \cite{Iya} and \cite{Iya4}. It has found several important applications, for instance in the theory of cluster algebras, see \cite{GLS}.
Cluster tilting modules are especially important for selfinjective algebras, where recent methods make it possible to construct many examples related to other structures in algebra and combinatorics; see for example \cite{DI} and \cite{CDIM}.
One of the most important classes of selfinjective algebras are group algebras. This leads to the following natural question:
\begin{question}
When does a block of a group algebra have a cluster tilting module?
\end{question}
In \cite{EH}, Erdmann and Holm showed that selfinjective algebras with a cluster tilting module have complexity at most one (recently it was shown in \cite{MV} that this result does not hold for non-selfinjective algebras). Erdmann and Holm used this result in \cite[Section 5.3]{EH} to show that a block of a group algebra can only have a cluster tilting module when it is representation-finite or Morita equivalent to an algebra of quaternion type.
Every representation-finite block of a group algebra is derived equivalent to a symmetric Nakayama algebra and recently a complete classification for the existence of cluster tilting modules was obtained in \cite[Section 5]{DI} for selfinjective Nakayama algebras.
For algebras of quaternion type (which are always of infinite representation type) it is however unknown whether they can have cluster tilting modules and this was posed in \cite[Section 5.3]{EH} as an open question.
There is no universal method to construct cluster tilting modules or to show that they do not exist. The fact that the classification of indecomposable modules for algebras of quaternion type is still not known makes the search and verification of cluster tilting modules especially hard. We remark that the existence of an $n$-cluster tilting module is a Morita invariant.
In this article we show that the algebra of quaternion type $Q(3 \mathcal{A})_2^2$ has a $3$-cluster tilting module. This algebra is Morita equivalent to the principal block of the group algebra of $SL(2,5)$ over a splitting field of characteristic two.
Our main result is as follows:
\begin{theorem*}
Let $A$ be the principal block of the group algebra $KG$ for $G=SL(2,5)$ over a splitting field of characteristic two.
Then $A$ has a $3$-cluster tilting module.
\end{theorem*}
This gives the first example of a cluster tilting module of a representation-infinite block of a group algebra and it gives a positive answer to the question of Erdmann and Holm about the existence of cluster tilting modules for algebras of quaternion type.
We remark that we found the $3$-cluster tilting module in the main result by experimenting with the GAP-package \cite{QPA}. The proof also uses the calculation of quiver and relations for an endomorphism ring which was obtained with the aid of the computer.
\section{An example of a 3-cluster tilting module for the algebra of quaternion type $Q(3 \mathcal{A})_2^2$}
We assume that all algebras are finite dimensional over a field $K$ and all modules are finite dimensional right modules unless stated otherwise. $J$ will denote the Jacobson radical of an algebra and $D=\operatorname{Hom}_K(-,K)$ the natural duality. We assume that the reader is familiar with the basics of representation theory and homological algebra of finite dimensional algebras and refer for example to the textbook \cite{SY}.
The \emph{global dimension} $\operatorname{gldim} A$ of an algebra $A$ is defined as the supremum of all projective dimensions of the simple $A$-modules. It is well known that the global dimension of $A$ coincides with the global dimension of the opposite algebra $A^{op}$, see for example \cite[Exercise 4.1.1]{W}.
The \emph{dominant dimension} $\operatorname{domdim} A$ of $A$ is defined as the minimal $n$ such that $I_n$ is not projective (or infinite if no such $n$ exists), where
$$0 \rightarrow A \rightarrow I_0 \rightarrow I_1 \rightarrow \cdots $$
is a minimal injective coresolution of the regular $A$-module $A$.
The dominant dimension of $A$ coincides with the dominant dimension of the opposite algebra $A^{op}$, see \cite[Theorem 4]{M}.
We will also need the following lemma on the behaviour of the global and dominant dimension under extensions of the ground field. For a field extension $F$ of $K$, we denote by $A_F:=A \otimes_K F$ the $F$-algebra which is obtained from $A$ by the field extension.
\begin{lemma} \label{field extensionlemma}
Let $A$ be a finite dimensional algebra over the field $K$ and let $F$ be a field extension of $K$.
\begin{enumerate}
\item $\operatorname{domdim} A=\operatorname{domdim} A_F$.
\item If $A/J$ is separable (where $J$ denotes the Jacobson radical of $A$), then
$\operatorname{gldim} A= \operatorname{gldim} A_F$.
\end{enumerate}
\end{lemma}
\begin{proof}
\leavevmode
\begin{enumerate}
\item See \cite[Lemma 5]{M}.
\item See \cite[Corollary 18]{ERZ}.
\end{enumerate}
\end{proof}
Recall that a module $M$ is a \emph{generator} of $\mod-A$ when every indecomposable projective $A$-module is a direct summand of $M$ and $M$ is a \emph{cogenerator} of $\mod-A$ when every indecomposable injective $A$-module is a direct summand of $M$.
\begin{theorem} \label{clustertiltingtheorem}
Let $A$ be a non-semisimple connected finite dimensional algebra with an $A$-module $M$ that is a generator and cogenerator of $\mod-A$.
Then $M$ is an $n$-cluster tilting module if and only if $B:=\operatorname{End}_A(M)$ is a higher Auslander algebra of global dimension $n+1$, that is $B$ has global dimension equal to $n+1$ and dominant dimension equal to $n+1$.
\end{theorem}
\begin{proof}
See \cite[Theorem 2.6]{Iya2} for an elementary proof.
\end{proof}
We refer to \cite[Section VII]{E} for the precise definition of algebras of quaternion type, which arise in the study of blocks of group algebras with quaternion defect groups.
The tables starting at page $303$ of \cite{E} give quiver and relations of algebras of quaternion type.
In this article, we only need the algebra of quaternion type $Q(3 \mathcal{A})_2^2$, which we describe next. Let $K$ be a field of characteristic two.
Let $A=KQ/I$ be the following quiver algebra where $Q$ is given by
\begin{center}
\begin{tikzpicture}
\node (v1) at (-4,0) {$\bullet^{1}$};
\node (v2) at (-2,0) {$\bullet^{2}$};
\node (v3) at (0,0) {$\bullet^{3}$};
\draw [-open triangle 45] (v1) edge[bend right] node[below] {$b$} (v2);
\draw [-open triangle 45] (v2) edge[bend right] node[above] {$y$} (v1);
\draw [-open triangle 45] (v3) edge[bend right] node[above] {$n$} (v2);
\draw [-open triangle 45] (v2) edge[bend right] node[below] {$d$} (v3);
\end{tikzpicture}
\end{center}
and the relations are given by
$$I=\langle byb-bdnybdn,yby-dnybdny,ndn-nybdnyb,dnd-ybdnybd,bybd,ndny\rangle.$$
This is the algebra of quaternion type $Q(3 \mathcal{A})_2^2$ and this algebra is Morita equivalent to the principal block of the group algebra $FG$ where $G=SL(2,5)$ is the special linear group of $2 \times 2$-matrices over the field with five elements and $F$ is a splitting field of characteristic two; see for example page $110$ of \cite{H} and section $7$ of \cite{E2}.
The algebra $A$ is symmetric of period $4$ and $\dim_K(A)=36$.
The dimension vectors of the indecomposable projective $A$-modules $P_1, P_2$ and $P_3$ are respectively given by~$[4,4,2],$~$[4,8,4]$ and~$[2,4,4]$. We define the following $A$-modules:
\begin{enumerate}
\item Let $M_1= e_3 A/nA$, which has dimension vector $[0,0,1]$.
\item Let $M_2=e_3 A/nybdnyA$, which has dimension vector $[1,3,3]$.
\item Let $M_3=e_3A/ nyA$, which has dimension vector $[0,1,2]$.
\item Let $M_4=e_2A/yA$, which has dimension vector $[1,4,2]$.
\end{enumerate}
Let $M:=A \oplus M_1 \oplus M_2 \oplus M_3 \oplus M_4$. Note that every indecomposable summand of $M$ has simple top.
We fix $A$ and $M$ as above for the rest of this article.
We show that $M$ is a $3$-cluster tilting module.
\begin{theorem}
Let $A$ be the algebra of quaternion type $Q(3 \mathcal{A})_2^2$ over a field $F$ with characteristic two.
Then $M$ is a 3-cluster tilting module.
\end{theorem}
\begin{proof}
Clearly $M$ is a generator and cogenerator of $\mod-A$.
We show that $B:=\operatorname{End}_A(M)$ has global dimension $4$ and dominant dimension $4$. Then, $M$ is a $3$-cluster tilting module by Theorem \ref{clustertiltingtheorem}.
First assume that $K$ has two elements.
The following QPA program calculates quiver and relations of $B^{op}$ over the field with two elements and shows that $B^{op}$ has global dimension and dominant dimension equal to $4$. We remark that GAP applies functions from the right. Thus, it calculates the opposite algebra of the endomorphism ring of $M$.
\begin{tiny}
\begin{verbatim}
LoadPackage("qpa");
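# Set up the quiver, path algebra, and relations of the quaternion-type algebra Q(3A)_2^2 over GF(2):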
k:=2;F:=GF(2);Q:=Quiver(3,[[1,2,"b"],[2,3,"d"],[2,1,"y"],[3,2,"n"]]);
kQ:=PathAlgebra(F,Q);AssignGeneratorVariables(kQ);
rel:=[b*y*b-(b*d*n*y)^(k-1)*b*d*n,y*b*y-(d*n*y*b)^(k-1)*d*n*y,
n*d*n-(n*y*b*d)^(k-1)*n*y*b,
d*n*d-(y*b*d*n)^(k-1)*y*b*d,b*y*b*d,n*d*n*y];
A:=kQ/rel; B:=Basis(A);U:=Elements(B);Display(U);n:=Size(B);
UU:=[];for i in [4..n] do Append(UU,[U[i]]);od;
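# Each module Mi below is the cokernel of an injective envelope (a cosyzygy) of the cyclic right module generated by a chosen basis element ti: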
t1:=UU[4];
M1:=RightAlgebraModuleToPathAlgebraMatModule(RightAlgebraModule(A, \*, RightIdeal(A,[t1])));
N1:=CoKernel(InjectiveEnvelope(M1));M1:=N1;
t2:=UU[33];M2:=RightAlgebraModuleToPathAlgebraMatModule(RightAlgebraModule(A, \*, RightIdeal(A,[t2])));
N2:=CoKernel(InjectiveEnvelope(M2));M2:=N2;
t3:=UU[10];
M3:=RightAlgebraModuleToPathAlgebraMatModule(RightAlgebraModule(A, \*, RightIdeal(A,[t3])));
N3:=CoKernel(InjectiveEnvelope(M3));M3:=N3;
t4:=UU[3];
M4:=RightAlgebraModuleToPathAlgebraMatModule(RightAlgebraModule(A, \*, RightIdeal(A,[t4])));
N4:=CoKernel(InjectiveEnvelope(M4));M4:=N4;
N:=DirectSumOfQPAModules([N1,N2,N3,N4]);
projA:=IndecProjectiveModules(A);RegA:=DirectSumOfQPAModules(projA);
M:=DirectSumOfQPAModules([RegA,N]);
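# Endomorphism algebra of M as a quiver algebra (QPA's conventions yield the opposite algebra, as remarked above), followed by its global and dominant dimension (search bound 33):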
B:=EndOfModuleAsQuiverAlgebra(M)[3];
QQ:=QuiverOfPathAlgebra(B);Display(QQ);rel:=RelatorsOfFpAlgebra(B);
gd:=GlobalDimensionOfAlgebra(B,33);dd:=DominantDimensionOfAlgebra(B,33);
\end{verbatim}
\end{tiny}
\noindent We observe that $B^{op}=K\hat{Q}/\hat{I}$ is a quiver algebra where $\hat{Q}$ is given by
\begin{center}
\begin{tikzpicture}[scale=0.65]
\node (v1) at (1,2.5) {$\bullet^{1}$};
\node (v2) at (-7,5.5) {$\bullet^{2}$};
\node (v3) at (-4,2.5) {$\bullet^{3}$};
\node (v4) at (-5.5,4) {$\bullet^{4}$};
\node (v5) at (4.5,-6) {$\bullet^{5}$};
\node (v6) at (1,-2.5) {$\bullet^{6}$};
\node (v7) at (-4,-2.5) {$\bullet^{7}$};
\draw [-open triangle 45] (v1) edge node[above] {$\alpha_1$} (v3);
\draw [-open triangle 45] (v1) edge[bend right] node[left] {$\alpha_2$} (v6);
\draw [-open triangle 45] (v2) edge[bend left] node[above] {$\alpha_3$} (v1);
\draw [-open triangle 45] (v2) edge[bend right] node[below left] {$\alpha_4$} (v3);
\draw [-open triangle 45] (v3) edge[bend right] node[above right] {$\alpha_5$} (v2);
\draw [-open triangle 45] (v3) edge node[below,pos=0.65] {$\alpha_6$} (v4);
\draw [-open triangle 45] (v3) edge node[right] {$\alpha_7$} (v7);
\draw [-open triangle 45] (v4) edge node[above,pos=0.05] {$\alpha_8$} (v2);
\draw [-open triangle 45] (v5) edge[bend right] node[above right] {$\alpha_9$} (v6);
\draw [-open triangle 45] (v6) edge[bend right] node[right] {$\alpha_{10}$} (v1);
\draw [-open triangle 45] (v6) edge[bend right] node[left] {$\alpha_{11}$} (v5);
\draw [-open triangle 45] (v6) edge[bend right] node[above] {$\alpha_{12}$} (v7);
\draw [-open triangle 45] (v7) edge[bend left] node[below left] {$\alpha_{13}$} (v2);
\draw [-open triangle 45] (v7) edge[bend right] node[below] {$\alpha_{14}$} (v6);
\end{tikzpicture}
\end{center}
\noindent and the relations are given by
\begin{align*}
\hat{I}=\langle &\alpha_1\alpha_6, \alpha_2\alpha_{11}, \alpha_1\alpha_7 + \alpha_2\alpha_{12}, \alpha_4\alpha_5, \alpha_6\alpha_8 + \alpha_7\alpha_{13}, \alpha_8\alpha_3, \alpha_9\alpha_{10}, \alpha_2\alpha_{10} + \alpha_1\alpha_5\alpha_3, \alpha_2\alpha_{10}\alpha_1, \alpha_3\alpha_1\alpha_5,\\ &\alpha_3\alpha_2\alpha_{10},\alpha_3\alpha_2 + \alpha_4\alpha_7\alpha_{14}, \alpha_5\alpha_3\alpha_1, \alpha_5\alpha_4\alpha_6, \alpha_8\alpha_4\alpha_6, \alpha_9\alpha_{12}\alpha_{13}, \alpha_{12}\alpha_{13} + \alpha_{10}\alpha_1\alpha_5,\\ &\alpha_{10}\alpha_1\alpha_7 + \alpha_{11}\alpha_9\alpha_{12}, \alpha_{10}\alpha_2\alpha_{10} + \alpha_{12}\alpha_{13}\alpha_3, \alpha_{14}\alpha_{12} + \alpha_{13}\alpha_4\alpha_7, \alpha_{14}\alpha_{10}\alpha_2 + \alpha_{14}\alpha_{11}\alpha_9,\\ &\alpha_{13}\alpha_3\alpha_2 + \alpha_{14}\alpha_{12}\alpha_{14}, \alpha_1\alpha_7\alpha_{14}\alpha_{12}, \alpha_2\alpha_{10}\alpha_2\alpha_{10}, \alpha_3\alpha_1\alpha_7\alpha_{14}, \alpha_3\alpha_1 + \alpha_4\alpha_6\alpha_8\alpha_4,\\ &\alpha_7\alpha_{14}\alpha_{12} + \alpha_6\alpha_8\alpha_4\alpha_7, \alpha_5\alpha_4 + \alpha_7\alpha_{14}\alpha_{10}\alpha_1, \alpha_7\alpha_{14}\alpha_{12}\alpha_{13}, \alpha_9\alpha_{12}\alpha_{14}\alpha_{12}, \alpha_{10}\alpha_2\alpha_{10}\alpha_2 + \alpha_{11}\alpha_9\alpha_{11}\alpha_9,\\ &\alpha_{12}\alpha_{13}\alpha_4\alpha_6, \alpha_{12}\alpha_{14}\alpha_{12}\alpha_{13}, \alpha_{14}\alpha_{12}\alpha_{13} + \alpha_{13}\alpha_4\alpha_6\alpha_8, \alpha_{14}\alpha_{10}\alpha_2\alpha_{10}, \alpha_{13}\alpha_3\alpha_1 + \alpha_{14}\alpha_{12}\alpha_{13}\alpha_4,\\ &\alpha_{10}\alpha_2 + \alpha_{11}\alpha_9 + \alpha_{12}\alpha_{14}\alpha_{10}\alpha_1\alpha_7\alpha_{14}, \alpha_{13}\alpha_3 + \alpha_{14}\alpha_{10}\alpha_1\alpha_7\alpha_{14}\alpha_{10}\rangle.
\end{align*}
\noindent With $B^{op}$ also $B$ has global dimension and dominant dimension equal to $4$ and thus $M$ is a $3$-cluster tilting module.
Now let $F$ be an arbitrary field with characteristic two, which is an extension of the field $K$ with two elements.
We have
$$\operatorname{End}_{A_F}(M \otimes_K F) \cong \operatorname{End}_A(M) \otimes_K F \cong B_F ,$$
which also has dominant and global dimension equal to $4$. This follows from Lemma \ref{field extensionlemma} and the fact that~$B/J$ is separable, since $B$ is a quiver algebra.
Thus $M \otimes_K F$ is also a 3-cluster tilting module of $A_F$.
\end{proof}
We remark that it took the supercomputer ``nenepapa'' of the TU Kaiserslautern $105$ hours to compute the endomorphism ring of $M$. The specifications of this supercomputer are as follows: Compute-Server Linux (Gentoo): Dell PowerEdge R730, 2x Intel Xeon E5-2697AV4 2.6 GHz, Turbo 3.60 GHz, 40 MB SmartCache, 32 Cores, 64 Threads, 768 GB RAM.\newline\newline
\indent As remarked earlier, the principal block of the group algebra $KG$ for $G=SL(2,5)$ over a splitting field $K$ of characteristic two is Morita equivalent to the algebra of quaternion type $Q(3 \mathcal{A})_2^2$. As a corollary of the previous Theorem we obtain our main result:
\begin{corollary}
Let $G=SL(2,5)$ and $K$ be a field of characteristic two that is a splitting field for $KG$.
Then the principal block of $KG$ has a $3$-cluster tilting module.
\end{corollary}
Note that not every algebra of quaternion type has a cluster tilting module.
In fact, the group algebra $KG$ of the quaternion group $G$ of order $8$ over a field $K$ of characteristic two has no cluster tilting modules, since it is representation-infinite and we have $\operatorname{Ext}_{KG}^1(M,M) \neq 0$ for every non-projective $KG$-module $M$ by a result of Tachikawa, see \cite[Theorem 8.6]{T}.
\section*{Acknowledgements}
We thank Karin Erdmann for having informed us in private communication that she has also found a~$3$-cluster tilting module for another algebra of quaternion type which is not a block of a group algebra. We thank Thorsten Holm for providing a reference to his habilitation thesis.
Bernhard B\"ohmler gratefully acknowledges funding by the DFG (SFB/TRR 195). Ren{\'e} Marczinzik gratefully acknowledges funding by the DFG (with project number 428999796). We profited from the use of the GAP-package \cite{QPA}.
\section{Introduction}
Flowing granular materials tend to segregate by particle size, density, or other physical properties, which is a phenomenon crucial to many industrial and geophysical processes \cite{ottino2000mixing,ottino2008mixing,frey2009river}. Despite decades of research on this topic, fundamental aspects of granular flow-driven segregation remain elusive, and state-of-the-art continuum segregation models largely rely instead on {\it ad hoc} or configuration-specific closure schemes~\citep{gray2018particle,umbanhowar2019modeling,thornton2021brief}. Recent efforts characterizing forces on single intruder particles in otherwise species-monodisperse granular flows have advanced our understanding of segregation at the particle level~\citep{tripathi2011numerical,guillard2016scaling,jing2017micromechanical,van2018segregation,staron2018rising,jing2020rising} and led to segregation force models applicable across flow configurations \citep{guillard2016scaling,jing2021unified}. However, it is unclear whether or how single intruder results can be applied to granular mixtures with finite species concentration \citep{tripathi2021theory,rousseau2021bridging}.
More fundamentally, the physical mechanisms governing transitions in segregation behaviors between intruder and mixture regimes as the species concentration is varied, remain unresolved.
In this Letter, we show that particle size segregation in sheared granular flow exhibits a continuous transition from the single intruder limit to finite mixture concentrations. To do so, we extend the virtual-spring-based ``force meter'' approach for a single intruder particle~\citep{guillard2016scaling,van2018segregation,jing2020rising} to size-bidisperse mixtures of arbitrary species concentration and use it to characterize the dependence of the segregation force on concentration for various particle size ratios in controlled, constant-shear-rate flow simulations, see Fig.~\ref{scheme}(a). We find that the segregation force exhibits a plateau at small concentrations and decreases monotonically above a critical concentration, indicating a transition from non-interacting intruders to cooperative phenomena in mixtures, which is reminiscent of previously observed asymmetric concentration dependence in the segregation flux \cite{van2015underlying,jones2018asymmetric}. We also show that these results can provide physics-based closures for connecting segregation models with continuum theories for granular mixtures.
\begin{figure}
\centerline{\includegraphics[width=3.5 in]{figure1.pdf}}
\caption{(a) Large (4\,mm, {\color{blue}{blue}}) and small (2\,mm, {\color{red}{red}}) particles ($c_l=c_s=0.5$) in a controlled, constant-shear-rate flow.
(b) Scaled restoring force vs.\ time for large (\textcolor{blue}{blue}) and small (\textcolor{red}{red}) particles. Data points sampled at 0.01\,s intervals; bold curves are averages using a 1\,s long sliding window. Horizontal lines are averages from 2\,s to 5\,s.
}
\label{scheme}
\end{figure}
{\it Methods}. An in-house discrete element method (DEM) code running on CUDA-enabled GPUs~\cite{isner2020axisymmetric} is used to simulate a size-bidisperse particle mixture with volume concentration $c_i$, diameter $d_i$, and density $\rho_i=1$\,g/cm$^3$ ($i = l$, $s$ for large or small particles, respectively) sheared in a streamwise ($x$) and spanwise ($y$) periodic domain of length $L=35d_l$, width $W=10d_l$, and height $H=25d_l$ to $40d_l$ (varied as needed) in the presence of gravity ($g=9.81\,$m/s$^2$, in the negative $z$-direction), see Fig.~\ref{scheme}(a). The standard linear spring-dashpot model \cite{cundall1979discrete} is used to resolve particle-particle and particle-wall contacts of spherical particles using a friction coefficient of 0.5, a restitution coefficient of 0.2, and a binary collision time of 0.15\,ms. Changing bounding walls from smooth to bumpy (randomly attached particles) does not affect the results. Large ($d_l=4$\,mm) and small particles ($d_s$ varied to adjust the size ratio, $d_l/d_s$) have a $\pm10$\% uniform size distribution to minimize layering \cite{staron2014segregation} (increasing the size variation to $\pm20$\% does not alter the results).
A constant shear rate $\dot\gamma=U/H$ is imposed on the flow by applying a streamwise stabilizing force, $F_{\mathrm{stabilize},k}=K_s (u_k-\dot\gamma z_k)$, on each particle $k$ at every simulation time step, where $u_k$ is the particle streamwise velocity, $z_k$ is the vertical particle position, and $K_s$ is a gain parameter~\cite{Lerner2012unified,clark2018critical,fry2018effect,saitoh2019nonlocal,duan2020segregation,jing2020rising}. This stabilizing force reduces the granular temperature in the streamwise direction but does not alter the segregation~\cite{jing2021unified}.
An overburden pressure equal to the pressure at a depth of $H_w=20d_l$ (i.e., $P_{\mathrm{wall}}=\rho \phi g H_w$ where the bulk solid fraction $\phi$ varies from 0.55 to 0.6 depending on flow conditions) is applied using a massive flat frictional top wall that is free to move vertically (fluctuates by $\pm2\%$ or less after an initial rapid dilation of the particles at flow onset) and moves horizontally at a velocity determined by the constant shear rate velocity profile.
A spring-like vertical restoring force proportional to the center of mass distance between the two species is applied uniformly to all particles of each species $i$ at every simulation time step in order to characterize the particle forces while preventing segregation and the resulting changes in local concentration. This method is inspired by the virtual spring-based force technique used in single intruder DEM simulations to measure the segregation force \cite{guillard2016scaling,van2018segregation,jing2020rising}. The restoring force is $F_{\mathrm{res},i}=-K_r( \bar z_{i}-\bar z_j)/N_i$, where $\bar z_{i}={\sum_{k\in i}^{N_i} z_kV_k}/{\sum_{k= 1}^{N} V_k}$, $V_k$ is the volume of particle $k$, subscript $j$ indicates the other species, and $N_i$ and $N$ are the number of particles of species $i$ and the total number of particles, respectively. Since the applied restoring forces are internal forces,
\begin{equation}
F_{\mathrm{res},i} N_i+F_{\mathrm{res},j}N_j=0
\label{eq1}
\end{equation}
and the bulk flow behavior (e.g., shear flow, bulk pressure) is unaltered.
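
For concreteness, the per-time-step restoring-force update defined above can be sketched as follows. This is an illustrative NumPy translation of the definitions in the text, not the in-house CUDA implementation; the function and array names are our own.
\begin{verbatim}
import numpy as np

def restoring_forces(z, V, is_large, K_r):
    """Species restoring forces from the virtual-spring definition.

    z, V     : particle heights and volumes (1D arrays, one entry per particle)
    is_large : boolean array, True for large particles, False for small ones
    K_r      : gain parameter of the virtual spring
    """
    V_tot = V.sum()
    zbar_l = np.sum(z[is_large] * V[is_large]) / V_tot
    zbar_s = np.sum(z[~is_large] * V[~is_large]) / V_tot
    N_l, N_s = is_large.sum(), (~is_large).sum()
    F_res_l = -K_r * (zbar_l - zbar_s) / N_l   # applied to every large particle
    F_res_s = -K_r * (zbar_s - zbar_l) / N_s   # applied to every small particle
    # N_l*F_res_l + N_s*F_res_s = 0, consistent with Eq. (1)
    return F_res_l, F_res_s
\end{verbatim}
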
Figure~\ref{scheme}(b) plots the instantaneous restoring force scaled by particle weight, $F_{\mathrm{res},i}/m_ig$, at 0.01\,s intervals.
The scaled restoring forces for large (blue) and small (red) particles are equal and opposite for $c_l=c_s=0.5$ due to force balance, which can be written as $c_lF_{\mathrm{res},l}/m_lg+c_sF_{\mathrm{res},s}/m_sg=0$ based on Eq.~(\ref{eq1}), noting that particle mass $m_i=\rho V_i$ and species volume concentration $c_i=N_i V_i/V_{\mathrm{tot}}$, where $V_{\mathrm{tot}}$ is the total particle volume. The time average $F_{\mathrm{res},i}/m_ig$ over 1\,s time windows (bold curve) remains relatively constant 2\,s after flow onset, although force fluctuations occur due to the stochastic nature of granular flows. In addition, varying the shear rate $\dot \gamma$, the layer thickness $H$, or the gain parameters $K_s$ and $K_r$ has minimal influence on $F_{\mathrm{res},i}/m_ig$, indicating that the restoring force is independent of the details of the flow geometry and control parameters, and that its effect is uniform through the depth of the particle bed.
Since $F_{\mathrm{res},i}$, determined as the time-average of the reactive restoring force, balances the particle segregation force, $F_{\mathrm{seg},i}$, and the particle weight, $m_ig$,
\begin{equation}
F_{\mathrm{seg},i}=m_ig-F_{\mathrm{res},i}.
\label{equilibrium}
\end{equation}
$F_{\mathrm{seg},i}$ is always upward, opposing gravity. Since $F_{\mathrm{res},s}>0$ [Fig.~\ref{scheme}(b)], $F_{\mathrm{seg},s}<m_s g$ so small particles would sink without the restoring force; likewise, since $F_{\mathrm{res},l}<0$, $F_{\mathrm{seg},l}>m_l g$ so large particles would rise without the restoring force. Hereon, we scale the segregation force with the particle weight, $\hat F_i=F_{\mathrm{seg},i}/ m_i g$.
{\it Results}. The first key result of this paper is measurements of the dependence of the segregation force on concentration for various particle size ratios. Figure~\ref{fig2}(a-c) shows examples of $\hat F_{i}$ (symbols) vs.\ concentration for three size ratios ($d_l/d_s=1.3$, 2, and 3), where the error bars reflect fluctuations of the reactive restoring force in Fig.~\ref{scheme}(b). Although the error bars are largest at low concentrations, $\hat F_{i}$ clearly plateaus to a maximal (minimal) value approaching the single intruder limit $\hat F_{i,0}$ at $c_i\approx0$ and decreases (increases) monotonically with $c_i$ for large (small) particles. For both small and large species, $\hat F_{i,1}=1$ (or, equivalently, $F_{\mathrm{seg},i}=m_i g$) in the monodisperse limit ($c_i = 1$), since the segregation force exactly offsets the weight.
\begin{figure}
\centerline{\includegraphics[width=3.5 in]{figure2.pdf}}
\caption{
(a-c) Scaled particle segregation force $\hat F_i=F_{\mathrm{seg},i}/ m_i g$ vs.\ species concentration $c_i$ for large (\textcolor{blue}{$\Circle$}) and small (\textcolor{red}{$\triangle$}) particles. Error bars are the standard deviation for the time-average of $F_{\mathrm{res},i}$. Dashed and dotted curves are predictions of the single intruder segregation force model extended to mixtures [Eq.~(\ref{balance})]. Solid curves are fits of Eq.~(\ref{tanh}) using large particle data. Arrows indicate the concentration $c_{i,\mathrm{crit}}$ where $F_i$ deviates from the intruder limit, see text. (d) $\hat F_{i,0}$ from fits of Eq.~(\ref{tanh}) to large (\textcolor{blue}{blue}) and small (\textcolor{red}{red}) particle data. Dashed curve is a single intruder model based on single intruder simulations~\cite{jing2020rising}.
}
\label{fig2}
\end{figure}
Details of the dependence of $\hat F_{i}$ on $c_i$ vary with the size ratio, $d_i/d_j$. Specifically, Fig.~\ref{fig2}(a) for $d_l/d_s=1.3$ shows that the plateau in $\hat F_i$ for both species extends to a concentration of $c_{i}\approx0.3$. To quantify the extent of this plateau where particles of the lower concentration species are intruder-like (i.e., non-interacting with each other), we define $c_{i,\mathrm{crit}}$ as the critical concentration at which $\hat F_{i}-1$ deviates by 5\% from $\hat F_{i,0}-1$. For $c_i<c_{i,\mathrm{crit}}$, particles of species $i$ interact so infrequently with each other that the segregation force acting on them is essentially that for a single intruder particle. As $c_i$ increases beyond $c_{i,\mathrm{crit}}$, interactions between particles of species $i$ become significant, eventually resulting in the segregation force approaching the monodisperse limit as $c_i$ approaches one. This is the second key result: there is a plateau in species concentration over which the segregation force for that species is simply that of a single intruder particle. The plateau extends to higher concentrations as $d_i/d_j$ is increased, see Fig.~\ref{fig2}(b,c). Furthermore, $c_{l,\mathrm{crit}}\ge c_{s,\mathrm{crit}}$, indicating that large particles act like intruders at higher concentrations than small particles. For example, for $d_l/d_s=3$ [Fig.~\ref{fig2}(c)] the plateau for large particles extends to $c_{l,\mathrm{crit}}\approx 0.6$ nearly four times $c_{s,\mathrm{crit}}\approx 0.15$.
The total segregation force across both species for the entire system, which sums to the total particle weight, can be expressed
using Eqs.~(\ref{eq1}) and (\ref{equilibrium}), as
\begin{equation}
\hat F_i c_i+ \hat F_jc_j = 1.
\label{sumto1}
\end{equation}
Noting that $c_j=1-c_i$ and $\hat F_{j}=\hat F_{j,0}$ for $c_j\le c_{j,\mathrm{crit}}$ (or, equivalently, $c_i\ge 1-c_{j,\mathrm{crit}}$), we can predict $\hat F_{i}$ for mixtures not only in the intruder regime of species $i$, but also in the intruder regime of species $j$,
\begin{equation}
\hat F_{i}=\bigg\{
\begin{array}{lcl}
\hat F_{i,0} & & c_i\le c_{i,\mathrm{crit}}, \\
\big[1-\hat F_{j,0}(1-c_i)\big]/c_i & & c_i\ge 1-c_{j,\mathrm{crit}}.
\end{array}
\label{balance}
\end{equation}
Figures~\ref{fig2}(a-c) show that the predictions of Eq.~(\ref{balance}) for both large (dashed curves) and small particles (dotted curves) match the segregation force data (symbols) for large values of concentration when $\hat F_{i,0}$ and $\hat F_{j,0}$ are set to the intruder-limit values given in Fig.~\ref{fig2}(d) (described shortly). That is, selecting $\hat F_{l,0}$ at $c_i<c_{i,\mathrm{crit}}$ for large particles (dashed blue horizontal line) leads to the corresponding prediction for $\hat F_s$ at large $c_i$ (dashed red curve) and likewise for small particles (dotted red horizontal line and dotted blue curve). This approximation fits the data well, except in the middle of the concentration range where the initial deviation of the data from the horizontal line reflects the approximate value of $c_{i,\mathrm{crit}}$.
Though Eq.~(\ref{balance}) combined with $\hat F_{i,0}$ and $\hat F_{j,0}$ predicts $\hat F_i$ at the concentration extremes, a greater challenge is to model $\hat F_i$ in the intermediate transition regime (i.e., $c_{i,\mathrm{crit}} < c_i < 1-c_{j,\mathrm{crit}}$). Since $\hat F_i$ is bounded at both ends of the concentration range, we propose an empirical relation of the form
\begin{equation}
\hat F_{i}=\bigg\{
\begin{array}{lcl}
1+(\hat F_{i,0}-1) \tanh\bigg(\frac{1-\hat F_{j,0}}{\hat F_{i,0}-1} \frac{c_j}{c_i}\bigg), & d_i/d_j\ge 1, \\
1 -(\hat F_{j,0}-1) \tanh\bigg(\frac{1-\hat F_{i,0}}{\hat F_{j,0}-1} \frac{c_i}{c_j}\bigg)\frac{c_j}{c_i}, & d_i/d_j<1.
\end{array}
\label{tanh}
\end{equation}
Equation~(\ref{tanh}) applies to both large and small particles ($i=l$ for $d_i/d_j\ge 1$ and $i=s$ for $d_i/d_j<1$) and automatically satisfies the constraints that $\hat F_i=\hat F_{i,0}$ at $c_i=0$ and $\hat F_i=1$ at $c_i=1$.
Model coefficients $\hat F_{i,0}$ and $\hat F_{j,0}$ correspond to intruder segregation forces and can be obtained by fitting Eq.~(\ref{tanh}) for $d_i/d_j\ge 1$ to the data for large particles or, equivalently, Eq.~(\ref{tanh}) for $d_i/d_j< 1$ to the data for small particles with no significant differences in the fit quality or fit parameters.
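
As an illustrative sketch (not the fitting code used for the results reported here), Eq.~(\ref{tanh}) can be evaluated as below; the intruder-limit values in the example are placeholders, and the use of NumPy is our own choice. Fits of $\hat F_{l,0}$ and $\hat F_{s,0}$ to measured segregation forces could then be performed with a standard least-squares routine such as scipy.optimize.curve\_fit.
\begin{verbatim}
import numpy as np

def F_hat_large(c_l, F_l0, F_s0):
    """Scaled segregation force on large particles (tanh model, d_i/d_j >= 1)."""
    c_s = 1.0 - c_l
    return 1.0 + (F_l0 - 1.0) * np.tanh((1.0 - F_s0) / (F_l0 - 1.0) * c_s / c_l)

def F_hat_small(c_s, F_l0, F_s0):
    """Scaled segregation force on small particles (tanh model, d_i/d_j < 1)."""
    c_l = 1.0 - c_s
    return 1.0 - (F_l0 - 1.0) * np.tanh((1.0 - F_s0) / (F_l0 - 1.0) * c_s / c_l) * c_l / c_s

# Placeholder intruder-limit values (illustrative only, not fitted results):
F_l0, F_s0 = 1.8, 0.6
c_l = np.linspace(0.01, 0.99, 99)
# The two branches together satisfy Eq. (3): c_l*F_hat_l + c_s*F_hat_s = 1.
assert np.allclose(c_l * F_hat_large(c_l, F_l0, F_s0)
                   + (1 - c_l) * F_hat_small(1 - c_l, F_l0, F_s0), 1.0)
\end{verbatim}
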
To demonstrate the validity of our simulation and fitting approach, Fig.~\ref{fig2}(d) shows $\hat F_{i,0}$ based on curve fits to Eq.~(\ref{tanh}) for both large (blue circles) and small (red triangles) particle data. The two data sets match within the uncertainty, demonstrating the robust nature of the hyperbolic functional form of Eq.~(\ref{tanh}) in characterizing the segregation force. In addition, the results match predictions (dashed curve) of a single intruder model derived from single intruder simulations \cite{jing2020rising} with particle properties (i.e., $d_l=1-40\,$mm, $d_s=5\,$mm, and $\rho=2.5\,$g/cm$^3$), contact model (i.e., Hertz contact model with Young's modulus of 5$\times10^7\,$Pa and Poisson's ratio of 0.4), and flow geometry (inclined chute) different from this study.
This validates not only the values we find for the segregation force at the single intruder limit, but also our approach for direct measurement of segregation forces in bidisperse mixtures.
We determine the critical concentration, $c_{i,\mathrm{crit}}$, below which the segregation force is nearly independent of concentration by conducting 260 simulations at different concentrations, size ratios, and shear rates, and fitting the resulting segregation force data to Eq.~(\ref{tanh}).
The phase diagram in Fig.~\ref{figure3}(a) shows that $c_{i,\mathrm{crit}}$ (symbols) for both large and small particles increases monotonically with size ratio for the range explored here ($1< d_l/d_s<3$) and is reasonably well fit by the expression $c_{i,\mathrm{crit}}=0.74[1-\exp(-0.54 d_i/d_j)].$ The limiting value of $c_{i,\mathrm{crit}}=0.74$ for $d_i/d_j\gg1$ matches the free sifting limit for small particles in a network of randomly close-packed large particles at $\phi_\mathrm{max}=0.64$, i.e., $1/(2-\phi_\mathrm{max})$ \cite{prasad2017subjamming}. This suggests that for $c_l>0.74$ small particles percolate downward through the voids without significantly affecting the flow of large particles, indicating a possible change in the size segregation mechanism~\cite{golick2009mixing,schlick2015modeling}.
\begin{figure}
\centerline{\includegraphics[width=3.4 in]{figure3.pdf}}
\caption{
(a) Phase diagram showing segregation force regimes (shaded areas) dependence on large particle concentration and size ratio. Symbols represent $c_{i,\mathrm{crit}}$ for large (\textcolor{blue}{blue}) and small (\textcolor{red}{red}) particles. Curves (from fits) are $c_{i,\mathrm{crit}}=0.74[1-\exp(-0.52 d_i/d_j)]$.
(b) Sheared bed images for $d_l/d_s=2$ [vertical dotted line in (a)] at $c_l$ intervals of 0.1. For $c_s<c_{s,\mathrm{crit}}\approx 0.18$ (or, equivalently, $c_l\gtrapprox0.82$) the small particle (\textcolor{red}{red}) segregation force equals that on a single small intruder, while for $c_l<c_{l,\mathrm{crit}}\approx 0.46$ the large particle segregation force equals that on a single large intruder. Intermediate concentrations ($0.46\lessapprox c_l\lessapprox0.82$), where segregation forces are less than for intruders, are termed mixture-like.
}
\label{figure3}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=3.4 in]{figure4.pdf}}
\caption{Ratio of species specific pressure to bulk pressure, $f_i=P_i/P$, for different size ratios. Symbols represent data for large (\textcolor{blue}{blue}) and small (\textcolor{red}{red}) particles.
Solid curves are predictions of Eq.~(\ref{tanh}) recast as a pressure ratio, i.e., $f_i=c_i \hat F_i$.
Thin curves are $f_i$ as assumed in previous studies: $f_i=d_ic_i/\sum d_i c_i$ (dashed) \cite{marks2012grainsize} and $f_i=d_i^3c_i/\sum d_i^3 c_i$ (dash-dot) \cite{tunuguntla2014mixture}.
Dotted line represents the monodisperse case ($d_l/d_s=1$) where $f_i=c_i$.
}
\label{fig4}
\end{figure}
Alternately, in the monodisperse mixture limit ($d_i/d_j=1$), the exponential fit to the data in Fig.~\ref{figure3}(a) gives $c_{i,\mathrm{crit}}\approx0.31$, matching the concentration at which mixtures of equal diameter conducting and insulating spheres become globally conductive (exceed the percolation threshold)~\cite{powell1979site,ziff2017percolation}. Further, the critical concentrations for $1/3\le d_s/d_l< 1$ from this study also match the percolation thresholds in size-bidisperse mixtures~\cite{he2004effect}, suggesting that the particle segregation force and geometric percolation are related. Anecdotal support for this picture is provided by Fig.~\ref{figure3}(b), which shows shear flow images for $d_l/d_s=2$. In the intruder-like regime for small particles (large $c_l$), small particles appear to contact each other only in the voids between large particles, whereas in the intruder-like regime for large particles (small $c_l$) large particles are well-separated by a continuous phase of small particles and are unlikely to interact directly with each other. Further investigation of the connection between the intruder regimes and the percolation limit is clearly necessary but is beyond the scope of this Letter.
{\it Discussion}. Our results characterizing the segregation force can be applied to continuum descriptions of segregation. Previous studies assume $\hat F_{i}$ depends linearly on $c_i$ to close the momentum equation~\cite{gray2005theory,rousseau2021bridging}. Despite some success for these continuum models in predicting concentration profiles of equal-volume mixtures, the assumed linear relation between $\hat F_i$ and $c_i$ fails to capture the segregation force plateau for intruders. In addition, the resulting symmetric form for the species-specific pressure, when coupled with a linear drag model, fails to predict the asymmetric concentration dependence of segregation (i.e., small particles among mostly large particles segregate faster than {\it vice versa})~\cite{golick2009mixing,gajjar2014asymmetric,jones2018asymmetric}. To address the asymmetric segregation flux, $\hat F_{i}$ was later proposed to be quadratic in $c_i$ \cite{gajjar2014asymmetric,tripathi2021theory,duan2021modelling}.
Though the coefficients in a quadratic model can be adjusted to minimize the difference between the model and the data in the plateau regime, the model is never truly independent of $c_i$, as must be true approaching the intruder limit ($c_i\approx0$).
To address these shortcomings in modeling the segregation force within the framework of the continuum model, we recast our results (data and model [Eq.~(\ref{tanh})]) as partial pressures (normal stresses), i.e., ${\partial P_i}/{\partial z}=N_iF_{\mathrm{seg},i}/LWH={n_iF_{\mathrm{seg},i}}$ \cite{rousseau2021bridging}, where $n_i=c_i\phi /V_i$ is the particle number density. Combined with the bulk pressure gradient $\partial P/ \partial z= \phi \rho g $, the ratio of the pressure contribution of species $i$ to the bulk pressure, or normal stress fraction, is $f_i=P_i/P=c_i \hat F_{i}$ \cite{tunuguntla2017comparing}, which, unlike the standard mixture theory, does not necessarily equal the species volume fraction.
Here, having measured $\hat F_i$ vs.\ $c_i$, we can directly evaluate $f_i$ as Fig.~\ref{fig4} shows for four examples at $d_l/d_s=1.1$, 1.3, 2, and 3. At all concentrations, the pressure partition functions for large and small particles sum to 1 (i.e., $ f_l+f_s=1$), and the curves based on the segregation force model of Eq.~(\ref{tanh}) match the simulation data -- the third key result of this paper. The deviation of the pressure partitioning for $d_l/d_s \ne 1$ from the linear monodisperse case, $f_i=c_i$ (dotted line) and from previously proposed models assuming $f_i$ is a weighted function of particle size, $f_i=d_ic_i/\sum d_i c_i$ (dashed) \cite{marks2012grainsize}, or volume, $f_i=d_i^3c_i/\sum d_i^3 c_i$ (dash-dot) \cite{tunuguntla2014mixture}, is clear for all size ratios. Thus, the more accurate pressure partition function based on Eq.~(\ref{tanh}) can be directly applied to continuum models of flowing mixtures of bidisperse granular materials such as those used for free surface flows \cite{marks2012grainsize,tunuguntla2014mixture,tripathi2021theory,rousseau2021bridging}.
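
To make this comparison concrete, the stress fraction implied by the tanh model can be evaluated alongside the two earlier closures as sketched below; the intruder-limit values are again placeholders and the function names are ours.
\begin{verbatim}
import numpy as np

def f_large_from_tanh(c_l, F_l0, F_s0):
    """Normal stress fraction of large particles, f_l = c_l * F_hat_l (tanh model)."""
    c_s = 1.0 - c_l
    F_hat_l = 1.0 + (F_l0 - 1.0) * np.tanh((1.0 - F_s0) / (F_l0 - 1.0) * c_s / c_l)
    return c_l * F_hat_l

def f_large_diameter_weighted(c_l, d_l, d_s):
    """Closure f_i = d_i c_i / sum(d_i c_i), cf. Marks et al. (2012)."""
    return d_l * c_l / (d_l * c_l + d_s * (1.0 - c_l))

def f_large_volume_weighted(c_l, d_l, d_s):
    """Closure f_i = d_i^3 c_i / sum(d_i^3 c_i), cf. Tunuguntla et al. (2014)."""
    return d_l**3 * c_l / (d_l**3 * c_l + d_s**3 * (1.0 - c_l))

# Example at d_l/d_s = 2, c_l = 0.5 (placeholder intruder values); f_s = 1 - f_l.
print(f_large_from_tanh(0.5, 1.8, 0.6),
      f_large_diameter_weighted(0.5, 2.0, 1.0),
      f_large_volume_weighted(0.5, 2.0, 1.0))
\end{verbatim}
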
Our results capture and characterize the concentration dependence of the segregation force in uniform shear flows, but a word of caution about extensions is in order. Recent studies indicate that the intruder segregation force $\hat F_{i,0}$ also depends on the shear rate gradient~\cite{fan2011theory,guillard2016scaling,jing2021unified}. Although the shear rate gradient-induced component of $F_\mathrm{seg}$ is negligible in most free surface flows~\cite{jing2020rising}, further study of the concentration dependence in flows with shear rate gradients is clearly necessary.
\textbf{Acknowledgements.}
Supported by the National Science Foundation under Grant No.~CBET-1929265.
\nocite{*}
\section{Introduction}
\label{sec1}
Orbiting stars $\gtrsim 10$ Myr old,
``debris disks'' are
thought to trace the aftermath of planet formation
(for a review, see \citealt{wyatt08}).
By definition, they are composed of optically thin
dust grains, generated from the collisional
attrition of larger parents.
Observations of debris disks shed light
on the size distribution and velocity dispersion
of constituent bodies (e.g., \citealt{shannon11};
\citealt{pan12}; and references therein),
and by extension the processes by which
planetesimals and planetoids build up and grind down.
Debris disks may also serve as signposts for embedded planets
(e.g., \citealt{mouillet97}; \citealt{rodigas14};
\citealt{nesvold16}; and references therein).
The morphologies of debris disks are coming
into increasingly sharp resolution with the advent
of extreme adaptive optics instruments, including
the {\it Gemini Planet Imager} \citep[{\it GPI};][]{macintosh14},
{\it SPHERE} \citep{beuzit08}, and {\it SCExAO} \citep{jovanovic15}.
In past and present observing campaigns,
a variety of disk shapes have been uncovered,
some featuring warps \citep[e.g.,][]{heap00,apai15,mmb15,wang15}
and eccentric rings \citep[e.g.,][]{kalas05fom,wahhaj14,perrin15},
and others evoking ``moths''
\citep[e.g.,][]{hines07,maness09,ricarte13}
and ``needles'' \citep[e.g.,][]{kalas07,kalas15}.
Some imaged features have even been observed to
vary with time \citep{boccaletti15}.
\citet{schneider14} present a beautiful
compilation of debris disk images taken with
the {\it Hubble Space Telescope} ({\it HST}).
Disk structures that are non-axisymmetric are especially
intriguing because they hint at gravitational sculpting
by planets (assuming disk self-gravity is negligible;
see, e.g., \citealt{jalali12} for a contrarian view).
Foundational work was done by \citet{wyatt99},
who calculated how one or more planets on eccentric,
inclined orbits imprint ellipticities and warps onto debris disks.
The planetary perturbations treated by these authors
are secular, i.e., orbit-averaged in the sense that the gravitational
potential
presented by each planet is that of a smooth, massive wire
(see also the textbook by \citealt{murray_dermott}).
Mean-motion commensurabilities with a planet can also shape disks
by truncating them in a chaotic zone of overlapping
first-order resonances
\citep[e.g.,][]{wisdom80,quillen06_fom,pearce14,nesvold15_gap}.
Individual resonances can also,
in principle, trap disk particles
and clump them azimuthally \citep[e.g.,][]{kuchner03,stark09}.
Such resonant clumps, moving at pattern speeds that typically differ
from local Kepler frequencies, have yet to be confirmed
in extrasolar debris disks.
The preponderance of evidence shows that
debris disks are smooth \citep[e.g.,][]{hughes12},
suggesting that secular effects dominate their
appearance.
We offer here a systematic exploration of the morphologies
of planet-perturbed debris disks, as imaged in scattered
starlight. We focus on what is arguably the simplest possible
scenario: a narrow ring of parent bodies forced
secularly by a single planet, producing dust grains that
are propelled outward by stellar radiation pressure.
Our work builds on \citet{wyatt99} by
supplying synthetic scattered light images of disks
viewed from all possible directions. For all its simplicity,
the model contains a surprisingly large variety of
morphologies, and we will assess, in a qualitative way, the extent
to which the observed real-world diversity of shapes
(rings, flares, moths, needles, and the like)
may be attributed to differences in viewing geometry;
in other words, we explore a ``unification'' model for debris disks,
by analogy with unification models for active galactic
nuclei. We do not expect our model to be able
to explain every detail of resolved disk images,
but submit our work as a starting point for interpreting
those images: a baseline reference that can guide more
sophisticated theories.
Our paper is straightforward.
After describing the model elements and
computational procedure (Section \ref{sec2}),
we present synthetic scattered light images
(Section \ref{sec3})
and compare them informally to actual systems
(Section \ref{sec4}).
Our aim is to provide a primer on debris disk
morphology: to explain features from first principles,
and develop intuition for mapping scattered light
observations to the underlying parent
disks and attendant planets.
This paper is intended as a more general
expansion of ideas discussed by
Esposito et al.~(submitted)
to explain the moth-like
morphology presented by HD 61005
(see also \citealt{fitz11} for the original proposal).
\section{Model}
\label{sec2}
We posit a planet of mass
$M_{\rm planet} = 10 M_\oplus$
on an orbit with semi-major axis $a_{\rm planet} = 30$ AU
and eccentricity $e_{\rm planet} \in (0, 0.25, 0.7)$
about a star with mass $M_\ast = 1 M_\odot$.
The planet's orbit lies in the reference ($x$-$y$) plane,
with its longitude of periapse $\varpi_{\rm planet} = 0$
(the planet's periapse is located on the $x$-axis).
Debris disk bodies are of two kinds:
parent bodies and dust particles. The latter
are spawned from the former. Parent bodies (subscripted p)
are located exterior to the planet's orbit and number
$N_{\rm p} = 1000$ in all.
They have semi-major axes distributed uniformly
from just outside the planet's chaotic zone
\citep{wisdom80,quillen06_fom,quillen06,chiang09,nesvold15_gap},
\begin{equation}
a_{\rm p,inner} = a_{\rm planet} \left[1 + 2 (M_{\rm planet}/M_\ast)^{2/7}\right] \,,
\label{eq:ain}
\end{equation}
to a value 10\% larger,
\begin{equation}
a_{\rm p,outer} = 1.1 a_{\rm p,inner} \,.
\end{equation}
Thus our debris disks are really debris
rings, as inspired by the narrow belts observed in,
e.g., HR 4796A, Fomalhaut, AU Mic, and the Kuiper belt.
For the highest value of $e_{\rm planet} = 0.7$ that we
consider, equation (9) of \citet{pearce14}
is more accurate and gives a value for
$a_{\rm p,inner} - a_{\rm planet}$
that is $\sim$2 times larger
than the one predicted by our equation (\ref{eq:ain});
we neglect this correction for simplicity.
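
For concreteness, with the adopted mass ratio $M_{\rm planet}/M_\ast \approx 3\times10^{-5}$, equation (\ref{eq:ain}) gives $a_{\rm p,inner} \approx 30\,{\rm AU}\times[1+2\,(3\times10^{-5})^{2/7}] \approx 33$ AU, and hence $a_{\rm p,outer} \approx 36$ AU; the parent ring is thus a few AU wide and sits a few AU outside the planet's orbit.
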
A parent body's eccentricity vector --- a.k.a.~its
Runge-Lenz vector, which points toward periapse and has a
length equal to the eccentricity ---
is the vector sum of its
forced and free eccentricities \citep[e.g.,][]{murray_dermott}.
The forced eccentricity vector is computed
from Laplace-Lagrange (L-L) secular theory; in the
one-planet case which we are considering, the forced
vector points parallel to the planet's
eccentricity vector (i.e.,
in the positive $x$-direction), and has a magnitude
specific to the body's orbital semi-major axis:
\begin{equation}
e_{\rm p,forced} = \frac{b_{3/2}^{(2)} (a_{\rm planet}/a_{\rm p})}{b_{3/2}^{(1)} (a_{\rm planet}/a_{\rm p})} \,e_{\rm planet} \,,
\end{equation}
where the $b$'s are the usual Laplace coefficients.
As $a_{\rm planet}/a_{\rm p} \rightarrow 1$, $e_{\rm p,forced} \rightarrow e_{\rm planet}$.
The components of the free eccentricity vectors,
as resolved in
$(h,k) \equiv (e \sin \varpi, e\cos \varpi)$ space,
are
\begin{align}
h_{\rm p,free} &= e_{\rm p,free} \sin \varpi_{\rm p,free} \\
k_{\rm p,free} &= e_{\rm p,free} \cos \varpi_{\rm p,free}
\end{align}
where $\varpi_{\rm p,free}$ is a uniform deviate
between 0 and $2\pi$ rad, and
$e_{\rm p,free}$ is a uniform
deviate that extends from 0 to 0.02.
The value of $e_{\rm p,free}$ measures
the random velocity dispersion,
which in turn depends on how bodies collide
and are gravitationally stirred
(processes not modeled here;
see, e.g., \citealt{pan12}).
Total parent body eccentricities
are such that no parent body crosses the planet's orbit;
see \citet{chiang09} for
numerical $N$-body integrations verifying
orbital stability for parameters similar to those used here.
That $\varpi_{\rm p,free}$ ranges
uniformly from 0 to $2\pi$ assumes that parent bodies
are secularly relaxed; for our chosen parameters
($M_{\rm planet}$, $a_{\rm planet}$, $a_{\rm p,inner}$,
$a_{\rm p,outer}$), differential precession
timescales across the parent ring are of order a
couple of Myrs,
shorter than typical debris disk ages of tens of Myrs.
To summarize, the parent bodies occupy, in the mean,
a narrow elliptical
ring located just outside the planet's elliptical orbit
and apsidally aligned with it.\footnote{The low-order
Laplace-Lagrange (L-L) secular theory which we use is quantitatively
inaccurate at high $e_{\rm planet}$ but should be
qualitatively correct. \citet{pearce14} find
good correspondence between their $N$-body integrations
and L-L theory for $e_{\rm planet}$ as high as $\sim$0.8,
provided the planet's orbit lies
within $\sim$20$^\circ$ of the
parent disk, as it does for all our models.}
Parent body inclination vectors,
resolved in $(p,q) \equiv (i \sin \Omega, i \cos\Omega)$
space, where $i$ is inclination and $\Omega$ is
the longitude of ascending node, behave analogously
to eccentricity vectors.
For our one-planet case, the forced
inclination vector is the zero vector: forced orbits
are co-planar with the planet's orbit. Therefore parent body
inclination vectors equal their free values:
\begin{align}
p_{\rm p,free} &= i_{\rm p,free} \sin \Omega_{\rm p,free} \\
q_{\rm p,free} &= i_{\rm p,free} \cos \Omega_{\rm p,free}
\end{align}
where $\Omega_{\rm p,free}$
is a uniform deviate between 0 and $2\pi$ rad
(this assumes the system is secularly relaxed; see above),
and $i_{\rm p,free}$
is a uniform deviate between 0 and 0.02 rad.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{fig1_loe_fixbeta_pqhk}
\caption{Elements of a sample model for which $e_{\rm planet} = 0.25$,
$\max e_{\rm p, free} = \max i_{\rm p,free} = 0.02$ (in radians),
and --- for this figure alone --- a fixed $\beta = 0.4$ for dust particles.
{\it Top row}: Inclination and eccentricity vector components of the planet
(blue open circle), parent bodies (black points), and dust particles
(red points). Forced eccentricities of parent bodies are shown as a red bar;
full eccentricities differ from their forced values
by up to $\max e_{\rm p,free}$ (top right panel). Similarly, full
inclinations differ from their forced values by up to $\max i_{\rm p,free}$ (the half thickness of the disk).
Because stellar radiation pressure does not alter orbital
inclination, dust particle and parent body inclinations are identical
(black points overlie red points in the top left panel). {\it Bottom row}:
Synthetic scattered light images for this disk seen face-on ($alt = 90$ deg,
bottom left and middle panels) and seen 5 deg above the planet's orbital
plane ($alt = 5$ deg, $az = 0$, bottom right panel).
The scattered light features in the
face-on (a.k.a.~polar) view can be understood from an underlying
``skeleton'' of representative dust grain orbits,
shown in matching colors
in top and bottom middle panels. The nearly edge-on
view in the right panel is such that the planet's apoapse
points toward the observer.}
\label{fig1}
\end{figure*}
To launch dust particles (subscripted d)
from parent bodies,
we first randomly draw, for a given parent body's orbit,
$N_{\rm launch} = 100$
true anomalies. These true anomalies mark the launch
positions for dust grains; for simplicity, we draw the
$N_{\rm launch}$ true
anomalies from a uniform distribution (on the one hand,
periastron may be favored for collisions because
orbital velocities are higher there, but on the other
hand, apastron may be favored because parent bodies spend
more time there; we discuss the effects of different
choices for the distribution of launch sites in
Section \ref{sec3}).
At every true anomaly,
a dust particle orbit is created whose instantaneous
velocity at that position matches the parent body's
instantaneous velocity, and whose radiation
$\beta$ --- the ratio
of the force of stellar radiation pressure
to that of stellar gravity --- is drawn randomly from a
distribution to be given below.
To quantify our statements so far, the orbital elements of each
dust grain orbit are given by:
\begin{align}
\label{eq:a_d}
a_{\rm d} &= \frac{a_{\rm p} (1-e_{\rm p}^2) (1-\beta)}{1-e_{\rm p}^2-2\beta(1+e_{\rm p}\cos f_{\rm p})} \\
\label{eq:e_d}
e_{\rm d} &= \frac{\sqrt{e_{\rm p}^2+2\beta e_{\rm p} \cos f_{\rm p} + \beta^2}}{1-\beta} \\
\label{eq:om_d}
\omega_{\rm d} &= \omega_{\rm p} + \arctan\left(\frac{\beta\sin f_{\rm p}}{e_{\rm p} + \beta\cos f_{\rm p}}\right) \\
\label{eq:i_d}
i_{\rm d} &= i_{\rm p} \\
\label{eq:Om_d}
\Omega_{\rm d} &= \Omega_{\rm p}
\end{align}
where $\omega$ is the argument of periapse and $f_{\rm p}$ is the
parent body's true anomaly at launch.
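
Equations (\ref{eq:a_d})--(\ref{eq:om_d}) translate directly into code. The sketch below is our own illustrative NumPy version (with the arctangent evaluated via the quadrant-resolving arctan2), not the code used to generate the figures in this paper.
\begin{verbatim}
import numpy as np

def dust_elements(a_p, e_p, omega_p, f_p, beta):
    """Orbital elements of a dust grain released at parent true anomaly f_p
    with radiation parameter beta; inclination and node are inherited from
    the parent.  Requires beta < beta_max so the denominator stays positive
    (bound orbit)."""
    denom = 1.0 - e_p**2 - 2.0 * beta * (1.0 + e_p * np.cos(f_p))
    a_d = a_p * (1.0 - e_p**2) * (1.0 - beta) / denom
    e_d = np.sqrt(e_p**2 + 2.0 * beta * e_p * np.cos(f_p) + beta**2) / (1.0 - beta)
    omega_d = omega_p + np.arctan2(beta * np.sin(f_p), e_p + beta * np.cos(f_p))
    return a_d, e_d, omega_d
\end{verbatim}
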
The $\beta$-distribution is related to the assumed size
distribution of dust grains. If the latter derives
from a standard collisional cascade and obeys, e.g.,
a Dohnanyi distribution $dN/ds \propto s^{-7/2}$
for particle size $s$,
then $dN/d\beta \propto \beta^{3/2}$,
under the assumption that dust particles present
geometric cross sections to radiation pressure ($\beta \propto 1/s$).
But a conventional cascade
underestimates the number of grains whose sizes
are just shy of the radiation blow-out size.
These grains are on especially
high-eccentricity and high-semi-major-axis orbits,
avoiding interparticle collisions
by spending much of their time
away from the dense parent body ring.
Their actual lifetimes against
collisional destruction, and by extension their steady-state
population, are underestimated by a standard
cascade.
We correct for this effect by scaling the number of dust
grains in a given size bin to their orbital period $P_{\rm d}$,
which is longer at higher $\beta$.
This same scaling is used by \citet[][see their Figure 3]{strubbe06}
who show that it correctly reproduces the surface brightness profiles
of collision-dominated debris disks like AU Mic.
Our $\beta$-distribution therefore scales as
\begin{align}
dN/d\beta & \propto \beta^{3/2} \times P_{\rm d} \nonumber \\
& \propto \beta^{3/2} \frac{(1-\beta)^{3/2}}{ \left[ 1 - e_{\rm p}^2 - 2\beta (1+ e_{\rm p} \cos f_{\rm p}) \right]^{3/2}} \label{eq:orbcorr}
\end{align}
where we have used $P_{\rm d} \propto a_{\rm d}^{3/2}$
and equation (\ref{eq:a_d}).
The $\beta$-distribution extends
from $\beta_{\rm min} = 0.001$ to a maximum value
$\beta_{\rm max}$ corresponding to a marginally
bound (zero energy; $e_{\rm d}=1$) orbit.
Each value of $\beta_{\rm max}$ is specific
to a given launch position and velocity.
The $\beta$-distribution given by (\ref{eq:orbcorr})
is very top-heavy;
most grains have $\beta$ near
the maximum value
\begin{equation}
\beta_{\rm max} = \frac{1-e_{\rm p}^2}{2(1+e_{\rm p} \cos f_{\rm p})}\, .
\label{eq:betamax}
\end{equation}
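
A simple way to draw $\beta$ values from this weighting is to tabulate it on a grid and sample the grid. The sketch below is our own discretized illustration; the grid resolution and the cutoff just short of $\beta_{\rm max}$, where the weighting is sharply peaked, are regularization choices on our part.
\begin{verbatim}
import numpy as np

def sample_beta(e_p, f_p, n, beta_min=0.001, n_grid=10000, rng=None):
    """Draw n values of beta from the top-heavy weighting
    dN/dbeta ~ beta^1.5 (1-beta)^1.5 / [1 - e_p^2 - 2 beta (1+e_p cos f_p)]^1.5
    for a grain launched at parent true anomaly f_p."""
    rng = np.random.default_rng() if rng is None else rng
    beta_max = (1.0 - e_p**2) / (2.0 * (1.0 + e_p * np.cos(f_p)))
    grid = np.linspace(beta_min, 0.999 * beta_max, n_grid)  # stop short of blow-out
    w = grid**1.5 * (1.0 - grid)**1.5 \
        / (1.0 - e_p**2 - 2.0 * grid * (1.0 + e_p * np.cos(f_p)))**1.5
    return rng.choice(grid, size=n, p=w / w.sum())
\end{verbatim}
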
Along each dust particle orbit,
we lay down, at random,
$N_{\rm dust-per-orbit} = 100$ dust particles.
Their mean anomalies are uniformly
distributed but their true anomalies are not;
dust particles concentrate near apoapse, following
Kepler's equation.
The dust particles,
numbering $N_{\rm d} = N_{\rm p} \times N_{\rm launch}
\times N_{\rm dust-per-orbit} = 10^7$ in all,
are projected onto the sky plane of a distant
observer and used to synthesize a scattered light image.
The sky plane of 800 $\times$ 800 AU, centered on the star,
is divided into 800 $\times$ 800 square cells, and each dust particle
contributes, to the cell in which it is located,
a surface brightness
proportional to $\phi(g,\theta)/(\beta^2 r^2)$.
Here $1/\beta^2$ accounts for the scattering cross
section for each grain (assumed geometric),
$r$ is the distance between the dust particle
and the host star, and $\phi(g,\theta)$ is the
Henyey-Greenstein scattering
phase function for asymmetry parameter $g = 0.5$
and $\theta$ equal to
the angle between the dust particle and the observer
with the vertex at the star.
Multiple scattering of photons is neglected; this
is a safe assumption insofar as debris disks
are optically thin.
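
The image synthesis step can be summarized by the following sketch, an illustrative NumPy version of the procedure just described; the choice of sky-plane axes and the histogram-based binning are our own, and the Gaussian smoothing applied to the published figures is omitted.
\begin{verbatim}
import numpy as np

def henyey_greenstein(cos_theta, g=0.5):
    """Henyey-Greenstein scattering phase function."""
    return (1.0 - g**2) / (4.0 * np.pi * (1.0 + g**2 - 2.0 * g * cos_theta)**1.5)

def render_image(pos, beta, obs_dir, g=0.5, half_size=400.0, n_pix=800):
    """Bin dust particles into a surface-brightness map on the sky plane.

    pos     : (N, 3) array of particle positions in AU, star at the origin
    beta    : (N,) array of radiation-pressure parameters
    obs_dir : unit vector from the star toward the (distant) observer
    Each particle contributes phi(g, theta) / (beta^2 r^2) to its pixel."""
    obs_dir = np.asarray(obs_dir, float)
    obs_dir = obs_dir / np.linalg.norm(obs_dir)
    # orthonormal sky-plane basis (orientation within the sky plane is arbitrary)
    trial = np.array([0.0, 0.0, 1.0])
    if abs(obs_dir @ trial) > 0.9:
        trial = np.array([1.0, 0.0, 0.0])
    ex = np.cross(trial, obs_dir)
    ex = ex / np.linalg.norm(ex)
    ey = np.cross(obs_dir, ex)
    r = np.linalg.norm(pos, axis=1)
    cos_theta = (pos @ obs_dir) / r  # angle between particle and observer, vertex at star
    weight = henyey_greenstein(cos_theta, g) / (beta**2 * r**2)
    image, _, _ = np.histogram2d(pos @ ex, pos @ ey, bins=n_pix,
                                 range=[[-half_size, half_size],
                                        [-half_size, half_size]],
                                 weights=weight)
    return image
\end{verbatim}
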
\begin{figure*}[!ht]
\includegraphics[width=\textwidth]{fig2_loe_altaz}
\caption{``Alt-az'' diagram for the case $e_{\rm planet} = 0.25$.
Synthetic scattered light images
of the debris disk are shown
as a function of the observer's altitude ($alt=0^\circ$/90$^\circ$
gives an edge-on/pole-on view of the planet's orbit) and
azimuth ($az=0^\circ$/180$^\circ$ has the planet's apoapse/periapse
pointing toward the observer). For this and other alt-az figures,
we use an image scaling proportional to the
square root of the surface brightness.
Each alt-az snapshot is constructed from an 800 AU $\times$ 800 AU
grid, smoothed by convolving with a 2D Gaussian having a standard
deviation of 2 AU, and truncated vertically to 400 AU.
The convolution shrinks
the dust inner cavity; we restore
the cavity seen in the pre-smoothed image
by masking out the corresponding pixels.
The surface brightnesses of the brightest features are $\sim$600 ($10^4$) times
higher than that of the faintest features in the face-on (edge-on) view.
The yellow dot in each panel marks the location of the central star.
}
\label{fig2}
\end{figure*}
\begin{figure*}[!ht]
\includegraphics[width=\textwidth]{fig3_hie_thin_altaz}
\caption{Same as Figure \ref{fig2}, but for a more
eccentric planet with $e_{\rm planet} = 0.7$.
The surface brightnesses of the brightest features are
$\sim$10$^4$ ($2\times 10^4$) times higher than that
of the faintest features in the face-on (edge-on) view.}
\label{fig3}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{fig4_circ_altaz}
\caption{Same as Figure \ref{fig2}, but for $e_{\rm planet} = 0$.}
\label{fig4}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{hk_orb_ep}
\caption{Zoomed-in images for $e_{\rm planet}=0.25$
(top row)
and $e_{\rm planet}=0.7$ (bottom row). Left panels are
nearly edge-on views ($alt=10^\circ$), and are each
overlaid with a contour of constant surface brightness.
Middle panels are face-on views showing representative
dust grain orbits,
color-coded to correspond to the colored points in the
right-hand $h$-$k$ plots (whose remaining symbols have
meanings identical to those in Figure \ref{fig1}).
Note how most particles in $h$-$k$ space cluster near
unit eccentricity as a consequence of our top-heavy
$\beta$-distribution (\ref{eq:orbcorr}).
The white dust orbit is launched from parent body
periapse, and the green and yellow dust orbits are chosen to have
median eccentricities and longitudes of periastron.
As planet eccentricity increases,
increasingly many dust orbits have their periastra aligned
with that of the planet, leading to a more extended
and sharply angled ``fan'' of emission in nearly edge-on views.
}
\label{fig5}
\end{figure*}
Figure \ref{fig1} illustrates the basic ingredients of our model.
It depicts how the locations
of bodies in $(p,q,h,k)$ space relate to one another,
and to the resultant scattered light images,
for a sample case $e_{\rm planet} = 0.25$.
For pedagogic purposes, and for Figure \ref{fig1} only,
we assign all dust particles a fixed $\beta = 0.4$,
discarding particles not bound to the star.
Surface brightness morphologies
can be understood in terms of underlying dust particle orbits
by starting from the
face-on scattered light image (looking down the $z$-axis
onto the $x$-$y$ plane).
The inner dust cavity is outlined
by the launch sites of dust particles, i.e.,
the cavity rim coincides
with the elliptical ring of parent bodies
(the parent bodies themselves do not contribute
to the scattered light image).
Because launch velocities are ``high'' for the weakened
gravitational potential felt by dust particles (the potential is
weakened by $1-\beta$), the cavity rim / parent body ring
marks the periastra of dust particles.
The bright half of the cavity rim,
located on the negative $x$-axis, traces the periastra
of $e \sim 0.3$ dust particles
(drawn in white); these are launched from the apastra of their
parents' orbits. These same dust particles' apastra
form the ``arc'' located to the right of the cavity.
The entire outer boundary
of dust emission, of which the arc is the brightest segment,
is demarcated by all the particle apastra.
Particles with the largest eccentricities
(e.g., yellow, blue, green, grey-brown), extending
up to unity, are launched
from near the periastra of their parents' orbits, and
form the barely visible ``wings''
extending above and below the arc. In a more edge-on
view, these wings increase in brightness because of their
increased line-of-sight optical depth.
Viewed at 5 deg above the planet's orbital plane,
with the planet's apoapse directed toward the observer,
the wings appear downswept.
\section{Synthetic Scattered Light Images}
\label{sec3}
Figures \ref{fig2}, \ref{fig3}, and \ref{fig4} show the scattered
light images for $e_{\rm planet} = 0.25$,
$e_{\rm planet} = 0.70$, and $e_{\rm planet} = 0$, respectively, with the
radiation $\beta$ following a distribution given by (\ref{eq:orbcorr}).
We smooth away some of the shot noise
caused by a finite number of dust grains
by convolving images (from Figure \ref{fig2} onward)
with a 2D Gaussian having a standard deviation
of 2 pixels (2 AU).
A side effect of the convolution is that
it shrinks
the dust inner cavity; we restore the cavity
by masking out the corresponding pixels.
The panels in each figure are computed from a variety of vantage
points. The orientation of the observer (on the celestial
sphere centered on the debris disk) is parametrized
by altitude $alt$ (inclination angle relative to the planet's
orbital plane;
$alt = 0^\circ$ corresponds to the planet's orbit seen edge-on,
while $alt = 90^\circ$ gives a face-on view)
and azimuth $az$ (angle measured in the planet's orbital
plane; $az = 0^\circ$ corresponds to the apoapse of the planet's
orbit pointing toward the observer, while $az = 180^\circ$ directs
the planet's periapse toward the observer).
For all images we rotate first in $alt$ and then in $az$
starting from $(alt,az) = (90^\circ, 0^\circ)$.
We refer to Figures \ref{fig2}--\ref{fig4} as ``alt-az'' diagrams.
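
For reference, one possible parametrization of the observer's direction (usable with the rendering sketch in Section \ref{sec2}) is given below; the handedness of the azimuthal rotation is our own convention, and only the stated limits ($az=0^\circ$ apoapse toward the observer, $az=180^\circ$ periapse toward the observer, $alt=90^\circ$ pole-on) are fixed by the definitions above.
\begin{verbatim}
import numpy as np

def observer_direction(alt_deg, az_deg):
    """Unit vector from the star toward the observer.
    alt = 90 deg is pole-on, alt = 0 deg is edge-on; with the planet's periapse
    on the +x axis, az = 0 deg points its apoapse (-x) toward the observer."""
    alt, az = np.radians(alt_deg), np.radians(az_deg)
    return np.array([-np.cos(alt) * np.cos(az),
                     -np.cos(alt) * np.sin(az),
                      np.sin(alt)])
\end{verbatim}
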
All three alt-az diagrams are displayed on a universal
brightness scale.
To bring out the fainter features, images are scaled
to the square root of the surface brightness. More edge-on views
have greater line-of-sight optical depths and
therefore yield brighter disks.
For reference, the angular
half-thickness of our disk is $\max i_{\rm p,free} = 0.02$
rad $\simeq$ 1$^\circ$.
Later in this section we will experiment with a thicker disk for which
$\max i_{\rm p,free} = 0.15$ rad $\simeq$ 9$^\circ$.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{image_hk_difflaunch}
\caption{Experiments in the distribution of launch
sites for dust particles, for the case
$e_{\rm planet} = 0.7$. If dust grains are launched
strictly from the periastra of parent bodies, then all
orbits are apsidally aligned
(left column of panels;
symbol meanings in the $h$-$k$ plot
are identical to those in Figure
\ref{fig1}). If dust grains are launched at parent body
mean anomalies $M_{\rm p}$
that are uniformly distributed between
$0$ and $2\pi$, the preference for apsidal alignment is
muted (right column of panels). Our standard model assumes
that dust grains are launched at parent body
true anomalies $f_{\rm p}$ that are
uniformly distributed between $0$ and $2\pi$, and
represents an intermediate case (middle column of panels).
The top row displays corresponding scattered light
images, observed at $alt = 10^\circ$
and $az = 0^\circ$ for our standard vertically thin disk
with $\max i_{\rm p,free} = 0.02$ rad. Each nearly edge-on disk, as
traced by a white contour of constant surface brightness,
resembles a ``fan'' or ``moth''; the wings of the moth
are angled more sharply downward as dust particle
orbits are more strongly apsidally aligned (reading right to left).
Note the ``double winged moth'' that appears when
dust grains are launched exclusively from parent periastra (top left).
The middle row features scattered light images observed
at $alt = 0^\circ$ and $az = 90^\circ$ for
a vertically thicker disk with $\max i_{\rm p,free} = 0.15$ rad.
The center panel features an inner ``ship'' surrounded
by its ``wake,'' as detailed in the main text.
Brightness asymmetries and vertical asymmetries across the ship-and-wake are magnified as dust grain launch sites concentrate
toward parent body periastra (reading right to left).}
\label{fig6}
\end{figure*}
In many of the views displayed in Figures \ref{fig2}--\ref{fig3},
the eccentricity of the debris disk forced upon it by
the eccentric planet manifests itself as
a stellocentric offset: the star is displaced from the apparent
geometric center of the ring's inner cavity.
Another signature of planet eccentricity
is the tail of scattered light extending to one side of the star,
seen most prominently for high $e_{\rm planet}$. This tail
arises from dust
particles on high-eccentricity orbits
launched from near the periastron
of the parent body ring;
in our diagnostic Figure \ref{fig5}, these orbits
are color-coded green, white, and yellow
(see in particular the bottom panels).
High-eccentricity
dust abounds as our $\beta$-distribution
(\ref{eq:orbcorr}) is strongly
weighted toward the maximum value just short of
radiation blow-out.
When this ``sheet'' of high-$e$ dust particles
is viewed just above its
plane ($alt \sim 5$--$10^\circ$) and in front of the star
($az=0^\circ$),
it appears in projection as a ``fan'' or ``moth''
whose wings sweep downward from the
star. Higher planet eccentricities
cause both the top and bottom boundaries of the fan
to be more angled; compare the white
contours in the leftmost panels of Figure \ref{fig5}.
Maintaining the same above-the-plane view
($alt \sim 5$--$10^\circ$),
but now rotating the observer in azimuth
so that the sheet of high-eccentricity dust
is seen behind the star ($az=180^\circ$),
produces an upswept fan (see Figures \ref{fig2}--\ref{fig3}).
Observer azimuths intermediate between $0^\circ$ and $180^\circ$
yield simultaneous top-down and left-right asymmetries.
For example, comparing the left and right limbs
of the disk seen at $az=135^\circ$ and $alt=10^\circ$ in
Figure \ref{fig3}, we see that the left limb is more extended in
length, has a lower peak brightness, and is angled more upward.
When the planet's orbit is viewed nearly but not completely
edge-on ($0^\circ < alt \leq 10^\circ$), with its
periapse pointing toward the observer ($az > 90^\circ$),
a faint ``bar'' emerges displaced from the star.
This bar, seen below the star in Figures \ref{fig2}--\ref{fig3},
is equivalent to the ``arc'' seen in front
of the planet's periastron in Figure \ref{fig1}; the bar/arc
is comprised of dust grains at their apastra,
on orbits launched from near the apastra of their parent
bodies. These orbits are apsidally anti-aligned
relative the planet's orbit (see the white orbit
in Figure \ref{fig1}). The bar is brightest
when seen in forward-scattered
light and at low observing altitudes which enhance its line-of-sight
optical depth.
The above mentioned tail of scattered light extends
only in the direction of the parent body disk's
apastron (and by extension the planet's apastron)
because there are more dust grains
on orbits apsidally aligned with their parents' orbits than anti-aligned.
This preference for apsidal alignment
magnifies with increasing planet eccentricity,
as shown in Figure \ref{fig5}, and can be understood
as follows.
For the simplifying case of coplanar orbits,
dust grains have
$0 \leq |\varpi_{\rm d} - \varpi_{\rm p}| < \pi/2$
if they are launched
between a parent body's periastron and its semi-minor vertex
(where the semi-minor axis crosses the orbit).
The range of
true anomalies between periastron and
the semi-minor vertex is
greater than between the semi-minor vertex and apastron.
This difference grows as $e_{\rm planet}$ grows; consequently,
more dust grain orbits have
$0 \leq |\varpi_{\rm d} - \varpi_{\rm p}| < \pi/2$
as $e_{\rm planet}$ increases.
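
Taking this geometric criterion at face value, and recalling that the semi-minor vertex of an orbit with eccentricity $e_{\rm p}$ lies at true anomaly $f = \arccos(-e_{\rm p})$, a uniform distribution of launch true anomalies yields an aligned fraction of $\arccos(-e_{\rm p})/\pi$; with $e_{\rm p} \simeq e_{\rm planet}$, this fraction rises from $\approx$0.58 at $e_{\rm planet} = 0.25$ to $\approx$0.75 at $e_{\rm planet} = 0.7$.
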
The degree to which dust orbits
apsidally align with the parent body ring depends
not only on planet eccentricity, but also
the distribution of parent body true anomalies at launch.
Different distributions of launch sites are explored in Figure
\ref{fig6}. Alignment is perfect --- and the wings of the disk
seen in projection are swept most strongly downward --- if dust
grains are launched exclusively from periastron (left panels).
If instead launch mean anomalies are uniformly distributed --- i.e., if
launch true anomalies are weighted toward apastron where parent
bodies linger --- then apsidal alignment is weakened (right panels).
Our standard model assumes a uniform distribution
of launch true anomalies and represents an intermediate case
(middle panels).
In the endmember case that all dust particles are launched
at parent body periastra and have their orbits completely
apsidally aligned, we can discern two sets of wings:
a thin pair of wings sitting above a more diffuse and roughly parallel
pair of wings below the star (top left panel of Figure \ref{fig6}).
We can understand this ``double wing'' morphology
using the face-on views shown in Figure \ref{fig7}.
The upper set of wings seen in Figure \ref{fig6}
corresponds to the bright arc near periastron in Figure \ref{fig7}.
This arc is especially luminous because of the confluence
of orbits converging on nearly the same periastron.
The lower set of wings in Figure \ref{fig6}
corresponds in Figure \ref{fig7}
to the pair of overdense ``rays'' located
toward parent body apastron and
symmetrically displaced above and below
the apsidal line (the $x$-axis).
These two local maxima in surface brightness ---
what look like a pair of jets spraying particles away from the star
in the face-on view --- arise from two
effects: (1) the tendency of particles
on a given orbit to be found closer
to apoapse (where they linger) than to periapse
(which they zip through), and (2)
the lowering of the particle density
along the apsidal line in the direction of apastron,
due to large orbit-to-orbit differences in
apastron distance --- see the
bottom panel of Figure \ref{fig7}. The broad distribution
of apastron distances (extending to infinity) is due in turn to
a radiation-$\beta$ distribution that abuts blow out.
Effect (1) concentrates particles
toward apoapse,
while effect (2) dilutes the particle density
along the apsidal line (in the direction of apastron);
the net effect is to concentrate particles
at two orbital phases symmetrically displaced away from the
apsidal line.
The difference between the ``double wing'' and the ``bar''
lies in their relative proximities to the central star.
The outermost wing --- what defines the edge of the disk --- cuts
almost directly across the star in projection, whereas the
bar is necessarily displaced from the star.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{double_spine}
\caption{Understanding the origin of the double
wings seen for some moths, as seen in the upper left panel of Figure \ref{fig6}
(see also the middle right panel of Figure
\ref{fig9} for another version of the double wing
morphology).
Double wings appear when all dust grains share practically the
same periastron and apsidal line ($x$-axis)
as a consequence of being launched
only at parent body periastra.
{\it Top:} Scattered light image of the same disk
featuring double
wings, but seen face on here.
Emission near periastron generates the
upper set of wings in Figure \ref{fig6}, while
the pair of jet-like features
displaced symmetrically above and below
the apsidal line produces the lower set of wings.
{\it Bottom}: Same as top, but plotting individual
dust grains.
Local overdensities generated at two orbital azimuths
correspond to the two jets seen in the top panel.
}
\label{fig7}
\end{figure}
All of the behavior reported above persists if $\max i_{\rm p,free}$
is increased from our standard value of 0.02 rad to 0.15 rad;
i.e., the alt-az diagram for a vertically thicker disk looks similar
to that of our standard thin disk.
But there is more.
Thickening the disk, and viewing it
edge-on ($alt=0^\circ$)
and near quadrature ($45^\circ \leq az \leq 135^\circ$),
reveals new morphological features, as seen
in Figure \ref{fig8}.
The disk's outer isophote
has a front (toward planet periastron)
that is vertically thinner than its back,
resembling the ``wake'' of a ``ship'' (inner isophote
enclosing the dust cavity rim seen in projection).
The head of the ship
and the back of its wake
comprise dust on orbits that are highly eccentric
and closely apsidally aligned with the parent disk
(these are represented by the white, green, and yellow orbits
in Figure \ref{fig5}).
Conversely, the
stern of the ship
and the front of its wake
coincide with the few
dust orbits that are
anti-aligned with the parent disk
and less eccentric.
The wake grows in vertical thickness from front to back
because the front is composed of dust at the apastra of
low eccentricity orbits,
while the back is composed of dust at the more distant apastra
of high eccentricity orbits;
at fixed inclination dispersion, the more
distant apastra have greater heights above the disk midplane.
As was the case for the moth (see above),
the degree of
vertical asymmetry for the
wake depends on the distribution of dust grain launch sites:
the more the launch sites concentrate near periastra,
the more severe the asymmetry (see middle row of
Figure \ref{fig6}).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{comp_Isig_vert}
\caption{A sufficiently thick disk ($\max i_{\rm p,free} = 0.15$ rad;
bottom panel) seen edge-on
features a ``ship'' (inner white contour of constant
surface brightness) and its surrounding ``wake''
(outer white contour). The ship's front/bow (on
the positive $x$-axis, aligned with the underlying planet's
periastron) is brighter than its back/stern.
The outer wake is narrower at its front than its back.
The ship-and-wake morphology
might be relevant for HD 106906; see Section \ref{sec4}.}
\label{fig8}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{six_disks}
\caption{Prototypical debris disk morphologies
seen in scattered light, as captured by a ``minimum model''
(single eccentric planet + ring
of parent bodies + dust grains + stellar radiation pressure).
Possible observable shapes include a ``ring'' (top left),
a ``needle'' (middle left; this is essentially a ring seen edge-on),
and a ``ship-and-wake''
(bottom left; this is basically
a needle which is fat enough to resolve vertical structure).
Right panels feature various kinds of ``moths,'' either
our standard version where most dust grains are in front
of the star and therefore appear bright in forward-scattered
light (top right), a moth with ``double wings'' where dust grain
orbits are perfectly apsidally aligned as a consequence of assuming
that grains are launched exclusively from parent body periastra
(middle right), and a ``reverse moth'' where most grains are behind
the star, accompanied by a ``bar'' in front of the star (bottom right).
Note the sharp wingtips seen in the ``double wing'' panel;
this model looks encouragingly similar to HD 32297
\citep[][their Figure 19b]{schneider14}.
The surface brightness contrasts between the brightest
and the faintest features are $\sim$36, $\sim$900,
$\sim$10$^4$, $\sim$260, $\sim$620, and $\sim$400 for
the ring, needle, ship-and-wake, moth, double wing, and
bar, respectively.
The head of the ship is $\sim$400$\times$ brighter
than its stern.
In the double wing, the two wings are $\sim$4$\times$
brighter than the gap between them.
The bar is $\sim$20\% brighter than the gap
that separates it from the main disk.
}
\label{fig9}
\end{figure*}
\section{Summary and Discussion}
\label{sec4}
We have explored in this work what a ``minimum model'' of a
debris disk looks like in scattered light.
The minimum model consists of a narrow ring of parent bodies,
secularly perturbed by a single, possibly eccentric
planet, producing dust grains whose orbits are made arbitrarily
eccentric by stellar radiation pressure. The model
has obvious relevance to systems like Fomalhaut and HR 4796A
which patently feature narrow elliptical
rings.\footnote{\citet{perrin15}
suggest that the HR 4796A disk
may be slightly optically thick.}
What might not be
so obvious is that the minimum model can also help to explain
many other morphologies documented in resolved images of debris disks ---
all by simple changes in viewing perspective.
A message emerging from our work is that the
outskirts of planetary systems
are shaped by eccentric planets,
possibly just a few Earth masses each.
In Figure \ref{fig9} we summarize the various disk shapes
that are possible. We classify these into five types:
``ring,'' ``moth,'' ``bar,'' ``needle,'' and ``ship-and-wake.''
The first four shapes can be generated even by a disk
that is completely flat. We review each of these morphologies
in turn, highlighting potential applications to observed systems,
and close by listing future modeling directions.
\subsection{``Ring''}
Dust that is generated from an eccentric ring of parent bodies
appears as an eccentric ring itself
when viewed close to face on (top left panel of Figure \ref{fig9}).
The inner rim of the ring is illuminated by
dust particles near their periastra, while a skirt of diffuse
emission extends outward from dust grains
en route to their apastra.
Some real-life examples of rings with offset host stars
are provided by Fomalhaut
\citep[e.g.,][their Figure 1]{kalas05fom},
HR 4796A (e.g., \citealt{schneider09}, their Figure 3;
\citealt{thalmann11}, their Figure 1;
\citealt{perrin15}, their Figure 8;
\citealt{grady16}),
HD 181327 \citep[e.g.,][their Figure 33]{schneider14},
and HD 157587 (\citealt{padgett16}; Millar-Blanchaer et al., submitted).
These systems also feature diffuse emission exterior to
their rings.
\subsection{``Moth''}
When the parent body ring is viewed nearly but not completely
edge-on, with its apoapse pointing out of the sky plane
toward the observer,
a shape like a fan or moth materializes
(top right panel of Figure \ref{fig9}).
The resemblance of this morphology to the actual ``Moth''
(HD 61005; \citealt{hines07})
was first pointed out by \citet{fitz11}
and explored with detailed and quantitative models fitted
to the Moth by Esposito et al.~(submitted).
For sample images of HD 61005, see, e.g., Figure 3 of \citet{hines07};
Figure 1 of \citet{maness09};
Figure 1 of \citet{buenzli10};
and Figure 1 of \citet{ricarte13}.
The wings of our model moth are composed of
dust grains on highly eccentric orbits that are apsidally aligned with
the parent ring (and by extension the planet),
and whose apastra are directed toward the observer.
Viewing these grains from slightly above their orbital plane
produces downswept wings; viewing them from below produces upswept wings
(flip the top right panel of Figure \ref{fig9} about the $y$-axis).
If instead these grains' apastra are directed into the sky plane
away from the observer,
then the wings of the moth appear foreshortened because
most of the starlight is forward-scattered away from the observer
(this is the ``reverse moth'' featured in the
bottom right panel of Figure \ref{fig9}; the
foreshortening is not apparent because the panel
is made using a low contrast to highlight
another feature, the ``bar,'' which will be
discussed below).
Note that the moth morphology does not depend on a non-zero
inclination between the parent body ring and the planet;
a perfectly flat system suffices, provided it is viewed
slightly away from edge-on.
The degree to which the wings of the moth are angled
depends on the degree to which dust grain orbits are
apsidally aligned.
In turn, the preference for apsidal alignment depends
on both planet eccentricity and the orbital phases
at which parents give birth to dust grains.
If dust grains are launched
from parent body periastra and no other orbital phase,
then the system is, in a sense, maximally non-axisymmetric;
there is a ``preferred'' direction in space;
apsidal alignment is perfect, and the moth wings
sweep most strongly away from the horizontal.
The wings of HD 61005 are angled
by $\sim$23 degrees from the horizontal
(\citealt{buenzli10}; Esposito et al., submitted),
suggesting high planet eccentricity and
a strong preference for
launching dust grains near parent periastra.
Another moth-like system is presented by
HD 32297. Intriguingly, HD 32297 sports
a second, fainter pair of moth wings
that roughly parallel the first,
as imaged by {\it HST} on scales of several arcseconds
(e.g., \citealt{schneider14}, their Figures 18 and 19).
Our minimum model can reproduce this ``double wing''
structure (middle right panel of Figure \ref{fig9}).
When dust orbits are closely apsidally aligned,
a first set of wings (closest to the star, toward the bottom
of the panel) traces particles at and around their
periastra, while another fainter set of wings (farther from
the star, toward the top of the panel)
is generated by particles near, but not at, their apastra.
We can even try to make a connection to the disk geometry
as revealed on smaller, subarcsecond scales at infrared
wavelengths. Figure 4 of \citet{esposito14}
(see also Figure 1 of \citealt{currie12})
reveals the emission closest to the star
to be concave down (when north-west is up)
and the emission farther from the star to be concave up
(the latter curvature is consistent with the {\it HST} images
from \citealt{schneider14}).
We can reproduce this reversal of concavity between
small and large scales
by identifying the observed concave downward disk with
the bright arc above the star (the apoapse
of the innermost cavity rim, pointed toward
the observer),
and the concave upward disk with
the wingtips.
A third example of a fan/moth
is given by HD 15745; see, e.g., Figure 1 of
\citet{kalas07_hd15745}
and Figures 13 and 14 of Schneider et
al.~(\citeyear{schneider14}; note the typo
in the source HD number in the caption to Figure 13).
Unlike the case for HD 61005 and HD 32297,
isophotal ellipses describe well
the fan of HD 15745, and indicate that this disk is not
necessarily eccentric: an axisymmetric disk viewed somewhat
above its orbital plane, composed of grains that strongly
forward-scatter, can
reproduce the morphology of HD 15745. See Figure 4 of
\citet{kalas07_hd15745}, or our Figure \ref{fig4} (e.g., $alt = 10^\circ$).
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{needle_im_mask_sb}
\caption{Zoom-in on our model ``needle.'' As long as the coronagraphic
mask covers enough of the central cavity --- specifically the region near
periapse, where the disk is at maximum brightness --- then the disk's longer
arm can appear brighter than its shorter arm, as is consistent with
observations of HD 15115. Accompanying surface brightness profiles for each
arm are computed versus radius $|x|$ by integrating over $y$. Each profile features a local maximum where the line of sight intersects
regions near the ansa of the cavity rim.
}
\label{fig10}
\end{figure}
An alternative way to produce a moth-like morphology is to
allow the interstellar medium (ISM) to secularly perturb dust grain orbits
\citep{maness09}.\footnote{Secular ISM perturbations on grains that
remain bound to the host star, as proposed by \citet{maness09},
should not be confused with ISM deflections of
unbound grains \citep{debes09}.
Unbound grains contribute negligibly to disk
surface brightness; compared to bound grains,
unbound grains have lifetimes that are shorter by orders of magnitude,
and so their relative steady-state population
is correspondingly smaller
\citep[e.g.,][]{strubbe06,krivov06}.
See \citet[][their section 4]{maness09} for a detailed
discussion of the various flavors of ISM interactions,
including empirical arguments against interaction with
a high density ($\sim$100 atoms per cm$^3$)
ISM in the case of HD 61005.}
The mono-directional flow of the ISM across the disk
can induce a global disk eccentricity and thereby mimic
some of the effects of an eccentric planet.
As \citet{maness09} recognize (see their section 5.1),
this mechanism is subject to uncertainties in the host stellar wind;
in principle, the stellar wind can blow
an ``astrosphere'' shielding disk grains from ISM interactions.
\subsection{``Bar''}
A faint bar emerges when disks are viewed close to
but not completely edge-on,
with the embedded planet's periapse pointing out
of the sky plane
(bottom right panel of Figure \ref{fig9}).
The bar, which can be $\sim$20\% brighter than the
gap separating it from the main disk,
is composed of dust grains lingering at the apastra
of orbits that are nearly apsidally anti-aligned
relative to the planet's orbit.
These grains are launched
onto highly eccentric, barely bound orbits
from the apastra of the parent body ring.
Detecting the bar
would confirm that the grain size distribution rises sharply
toward the radiation blow-out value as a consequence
of the long collisional lifetimes afforded by highly eccentric
grains \citep{strubbe06}.
Such a top-heavy size distribution ensures that
dust orbit eccentricities cluster about a unique value;
a pure Dohnanyi size distribution is actually insufficiently
top heavy and does not produce bars.
\subsection{``Needle''}
Needles appear when eccentric and vertically thin disks are viewed
edge-on with their semi-minor axes nearly parallel to the line of sight
(middle left panel of Figure \ref{fig9}).
Needles possess not only length asymmetries --- one limb
appears longer than the other --- but also brightness
asymmetries.\footnote{Of course, if the parent
ring is circular, or if an eccentric ring is seen exactly edge-on with
its major axis parallel to the line of sight, then
both limbs will appear of equal length and brightness (Figure \ref{fig4},
$alt=0^\circ$).
This limiting case of a ``symmetric needle'' may apply to
AU Mic, modulo its mysterious
non-axisymmetric and time-dependent clumps
\citep{fitzgerald07,schneider14,wang15,boccaletti15}.}
As Figure \ref{fig10} details, the shorter arm, containing
dust grains crammed closer to the star, has a higher peak brightness
where the line of sight runs through the periapse of the ring cavity.
Our model needle is reminiscent of the prototype
HD 15115 (``The Blue Needle''):
see, e.g., Figure 1 of \citet{kalas07}; Figure 11 of
\citet{schneider14};
and Figure 1 of \citet{macgregor15}.
These observations show the longer arm to be brighter than the shorter
arm (cf.~Figure 1 of \citealt{rodigas12} and Figure 1 of
\citealt{mazoyer14}
which show more of a brightness asymmetry than a length asymmetry).
Bright long arms can be explained by our model needle provided
the coronagraphic mask is large enough to block out
the global maximum in surface brightness which lies
along the shorter arm; see Figure \ref{fig10}.
A prediction of the model is that beneath the mask,
the surface brightness profiles of the two arms
should cross, with the shorter arm ultimately outshining
the longer arm sufficiently close to the star.
\subsection{``Ship-and-Wake''}
Akin to needles are ships and their associated wakes,
which appear when
eccentric parent rings, viewed edge-on
and close to quadrature,
have sufficiently large inclination
dispersions that vertical structure can be resolved
(bottom left panel of Figure \ref{fig9}).
The ship appears on length scales of the inner cavity rim.
The wake, tracing large-scale diffuse emission,
is vertically thicker
in the direction of the planet's apastron.
The wake might be relevant for HD 106906.
On comparatively small scales within $\sim$1 arcsec (92 AU) of the star,
the disk's western arm appears shorter
than its eastern
arm, as resolved by the {\it Gemini Planet Imager} and {\it SPHERE}
(Figure 1 of \citealt{kalas15}
and Figure 1 of \citealt{lagrange16}, respectively).
We would interpret these observations
to imply that the underlying planet's periapse points west.
On larger scales outside $\sim$2 arcsec, the {\it Hubble Space
Telescope (HST)} reveals the nebulosity to the
east to be more diffuse than to the west (\citealt{kalas15},
their Figure 3) --- this is consistent with the eastern nebulosity
being the back of the wake, comprising dust grains
near the apastra of eccentric orbits apsidally aligned
with the planet's.
A potential problem with this interpretation
is that the {\it HST} image also evinces a radially long
extension to the west, suggesting that apoapse points
west instead of east.
The complete picture
must ultimately include HD 106906b, the substellar
companion at a projected distance of $\sim$7 arcsec
from the star \citep{bailey14}.
It may be that the system is not dynamically relaxed
but has been perturbed by a flyby \citep{larwood01,kalas15}.
\subsection{Future Improvements}\label{fi}
Our model can be improved in a number of ways.
A more accurate calculation of
the distribution of dust grain launch sites as a function
of parent body orbital phase would be welcome.
We found that the appearance of ``moth''-like disks depended
on this distribution: if parent bodies collide preferentially
near their periastra, launching more dust grains there,
then the wings of the moth would be angled more sharply downward.
Collision rates and grain size distributions, each a function
of position, depend on one another; moreover, the entire
disk is spatially interconnected, as dust grains on orbits made
highly eccentric by radiation pressure can collide with
particles at a range of orbital radii.
Numerical simulations --- e.g., {\tt SMACK}
\citep{nesvold13}, augmented to include radiation
pressure \citep{nesvold15} --- can help to solve this problem.
The impact of different scattering phase functions
can be explored. Our images, constructed with a
single Henyey-Greenstein scattering phase function
having a fixed asymmetry parameter $g$,
can be made more realistic
by accounting for how smaller grains
scatter light more isotropically (smaller grains should
have smaller $g$ values than larger grains).
\citet{hedman15} find empirically that
the light scattering properties of Saturn's rings
resemble those of irregularly shaped particles
and submit them for application to debris disks.
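As a concrete illustration of this point (a minimal Python sketch assuming only the standard Henyey-Greenstein form; the size-to-$g$ mapping below is a placeholder assumption, not a relation fitted in this paper), each grain size could be assigned its own asymmetry parameter when constructing scattered-light images:
\begin{verbatim}
# Sketch: size-dependent Henyey-Greenstein scattering phase function.
# The grain-size-to-g mapping is a toy assumption for illustration only.
import numpy as np

def henyey_greenstein(cos_theta, g):
    # Henyey-Greenstein phase function, normalized over the sphere
    return (1.0 - g**2) / (4.0*np.pi*(1.0 + g**2 - 2.0*g*cos_theta)**1.5)

def g_of_size(s, wavelength=0.6e-6):
    # toy prescription: grains much smaller than the wavelength scatter
    # nearly isotropically (g -> 0); large grains forward-scatter strongly
    x = 2.0*np.pi*s/wavelength          # size parameter
    return 0.9*x/(1.0 + x)

for s in (0.05e-6, 0.5e-6, 5.0e-6):     # grain radii in metres
    g = g_of_size(s)
    print(s, g, henyey_greenstein(np.cos(np.radians(30.0)), g))
\end{verbatim}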
Warps --- misalignments between inner and outer disks ---
are missing from our models of single planets in steady-state
(secularly relaxed) disks. Positing two or more planets
on mutually inclined orbits produces warps
\citep[e.g.,][]{wyatt99}. A single planet can also
induce a transient warp \citep{mouillet97},
as has been apparently confirmed by the discovery of
beta Pictoris b (\citealt{lagrange10}). See also, however,
\citet{mmb15} and \citet{apai15}
who report features in beta Pic that a single planet may be
unable to explain.
Higher-order secular effects relevant at high planet eccentricity
and high inclination relative to the parent body disk
\citep[e.g.,][]{veras07,li14,pearce14,nesvold16},
and explicit numerical tests of dynamical stability,
can also be incorporated in future models.
Other neglected dynamical effects include those
from non-overlapping mean-motion resonances
\citep[e.g.,][]{kuchner03,stark09}.
Observational evidence for the relevance of individual
MMRs is so far scant except for
among Kuiper belt objects \citep[e.g.,][]{batygin16,volk16}
and in the inner Solar System's zodiacal dust disk
\citep[e.g.,][]{dermott94,reach10,jones13}.
\acknowledgments
We thank Gaspard Duch\^ene, Tom Esposito, Mike Fitzgerald,
James Graham, Paul Kalas, Max Millar-Blanchaer,
Ruth Murray-Clay, Erika Nesvold, Chris Stark,
and Jason Wang for
encouraging and insightful discussions, and prompt and constructive
feedback on a draft version of this manuscript.
An anonymous referee provided a helpful and exceptionally fast report.
EJL is supported in part by the Natural Sciences and
Engineering Research Council of Canada under PGS D3 and
the Berkeley Fellowship. EC acknowledges support from
grants AST-0909210 and AST-1411954 awarded by the
National Science Foundation, and NASA Origins grant
NNX13AI57G.
This research used the Savio computational cluster
resource provided by the Berkeley Research Computing
program at the University of California,
Berkeley (supported by the UC Berkeley Chancellor,
Vice Chancellor of Research, and the
Chief Information Officer).
\section{introduction} \label{sec:1}
Einstein and Rosen introduced the concept of wormholes as early as 1935 \cite{Einstein:1935tc}: solutions which generate `short-cuts' in spacetime and allow `apparently faster than light' travel \citep{Visser:1995cc, Hawking:1973uf, Wald:1984rg, lobo2017wormholes, Witten:2019qhl} between two distinct spacetime points, dubbed the Einstein-Rosen bridge. Wheeler later referred to these solutions as `wormholes' \cite{Wheeler:1957mu}. It was soon realised that the wormhole solutions do not form a stable structure: their `throat' closes up too quickly when subjected to even a tiny perturbation \citep{Kruskal:1959vx, Fuller:1962zza, Eardley:1974zz, Wald:1980nm}. Morris and Thorne showed that one can get a `traversable' wormhole solution by threading an exotic form of matter (matter with negative energy density that violates the null energy condition (NEC)) at or near the throat \citep{Morris:1988cz, Visser:1995cc}. Remarkably, the concept of exotic matter (such as dark energy, dark matter and phantom energy) has been found useful in explaining several cosmological observations, such as the late time accelerated expansion of the universe, the behaviour of galactic rotation curves and the mass discrepancy in clusters of galaxies \cite{Lobo:2008sg}. It seems that the natural habitat for such matter would be of quantum origin. However, it has been noted that approaches using quantum features of standard model matter are insufficient for creating macroscopic wormholes \cite{Witten:2019qhl}.
Despite the facts stated earlier, certain classical ways have been devised to circumvent the necessity for matter with negative energy density that violates the energy conditions \cite{Hochberg:1990is, Bhawal:1992sz, Agnese:1995kd, Samanta:2018hbw, Fukutaka:1989zb, Ghoroku:1992tz, Furey:2004rq, Bronnikov:2009az}. Alternative theories of gravity, often known as modified gravity theories, also offer new ways to avoid energy condition violations. Even though the convergence condition of null geodesics is violated in some cases, the literature has a considerable number of non-exotic matter models under modified gravity \citep{Lobo:2008zu, Kanti:2011jz, Kanti:2011yv, Zubair:2017oir, Shaikh:2016dpl, Ovgun:2018xys, Canate:2019spb}. So far, the $f(R)$, $f(R, T)$, $f(Q)$, and higher order gravity theories have garnered considerable attention. It has been demonstrated that cosmic acceleration can be explained using $f(R)$ gravity \cite{Nojiri:2003ft} and that these models can provide wormhole solutions with viable matter sources \citep{Lobo:2009ip, Garcia:2010xb, MontelongoGarcia:2010xd, Sajadi:2011oei, Moraes:2017dbs, Sahoo:2017ual, Moraes:2019pao, Sahoo:2020sva, Hassan:2021egb, Mustafa:2021ykn}.
Although wormholes are still regarded as speculative, new advancements in precision black hole measurements have enhanced the need for evaluating plausible wormhole models (as black hole mimickers). Studies of events such as wormhole mergers \citep{Krishnendu:2017shb, Cardoso:2016oxy} and their quasinormal modes \citep{Aneesh:2018hlp, DuttaRoy:2019hij} can aid in the detection of wormhole signatures in the cosmos. Similarly, their lensing effects, shadows, Einstein rings and other features may be analysed in detail for possible detection \citep{Abe:2010ap, Toki:2011zu, Takahashi:2013jqa, Cramer:1994qj, Perlick:2003vg, Tsukamoto:2012xs, Bambi:2013nla, Nedkova:2013msa, Zhou:2016koy, Dzhunushaliev:2012ke, Dzhunushaliev:2013lna, Aringazin:2014rva, Dzhunushaliev:2016ylj}. Any of these signatures, if detected, may also favour modified gravity theories over general relativity.
The Ellis-Bronnikov wormhole (4D-EB) \citep{Ellis:1973yv, Bronnikov:1973fh} was constructed separately by Ellis and Bronnikov employing a phantom scalar field (a field with negative kinetic term) and is one of the most researched wormhole geometries. Several studies on this spacetime can be found in the literature, including the geometry of the spinning 4D-EB spacetime \cite{Chew:2016epf}, generalized spinning 4D-EB wormholes in scalar-tensor theory \cite{Chew:2018vjp}, hairy Ellis wormhole solutions \cite{Chew:2020svi}, Ellis wormholes in anti-de Sitter space \cite{Blazquez-Salcedo:2020nsa}, and the stability analysis of the 4D-EB solution in higher dimensional spacetime \cite{Torii:2013xba}. Kar et al. presented a generalised version of the 4D-EB spacetime (4D-GEB) \cite{Kar:1995jz}, where the necessity for exotic matter is {\em partially} evaded by introducing a new wormhole parameter, $m \geq 2$ ($m = 2$ corresponds to the 4D-EB geometry). Quasinormal modes, echoes and some other aspects of 4D-GEB spacetimes are reported in \cite{DuttaRoy:2019hij}. We recently suggested a further generalisation \cite{Sharma:2021kqb} where the 4D-GEB geometry is embedded in a five-dimensional warped spacetime and showed that the energy conditions are satisfied even for $m=2$, i.e. a novel 5D-EB geometry.
Such an embedding has also been considered in the context of Schwarzschild-like spacetime \cite{Culetu:2021zun}. Recently, Kar has proposed another 5D warped wormhole model where the warping chosen is largely inspired by the non-static Witten bubble \cite{Kar:2022omn}.
The theories of extra spatial dimensions started with the work of Kaluza (1921) and Klein (1926) \cite{Kaluza:1921tu, Klein:1926tv}, who attempted to combine electromagnetism and gravity in a 5D-gravity framework. In fundamental physics, be it string theory \cite{Green:1987sp}, which is a work in progress in unifying quantum theory and gravity, or the symmetries of particle physics \cite{Furey:2015yxg, Baez:2001dm, Baez:2010ye, Furey:2018yyy, Furey:2018drh, Gillard:2019ygk}, extra dimensions seem to appear {\em naturally}. The development of string theory also motivated the so-called brane-world scenarios, in which our 4D Universe (3-brane) is embedded in a higher dimensional bulk. The Dvali-Gabadadze-Porrati (DGP) models produce an infra-red modification with extra dimensional gravity dominating at low energy scales \cite{Dvali:2000hr}. A further generalization of the DGP model to cosmology leads to late-time accelerated expansion \cite{Deffayet:2000uy}.
Perhaps the most popular of these models are the `warped braneworld' models \cite{Rubakov:1983bb, Gogberashvili:1998iu, Gogberashvili:1998vx,Randall:1999ee, Randall:1999vf} that generate an ultra-violet modification to general relativity with extra dimensional gravity dominating at high energy scales. This model proposes a non-factorizable geometry: a curved five-dimensional spacetime in which the 4D-metric is warped by the extra dimension. Though some recent research on wormholes embedded in higher-dimensional spacetime has been published \citep{Lobo:2007qi, deLeon:2009pu, Wong:2011pt, Kar:2015lma, Banerjee:2019ssy, Wang:2017cnd}, warped braneworld models have not been examined.
We showed in \cite{Sharma:2021kqb} that a GEB (and EB) spacetime embedded in a 5D warped bulk (the 5D-WGEB model) meets the energy conditions. As a follow-up, we analysed the timelike trajectories in these spacetimes in detail in \cite{Sharma:2022tiv}. In this work, we investigate the congruence of geodesics in the original 4D-GEB geometry as well as in the 5D-WGEB spacetime and compare them to see how the wormhole parameter and the warped extra dimension affect the congruence evolution. Note that the 5D line element we utilised is the well-known thick brane model \cite{Dzhunushaliev:2009va, Ghosh:2008vc}, in which the warp factor is a smooth function of the extra dimension (thus there are no derivative jumps or delta functions in the curvature and connections).
The following is a breakdown of the structure of this article. Section (\ref{sec:2}) introduces the wormhole spacetimes that correspond to the novel 5D-WGEB model; it includes a summary of the geometric properties and geodesic equations. In Section (\ref{sec:3}), we use the timelike velocity vector field to obtain analytic expressions for the expansion and shear variables (with zero rotation) corresponding to the geodesic congruences and solve the Raychaudhuri equation for the 4D-GEB geometry. In Section (\ref{sec:4}), we numerically compute the ESR variables for two cases, without rotation and with rotation, for both the 4D-GEB and 5D-WGEB geometries. In Section (\ref{sec:5}), to get further insight about the congruences in 4D and 5D, the evolution of the cross-sectional area of a congruence of timelike geodesics is numerically determined and presented graphically. Finally, in Section (\ref{sec:6}) we summarise the key results and discuss future directions of research.
\section{The 5D-WGEB geometry and Geodesics} \label{sec:2}
The specific line element of 5D-WGEB geometry introduced in \cite{Sharma:2021kqb} is as follows,
\begin{equation}
ds^{2} = e^{2f(y)} \Big[ - dt^{2} + dl^{2} + r^{2}(l)~\big( d\theta^{2} + \sin^{2}\theta~d\phi^{2} \big) \Big] + dy^{2}, \label{eq:5d-line-element}
\end{equation}
where the extra spatial dimension is represented by $y$ ($ - \infty \leq y \leq \infty$) and $f(y)$ is the warping factor, which we choose as $f(y) = \pm \log[\cosh(y/y_{0})]$, corresponding to a thick brane scenario. The four dimensional part of the above metric (\ref{eq:5d-line-element}) is a spherically symmetric, ultra-static Generalized Ellis-Bronnikov wormhole spacetime, given by,
\begin{equation}
ds^{2}_{4D} = - dt^{2} + dl^{2} + r^{2}(l)~\big( d\theta^{2} + \sin^{2}\theta~d\phi^{2} \big) \label{eq:generalised-E&B-l}
\end{equation}
\begin{equation}
\mbox{where}~~~~~ r(l) = (b_{0}^{m} + l^{m})^{1/m}. \label{eq:r(l)}
\end{equation}
Here, $l$ is the `proper radial distance' or the `tortoise coordinate'. The `radius' of the wormhole throat is given by $b_{0}$, and $m$ is the wormhole parameter that essentially generalises the EB geometry ($m=2$) to the GEB geometry ($m > 2$). For smoothness of $r(l)$, it is necessary to have even-valued $m$. In the usual radial coordinate $r$, the above metric (\ref{eq:generalised-E&B-l}) can be written as,
\begin{equation}
ds^{2} = - dt^{2} + \frac{dr^{2}}{\Big( 1 - \frac{b(r)}{r} \Big)} + r^{2} \big( d\theta^{2} + \sin^{2}\theta d\phi^{2} \big), \label{eq:generalised-E&B-r}
\end{equation}
where $r$ and $l$ are related through the {\em shape function} $b(r)$ as,
\begin{equation}
dl^{2} = \frac{dr^{2}}{ \Big( 1 - \frac{b(r)}{r} \Big)}~~~~~\implies ~~~~~\quad b(r) = r - r^{(3-2m)} (r^{m} - b_{0}^{m})^{ \Big( 2 - \frac{2}{m} \Big)}. \label{eq:r-l-relation}
\end{equation}
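As a quick consistency check (an illustrative {\tt sympy} sketch, not part of the original derivation; the symbol names are ours), the shape function quoted above can be recovered symbolically from $r(l)$ and $dl^{2} = dr^{2}/(1 - b/r)$ for any even $m$:
\begin{verbatim}
# Sketch: recover b(r) from r(l) = (b0^m + l^m)^(1/m) using
# b(r) = r [1 - (dr/dl)^2], and compare with the closed form above.
import sympy as sp

l, b0, r = sp.symbols('l b_0 r', positive=True)
m = 4                                   # any even wormhole parameter

r_of_l = (b0**m + l**m)**sp.Rational(1, m)
drdl = sp.diff(r_of_l, l)
b_from_metric = r_of_l*(1 - drdl**2)

# express in terms of r via l = (r^m - b0^m)^(1/m)
b_from_metric = b_from_metric.subs(l, (r**m - b0**m)**sp.Rational(1, m))
b_closed_form = r - r**(3 - 2*m)*(r**m - b0**m)**sp.Rational(2*m - 2, m)

print(sp.simplify(b_from_metric - b_closed_form))   # expected: 0
\end{verbatim}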
The geometric and curvature quantities for both the 5D-WGEB and 4D-GEB spacetimes have been discussed in detail in \cite{Sharma:2021kqb}, to which the interested reader is referred. In \cite{Sharma:2022tiv}, we further discussed the possible timelike trajectories in detail. In the following we shall focus on congruences of geodesics.
For completeness, we rewrite the geodesic equations corresponding to the metrics (\ref{eq:5d-line-element}) and (\ref{eq:generalised-E&B-l}), as these equations are to be solved along with the geodesic deviation equations to determine the properties of the congruences.
The geodesic equations for 4D-GEB are,
\begin{equation}
\frac{d^{2}t}{d\lambda^{2}} = 0 \label{eq:geodesic-1}
\end{equation}
\begin{equation}
\frac{d^{2}l}{d\lambda^{2}} - l^{-1+m} ~\big( b_{0}^{m} + l^{m} \big)^{-1 + \frac{2}{m}} ~\Big[ \Big( \frac{d\theta}{d\lambda} \Big)^{2} + \sin^{2}\theta ~\Big( \frac{d\phi}{d\lambda} \Big)^{2} \Big] = 0 \label{eq:geodesic-2}
\end{equation}
\begin{equation}
\frac{d^{2}\theta}{d\lambda^{2}} + \frac{2l^{-1+m}}{\big( b_{0}^{m} + l^{m} \big)} \frac{dl}{d\lambda} \frac{d\theta}{d\lambda} - \sin\theta \cos\theta \Big( \frac{d\phi}{d\lambda} \Big)^{2} = 0 \label{eq:geodesic-3}
\end{equation}
\begin{equation}
\frac{d^{2}\phi}{d\lambda^{2}} + 2 \cot\theta \frac{d\theta}{d\lambda} \frac{d\phi}{d\lambda} + \frac{2l^{-1+m}}{ \big( b_{0}^{m} + l^{m} \big) } \frac{dl}{d\lambda} \frac{d\phi}{d\lambda} = 0 \label{eq:geodesic-4}
\end{equation}
and the geodesic equations for the 5D-WGEB model are as follows.
\begin{equation}
\frac{d^{2}t}{d\lambda^{2}} + 2 ~f'(y) ~\frac{dt}{d\lambda} ~\frac{dy}{d\lambda} = 0 \label{eq:geodesic-11}
\end{equation}
\begin{equation}
\frac{d^{2}l}{d\lambda^{2}} + 2~f'(y)~\frac{dl}{d\lambda}~\frac{dy}{d\lambda} - l^{-1+m} ~\big( b_{0}^{m} + l^{m} \big)^{-1 + \frac{2}{m}} ~\Big[ \Big( \frac{d\theta}{d\lambda} \Big)^{2} + \sin^{2}\theta ~\Big( \frac{d\phi}{d\lambda} \Big)^{2} \Big] = 0 \label{eq:geodesic-22}
\end{equation}
\begin{equation}
\frac{d^{2}\theta}{d\lambda^{2}} + 2 ~f'(y) ~\frac{d\theta}{d\lambda} ~\frac{dy}{d\lambda} + ~\frac{2l^{-1+m}}{(b_{0}^{m} + l^{m})} ~\frac{d\theta}{d\lambda} ~\frac{dl}{d\lambda} - \sin\theta ~\cos\theta ~\Big( \frac{d\phi}{d\lambda} \Big)^{2} = 0 \label{eq:geodesic-33}
\end{equation}
\begin{equation}
\frac{d^{2}\phi}{d\lambda^{2}} + 2 ~f'(y) ~\frac{d\phi}{d\lambda} ~\frac{dy}{d\lambda} + 2 ~\cot\theta ~\frac{d\theta}{d\lambda} ~\frac{d\phi}{d\lambda} + \frac{2l^{-1+m}}{ \big( b_{0}^{m} + l^{m} \big) } ~\frac{dl}{d\lambda} ~\frac{d\phi}{d\lambda} = 0 \label{eq:geodesic-44}
\end{equation}
\begin{equation}
\frac{d^{2}y}{d\lambda^{2}} + f'(y)~e^{2f(y)}~\Big[ \Big( \frac{dt}{d\lambda} \Big)^{2} - \Big( \frac{dl}{d\lambda} \Big)^{2} - (b_{0}^{m} + l^{m})^{2/m} ~\Big[ \Big( \frac{d\theta}{d\lambda} \Big)^{2} + \sin^{2}\theta ~\Big( \frac{d\phi}{d\lambda} \Big)^{2} \Big] \Big] = 0 \label{eq:geodesic-55}
\end{equation}
Here $\lambda$ is the affine parameter. The differences between the 4D and 5D geodesic equations can be seen explicitly from the above expressions: the $y$- and $\dot{y}$-dependent terms in Eqs. (\ref{eq:geodesic-11}) to (\ref{eq:geodesic-44}), and an extra equation (\ref{eq:geodesic-55}) for motion along the extra dimension. Note that Eq. (\ref{eq:geodesic-55}) implies that in the presence of a growing warp factor we have confined trajectories (whose $y$-coordinates oscillate about $y=0$), while in the presence of a decaying warp factor we have runaway trajectories ($y \rightarrow \pm \infty$ with evolving $\lambda$) \cite{Sharma:2022tiv,Ghosh:2009ig}.
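As an illustration of how such trajectories can be obtained in practice (a minimal {\tt numpy}/{\tt scipy} sketch; this is not the code used for the figures in this paper, and the initial data below are chosen only for demonstration), the 4D equations (\ref{eq:geodesic-1}) to (\ref{eq:geodesic-4}) restricted to the equatorial plane can be integrated as a first-order system; the 5D version simply adds the $f'(y)\,\dot{y}$ terms and Eq. (\ref{eq:geodesic-55}):
\begin{verbatim}
# Sketch: integrate the 4D-GEB geodesic equations (theta = pi/2)
# as a first-order system in (t, l, phi, tdot, ldot, phidot).
import numpy as np
from scipy.integrate import solve_ivp

b0, m = 1.0, 2                        # throat radius, even wormhole parameter

def rhs(lam, s):
    t, l, phi, td, ld, pd = s
    rm = b0**m + np.abs(l)**m         # (b0^m + l^m)
    lm1 = np.sign(l)*np.abs(l)**(m - 1)
    ldd = lm1*rm**(-1.0 + 2.0/m)*pd**2        # Eq. (geodesic-2), theta = pi/2
    pdd = -2.0*lm1/rm*ld*pd                   # Eq. (geodesic-4), theta = pi/2
    return [td, ld, pd, 0.0, ldd, pdd]

k, h = np.sqrt(3.0), 1.0              # conserved energy and angular momentum
ld0 = np.sqrt(k**2 - h**2/b0**2 - 1.0)        # timelike constraint at l = 0
sol = solve_ivp(rhs, [0.0, 20.0], [0.0, 0.0, 0.0, k, ld0, h/b0**2],
                max_step=0.01, rtol=1e-10, atol=1e-12)
print(sol.y[1, -1])                   # final value of l
\end{verbatim}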
\section{Derivation of ESR From the Velocity Field} \label{sec:3}
In the following we solve the Raychaudhuri equations to analyse the flow of geodesic congruences through the kinematic variables expansion $\Theta$, shear $\Sigma_{AB}$ and rotation $\Omega_{AB}$ (ESR). Note that we shall consider those trajectories that cross the throat (for details of {\em crossing trajectories} see \cite{Sharma:2022tiv}).
The Raychaudhuri equation for expansion \cite{Kar:2006ms,Poisson:2009pwt}, given by
\begin{equation}
\frac{d\Theta}{d\lambda} + \frac{1}{3}\Theta^{2} + R_{AB}u^{A}u^{B} + \Sigma^{2} - \Omega^{2} = 0, \label{eq:raychaudhuri-eq}
\end{equation}
is dependent on other kinematic variables such as the shear and rotation, as well as on the curvature term $R_{AB}u^{A}u^{B}$. The above equation is a non-linear first order differential equation that can be transformed into a second order {\em linear} form by redefining $\Theta$ as $\Theta = 3 \frac{\dot{F}}{F}$, since then $\frac{d\Theta}{d\lambda} = 3\frac{\ddot{F}}{F} - \frac{1}{3}\Theta^{2}$. Thus, Eq. (\ref{eq:raychaudhuri-eq}) turns into,
\begin{equation}
\frac{d^{2}F}{d\lambda^{2}} + \frac{1}{3} \left( R_{AB}u^{A}u^{B} + \Sigma^{2} - \Omega^{2} \right)F = 0. \label{eq:raychaudhuri-eq-2}
\end{equation}
The congruence converges at a finite $\lambda$ where $F = 0$, $\dot{F} < 0$. Using the Sturm comparison theorems in differential equations, one can show that convergence happens when
\begin{equation}
R_{AB}u^{A}u^{B} + \Sigma^{2} - \Omega^{2} \geq 0 .\label{eq:convergence-condition}
\end{equation}
The role of the terms that appear in the evolution equation for expansion is clearly shown in the above convergence condition. While rotation works against convergence, shear works in its favour. When $\Omega_{AB} = 0$, then $R_{AB}u^{A}u^{B} \geq 0$ leads to focusing of the congruences.
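For instance (a simple illustrative case, not one of the geometries studied here), if $R_{AB}u^{A}u^{B} + \Sigma^{2} - \Omega^{2} = c > 0$ is constant along the congruence, Eq. (\ref{eq:raychaudhuri-eq-2}) gives $F \propto \cos\left[\sqrt{c/3}\,(\lambda - \lambda_{0})\right]$, so $F$ vanishes (and $\Theta \rightarrow -\infty$) within an affine interval $\Delta\lambda \leq \pi\sqrt{3/c}$.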
In the case of vanishing rotation, we derive analytic expressions for the ESR variables using the first integrals of the geodesic equations and plot them below.
The ESR variables can be computed directly from their definitions in terms of the velocity vector field $u^{A}$. The following are the formal definitions of the expansion $\Theta$, shear $\Sigma_{AB}$, and rotation $\Omega_{AB}$ \citep{Poisson:2009pwt}:
\begin{equation}
\Theta = \nabla_{A}u^{A} , \label{eq:theta-definition}
\end{equation}
\begin{equation}
\Sigma_{AB} = \frac{1}{2}~\left( \nabla_{B}u_{A} + \nabla_{A}u_{B} \right) - \frac{1}{n - 1}~h_{AB}~\Theta , \label{eq:sigma-definition}
\end{equation}
\begin{equation}
\Omega_{AB} = \frac{1}{2}~\left( \nabla_{B}u_{A} - \nabla_{A}u_{B} \right). \label{eq:omega-definition}
\end{equation}
Here, $n$ is the dimension of spacetime, $h_{AB} = g_{AB} \pm u_{A}u_{B}$ is the projection tensor and $u_{A}u^{A} = \mp 1$ (the plus and minus signs stand for timelike and spacelike geodesics, respectively). In the case of the 4D-GEB geometry the first integrals $u^{A}$ (whose expressions are independent of the affine parameter $\lambda$) corresponding to each coordinate can be found. These are listed below for timelike trajectories (for the sake of simplicity, we choose $\theta = \pi/2$ without losing any generality).
\begin{equation}
\dot{t} = k ,\label{eq:t-dot}
\end{equation}
\begin{equation}
\dot{\phi} = \frac{h}{\left( b_{0}^{m} + l^{m} \right)^{\frac{2}{m}}} ,\label{eq:phi-dot}
\end{equation}
\begin{equation}
\dot{l} = \sqrt{k^{2} - \frac{h^{2}}{\left( b_{0}^{m} + l^{m} \right)^{\frac{2}{m}}} - 1} .\label{eq:l-dot}
\end{equation}
Eq. (\ref{eq:l-dot}) is obtained by using Eqs. (\ref{eq:t-dot}), (\ref{eq:phi-dot}) and the timelike constraint on geodesics, $ g_{AB}u^{A}u^{B} = -1 $. Here, $k$ and $h$ are the conserved energy and angular momentum per unit mass of the timelike particle.
For the given velocity vector field (Eqs. \ref{eq:t-dot} to \ref{eq:l-dot}) of the 4D-GEB wormhole geometry, the expansion scalar and the squared amplitude of the shear are,
\begin{equation}
\Theta = l^{-1+m} \left[ \frac{h^{2} + \left(b_{0}^{m} + l^{m} \right)^{\frac{2}{m}} \left(k^{2} - \frac{h^{2}}{\left(b_{0}^{m} + l^{m} \right)^{\frac{2}{m}}} - 1 \right)}{\left(b_{0}^{m} + l^{m} \right)^{1 + \frac{2}{m}} \sqrt{k^{2} - \frac{h^{2}}{\left(b_{0}^{m} + l^{m} \right)^{\frac{2}{m}}} - 1}} \right] , \label{eq:GEB-theta}
\end{equation}
\begin{equation}
\resizebox{\textwidth}{!}
{%
$ \Sigma^{2} = \frac{l^{ -2 + 2m } \left( b_{0}^{m} + l^{m} \right)^{\frac{-2 \left( 1 + m \right)}{m}} \left[ h^{4} \left( 13 + 4k^{2} + k^{4} \right) - h^{2} \left( - 16 + 12k^{2} + 3k^{4} + k^{6} \right) \left( b_{0}^{m} + l^{m} \right)^{2/m} + \left( - 1 + k^{2} \right)^{2} \left( 9 - k^{2} + k^{4} \right) \left( b_{0}^{m} + l^{m} \right)^{\frac{4}{m}} \right]}{9 \left[ - h^{2} + \left( - 1 + k^{2} \right) \left( b_{0}^{m} + l^{m} \right)^{\frac{2}{m}} \right]} $%
} \label{eq:GEB-sigsq}
\end{equation}
One can easily check that if $l \rightarrow 0$ or $l \rightarrow \pm \infty$ the expansion scalar $\Theta \rightarrow 0$ and the shear scalar $\Sigma^{2} \rightarrow 0$. This is true irrespective of the value of $h$. Thus there will be no expansion/contraction and no distortion of timelike geodesic congruences at the wormhole throat and in the asymptotically flat regions, as expected. The profile of $\Sigma^2$ is symmetric about $l=0$; the $\Theta$ profile, on the other hand, is antisymmetric about the throat. Fig. (\ref{fig:GEB-ESR-analytic}) clarifies the effect of the wormhole parameter $m$ on expansion and shear. The same $\Theta$-variation has also been reported in \cite{DuttaRoy:2019hij}. While plotting the figures, the numerical values are chosen to be consistent with the corresponding `crossing trajectory conditions' \cite{Sharma:2022tiv}.
\begin{figure}[H]
\centering
\includegraphics[scale=.5]{GEB-crossing-theta-analytic.pdf}
\hspace{1cm}
\includegraphics[scale=.5]{GEB-crossing-sigma-analytic.pdf}
\caption{Evolution of $\Theta$ and $\Sigma^{2}$ in case of crossing geodesic congruences with different choices of the wormhole parameter $m = 2, 4, 6$ (blue, black and red curves). Here $b_{0} = 1$, $k = \sqrt{3}$ and $h = 1$. }
\label{fig:GEB-ESR-analytic}
\end{figure}
The variation of $\Theta$ (or $\Sigma^{2}$) is qualitatively similar in the asymptotic regions for the $m = 2$ and $m > 2$ geometries. However, they are very different near the throat, which is reminiscent of the fact that up to the $m^{th}$ order derivative of the geodetic potential vanishes at the throat \cite{Sharma:2022tiv}. Thus a congruence of geodesics coming from $l=\infty$, with zero initial ESR, expands and then contracts before crossing the throat with zero ESR. However, the location of the extrema moves farther away from the throat with increasing $m$.
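The shift of the extrema with $m$ can be checked directly from Eq. (\ref{eq:GEB-theta}); the following small numerical sketch (assuming {\tt numpy}, with parameter values simply mirroring Fig. \ref{fig:GEB-ESR-analytic}) locates the maximum of the expansion for $l > 0$:
\begin{verbatim}
# Sketch: evaluate Eq. (GEB-theta) on l > 0 and locate its maximum,
# which moves away from the throat as the wormhole parameter m increases.
import numpy as np

b0, k, h = 1.0, np.sqrt(3.0), 1.0

def expansion(l, m):
    r = (b0**m + l**m)**(1.0/m)
    ul2 = k**2 - h**2/r**2 - 1.0      # (dl/dlambda)^2, from Eq. (l-dot)
    return l**(m - 1)*(h**2 + r**2*ul2)/(r**(m + 2)*np.sqrt(ul2))

l = np.linspace(1e-3, 10.0, 200001)
for m in (2, 4, 6):
    Th = expansion(l, m)
    print(m, l[np.argmax(Th)], Th.max())
\end{verbatim}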
In the case of the 5D-WGEB model, it is difficult to build intuition directly from the equations because of their complexity. Therefore, in the following we analyse the 4D and 5D scenarios numerically.
\section{Numerical Analysis of ESR variables} \label{sec:4}
To describe the behaviour of the evolution of the expansion $\Theta$ in both the 4D-GEB and 5D-WGEB models, we numerically solved Eq. (\ref{eq:evolution-of-B_AB}) along with the geodesic equations. For this, we have chosen two types of boundary conditions, with $\Theta$ and $\Sigma_{AB}$ being zero at the throat: (a) without rotation ($\Omega_{AB}=0$) and (b) with rotation ($\Omega_{AB}\neq 0$). The velocity components for timelike geodesics in the 4D-GEB background satisfying the timelike constraint are chosen as $\{ \dot{t}(0) = \sqrt{3} $, $\dot{l}(0) = 1.41421$, $\dot{\theta}(0) = 0$, $\dot{\phi}(0) = 0 \} $ and the same for the 5D-WGEB models with growing and decaying warp factor are $\{ \dot{t}(0) = 1.71485 $, $\dot{l}(0) = 1.39665 $, $\dot{\theta}(0) = 0$, $\dot{\phi}(0) = 0 $, $ \dot{y}(0) = 0 \}$ and $ \{ \dot{t}(0) = 1.74943 $, $\dot{l}(0) = 1.43195 $, $\dot{\theta}(0) = 0$, $\dot{\phi}(0) = 0 $, $ \dot{y}(0) = 0 \}$ respectively (see Appendix and \cite{Sharma:2022tiv}). Figs. (\ref{fig:GEB-ESR-wor2}) to (\ref{fig:5d-decay-ESR-wr2}) show the evolution of the ESR variables with (continuous curves) and without (dashed curves) rotation for three different values of $m$ for the 4D-GEB and 5D-WGEB (with growing and decaying warp factor) models, respectively.
\subsection{Case-1: Congruences in 4D-GEB spacetime with and without rotation}
The evolution of the numerically computed expansion $\Theta$ (without rotation) in the 4D-GEB scenario agrees exactly with the analytic behaviour discussed earlier, confirming the accuracy of the numerical computation. The quantitative difference with respect to Fig. \ref{fig:GEB-ESR-analytic} arises because we have chosen the angular momenta $h$ of the geodesics of the congruence to be zero for computational simplicity. For the reader's sake, we also show the evolution of the curvature term $R_{AB}u^{A}u^{B}$ along with the expansion and shear.
The boundary values used in this subsection are given in Appendix \ref{app-1}.
\begin{figure}[H]
\centering
\includegraphics[scale=.4]{GEB-crossing-theta-wor2.pdf}
\hspace{.3cm}
\includegraphics[scale=.4]{GEB-crossing-sigma-wor2.pdf}
\hspace{.2cm}
\includegraphics[scale=.45]{GEB-crossing-curvature-wor2.pdf}
\caption{Evolution of $\Theta$, $\Sigma^{2}$ and the curvature term without rotation in case of crossing geodesic congruences for wormhole parameter $m = 2, 4$, and $6$ (blue, black and red curves) with $b_{0} = 1$. }
\label{fig:GEB-ESR-wor2}
\end{figure}
Something interesting happens in the presence of rotation in the congruence (see Fig. \ref{fig:GEB-ESR-wr2}), even for the 4D-GEB geometry, which could not be captured by the analytic approach.
Posing a boundary condition at the throat with non-zero rotation, $\Omega_{AB}(\lambda=0) \neq 0$, seems to nullify the effect of the wormhole parameter $m$: the $\Theta$ profiles (vs $l$) for different values of $m$ overlap. This raises the possibility that a {\em rotating} Ellis-Bronnikov wormhole may not require exotic matter to be stable. However, a detailed study of this interesting case is not possible within the scope of this article.
Fig. \ref{fig:GEB-ESR-wr2} further shows that the initial rotation also dies away in the asymptotic regions.
\begin{figure}[H]
\centering
\includegraphics[scale=.4]{GEB-crossing-theta-wr2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.4]{GEB-crossing-sigma-wr2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.4]{GEB-crossing-omega-wr2.pdf}
\hspace{0.2cm}
\caption{Evolution of ESR in case of crossing geodesic congruences with rotation for wormhole parameter, $m = 2, 4$, and $6$ (blue, black and red curves) with $b_{0} = 1$. }
\label{fig:GEB-ESR-wr2}
\end{figure}
\subsection{Case-2: Congruences in 5D-WGEB spacetime with growing warp factor}
In the presence of a growing warp factor, the evolution of the ESR variables without rotation and with rotation is shown in Figs. \ref{fig:5d-grow-ESR-wor2} and \ref{fig:5d-grow-ESR-wr2} respectively.
The boundary values used in this subsection are given in Appendix \ref{app-2}.
The general behaviour of $\Theta$ implies that a congruence of dense (focussed) geodesics coming from positive $l$ will defocus on the other side and vice versa. The presence of a congruence singularity at finite $l$ is essentially due to the bounded trajectories along $y$ \cite{Ghosh:2010gq}. For the $m=2$ geometry there is only one extremum on either side of $l=0$, but for $m > 2$ we have two extrema on either side.
The shear, on the other hand, increases monotonically as $l \rightarrow \pm \infty$. The curvature term becomes positive in the asymptotic regions and around the throat, which is different from the 4D case where this term was negative everywhere.
\begin{figure}[H]
\centering
\includegraphics[scale=.4]{5d-grow-crossing-theta-wor2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.4]{5d-grow-crossing-sigma-wor2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.45]{5d-grow-crossing-curvature-wor2.pdf}
\caption{Evolution of $\Theta$, $\Sigma^{2}$ and the curvature term without rotation in case of crossing geodesic congruences with growing warp factor for $m = 2, 4$, and $6$ (blue, black and red curves) with $b_{0} = 1$. }
\label{fig:5d-grow-ESR-wor2}
\end{figure}
Introduction of non-zero rotation, $\Omega_{AB}$, at the throat again compensates for the differences between the different $m$ geometries.
The evolution of the ESR variables is, to some extent, similar to the 4D case with rotation. The congruence singularity persists in the presence of rotation. Though the expansion diverges, the shear (unlike in the 4D case and in the case without rotation) vanishes in the asymptotic region, as does the rotation.
\begin{figure}[H]
\centering
\includegraphics[scale=.4]{5d-grow-crossing-theta-wr2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.4]{5d-grow-crossing-sigma-wr2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.4]{5d-grow-crossing-omega-wr2.pdf}
\hspace{0.2cm}
\caption{Evolution of the ESR variables in case of crossing geodesic congruences with growing warp factor for wormhole parameter $m = 2, 4$, and $6$ (blue, black and red curves) with $b_{0} = 1$. }
\label{fig:5d-grow-ESR-wr2}
\end{figure}
\subsection{Case-3: Congruences in 5D-WGEB spacetime with decaying warp factor}
As Fig. \ref{fig:5d-decay-ESR-wor2} shows, in the presence of a decaying warp factor and without rotation, the difference in the $\Theta$-profiles between the $m=2$ and $m > 2$ geometries is reduced: both have one extremum on either side of the throat.
Further, the congruence singularities are pushed towards $l \rightarrow \pm \infty$.
The shear has local minima at finite $\pm l$ (apart from the global minimum at $l=0$) and increases monotonically beyond them.
The curvature term, though positive in the asymptotic regions, contributes negatively around the throat (even for $m > 2$), which is similar to the 4D case and opposite to the case with a growing warp factor.
\begin{figure}[H]
\centering
\includegraphics[scale=.4]{5d-decay-crossing-theta-wor2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.4]{5d-decay-crossing-sigma-wor2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.45]{5d-decay-crossing-curvature-wor2.pdf}
\caption{Evolution of $\Theta$, $\Sigma^{2}$ and the curvature term without rotation in case of crossing geodesic congruences with decaying warp factor for $m = 2, 4$, and $6$ (blue, black and red curves) with $b_{0} = 1$.}
\label{fig:5d-decay-ESR-wor2}
\end{figure}
Introduction of non-zero rotation, $\Omega_{AB}$, at the throat again compensates for the differences between the different $m$ geometries. Remarkably, it slows the evolution down and avoids the divergences at large $l$ that occurred in the case with a growing warp factor.
The evolution of the shear and rotation is similar to the case with a growing warp factor.
The boundary values used in this subsection are given in Appendix \ref{app-3}.
\begin{figure}[H]
\centering
\includegraphics[scale=.4]{5d-decay-crossing-theta-wr2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.4]{5d-decay-crossing-sigma-wr2.pdf}
\hspace{0.2cm}
\includegraphics[scale=.4]{5d-decay-crossing-omega-wr2.pdf}
\hspace{0.2cm}
\caption{Evolution of the ESR variables in case of crossing geodesic congruences with decaying warp factor for wormhole parameter $m = 2, 4$, and $6$ (blue, black and red curves) with $b_{0} = 1$. }
\label{fig:5d-decay-ESR-wr2}
\end{figure}
We can draw some general conclusions about the evolution of the ESR variables from the figures given above. In the case with zero rotation, at $l = 0$ (this is by choice) and at $l = \pm b_{0} = \pm 1$, the expansion and shear have the same value for all $m$ in both the 4D and 5D models.
The effect of the wormhole parameter $m$ is neutralised when we introduce rotation ($\Omega_{AB}\neq 0$) in the congruence. The occurrence of divergences in the $\Theta$ and $\Sigma^2$ profiles of the geodesic congruences (congruence singularities) is due to the extra dimension, or more specifically the warping factor. However, the presence of rotation can avoid these divergences in the case of a decaying warp factor.
\section{Evolution of cross-sectional area of Congruences} \label{sec:5}
In this section we present a different perspective on the evolution of ESR variables.
Let us consider a congruence of four timelike geodesics that comes from one side ($l = - \infty$) of the throat, crosses the throat, and reaches infinity on the other side ($l = \infty$). One may ask how the cross sectional area of such a geodesic congruence evolves as the affine parameter or time evolves. To answer this question, we numerically solve the following set of equations (which are essentially the Raychaudhuri and the geodesic deviation equations),
\begin{equation}
u^{C}\nabla_{C}B_{AB} = - B_{AC}B^{C}_{B} - R_{ACBD}u^{C}u^{D} \label{eq:evolution-of-B_AB}
\end{equation}
\begin{equation}
\xi^{A}_{;B}u^{B} = B^{A}_{B} \xi^{B} \label{eq:evolution-deviation-vector}
\end{equation}
along with the geodesic equations (\ref{eq:geodesic-1} - \ref{eq:geodesic-4}). Here, $B_{AB} = \nabla_{B}u_{A}$ is the gradient of the velocity vector field, $\xi^{A}$ represents the deviation vector between two neighbouring geodesics, and Eq. (\ref{eq:evolution-deviation-vector}) is the evolution equation for the deviation vector. To see the evolution from the perspective of a local observer, we express the tensorial quantities in the frame basis. In the coordinate and frame bases, the metric tensors and the components of the deviation vector are related via the vierbein field $e^{A}_{a}$ as
\begin{equation}
g^{AB} = e^{A}_{a}e^{B}_{b} \eta_{ab} ,\label{eq:relation-g-eta}
\end{equation}
\begin{equation}
\xi^{A} = e^{A}_{a} \xi^{a} .\label{eq:relation-deviation-A-a}
\end{equation}
Here capital and lowercase Latin indices stand for general spacetime coordinates (w.r.t. the coordinate basis) and local laboratory coordinates or the local Lorentz frame (w.r.t. the frame basis), respectively.
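For reference (an illustrative {\tt sympy} sketch, not the code used for our figures), the connection and curvature components entering Eq. (\ref{eq:evolution-of-B_AB}) can be generated directly from the 4D-GEB metric rather than coded by hand:
\begin{verbatim}
# Sketch: Christoffel symbols and Riemann tensor of the 4D-GEB metric,
# the ingredients needed on the right-hand side of Eq. (evolution-of-B_AB).
import sympy as sp

t, th, ph = sp.symbols('t theta phi', real=True)
l, b0 = sp.symbols('l b_0', positive=True)
m = 2                                  # wormhole parameter (any even value)
x = [t, l, th, ph]

r = (b0**m + l**m)**sp.Rational(1, m)
g = sp.diag(-1, 1, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()

# Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], x[c])
            + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
            for d in range(4))/2)
           for c in range(4)] for b in range(4)] for a in range(4)]

# R^a_{bcd} = Gamma^a_{db,c} - Gamma^a_{cb,d}
#             + Gamma^a_{ce} Gamma^e_{db} - Gamma^a_{de} Gamma^e_{cb}
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][d][b], x[c]) - sp.diff(Gamma[a][c][b], x[d])
    expr += sum(Gamma[a][c][e]*Gamma[e][d][b]
                - Gamma[a][d][e]*Gamma[e][c][b] for e in range(4))
    return sp.simplify(expr)

print(Gamma[1][3][3])                  # Gamma^l_{phi phi}
print(riemann(1, 2, 1, 2))             # R^l_{theta l theta}
\end{verbatim}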
As a first case, let us choose the boundary conditions (on $B_{AB}$) such that all the rotation components vanish (i.e. $\Omega_{AB}=0$). We choose a congruence of four geodesics (along with a reference geodesic at the {\em origin}) such that the cross-sectional area projected on $\sqrt{g_{11}} \xi^{1}$-$\sqrt{g_{33}} \xi^{3}$ or $\sqrt{g_{11}} \xi^{1}$-$\sqrt{g_{44}} \xi^{4}$ planes is of square shape at $\lambda = 0$ (or at the throat as $ l(\lambda=0) = 0$ for crossing geodesics). Then Eq. (\ref{eq:evolution-deviation-vector}) is solved for the four deviation vectors which represent the distances/deviations of the four geodesics (at the four corners of the square) from a central geodesic (co-moving observer) which is at the centre of the imaginary square (in the figures below).
The following figures show the evolution of the projected cross sectional area for both the 4D-GEB and 5D-WGEB geometries with respect to $\lambda$ (the geodesic congruence crosses the throat at $\lambda=0$). Various subcases corresponding to different values of the wormhole parameter $m = 2, 4, 6$ and to growing/decaying warp factors are analysed. Where the $m=4$ and $m=6$ geometries show similar behaviour we present the evolution only for $m=4$. Note that in this section we choose the angular momentum `$h$' to be non-zero (and consistent with all the constraints), which plays an interesting role in the 5D scenario.
\subsection{Case-1: 4D-GEB spacetime}
Congruences in the 4D-GEB spacetime is shown in Fig. (\ref{fig:4d-deviation-evolution}).
The boundary values used in this subsection are given in Appendix \ref{app-4}.
Four snapshots of the area with increasing and decreasing $\lambda$ are depicted in the left and right plots respectively. These plots show features of expansion and shear with no rotation, as expected. The cross sectional area focuses as the geodesic congruence approaches the throat at $l=0$, while the distortion of the projected area decreases until it reaches the throat and increases after passing through it.
\begin{figure}[h]
\centering
\includegraphics[scale=.60]{4d-deviation-2-negative-l.pdf}
\hspace{1cm}
\includegraphics[scale=.60]{4d-deviation-2-positive-l.pdf}
\hspace{1cm}
\includegraphics[scale=.60]{4d-deviation-4-negative-l.pdf}
\hspace{1cm}
\includegraphics[scale=.60]{4d-deviation-4-positive-l.pdf}
\caption{Evolution of the projected cross-section of the geodesic congruence (without rotation) in case of crossing congruences for different choices of the wormhole parameter $m = 2, 4$, with $b_{0} = 1$. }
\label{fig:4d-deviation-evolution}
\end{figure}
The evolution of the cross sectional area is symmetric about the throat, in contrast to the overall expansion profile shown in the previous section.
However, there is no qualitative difference in the evolution of the cross-section between the $m = 2$ and $m > 2$ geometries. A congruence which is widespread and strongly distorted in one asymptotic region crosses the throat with minimum size and distortion and regains its original shape and size on the other side.
\subsection{Case-2: 5D-WGEB spacetime with growing warp factor}
The evolution profiles of the congruence cross-sections projected on the $\sqrt{g_{11}} \xi^{1}$-$\sqrt{g_{33}} \xi^{3}$ and $\sqrt{g_{11}} \xi^{1}$-$\sqrt{g_{44}} \xi^{4}$ planes for the 5D-WGEB model with a growing warp factor are shown in Fig. (\ref{fig:5d-grow-deviation-evolution}) for two different values of $m = 2, 4$.
The boundary values used in this subsection are given in Appendix \ref{app-5}.
Since the evolution is symmetric about $\lambda=0$ (or $l=0$) we only present plots for $\lambda \geq 0$ (or $l \geq 0$).
\begin{figure}[H]
\centering
\includegraphics[scale=.6]{5d-grow-deviation-2-positive-l.pdf}
\hspace{1cm}
\includegraphics[scale=.6]{5d-grow-deviation-2-positive-y.pdf}
\includegraphics[scale=.60]{5d-grow-deviation-4-positive-l.pdf}
\hspace{1cm}
\includegraphics[scale=.60]{5d-grow-deviation-4-positive-y.pdf}
\caption{Evolution of the projected cross-section of the geodesic congruence (without rotation) in case of crossing congruences with growing warp factor for wormhole parameter $m = 2, 4$, with $b_{0} = 1$. }
\label{fig:5d-grow-deviation-evolution}
\end{figure}
The evolution of the area element projected on the $\sqrt{g_{11}} \xi^{1}$-$\sqrt{g_{33}} \xi^{3}$ plane has almost the same profile as in the 4D case, with a crucial difference: the area also rotates (with a rotation that slowly increases with increasing $m$) along with expanding and shearing, even though the rotation was zero at the throat, as in the 4D case. Thus the warping factor triggers a rotation in the congruence if the individual geodesics have angular momenta. In other words, a congruence of geodesics with angular momenta that has an initial rotation in the asymptotic regions may become rotation-less while passing through the throat.
The other effect of the growing warp factor is visible through the projection of the evolution on the $\sqrt{g_{11}} \xi^{1}$-$\sqrt{g_{44}} \xi^{4}$ plane, which shows that geodesics whose $y$-coordinates are some distance apart at $\lambda=0$ have the same $y$-coordinate in the asymptotic regions, i.e. the separation along the $y$-axis vanishes among the geodesics as $\lambda \rightarrow \pm \infty$.
However, there is no rotation of the congruence in this plane.
\subsection{Case-3: 5D-WGEB spacetime with decaying warp factor}
Figures (\ref{fig:5d-decay-deviation-evolution-2}) and (\ref{fig:5d-decay-deviation-evolution-4}) show the evolution of the projected area element for different $m$ values ($m = 2, 4$) in the $\sqrt{g_{11}} \xi^{1}$-$\sqrt{g_{33}} \xi^{3}$ and $\sqrt{g_{11}} \xi^{1}$-$\sqrt{g_{44}} \xi^{4}$ planes for the 5D-WGEB model with a decaying warp factor.
The boundary values used in this subsection are given in Appendix \ref{app-6}.
A few features are remarkably different from the case with a growing warp factor.
First, the evolution is not the same before and after crossing the throat, so we keep plots for both positive (on the right) and negative (on the left) $\lambda$.
Second, the presence of the decaying warp factor amplifies the magnitude of the rotation of the area element considerably as $\lambda \rightarrow \infty$, whereas the rotation is almost negligible as $\lambda \rightarrow -\infty$. This means a congruence with low rotation becomes highly rotating after crossing the throat and vice versa.
\begin{figure}[H]
\centering
\includegraphics[scale=.6]{5d-decay-deviation-2-negative-l.pdf}
\hspace{0.2cm}
\includegraphics[scale=.6]{5d-decay-deviation-2-positive-l.pdf}
\includegraphics[scale=.6]{5d-decay-deviation-2-negative-y.pdf}
\hspace{0.2cm}
\includegraphics[scale=.6]{5d-decay-deviation-2-positive-y.pdf}
\caption{Evolution of the projected cross-section of the geodesic congruence (without rotation) in case of crossing congruences with decaying warp factor for wormhole parameter $m = 2$ and $b_{0} = 1$. }
\label{fig:5d-decay-deviation-evolution-2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=.60]{5d-decay-deviation-4-negative-l.pdf}
\hspace{0.2cm}
\includegraphics[scale=.60]{5d-decay-deviation-4-positive-l.pdf}
\includegraphics[scale=.60]{5d-decay-deviation-4-negative-y.pdf}
\hspace{0.2cm}
\includegraphics[scale=.60]{5d-decay-deviation-4-positive-y.pdf}
\caption{Evolution of the projected cross-section of the geodesic congruence (without rotation) in case of crossing congruences with decaying warp factor for wormhole parameter $m = 4$ and $b_{0} = 1$. }
\label{fig:5d-decay-deviation-evolution-4}
\end{figure}
In contrast to the model with a growing warp factor, the evolution of the area element projected on the $\sqrt{g_{11}} \xi^{1}$-$\sqrt{g_{44}} \xi^{4}$ plane in the case of decaying warp factor has the opposite profile.
Further, these figures reveal that the wormhole parameter, $m$, affects the area element in the same way for all the models, i.e. the magnitude of the expansion and distortion decreases as $m$ grows.
Perhaps the most significant aspect of this section is the unique perspective that lets us look into the details of the congruence rotation or {\em accretion}.
\section{Discussion} \label{sec:6}
Earlier we showed that the generalised 4D Ellis-Bronnikov spacetime embedded in a warped 5D bulk satisfies the energy conditions and is thus a viable wormhole model within the framework of general relativity. We further discussed the particle trajectories in the 4D-GEB and 5D-WGEB models and compared them. These investigations show that even the simple 5D-EB model (the $m=2$ case) has nice properties as a wormhole.
This work can be considered as the third article in this series; here we have analysed congruences of timelike crossing geodesics in detail, which is essential to understand the viability of travelling through this class of wormholes.
We derived the evolution of the so-called ESR variables for both the 4D-GEB and 5D-WGEB (with growing or decaying warp factor) spacetimes by solving the geodesic equations, the geodesic deviation equations and the Raychaudhuri equation simultaneously, and compared the results. The key findings of this analysis are as follows.
\begin{itemize}
\item For the 4D-GEB spacetime, analytic expressions of the ESR variables of the congruence were first determined without rotation ($\Omega_{AB} = 0$). The resulting plots reveal that the $m = 2$ and $m > 2$ geometries can be easily distinguished based on the evolution of the expansion scalar $\Theta$. The ESR variables all vanish in a similar way in the asymptotic regions as $l \rightarrow \pm \infty$. This essentially happens because the geometries are asymptotically flat.
\item In the 4D-GEB geometry, for a congruence with rotation, the numerical results are consistent with the analytic calculation of the ESR variables. The numerical analysis further shows that the differences between the ESR profiles in the $m = 2$ and $m > 2$ geometries, which appear in the absence of rotation, in fact disappear if rotation is present while the congruence is crossing the throat.
\item The 5D-WGEB models are studied with growing and decaying warp factors. In the absence of any rotation in the congruence, the expansion and shear become very large in the asymptotic regions, i.e. the congruence fails to retain its shape and size while crossing the throat. Remarkably, the presence of rotation improves the situation for the expansion scalar in the case of a growing warp factor and completely removes the divergences in the case of the decaying warp factor model. In fact, the divergence in the shear in the asymptotic region is removed as well.
A universal feature, in the absence of rotation, in both the 4D-GEB and 5D-WGEB models is that the expansion scalar for all values of $m$ has exactly the same value at the length scale $l = \pm b_{0}$.
\item The snapshots of the cross-sectional area of a congruence of timelike geodesics projected on 2D surfaces (the $l$-$\phi$ and $l$-$y$ planes) provide another interesting perspective on the evolution of the ESR variables. This analysis clearly shows that a congruence with zero rotation near the throat may have rotation in the asymptotic regions if the individual geodesics possess angular momentum. This effect is considerably larger in the presence of a decaying warp factor than for a growing warp factor. One may say that the warp factor essentially leads to {\em accretion}. Further, increasing $m$ (which essentially means a steeper throat) has a similar effect on the evolution in all the 4D and 5D models.
\item It is important to note that when the warp factor is growing, geodesics have a turning point along the extra dimension \cite{Ghosh:2010gq}, which forces a congruence singularity to occur, whereas when the warp factor is decaying, the occurrence of a congruence singularity depends entirely on the boundary conditions.
Our visualization analyses do show how the square elements change shape and get rotated because of variations in the metric functions and boundary conditions. We may contemplate such accretion effects for flows around brane-world wormholes (similar to the braneworld black-hole scenario \cite{Pun:2008ua}). In such situations, it will be useful to pursue an approach very similar to the one used in this article.
\end{itemize}
It still remains to analyse the congruences for {\em all} possible boundary conditions which should be imposed in the asymptotic regions. To avoid computational difficulty, in most cases we imposed those conditions at the throat. Even then we could understand the various effects the congruences are subjected to while traversing the hole. There are various other features yet to be studied for this class of wormholes, such as lensing and stability under perturbations. One crucial pointer is towards rotating Ellis-Bronnikov wormholes, which seem to possess useful properties and yet have not been addressed much in the literature. We shall report on these issues in future communications.
\section{Appendix}
\subsection{Boundary conditions to determine ESR profiles in 4D-GEB spacetimes} \label{app-1}
\noindent
$k = \sqrt{3}$, $h = 0 $;
$t(0) = 0$, $l(0) = 0$, $\theta(0) = \pi/2$, $\phi(0) = 0$ ;\\
$\dot{t}(0) = k $, $\dot{l}(0) = 1.41421$, $\dot{\theta}(0) = 0$, $\dot{\phi}(0) = \frac{h}{(1 + l(0)^{m})^{2/m}} = 0 $ ;\\
$B_{AB}(0) = 0$ for the case without rotation (i.e. initially the expansion $\Theta$, $\Sigma_{AB}$ and $\Omega_{AB}$ are zero) ;\\
For the case with rotation: $\Theta$ and $\Sigma_{AB}$ are zero at $\lambda=0$, but non-zero $\Omega_{AB}$ are chosen such that $\Omega_{AB}u^{B} = 0 = u^{A}\Omega_{AB}$ at $\lambda = 0$.
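For illustration only, the following minimal sketch (ours, not the code used for this work) integrates the timelike geodesic seeded by the above data with an off-the-shelf ODE solver. It assumes the standard GEB form of the metric, $ds^2 = -dt^2 + dl^2 + r(l)^2 d\Omega^2$ with areal radius $r(l) = (l^m + b_0^m)^{1/m}$, which is consistent with the $\dot\phi(0)$ expression quoted above; the full ESR analysis additionally evolves $B_{AB}$ (deviation and Raychaudhuri equations) along the solution.
\begin{verbatim}
# Illustrative sketch only: equatorial timelike geodesic of the 4D-GEB metric,
# assuming r(l) = (|l|^m + b0^m)^(1/m) (our assumption), seeded with the
# initial data of this appendix.  k and h are the conserved energy and
# angular momentum; the full ESR analysis also evolves B_AB along the curve.
import numpy as np
from scipy.integrate import solve_ivp

m, b0 = 2, 1.0
k, h  = np.sqrt(3.0), 0.0

def r(l):
    return (np.abs(l) ** m + b0 ** m) ** (1.0 / m)

def rprime(l):
    return np.sign(l) * np.abs(l) ** (m - 1) * r(l) ** (1 - m)

def rhs(lam, y):
    t, l, phi, ldot = y
    return [k, ldot, h / r(l) ** 2, h ** 2 * rprime(l) / r(l) ** 3]

# timelike normalisation fixes l'(0); equals 1.41421 for k = sqrt(3), h = 0
ldot0 = np.sqrt(k ** 2 - 1.0 - h ** 2 / r(0.0) ** 2)
sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0, 0.0, ldot0], dense_output=True)
print(sol.y[1, -1])   # radial coordinate l at affine parameter lambda = 10
\end{verbatim}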
\subsection{Boundary conditions to determine ESR profiles in 5D-WGEB spacetimes with growing warp factor}\label{app-2}
\noindent
$T = \sqrt{3}$, $H = 0 $;
$t(0) = 0$, $l(0) = 0$, $\theta(0) = \pi/2$, $\phi(0) = 0$, $y(0) = 0.1 $ ;\\
$\dot{t}(0) = \frac{T}{e^{2f(y)}} = 1.71485 $, $\dot{l}(0) = 1.39665 $, $\dot{\theta}(0) = 0$, $\dot{\phi}(0) = \frac{H}{(1 + l(0)^{m})^{2/m}} = 0 $,$ \dot{y}(0) = 0 $ ;\\
$B_{AB}(0) = 0$ for the case without rotation, which implies that $\Theta(0)$, $\Sigma_{AB}(0)$ and $\Omega_{AB}(0)$ are zero ;\\
For the case with rotation: $\Theta$ and $\Sigma_{AB}$ are zero at $\lambda=0$, but non-zero $\Omega_{AB}$ are chosen such that $\Omega_{AB}u^{B} = 0 = u^{A}\Omega_{AB}$ at $\lambda = 0$.
\subsection{Boundary conditions to determine ESR profiles in 5D-WGEB spacetimes with decaying warp factor}\label{app-3}
\noindent
$T = \sqrt{3}$, $H = 0 $ ;
$t(0) = 0$, $l(0) = 0$, $\theta(0) = \pi/2$, $\phi(0) = 0$, $y(0) = 0.1 $ ;\\
$\dot{t}(0) = \frac{T}{e^{2f(y)}} = 1.74943 $, $\dot{l}(0) = 1.43195 $, $\dot{\theta}(0) = 0$, $\dot{\phi}(0) = \frac{H}{(1 + l(0)^{m})^{2/m}} = 0 $,$ \dot{y}(0) = 0 $ ;\\
$B_{AB}(0) = 0$ for the case without rotation, which implies that $\Theta(0)$, $\Sigma_{AB}(0)$ and $\Omega_{AB}(0)$ are zero ;\\
For the case with rotation: initially $\Theta$ and $\Sigma_{AB}$ are still zero, but non-zero $\Omega_{AB}$ are chosen such that $\Omega_{AB}u^{B} = 0 = u^{A}\Omega_{AB}$ at $\lambda = 0$.
\subsection{Boundary conditions for evolution of cross-sectional area in 4D-GEB spacetimes}\label{app-4}
\noindent
$k = \sqrt{3}$, $h = \sqrt{\frac{k^{2} - 1 }{2}} = 1$ ;
$t(0) = 0$, $l(0) = 0 $, $\theta(0) = \pi/2$, $\phi(0) = 0$ ;\\
$\dot{t}(0) = k$, $ \dot{l}(0) = 1$, $\dot{\theta}(0) = 0$, $\dot{\phi}(0) = \frac{h}{(1 + l(0)^{m})^{2/m}} = h = 1 $ ;\\
$\dot{l}(0)$ is calculated from the timelike geodesic constraint ;\\
$B_{AB}(0) = 0$ ;
$\xi^{1}(0) = \pm 1$, $\xi^{2}(0) = 0$, $\xi^{3}(0) = \pm 1$ ;\\
$\xi^{0}(0) = \pm \frac{2}{\sqrt{3}}, 0$ is calculated from $u_{A}\xi^{A} = 0$.
\subsection{Boundary conditions to determine evolution of cross-sectional area in 5D-WGEB spacetimes with growing warp factor}\label{app-5}
\noindent
$t(0) = 0$, $l(0) = 0$, $\theta(0) = \pi/2$, $\phi(0) = 0$, $y(0) = a = 0.1$ ;\\
$\dot{t}(0) = T$, $ \dot{l}(0) = 0.98758 $ , $\dot{\theta}(0) = 0$, $\dot{\phi}(0) = \frac{H}{\cosh(a)^{2}(1 + l(0)^{m})^{2/m}} = 0.98758$, $\dot{y}(0) = b = 0$ ;\\
$T = \sqrt{3}$, $H = \sqrt{\frac{T^{2} - \cosh(a)^{2}(b^{2} + 1)}{2}} = 0.997489$ ;\\
$B_{\mu\nu}(0) = 0$ , where $\mu$ and $\nu$ run from $0$ to $4$ ;\\
$\xi^{1}(0) = \pm \frac{1}{\cosh(a)} = \pm 0.995021$, $\xi^{2}(0) = 0$, $\xi^{3}(0) = \pm \frac{1}{\cosh(a)} = \pm 0.995021$, $\xi^{4}(0) = \pm \frac{1}{\cosh(a)} = \pm 0.995021$ ;\\
$\xi^{0}(0) = \pm 1.19213, \pm 0.0574475$ is calculated from $u_{A}\xi^{A} = 0$.
\subsection{Boundary conditions to determine evolution of cross-sectional area in 5D-WGEB spacetimes with decaying warp factor}\label{app-6}
$t(0) = 0$, $l(0) = 0$, $\theta(0) = \pi/2$, $\phi(0) = 0$, $y(0) = a = 0.1$ ;\\
$\dot{t}(0) = T$, $\dot{l}(0) = 1.01254 $ , $\dot{\theta}(0) = 0$, $\dot{\phi}(0) = \frac{H}{\mathrm{sech}(a)^{2} (1 + l(0)^{m})^{2/m}} = 1.01254$, $\dot{y}(0) = b = 0$ ;\\
$T = \sqrt{3}$, $H = \sqrt{\frac{T^{2} - \mathrm{sech}(a)^{2}(b^{2} + 1)}{2}} = 1.00248$ ;\\
$\xi^{1}(0) = \pm \frac{1}{\mathrm{sech}(a)} = \pm 1.005$, $\xi^{2}(0) = 0$, $\xi^{3}(0) = \pm \frac{1}{\mathrm{sech}(a)} = \pm 1.005 = \xi^{4}(0)$ ; \\
$\xi^{0}(0) = \pm 1.23305, \pm 0.0580239 $ is calculated from $u_{A}\xi^{A} = 0$.
\section*{Bibliography}
|
1,108,101,564,306 | arxiv | \section{Introduction}
Up to an energy of $\sim 3\times10^{15}$ eV, the energy spectrum of the primary cosmic radiation (PCR) is well described by a power law with an index of 2.7. At higher energy the index increases rapidly to 3.1, creating the knee in the energy spectrum. It is now more than 50 years since the knee was discovered \cite{kulikov1958}, but its nature is still the subject of intensive discussions due to its importance for understanding the origin of cosmic rays in general. In spite of many attempts to explain the origin of the knee, none of the proposed explanations is generally accepted. The main reason for this is the difficulty of obtaining direct experimental evidence for individual PCR sources, caused by the multiple deflections of the charged-particle trajectories in the chaotic and regular magnetic fields of the Galaxy. On large scales, the propagation of the PCR particles is close to Brownian motion and can be considered a diffusive transfer.
At present there are 3 basic astrophysical models describing the behavior of the PCR in this energy range:
\begin{itemize}
\item The diffusion model \cite{ptuskin2007}, in which the knee appears as a result of the increased leakage of particles from the Galaxy with rising energy. Since magnetic fields bend heavy nuclei more than light nuclei, protons leave the Galaxy first followed later by heavier nuclei.
\item The model of limited energy \cite{berezhko2006}, which suggests that the knee reflects the maximum energy to which protons are accelerated in the shells of Galactic supernova remnants.
\item The model of a nearby source \cite{erlykinWolfendale1997}, where the spectrum of particles from this source is superimposed on the smooth Galactic spectrum and creates an excess in the knee region which causes the break of the energy spectrum.
\end{itemize}
\section{The GAMMA experiment}
The present attempt to study the origin of the knee is based on the investigation of the diffusive character of the PCR propagation in the Galaxy, and was carried out using the last 3 years of experimental data of the GAMMA experiment. GAMMA is located on Mt. Aragats in Armenia at 3200 m a.s.l. (corresponding to $700 \text{ g}/\text{cm}^2$ atmospheric depth). The geographical coordinates are $40^{\circ} 28' 12''$ N, $44^{\circ} 10' 56''$ E. The GAMMA array registers extensive air showers (EAS) in the energy range $10^{14}-10^{17}$ eV with the help of surface and underground scintillation detectors. A detailed description of GAMMA, its technical characteristics and the main data available to date have been presented in \cite{garyakaAstropp2007,gammaTeamJphys2008,gammaTeamJcontphys2013,martirosovBeijing2011,gallantPune2005}.
EAS with a number of charged particles $N_e > 10^5$, zenith angles $\theta < 40^{\circ}$ in the laboratory coordinate system and axes within a radius of $R < 60$ m from the center of the GAMMA array are selected for the current analysis. The total number of EAS is 3,382,892, taken over an effective live time of 11544 hours.
During the primary treatment of the experimental data the following characteristics of the registered EAS were calculated:
\begin{itemize}
\item coordinates X and Y of the shower axis relative to the center of the GAMMA array;
\item zenith and azimuthal angles $\theta, \varphi$ in the laboratory coordinate system;
\item EAS size $N_e$ and number of muons $N_\mu$;
\item the so-called ``lateral age parameter'' $S$, calculated from the lateral distribution function in the Nishimura-Kamata-Greisen (NKG) approximation;
\item primary energy $E_0$, calculated by the method described previously \cite{gammaTeamJphys2008} using the EAS parameters $N_e, N_\mu, S, \theta $;
\item Greenwich arrival time.
\end{itemize}
For each EAS the angular coordinates $\theta, \varphi$ in the laboratory coordinate system were converted to the horizontal astronomical coordinates $\xi, h$ in the following way: $h = 90^{\circ} - \theta$ (the height above the horizon is used instead of the zenith angle $\theta$) and $\xi = 286^{\circ} - \varphi$, since the ``North'' direction of the GAMMA installation is turned $16^{\circ}$ to the East relative to true North \cite{gallantPune2005}. The angle $\varphi$ is counted from ``East'' counterclockwise; in the astronomical horizontal coordinate system, $\xi$ is counted clockwise from the South.
The arrival direction of each EAS ($\alpha$ -- right ascension, $\delta$ -- declination) in the equatorial coordinate system was calculated from the horizontal astronomical system by the standard formulae, using as well the geographical coordinates of the installation and the EAS arrival time. In addition, the equatorial coordinates of each EAS were converted to Galactic coordinates ($l$ -- longitude, $b$ -- latitude). The correctness of the conversion was checked with the astronomical utilities \cite{utilcoor}. The total error of the conversion from the laboratory system to the Galactic system was no more than 10 arcminutes for the period between 1960 and 2060.
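For illustration, the same chain of conversions can be reproduced with standard astronomy software. The following minimal sketch (not the code used in this work) relies on the \texttt{astropy} package; the mapping of the azimuthal convention adopted here ($\xi$ counted clockwise from the South) onto the North-through-East azimuth used by \texttt{astropy} is our assumption and should be cross-checked against \cite{utilcoor}.
\begin{verbatim}
# Minimal sketch (not from the original analysis): converting one EAS arrival
# direction from horizontal to Galactic coordinates with astropy.  The mapping
# of xi (clockwise from South) onto astropy's North-through-East azimuth is an
# assumption and must be verified against the conventions of the experiment.
import astropy.units as u
from astropy.coordinates import SkyCoord, EarthLocation, AltAz
from astropy.time import Time

site = EarthLocation(lat=(40 + 28/60 + 12/3600) * u.deg,
                     lon=(44 + 10/60 + 56/3600) * u.deg,
                     height=3200 * u.m)

theta, phi = 22.5, 100.0           # example lab zenith/azimuth angles, degrees
h  = 90.0 - theta                  # height above the horizon
xi = 286.0 - phi                   # horizontal azimuth, clockwise from South
az = (xi + 180.0) % 360.0          # assumed mapping to North-through-East azimuth

t = Time("2012-06-01 03:14:15")    # example Greenwich (UTC) arrival time
eas = SkyCoord(az=az * u.deg, alt=h * u.deg,
               frame=AltAz(obstime=t, location=site))
gal = eas.galactic
print(gal.l.deg, gal.b.deg)        # Galactic longitude l and latitude b
\end{verbatim}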
\section{Method for the analysis of the experimental data}
This method is based on two natural assumptions.
\begin{enumerate}
\item According to many experimental results \cite{guillianPhysRev2007}, incoming EAS with primary energies of $10^{14} - 10^{17}$ eV are isotropic to a level better than 1\%. This is due to the presence of numerous sources and to the large-scale diffusive transport of charged particles from the sources to the Earth. It is assumed that under these conditions, for a rather large number of registered EAS and not too large a distance between the source and the Earth, the contribution of a particular source to the EAS from the source direction will smoothly decrease with increasing angle between the source direction and the direction of the incoming EAS. This is a consequence of the diffusive character of the transport. The maximum contribution is expected from the direction to the source, the minimum contribution from the opposite direction. With increasing distance to the source, the angular distribution of the incoming charged particles becomes wider and tends towards an isotropic distribution, where the difference between the contributions from the source direction and the opposite direction becomes invisible (this sets the limit of the method's sensitivity). Such an approach can also be applied to other EAS characteristics that depend on the scattering angle (for example, the PCR mass composition).
\item It is also assumed that the GAMMA installation operates with the same aperture independently of the time of day and season. This provides the same observational conditions for different directions as the Earth rotates. This is a common requirement for the stable operation of an experimental installation.
\end{enumerate}
The difference method for testing the knee models at $\sim 3\times 10^{15}$ eV was suggested in \cite{pavlyuchenkoLebedevBull2014}. The difference (more accurately, the diffusion-difference) method for the analysis of experimental data, assuming the diffusive character of the PCR propagation in the Galaxy, is as follows. The whole celestial sphere in Galactic coordinates is divided into two (generally unequal) parts: one around the given direction ($l_0, b_0$), the other around the opposite direction ($l_0-180^{\circ}, -b_0$). The division is made in such a way that the number of events in both samples is the same. The characteristics of the EAS from these two parts of the sky are compared with one another. For both sets of events the experimental distributions of the EAS parameter selected for the analysis (or of a combination of several parameters) are calculated. Since both sets have been reduced to equal conditions, these distributions can be subtracted from one another to study possible differences.
The reduction to equal conditions means taking the same interval limits for both distributions and choosing the angle $\psi_0$ (or $H_0=\cos \psi_0$) of the spherical cone around the direction ($l_0, b_0$) such that the numbers of events $n$ and $n^{anti}$ in the two sets are equal; at $H \ge H_0$ the EAS come from the part of the sky centered around the given direction, and at $H < H_0$ from the opposite part. For an EAS arriving from ($l,b$),
$$
H = \cos{\psi} = \sin{b_0}\sin{b} + \cos{b_0}\cos{b}\cos{(l-l_0)}.
$$
Taking into account assumptions 1 and 2, it can be said that for $n = n^{anti}$ the observation periods for the two parts of the celestial sphere are equal, and no additional validation of the EAS registration efficiency is required. In the difference method the common background and possible methodical errors are subtracted automatically, because they are the same for both sets. The misassignment of EAS near the boundary between the two samples, due to errors in the angle estimation, does not matter much: the EAS characteristics are practically the same for close arrival angles and are subtracted as a common background.
The numerical parameter for the difference of two distributions is $\sfrac{\chi^2}{J}$, where $\chi^2=\sum\limits_i{\left(\sfrac{\Delta_i}{\sigma_i}\right)^2}$, and $J$ is the number of degrees of freedom. The sum runs over all intervals $i$ of the parameter chosen for this analysis. The difference between the distributions in the interval $i$ is equal to $\Delta_i=m_i - m_i^{anti}$ ($m_i$ and $m_i^{anti}$ being the number of events in the two parts of the sky for the given interval $i$ of the parameter under study). The root-mean-square error of this difference is calculated from the Poisson distribution as
$$
\sigma_i = \sqrt{m_i+m_i^{anti}+1} = \sqrt{n_i+1},
$$
where $n_i$ is the total number of events in the interval $i$ over the whole observational sphere. This number does not depend on the given angles ($l_0, b_0$). Such independence of $\sigma_i$ is very important for comparing the $\sfrac{\chi^2}{J}$ values with each other when scanning the sky in search of the direction with the maximum difference between the distributions in the given and opposite directions (the maximum of $\sfrac{\chi^2}{J}$), which may then be interpreted as the direction to a source of PCR.
The equality $n = n^{anti}$ allows an installation with a limited scanning sector to investigate the whole celestial sphere within the limits of the method's sensitivity, since the values of $\sfrac{\chi^2}{J}$ for the given and opposite directions are equal: the values of $\sigma_i$ and $|\Delta_i|$ are equal, and only the sign of $\Delta_i$ changes.
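To make the procedure concrete, a minimal sketch of the diffusion-difference estimator for a single trial direction is given below (illustrative only, not the analysis code of this work); \texttt{l}, \texttt{b} and \texttt{S} are assumed to be arrays holding the Galactic coordinates (in radians) and the age parameter of all selected EAS, and \texttt{edges} the bin limits of the $S$ distribution.
\begin{verbatim}
# Minimal sketch of the diffusion-difference method for one trial direction
# (l0, b0); not the original analysis code.  A scan of the (l0, b0) plane
# simply loops this function over a grid of directions.
import numpy as np

def chi2_per_dof(l, b, S, l0, b0, edges):
    H = np.sin(b0) * np.sin(b) + np.cos(b0) * np.cos(b) * np.cos(l - l0)
    H0 = np.median(H)                      # choose H0 so that n = n_anti
    m,      _ = np.histogram(S[H >= H0], bins=edges)
    m_anti, _ = np.histogram(S[H <  H0], bins=edges)
    delta = m - m_anti
    sigma = np.sqrt(m + m_anti + 1.0)      # Poisson error of the difference
    return np.sum((delta / sigma) ** 2) / (len(edges) - 1)
\end{verbatim}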
\section{Experimental results}
As the experimental parameter, the EAS age parameter $S$ has been chosen because of its weak dependence on the primary energy and on the EAS arrival angles in the laboratory coordinate system. This is a formal parameter derived by fitting the lateral distribution function in the Nishimura-Kamata-Greisen (NKG) approximation to the detector response. This parameter is not the ``pure'' age of the longitudinal development of the electromagnetic cascade, which is calculated in cascade theory, but it is linearly correlated with it. The parameter $S$ is small at the beginning of a shower's development, is equal to about $1$ at the shower maximum, and increases with atmospheric depth. The $S$ distribution has a Gaussian shape around its mean value. The average age of showers initiated by protons is smaller than that for heavier nuclei, since the latter start developing higher in the atmosphere.
We have scanned the ($l_0, b_0$) plane in order to find the maximum value of $\sfrac{\chi^2}{J}$ (see Fig.~\ref{Fig1}). The range of studied directions is $l_0 = 0^{\circ} \div 180^{\circ}$ and $b_0 = -30^{\circ} \div 30^{\circ}$. The local maximum of the $\sfrac{\chi^2}{J}$ distribution was found in the direction $l_0 = 97^{\circ} \pm 3^{\circ}$, $b_0= 5^{\circ} \pm 3^{\circ}$ (or equivalently $l_0 = 277^{\circ} \pm 3^{\circ}$, $b_0= -5^{\circ} \pm 3^{\circ}$, since the maximum is obtained by comparing opposite directions).
The dependence of the number of EAS in the direction of $l_0 = 97^{\circ}$, $b_0 = 5^{\circ}$ ($H>0.55$) and in the opposite direction ($H<0.55$, i.e. $l_0 = 277^{\circ} \pm 3^{\circ}$, $b_0= -5^{\circ} \pm 3^{\circ}$) on the $S$ parameter is presented in Table~\ref{Tbl1}.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{fig1.jpg}
\captionof{figure}{$\sfrac{\chi^2}{J}$ distribution for the $S$ parameter in the Galactic coordinate system.}
\label{Fig1}
\end{figure}
\begin{table}[!t]
\centering
\begin{tabular}{ | c | c | c | c | c | c | }
\hline
$S$ & \begin{tabular}{@{}c@{}}$m_i$ \\ $(H>0.55)$\end{tabular} & \begin{tabular}{@{}c@{}}$m_i^{anti}$ \\ $(H<0.55)$\end{tabular} & $\Delta_i$ & $\sigma_i$ & $\sfrac{\Delta_i}{\sigma_i}$ \\ \hline
0.40 & 1393 & 1634 & -241 & 55.0 & -4.4 \\ \hline
0.45 & 3214 & 3617 & -403 & 82.6 & -4.9 \\ \hline
0.50 & 6865 & 7650 & -785 & 120.5 & -6.5 \\ \hline
0.55 & 14580 & 15870 & -1290 & 174.5 & -7.4 \\ \hline
0.60 & 28379 & 30939 & -2560 & 243.5 & -10.5 \\ \hline
0.65 & 52522 & 55909 & -3387 & 329.3 & -10.3 \\ \hline
0.70 & 88132 & 93369 & -5237 & 426.0 & -12.3 \\ \hline
0.75 & 133334 & 138457 & -5123 & 521.3 & -9.8 \\ \hline
0.80 & 175895 & 180188 & -4293 & 596.7 & -7.2 \\ \hline
0.85 & 205543 & 206558 & -1015 & 642.0 & -1.6 \\ \hline
0.90 & 212242 & 210641 & 1601 & 650.3 & 2.5 \\ \hline
0.95 & 198178 & 194725 & 3453 & 626.8 & 5.5 \\ \hline
1.00 & 170464 & 165176 & 5288 & 579.4 & 9.1 \\ \hline
1.05 & 135799 & 131198 & 4601 & 516.7 & 8.9 \\ \hline
1.10 & 103088 & 99050 & 4038 & 449.6 & 9.0 \\ \hline
1.15 & 74108 & 71738 & 2370 & 381.9 & 6.2 \\ \hline
1.20 & 52113 & 50475 & 1638 & 320.3 & 5.1 \\ \hline
1.25 & 35595 & 34252 & 1343 & 264.3 & 5.1 \\ \hline
& $<S> = 0.935$ & $<S> = 0.930$ & & & \\ \hline
& $D(S) = 0.156$ & $D(S) = 0.157$ & & & \\ \hline
\end{tabular}
\caption{Distributions of the $S$ parameter (column 1) for the number of EAS in the direction of $l_0 = 97^{\circ}$, $b_0 = 5^{\circ}$ and in the opposite direction $l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$ (columns 2 and 3). $\Delta_i$ -- difference between the values in columns 2 and 3; $\sigma_i$ -- root-mean-square error of the difference; $\sfrac{\Delta_i}{\sigma_i}$ -- ratio of the difference to its error.}
\label{Tbl1}
\end{table}
In Fig.~\ref{Fig2} we present a comparison of the zenith angle distributions for the two regions $H > 0.55$ and $H < 0.55$. The purpose is to investigate whether the maximum of $\sfrac{\chi^2}{J}$ found above might be the result of different zenith angle distributions in the two regions of $H$, which could fake this maximum and affect the following results. We observe a very good coincidence of the two distributions at $\theta < 25^{\circ}$ but a small difference at $25^{\circ} < \theta < 40^{\circ}$.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{fig2.jpg}
\captionof{figure}{Distributions of zenith angles $\theta$ for $H > 0.55$, solid line ($<\theta> = 22.48^{\circ}$) and $H < 0.55$, dotted line ($<\theta> = 22.51^{\circ})$ at $l_0 = 97^{\circ}$, $b_0 = 5^{\circ}$.}
\label{Fig2}
\end{figure}
Figure ~\ref{Fig3} shows the $S$ distributions for the numbers of EAS ($m_i$ and $m_i^{anti}$) in the direction $l_0 = 97^{\circ}$, $b_0 = 5^{\circ}$ (solid line), in the opposite direction $l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$ (dashed line) and the difference between these distributions (curve with error bars).
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig3a.jpg}
\caption{}
\label{Fig3a}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=\linewidth]{fig3b.jpg}
\caption{}
\label{Fig3b}
\end{subfigure}
\caption{Number of EAS versus age $S$ for the direction $l_0 = 97^{\circ}$, $b_0 = 5^{\circ}$ and its opposite ($l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$): a) -- for $\theta < 40^{\circ}$, b) -- for $\theta < 25^{\circ}$. The right scales show the difference $\Delta$ between the distributions.}
\label{Fig3}
\end{figure}
To make sure that the effect seen for the full investigated range of zenith angles (Fig.~\ref{Fig3a}) is not due to the small differences seen for $\theta > 25^{\circ}$ (see Fig.~\ref{Fig2}), we show the distribution also for the restricted range $\theta < 25^{\circ}$ (Fig.~\ref{Fig3b}). The statistics for $\theta < 25^{\circ}$ is half that for $\theta < 40^{\circ}$, but the shape of the difference curves as well as the position of the $\sfrac{\chi^2}{J}$ maximum are the same for both angular ranges within the errors. Therefore the influence of methodical effects connected with different zenith angle distributions in the opposite regions of $H$ is considered negligible.
We observe a \textit{deficit} of showers with small $S$ for $H>0.55$, which is equivalent to an \textit{excess} of showers with small $S$ for $H<0.55$. This indicates a higher contribution of light nuclei for $H<0.55$ -- i.e. of those nuclei which have a higher probability of retaining directional information not washed out by diffusion processes.
\begin{table}[!h]
\centering
\begin{tabular}{ | c | c | c | c | c | c | }
\hline
$E_0\times10^{-14}$ eV & \begin{tabular}{@{}c@{}}$m_i E_0^{1.7}$ \\ $(H>0.55)$\end{tabular} & \begin{tabular}{@{}c@{}}$m_i^{anti} E_0^{1.7}$ \\ $(H<0.55)$\end{tabular} & $\Delta_i$ & $\sigma_i$ & $\sfrac{\Delta_i}{\sigma_i}$ \\ \hline
1.00 & 2518 & 2631 & -113 & 71.7 & -1.6 \\ \hline
1.58 & 79831 & 83391 & -3560 & 597.5 & -6.0 \\ \hline
2.51 & 1147168 & 1175618 & -28450 & 3334.0 & -8.5 \\ \hline
3.98 & 4970294 & 5120105 & -149811 & 10277.9 & -14.6 \\ \hline
6.31 & 10005028 & 10268177 & -263149 & 21548.5 & -12.2 \\ \hline
10.00 & 13071512 & 13463556 & -392044 & 36463.6 & -10.8 \\ \hline
15.85 & 13649390 & 13995570 & -346180 & 55051.0 & -6.3 \\ \hline
25.12 & 13156862 & 13367718 & -210856 & 79762.4 & -2.6 \\ \hline
39.81 & 12081955 & 12331834 & -249879 & 113184.4 & -2.2 \\ \hline
63.10 & 10699998 & 10840413 & -140415 & 157259.0 & -0.9 \\ \hline
100.00 & 9177689 & 9380816 & -203127 & 215904.9 & -0.9 \\ \hline
158.49 & 7983836 & 7874125 & 109711 & 295272.1 & 0.4 \\ \hline
251.19 & 7940497 & 7196120 & 744377 & 426929.7 & 1.7 \\ \hline
398.11 & 7404421 & 7671228 & -266807 & 630165.6 & -0.4 \\ \hline
630.96 & 7816351 & 6373961 & 1442390 & 906200.9 & 1.6 \\ \hline
$\ge1000.00$ & 13135020 & 13177106 & -42084 & 1308309.4 & -0.0 \\ \hline
\end{tabular}
\caption{Dependence of the parameter $m_i E_0^{1.7}$ on $E_0$ ($m_i$ -- number of events). Column 1 -- $E_0$; columns 2 and 3 -- $m_i E_0^{1.7}$ values in the direction $l_0 = 97^{\circ}$, $b_0 = 5^{\circ}$ and in the opposite direction $l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$. Columns $\Delta_i$, $\sigma_i$ and $\sfrac{\Delta_i}{\sigma_i}$ -- difference between columns 2 and 3, root-mean-square error of the difference, and ratio of the difference to its error, respectively.}
\label{Tbl2}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{fig4.jpg}
\captionof{figure}{Dependence of $m_i E_0^{1.7}$ on $E_0$ for the direction of $l_0 = 97^{\circ}$, $b_0 = 5^{\circ}$ and opposite to it ($l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$). The right scale shows the difference $\Delta$ between the two distributions. }
\label{Fig4}
\end{figure}
Table~\ref{Tbl2} and Fig.~\ref{Fig4} show the dependence of the parameters $m_i E_0^{1.7}$ and $m_i^{anti} E_0^{1.7}$ on $E_0$. The weighting with $E_0^{1.7}$ has been chosen to highlight details around the knee position and to determine whether there is any excess of PCR at $l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$ ($H < 0.55$) in this energy region. Such an excess is expected from Fig.~\ref{Fig3} and the observed small-$S$ excess for $H<0.55$, supposing that the knee region has a stronger contribution of protons than the higher energies, where iron PCR are supposed to dominate. Indeed we observe an excess of PCR in the knee region and slightly above.
\section{Discussion}
The $S$ distributions for the opposite directions are very similar, but the parameter $\sfrac{\chi^2}{J}$ indicates a marked difference. The maximum value of $\sfrac{\chi^2}{J}$ for the points $l_0 = 97^{\circ}\pm 3^{\circ}$, $b_0 = 5^{\circ}\pm 3^{\circ}$ ($l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$) is $57.64 \pm 0.34$ with 17 degrees of freedom. This is a very large value: for a random spread, $\sfrac{\chi^2}{J}$ should be close to 1, assuming that all terms have the same error $\sigma$ and, correspondingly, the same statistical weight. In our case the $\sfrac{\chi^2}{J}$ values can be distorted, but this distortion should be the same for all directions ($l_0,b_0$), because $\sigma_i$ in each interval does not depend on the direction. Moreover, in our case the value $\left(\sfrac{\chi^2}{J}\right)-1$ depends linearly on the total number of events, which is about 3.38 million. As a control, the direction where $\sfrac{\chi^2}{J}$ has its minimum, equal to $1.32 \pm 0.34$ at $l_0 = 15^{\circ} \pm 10^{\circ}$, $b_0 = 60^{\circ} \pm 10^{\circ}$, was found. This value is consistent with a random distribution of $\sfrac\Delta\sigma$ within one standard deviation. This finding is evidence for the lack of systematic distortions and for the high statistical reliability of the result in the directions $l_0 = 97^{\circ}$, $b_0 = 5^{\circ}$ ($l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$). We emphasize that the direction of minimum $\sfrac{\chi^2}{J}$ is perpendicular to the direction of maximum $\sfrac{\chi^2}{J}$ (although determined with a poorer angular precision than the latter).
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{fig5.jpg}
\captionof{figure}{$\sfrac{\chi^2}{J}$ distribution for the $S$ parameter in the Galactic coordinate system (contour diagram). The white circle in the center marks the position of the Vela cluster.}
\label{Fig5}
\end{figure}
Table~\ref{Tbl2} and Fig.~\ref{Fig4} demonstrate that there is an excess of EAS in the knee region from the direction of $l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$. Not far from this point there is a cluster in the Vela constellation with two supernova remnants that appear close together on the sky, Vela X ($263.9^{\circ}$, $-3.3^{\circ}$) and Vela Jr ($266.2^{\circ}$, $-1.2^{\circ}$), at distances from the Earth of about 0.3 and 0.2 kpc, respectively (Fig.~\ref{Fig5}).
This cluster is a good candidate for being a nearby source of PCR. If we suppose a causal relation between the direction of the observed anisotropy and the Vela cluster, we would have to explain the shift in longitude relative to the supernova remnants and the insufficient axial symmetry. The shift could be connected with possible systematic errors of our analysis or with the existence of a regular magnetic field between the source and the Earth. We emphasize, however, that the main purpose of this paper is to present the diffusion-difference technique as a method to identify tiny anisotropies and to motivate other experimental groups to apply the method to their own data. Given the possible systematics of our analysis, we consider it necessary to confirm the effect by other experiments. We note, anyway, that Erlykin and Wolfendale recently \cite{erlykinWolfendale2013} drew attention to the fact that Vela could be such a strong local source, if the supernova remnant became ``leaky'' at early times.
Coming back to the rationale of the diffusion-difference method, we emphasize that the excess of ``young'' EAS from this direction may be related to the diffusion of particles on the way from the source to the Earth. ``Younger'' EAS indicate a lighter mass composition of the PCR, with a predominance of protons. During their diffusion, heavy nuclei are deflected by the interstellar magnetic field more strongly than light nuclei. That is why the flux of the PCR in the direction of the source can be enriched in protons, leading to a lighter composition and to a rejuvenation of the EAS coming from the source, compared with the EAS from the opposite side.
It is probable that the registered excess explains the trend towards a heavier PCR mass composition at energies above the knee (see \cite{apelAstropp2005,budnevNuclPhys2009}), as well as the decline of the parameter $S$ with rising energy below the knee, followed by its constancy and, probably, a subsequent increase \cite{cherdyntsevaNuclPhys2003,martirosovBeijing2011}. Such a decline of the $S$ parameter is more rapid than would be expected from the downward shift of the EAS maximum with increasing energy.
Based on the preliminary results obtained, it is impossible to say whether the excess forms the knee entirely or whether other sources also contribute, because only an excess was registered, and not the absolute value of the flux from the source.
Only one source is discovered within the radius of the method's sensitivity. Perhaps it is the Vela supernova remnant complex, including the sources Vela X and Vela Jr.
Most likely, what is registered is the diffusive character of the PCR propagation on the way from the nearby source to the Earth. This process leads to a lightening of the PCR mass composition and, correspondingly, to a rejuvenation of the EAS in the knee region. A diffusion process in the direction of the Galactic Center -- Anticenter is not observed within the limits of the statistical sensitivity of the method.
\section{Conclusion}
We have presented a new method to reveal tiny anisotropies of primary cosmic ray particles, provided they consist of protons and heavier nuclei with different Galactic diffusion coefficients. The main feature of the suggested method is the comparison of EAS characteristics, rather than of their intensity, in different directions. We have found the age parameter $S$ to be the most suitable and physically motivated parameter. Other parameters may also be useful, and their combination with $S$ may be even more powerful than $S$ alone.
We have used data taken with a comparatively small device, the GAMMA detector in Armenia. We find an anisotropy which is maximal along the direction between the Galactic coordinates $l_0 = 277^{\circ}$, $b_0 = -5^{\circ}$ and the opposite sky position. The maximum of the excess at these coordinates turns out to be close to the Vela cluster. The effect has a high statistical significance, but we cannot yet exclude that it is caused by hitherto unconsidered systematic biases. Therefore we suggest that other experiments, with different systematics, repeat the analysis with their own data.
\section{Acknowledgments}
We are grateful to all colleagues at the Moscow Lebedev Institute and the Yerevan Physics Institute who took part in the development and exploitation of the GAMMA array. We are also grateful to the Department of Nuclear Physics and Astrophysics of the Moscow Lebedev Physical Institute, to the Yerevan Physics Institute, to DESY as well as to the State Committee of Science of the Republic of Armenia, to the ``Hayastan'' All-Armenian Fund and Program of Fundamental Research of the Presidium of the Russian Academy of Science ``Fundamental properties of matter and astrophysics'' for financial support.
We also thank Ch.~Spiering and A.~W.~Wolfendale for useful discussions and help in the preparation of this paper.
|
1,108,101,564,307 | arxiv | \section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{phd_am.eps}\hspace{\stretch{1}}\includegraphics[width=0.49\textwidth]{phd_qcd.eps}
\caption{Schematic phase diagram of the 3D Anderson model (left) and QCD
(right) in the disorder/eigenvalue plane.}
\label{fig:1}
\end{figure}
It is well known that the properties of the low-lying modes of the
Dirac operator are intimately related to the behaviour of QCD under
chiral symmetry transformations, as clearly exemplified by the
Banks-Casher relation~\cite{BC}.
In particular, it has been realised in recent years that their
localisation properties
change completely across the
chiral transition/crossover.
While below the critical temperature
$T_c$ all the eigenmodes are delocalised, it has been
shown~\cite{GGO,KGT,KP,KP2} that above $T_c$ the low-lying ones, up to
some critical point $\lambda_c$, become localised; modes above
$\lambda_c$ remain delocalised. Initially the evidence for this was
mainly obtained in the {\it quenched} approximation and/or for the
$SU(2)$ gauge group, but recently this scenario has been demonstrated
in full QCD~\cite{KP2}, by studying the spectrum of the staggered
Dirac operator in numerical simulations of lattice QCD with $N_f=2+1$
flavours of quarks at physical masses~\cite{BW}. An improved study,
with much higher statistics and larger lattice volumes, has been
presented at this conference~\cite{feri}.
The presence of a transition from localised to delocalised modes in
the spectrum, as the one found in QCD above $T_c$, is a well known
phenomenon in condensed matter physics, and it represents the main
feature of the celebrated Anderson model~\cite{Anderson} in three
dimensions. The Anderson model aims at a description of electrons in a
``dirty'' conductor, by mimicking the effect of impurities through
random interactions. In its lattice version, the model is obtained by
adding a random on-site potential to the usual tight-binding
Hamiltonian,
\begin{equation}
\label{eq:andersonmodel}
H = \sum_n \varepsilon_n| n \rangle \langle n | + \sum_{n}\sum_{\mu=1}^3 \left(|n +
\hat\mu\rangle\langle n | + |n \rangle\langle n + \hat\mu|\right)\,,
\end{equation}
where $| n \rangle$ denotes a state localised on the lattice site $n$, and
$\varepsilon_n$ are random variables drawn from some distribution, whose
width $W$ measures the amount of disorder, i.e., of impurities in the
system. The phase diagram of this model is sketched in
Fig.~\ref{fig:1}. While for $W=0$ all the eigenmodes are delocalised,
localised modes appear at the band edge as soon as the random
interaction is switched on. The critical energy $E_c$ separating
localised and delocalised modes is called ``mobility edge'', and its
value depends on the amount of disorder, $E_c=E_c(W)$. As $W$
increases, $E_c$ moves towards the center of the band, and above a
critical disorder $W_c$ all the modes become localised. From the
physical point of view, this signals a transition of the system from
metal to insulator.
In Fig.~\ref{fig:1} we also sketch a schematic phase diagram for
QCD. Here the role of disorder is played by the temperature, while the
energy is replaced by the eigenvalue of the Dirac operator. Localised
modes are present in the low end of the spectrum above $T_c$, up to
the ``mobility edge'' $\lambda_c(T)$. Around the critical temperature
$\lambda_c$ vanishes~\cite{KP2}, and below $T_c$ all the modes are
extended.
In both models, localised modes appear where the spectral density
is small. One then expects that they are not easily mixed
by the fluctuations of the random interaction, which in turn suggests
that the corresponding eigenvalues are statistically independent,
obeying Poisson statistics. On the other hand, eigenmodes remain
extended in the region of large spectral density also in the presence
of disorder, and so one expects them to be basically freely mixed by
fluctuations. The corresponding eigenvalues are then expected
to obey the Wigner-Dyson statistics of Random Matrix Theory (RMT).
This connection between localisation of eigenmodes and eigenvalue
statistics provides a convenient way to detect the
localisation/delocalisation transition and study its critical
properties.
The transition from Poisson to RMT behaviour in the local spectral
statistics is most simply studied by means of the so-called unfolded level
spacing distribution (ULSD). Unfolding consists essentially in a local
rescaling of the eigenvalues to have unit spectral density throughout the
spectrum. The ULSD gives the probability distribution of the
difference between two consecutive eigenvalues of the Dirac operator
normalised by the local average level spacing. The ULSD is known
analytically for both kinds of behaviour: in the case of Poisson
statistics it is a simple exponential, while in the case of
RMT statistics it is very precisely approximated by the so-called ``Wigner
surmise'' for the appropriate symmetry class, which for QCD is the
unitary class,
\begin{equation}
\label{eq:ulsd}
P_{\rm Poisson}(s)=e^{-s} \,, \qquad P_{\rm RMT}(s)=\f{32}{\pi^2} s^2
e^{-\f{4}{\pi}s^2} \,.
\end{equation}
Rather than using the full distribution to characterise the local spectral
statistics, it is more practical to consider a single parameter of the
ULSD. Any such quantity, having different values for Poisson and RMT
statistics, can be used to detect the Poisson/RMT transition. In our
study, we used the integrated ULSD and the second moment of the
ULSD,
\begin{equation}
\label{eq:il_s2}
I_\lambda = \int_0^{s_0} ds\,
P_\lambda(s)\,,\quad s_0\simeq 0.508\,,
\qquad \langle s^2 \rangle_\lambda= \int_0^\infty ds\, P_\lambda(s)\, s^2 \,,
\end{equation}
defined locally in the spectrum. The choice of $s_0$ was made in order
to maximise the difference between the Poisson and RMT predictions,
namely $I_{\rm Poisson}\simeq 0.398$ and $I_{\rm RMT}\simeq 0.117$;
as for the second moment, the predictions are $\langle s^2 \rangle_{\rm
Poisson}=2$ and $\langle s^2 \rangle_{\rm RMT}=3\pi/8$.
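For orientation, these reference numbers are easily reproduced; the following minimal check (ours, not part of the analysis chain) evaluates Eqs.~\eqref{eq:ulsd} and \eqref{eq:il_s2} numerically.
\begin{verbatim}
# Minimal numerical check of the Poisson/RMT reference values quoted above.
import numpy as np
from scipy.integrate import quad

s0 = 0.508
P_poisson = lambda s: np.exp(-s)
P_rmt     = lambda s: 32.0 / np.pi**2 * s**2 * np.exp(-4.0 * s**2 / np.pi)

I_poisson, _ = quad(P_poisson, 0.0, s0)                      # ~ 0.398
I_rmt, _     = quad(P_rmt, 0.0, s0)                          # ~ 0.117
s2_rmt, _    = quad(lambda s: s**2 * P_rmt(s), 0.0, np.inf)  # 3*pi/8 ~ 1.178
print(I_poisson, I_rmt, s2_rmt)
\end{verbatim}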
\section{Numerical results}
\begin{figure}[t]
\centering
\includegraphics[width=0.49\textwidth]{ilambda_improved_proc.eps}\hspace{\stretch{1}}\includegraphics[width=0.49\textwidth]{secmom_improved_proc.eps}
\caption{Integrated ULSD (left) and second moment of the ULSD
(right), computed locally along the spectrum, for several lattice
sizes. Here $\Delta\lambda=3\cdot 10^{-3}$.}
\label{fig:2}
\end{figure}
The results presented here are based on simulations of lattice QCD using
a Symanzik-improved gauge action and $2+1$ flavours of stout smeared
staggered fermions, with quark masses at physical values~\cite{BW}. We used
a lattice of fixed temporal extension $N_t=4$ at $\beta=3.75$, corresponding to
lattice spacing $a=0.125~{\rm fm}$ and physical temperature $T=394~{\rm
MeV}\simeq 2.6 T_c$. For different choices of spatial size
$L=24,28,32,36,40,44,48,56$ in lattice units, we collected large
statistics for eigenvalues and eigenvectors of the staggered Dirac
operator in the relevant spectral range -- see Ref.~\cite{feri} for
more details. Here and in the following the eigenvalues $\lambda$ are
expressed in lattice units. Unfolding was done by ordering all the
eigenvalues obtained on all the configurations (for a given volume)
according to their magnitude, and replacing them by their rank order
divided by the total number of configurations. We then computed
locally the integrated ULSD and the second moment of the ULSD, by
dividing the spectrum in small bins of size $\Delta\lambda$, computing
the observables in each bin, and assigning the resulting value to the
average value of $\lambda$ in each bin. We used several values for
$\Delta\lambda$, ranging from $1\cdot 10^{-3}$ to $6\cdot 10^{-3}$.
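As an illustration of this procedure, the following minimal sketch (ours, not the production code used for the analysis) performs the unfolding and the binned evaluation of $I_\lambda$ and $\langle s^2\rangle_\lambda$; it assumes that \texttt{evs} is a list of arrays, one per gauge configuration, each containing the positive Dirac eigenvalues in lattice units.
\begin{verbatim}
# Minimal sketch of the unfolding and binned ULSD statistics described above;
# "evs" is assumed to be a list of 1d eigenvalue arrays, one per configuration.
import numpy as np

def binned_ulsd_stats(evs, dlam=3e-3, s0=0.508):
    nconf = len(evs)
    lam  = np.concatenate(evs)
    conf = np.concatenate([np.full(len(e), i) for i, e in enumerate(evs)])
    order = np.argsort(lam)
    lam, conf = lam[order], conf[order]
    unfolded = np.arange(1, len(lam) + 1) / float(nconf)  # rank / #configs

    # unfolded spacings between consecutive eigenvalues of the same configuration
    lam_mid, spac = [], []
    for i in range(nconf):
        idx = np.where(conf == i)[0]
        lam_mid.append(0.5 * (lam[idx[1:]] + lam[idx[:-1]]))
        spac.append(np.diff(unfolded[idx]))
    lam_mid, spac = np.concatenate(lam_mid), np.concatenate(spac)

    # local statistics in spectral bins of width dlam
    edges = np.arange(lam_mid.min(), lam_mid.max() + dlam, dlam)
    which = np.digitize(lam_mid, edges)
    rows = []
    for b in np.unique(which):
        s = spac[which == b]
        rows.append((lam_mid[which == b].mean(), np.mean(s < s0), np.mean(s**2)))
    return np.array(rows)   # columns: lambda, I_lambda, <s^2>_lambda
\end{verbatim}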
In Fig.~\ref{fig:2} we show the integrated ULSD $I_\lambda$ and the second
moment of the ULSD $\langle s^2 \rangle_\lambda$, for several values of the
spatial volume. A transition from Poisson to RMT is clearly visible,
and moreover it gets sharper and sharper as the volume of the lattice
is increased. This suggests that the transition becomes a true phase
transition in the thermodynamic limit.
\section{Finite size scaling}
To check if the Poisson/RMT transition in the spectral statistics
(i.e., the localisation/de\-lo\-ca\-li\-sation transition) is a genuine,
Anderson-type phase transition, we have performed a finite size
scaling analysis, along the lines of Refs.~\cite{HS,SSSLS,SP}.
The Anderson transition is a second-order phase transition, with the
characteristic length of the system $\xi_\infty$ diverging at the
critical point $\lambda_c$ like $\xi_\infty(\lambda)\sim
|\lambda-\lambda_c|^{-\nu}$. To determine the critical exponent $\nu$ and
the critical point $\lambda_c$, one picks a dimensionless quantity
$Q(\lambda,L)$, measuring some local statistical properties of the
spectrum, and having different thermodynamic limits on the two sides
of the transition (and possibly at the critical point), i.e.,
\begin{equation}
\label{eq:observable}
\lim_{L\to\infty} Q(\lambda,L) = \left\{
\begin{aligned}
&Q_{\rm Poisson} &&& &\hspace{-0.40cm} \lambda<\lambda_c &&&
&\hspace{-0.40cm}\text{(localised)}\,,\\
&Q_c &&& &\hspace{-0.40cm} \lambda=\lambda_c &&&
&\hspace{-0.40cm}\text{(critical)}\,,\\
&Q_{\rm RMT} &&& &\hspace{-0.40cm} \lambda>\lambda_c &&&
&\hspace{-0.40cm}\text{(delocalised)}\,.
\end{aligned}
\right.
\end{equation}
As the notation suggests, $Q(\lambda,L)$ is computed on a lattice of
linear size $L$. For large enough volume, and close to the
critical point, finite size scaling suggests that the
dependence of $Q$ on $L$ is of the form $Q(\lambda,L) =
f(L/\xi_\infty(\lambda))$. As $Q(\lambda,L)$ is
analytic in $\lambda$ for any finite $L$, we must have
\begin{equation}
\label{eq:fss}
Q(\lambda,L)=F(L^{1/\nu}(\lambda-\lambda_c)) \,,
\end{equation}
with $F$ analytic. Here we have assumed that corrections to one-parameter
scaling can be neglected.
\begin{figure}[t]
\centering
\includegraphics[width=0.31\textwidth]{bayes_convergence_valerrsep_proc.eps}\hspace{\stretch{3}}\includegraphics[width=0.33\textwidth]{nu_fitrange_36_binsize_proc.eps}
\hspace{\stretch{1}} \includegraphics[width=0.33\textwidth]{nu_fitrange_36_width_minsize_proc.eps}
\caption{Dependence of the fitted value of $\nu$ and corresponding
relative error as a function of the number of terms $n_{\rm max}$,
in the case of $L_{\rm min}=36$, $\Delta\lambda\cdot 10^3=1.5$ and
$w\cdot 10^2=2.8$ (left).
Dependence of the fitted value of $\nu$ on the bin size
$\Delta\lambda$ for the smallest fitting
range (center) and on the width $w$ of
the fitting range for the smallest bin size (right). Here
$L_{\rm min}=36$. }
\label{fig:binwidth_dep}
\end{figure}
If one determines $\lambda_c$ and $\nu$ correctly, the numerical data
for $Q(\lambda,L)$ obtained for different lattice sizes should collapse
on a single curve, when plotted against the scaling variable
$L^{1/\nu}(\lambda-\lambda_c)$. We then proceeded as follows:
expanding the scaling function $F$ in powers of $\lambda-\lambda_c$,
one gets
\begin{equation}
\label{eq:fss_expansion}
Q(\lambda,L)=\sum_{n=0}^{\infty} F_{n}\,L^{n/\nu}(\lambda-\lambda_c)^n \,.
\end{equation}
By truncating the series to some $n_{\rm max}$ and performing a
fit to the numerical data, using several volumes at a
time, one can then determine $\nu$ and $\lambda_c$, together with the
first few coefficients $F_n$.
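A minimal sketch of such a truncated fit is shown below (illustrative only; the actual analysis used a MINUIT-based fitter with the constraints described later in this section). The arrays \texttt{lam}, \texttt{L}, \texttt{Q} and \texttt{dQ} are assumed to hold, point by point, the binned eigenvalues, the corresponding lattice sizes, and the measured values of $Q$ with their errors.
\begin{verbatim}
# Minimal sketch of the truncated finite-size-scaling fit of the expansion
# Q(lambda, L) = sum_n F_n L^(n/nu) (lambda - lambda_c)^n, n = 0..n_max.
# Not the analysis code of this work; lam, L, Q, dQ are 1d data arrays.
import numpy as np
from scipy.optimize import least_squares

def fss_fit(lam, L, Q, dQ, n_max=3, nu0=1.4, lamc0=0.336):
    def residuals(p):
        nu, lamc, F = p[0], p[1], p[2:]
        x = L ** (1.0 / nu) * (lam - lamc)
        model = sum(F[n] * x ** n for n in range(n_max + 1))
        return (model - Q) / dQ

    p0 = np.zeros(n_max + 3)
    p0[0], p0[1], p0[2] = nu0, lamc0, 0.25   # rough starting values; F_0 ~ Q_c
    fit = least_squares(residuals, p0)
    chi2_dof = np.sum(fit.fun ** 2) / (len(Q) - len(p0))
    return fit.x, chi2_dof   # fit.x = [nu, lambda_c, F_0, ..., F_{n_max}]
\end{verbatim}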
For our purposes, the best quantity turned out to be the integrated ULSD
$I_\lambda$. Our fitter was based on the MINUIT
library~\cite{JR}. Statistical errors were determined by means of a
jackknife analysis. To check for finite size effects, we repeated the
fit using only data from lattices of size $L\ge L_{\rm min}$ for
increasing $L_{\rm min}$.
Systematic effects due to the truncation of the series for the scaling
function, Eq.~\eqref{eq:fss_expansion}, are controlled by including more
and more terms in the series, and checking how the results change. In
order to circumvent the numerical
instability of polynomial fits of large order, we resorted to the
technique of constrained fits~\cite{LCDHMMT}. The basic idea of constrained
fits is to use the available information to constrain the
values of the fitting parameters. In our case, constraints are needed only
to prevent the higher-order polynomial coefficients from taking unphysical
values. One then checks the convergence of the resulting
parameters and of the corresponding errors as the number of terms is
increased. After convergence, the
resulting errors include both statistical effects
and systematic effects due
to truncation~\cite{LCDHMMT}.
To set the constraints, we shift and rescale
$F$ as follows, $\tilde F(x) = (F(x) - F_{\rm RMT})/(F_{\rm
Poisson}-F_{\rm RMT})$, so that $\tilde F$ interpolates between 1
(localised/Poisson region) and 0 (delocalised/RMT region). The data
indicate that $\tilde F$ changes rapidly, monotonically and almost
linearly between 1 and 0 over a range $\delta x$. Any reasonable
definition of $\delta x$ has then to satisfy $1+\tilde F_1 \delta x \simeq
0$. Moreover, $\delta x$ provides a reasonable estimate
of the radius of convergence $\rho$ of the series.
Furthermore, it is known that $(\tilde F_{n+1}/\tilde F_n) \rho \to 1$ as
$n\to\infty$, and so we expect $\tilde F_n \rho^n \sim 1$ (at least
for large $n$). One then finds that $\tilde F_n/(-\tilde F_1)^n $ is
expected to be of order 1. This constraint was imposed rather loosely,
by asking $\tilde F_n/(-\tilde F_1)^n$ to be distributed according to
a Gaussian of zero mean and width $\sigma = 10$ for $n\ge 4$. We did
not impose any constraint on the coefficients $F_n$ with $n<4$, as
well as on $\nu$ and $\lambda_c$. The results of the constrained fits
converge rather rapidly as $n_{\rm max}$ is increased, see
Fig.~\ref{fig:binwidth_dep}. We went as far as $n_{\rm max}=9$, and we
used the corresponding results for the following analyses.
The effects of the choice of bin size and fitting range
were checked by varying the bin size $\Delta\lambda$
and the width $w$ of the fitting range, which was centered
approximately at the critical point. The results show a slight
tendency of $\nu$ to decrease as $\Delta\lambda$ is decreased,
but it is rather stable for $\Delta\lambda\cdot 10^{3}\lesssim 3$.
There is also a slight tendency of $\nu$ to increase as $w$ is
decreased, becoming rather stable for $w\cdot 10^{2}\lesssim 3$. See
Fig.~\ref{fig:binwidth_dep}. To quote a single value for $\nu$, we
averaged the central values obtained for $1 \le \Delta\lambda\cdot
10^{3} \le 3$ and $2.6 \le w\cdot 10^{2} \le 3$. As the error is also
rather stable within these ranges, we quote its average as the final
error on $\nu$ for each choice of $L_{\rm min}$. We have checked that
other prescriptions (e.g., extrapolating to vanishing $w$ and/or
$\Delta\lambda$, or changing -- within reasonable bounds -- the ranges
of $w$ and $\Delta\lambda$ over which the final average is performed)
give consistent results within the errors.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{nu_vs_L_wav_dl1-3_alt.eps}
\hspace{\stretch{1}}
\includegraphics[width=0.48\textwidth]{shape_L_proc.eps}
\caption{Dependence of the fitted value of $\nu$, averaged over
$2.6\le w\cdot 10^2 \le 3.0$ and $1.0\le \Delta\lambda\cdot 10^3
\le 3.0$, on $L_{\rm min}$. The values of $\nu$ obtained in
the three symmetry classes of the 3D Anderson model (symplectic,
$\nu_{\rm S}=1.375(16)$~\cite{nu_symp}, unitary $\nu_{\rm
U}=1.43(4)$~\cite{nu_unitary} and orthogonal $\nu_{\rm
O}=1.57(2)$~\cite{nu_orth})
are shown for comparison together with their errors (left). Plot
of $I_\lambda$ against $\langle s^2\rangle_\lambda$ for several lattice
sizes (right).}
\label{fig:lmin_dep}
\end{figure}
Concerning finite size effects,
the fitted value of $\nu$ increases with $L_{\rm min}$, stabilising
around $L_{\rm
min}=36$, see Fig.~\ref{fig:lmin_dep}. This signals that our smallest
volumes are still too small for one-parame\-ter scaling to work, and
that finite size corrections are still important there. On the other
hand, as the difference between the values obtained with $L_{\rm
min}=36$ and $L_{\rm min}=40$ is much smaller than the statistical
error, one-parameter scaling works fine for our largest volumes.
The value for the critical point $\lambda_c\simeq 0.336$ was obtained
through the same procedure described above. As a function of $L_{\rm
min}$, the fitted value of $\lambda_c$ shows no systematic
dependence, and different choices of $L_{\rm min}$ give consistent
values
within the errors.
Our result for the critical exponent $\nu=1.43(6)$ is compatible with
the result obtained for the three-dimensional unitary Anderson model
$\nu_{\rm U}=1.43(4)$~\cite{nu_unitary}. This strongly suggests that
the transition
found in the spectrum of the Dirac operator above $T_c$ is a true
Anderson-type phase transition, belonging to the same universality
class of the three-dimensional unitary Anderson model.
\section{Shape analysis}
From the point of view of random matrix models, Fig.~\ref{fig:2} shows
that the local spectral statistics along the spectrum are described by
one-parameter families of models, with spectral statistics
interpolating between Poisson and Wigner-Dyson along some path in the
space of probability distributions. To check if the appropriate
one-parameter family depends on the size of the lattice,
one can simply plot a couple of parameters of the ULSD against each
other (thus projecting the path onto a two-dimensional plane in the
space of probability distributions): if points are seen to collapse on
a single curve, irrespectively of $L$, then the intermediate ULSDs lie
on a universal path in the space of probability
distributions~\cite{Varga}.
In Fig.~\ref{fig:lmin_dep} we show $I_\lambda$ and $\langle
s^2\rangle_\lambda$ plotted against each other for several volumes, and we
see that they indeed lie on a single curve. As $L$ is increased,
points corresponding to a given value of $\lambda$ flow
towards the Poisson or RMT ``fixed points'', while flowing away from
an unstable fixed point corresponding to $\lambda_c$, where a
different universality class for the spectral statistics is
expected. Similar plots made by changing $T$ and/or $a$ are
compatible with a similar universality of the path, but statistical
errors are still too large to reach a definitive conclusion.
The transition from Poisson to Wigner-Dyson behaviour in finite volume
is therefore expected to be described by a universal one-parameter
family of random matrix models~\cite{Nishigaki}. Comparing with
analogous results for the Anderson model, it turns out that the
spectral statistics at the critical point in the two models are
compatible~\cite{Nishigaki}.
|
1,108,101,564,308 | arxiv | \section{Introduction}\label{sect1}
Recently, separably injective Banach spaces have been studied in depth by Avil\'es, Cabello S\'anchez, Castillo, Gonz\'alez
and Moreno in \cite{AC13,av,AC16}, where one can find a number of interesting examples of these spaces despite
the scarcity of examples of injective Banach spaces.
In contrast to the fact that $1$-injective Banach spaces
are isometric to the Banach space $C(\Omega)$ of continuous functions on a compact
Hausdorff space $\Omega$ \cite{H58,kelley}, $1$-separably injective Banach spaces
need not be complemented in $C(\Omega)$. In view of this, a natural question arises: when is a $1$-complemented subspace of $C(\Omega)$ separably injective? We address this question in this paper.
A Banach space $V$ is {\it $1$-separably injective} if every continuous
linear map $T: Y \longrightarrow V$ on a closed subspace $Y$ of a {\it separable} Banach space $Z$
admits a norm preserving extension to $Z$.
It is known that $C(\Omega)$ itself is separably injective if and only
if $\Omega$ is an $F$-space \cite{AC13}. Abelian C*-algebras with identity are of the form $C(\Omega)$ for some $\Omega$.
The ones without identity can be represented as the algebra $C_0(S)$ of complex continuous functions vanishing at infinity on a locally compact Hausdorff space $S$ and it has been shown lately that $C_0(S)$ is separably injective if and only if $S$ is substonean \cite{CL16}. Following
\cite{GP}, we call $S$ \textit{substonean} if any two
disjoint open $\sigma$-compact subsets of $S$ have disjoint
compact closures. The compact substonean spaces are exactly the \textit{F-spaces}
defined in \cite{GJ,s}. However, infinite discrete spaces are
F-spaces without being substonean. We refer to \cite[Example
5]{HW89} for an example of a substonean space which is not an
F-space.
Noting that the class of $1$-complemented subspaces of $C(\Omega)$ is identical to that of $C_\sigma$-spaces
\cite[Theorem 3]{lw} (see also Remark \ref{cs}), our question amounts to asking for a characterisation of separably injective $C_\sigma$-spaces.
We give a complete answer by showing that a $C_\sigma$-space is separably
injective if and only if it is linearly isometric to the function
space $C_0(S)$ on a substonean locally compact Hausdorff space $S$.
In what follows, all Banach spaces are over the complex field and we will denote by $C_0(K)$ the C*-algebra of complex continuous functions vanishing at
infinity on a locally compact Hausdorff space $K$. If $K$ is compact, we omit the subscript $0$.
Given a function $g \in C_0(K)$, we denote by $\overline g$ the complex conjugate of $g$.
Let $\mathbb{T}=\{\alpha\in \mathbb{C}: |\alpha|=1\}$ be the
circle group. By a $\mathbb{T}$-space, we mean a locally compact
Hausdorff space $K$ equipped with a continuous group action
$$\sigma: (\alpha, \omega) \in \mathbb{T} \times K \mapsto \alpha \cdot \omega
\in K.$$
A complex Banach space is called a {\it complex
$C_\sigma$-space} if it is linearly isometric to a function space
of the form $C_\sigma(K)$ for some $\mathbb{T}$-space $K$, defined
by
$$C_\sigma(K) = \{f\in C_0(K): f(\sigma(\alpha,\omega))= \alpha f(\omega),
\forall \omega \in K\}.$$
We note that the definition of a complex $C_\sigma$-space in \cite{olsen} requires
$K$ to be compact.
To achieve our result, we make substantial use of the Jordan algebraic structure
of $C_\sigma(K)$. Indeed, although $C_\sigma(K)$ lacks a C*-algebraic structure, it is equipped
with a triple product
$$\{f,g,h\} = f\overline g h \qquad (f,g, h \in C_\sigma(K))$$
which turns it into a {\it JB*-triple} with many useful Jordan properties.
First, let us give a brief introduction to JB*-triples which generalise C*-algebras.
For further references and the geometric origin of JB*-triples, we refer to \cite{book,ru,u}. A complex Banach space $V$ is a {\it JB*-triple} if it admits a continuous triple product
$$\{\cdot,\cdot,\cdot\} : V^3 \longrightarrow V$$
which is symmetric and linear in the
outer variables, but conjugate linear in the middle variable, and
satisfies
\begin{enumerate}
\item[(i)] $\{x,y,\{a,b,c\}\}=\{\{x,y,a\},b,c\}-\{ a,\{y,x,b\},c\}
+\{a,b,\{x,y,c\}\}$; \item[(ii)] the operator $a\sqr44\, a : x\in V \mapsto \{a,a, x\} \in V$ has real numerical range and
non-negative spectrum; \item[(iii)] $\|a\sqr44\, a\|=\|a\|^2$
\end{enumerate}
for $a,b,c,x,y \in V$. We always have $\|\{a,b,c\}\| \leq \|a\| \|b\| \|c\|$ and (i)
is called the {\it Jordan triple identity}.
A C*-algebra $A$ is a JB*-triple with the triple product
$$\{a,b,c\} = \frac{1}{2}(ab^*c +cb^*a) \qquad (a,b,c \in A).$$
More generally, the range of a contractive projection on a C*-algebra is a JB*-triple
(cf.\,\cite[Theorem 3.3.1]{book}), but not always a C*-algebra.
An element $e$ in a JB*-triple is called a {\it tripotent} if $e = \{e,e,e\}$. Tripotents in
C*-algebras are exactly the partial isometries.
A subspace $W$ of a JB*-triple $V$ is called a {\it
subtriple} if $a,b,c \in W$ implies $\{a,b,c\} \in W$. Closed subtriples of a JB*-triple
are JB*-triples in the inherited norm and triple product. A {\it
triple ideal} of $V$ is a subspace $J\subset V$ such that
$\{a,b,c\}\in J$ whenever one of $a$, $b$ and $c$ belongs to $J$.
Given a closed triple ideal $J\subset V$, the quotient space $V/J$ is a JB*-triple
in the triple product
$$\{a+J, b+J, c+J\} := \{a,b,c\} + J.$$ Two elements $a,b \in V$ are said to be {\it orthogonal}
to each other if $a\sqr44\, b =0$, where $a\sqr44\, b$ is the continuous linear
map $x\in V \mapsto \{a,b,x\}\in V$. Two subspaces $I, J \subset V$ are {\it orthogonal} if
$I\sqr44\, J := \{a\sqr44\, b: a\in I, b\in J\} = \{0\}$.
The bidual $V^{**}$ of
a JB*-triple $V$ carries a natural structure of a JB*-triple, with a unique predual, in which
the triple product is separately weak* continuous and the natural
embedding of $V$ into $V^{**}$ identifies $V$ as a subtriple of
$V^{**}$. Given a closed triple ideal $I \subset V$, the bidual $I^{**}$ embeds as a weak* closed
triple ideal in $V^{**}$, which can be decomposed into an $\ell^\infty$-sum $V^{**} = I^{**} \oplus_\infty J$ for some weak* closed triple ideal $J\subset V^{**}$, orthogonal to $I^{**}$ \cite[Lemma 3.3.16]{book}.
A linear map $\varphi: V \longrightarrow W$ between two JB*-triples is called
a {\it triple homomorphism} if $\{\varphi(a), \varphi(b),\varphi(c)\} =
\varphi\{a,b,c\}$ for $a,b, c \in V$. The triple isomorphisms between $V$ and $W$ are
exactly the surjective linear isometries (cf.\,\cite[Theorem 3.1.7, Theorem 3.1.20]{book}).
A JB*-triple $V$ is called {\it abelian} if
its triple product satisfies
$$\{\{x,y,z\},u,v\} =\{x,\{y,z,u\},v\} = \{x,y,\{z,u,v\}\}$$
for all $x,y,z,u, v \in V$. An abelian C*-algebra is an abelian JB*-triple
and so is $C_\sigma(K)$ in the triple product defined above. In fact, $C_\sigma(K)$
is a closed subtriple of $C_0(K)$.
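Indeed, the triple product of $C_0(K)$ maps $C_\sigma(K)^3$ into $C_\sigma(K)$: for $f,g,h\in C_\sigma(K)$, $\alpha\in\mathbb{T}$ and $\omega\in K$,
$$\{f,g,h\}(\sigma(\alpha,\omega)) = f(\sigma(\alpha,\omega))\,\overline{g(\sigma(\alpha,\omega))}\,h(\sigma(\alpha,\omega))
= \alpha\overline{\alpha}\alpha\, f(\omega)\overline{g(\omega)}h(\omega) = \alpha\{f,g,h\}(\omega)$$
since $|\alpha|=1$, and $C_\sigma(K)$ is norm closed in $C_0(K)$ because the defining equivariance condition is preserved under uniform limits.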
By \cite[Lemma 2.2, Theorem 3.7]{bc}, an
abelian closed subtriple $V$ of a C*-algebra admits a composition series
$(J_\lambda)_{0\leq \lambda\leq \mu}$ of closed triple ideals, indexed by ordinals $\lambda$, such that
the quotient $J_{\lambda+1}/J_{\lambda}$ is linearly isometric to an
abelian C*-algebra, for $\lambda< \mu$. We recall that $(J_\lambda)_{0\leq\lambda\leq\mu}$
is called a {\it composition series} if $J_0=\{0\}, J_\mu = V$ and for a limit ordinal $\lambda \leq \mu$, the ideal $J_\lambda$ is the closure
of $\bigcup_{\lambda' <\lambda} J_{\lambda'}$.
\section{Jordan structure in $C_\sigma$-spaces}\label{j}
We will make use of the abelian JB*-triple structure of $C_\sigma$-spaces to derive our result.
To pave the way, we first present some detailed analysis of this structure.
Let $V$ be an abelian closed subtriple of a C*-algebra (e.g. a $C_\sigma$-space)
in this section. One
can consider it as a subtriple
of its bidual $V^{**}$ via the natural embedding
$v\in V \mapsto \widehat v \in V^{**}$, where $\widehat v (\psi)
= \psi (v)$ for $\psi \in V^*$. By \cite{bc, FR83}, $V^{**}$
is (isometric to and identified as) an {\it abelian} von Neumann
algebra with identity denoted by $\mathbf{1}$ and involution $z\in
V^{**} \mapsto z^* \in V^{**}$. The triple product in $V^{**}$ is
given by $\{a,b,c\} = ab^*c$. Each $\psi \in V^*$ can be viewed
naturally as a functional of $V^{**}$. If $\psi$ is a positive functional of
$V^{**}$, then $\psi (z^*) = \overline{\psi(z)}$ for each $z\in
V^{**}$. A positive functional $\psi\in V^*$ is called a {\it normal state} of $V^{**}$
if $\psi(\mathbf{1})=1$. It is called {\it pure} if it is an extreme point
of the norm closed convex set of normal states in $V^*$.
Let $S$ be the set of all pure normal
states of $V^{**}$, which are exactly the multiplicative normal
states of $V^{**}$. Given a projection $p\in V^{**}$ and $s\in S$,
we have $s(p)= 0$ or $1$ since $s(p)=s(p^2) = s(p)^2$. If $u\in
V^{**}$ is unitary, then $1=s(\mathbf{1})=s(u^*u) = |s(u)|^2$ for
all $s\in S$. We equip $S$ with the weak* topology of $V^*$ and
call it the {\it pure normal state space} of $V^{**}$.
The nonzero triple homomorphisms from $V$ to $\mathbb{C}$ are
exactly the set $K= {\rm ext}\, V^*_1$ of extreme points of the
dual unit ball $V^*_1$, where $K\cup \{0\}$ is weak*-compact
\cite[Proposition 2.3, Corollary 2.4]{FR83} and $S \subset K$.
For
each $\omega \in K$ and tripotent $c\in V^{**}$, we have
$$\omega(c)=\omega(cc^*c)=\omega(c)|\omega(c)|^2$$
which implies $\omega(c)=0$ or $\omega(c) \in \mathbb{T}$. We note
that $K$ is a $\mathbb{T}$-space with the natural
$\mathbb{T}$-action
$$\sigma: (\alpha,\omega) \in \mathbb{T}\times K \mapsto \alpha \omega \in
K$$ and we have $ K=\{\alpha s: \alpha \in \mathbb{T}, s\in S\}$.
In fact, each $\omega \in K$ has a {\it unique} representation
$\omega = \alpha s$ for some $\alpha \in \mathbb{T}$ and $s\in S$,
where $s= \overline{\omega(\mathbf{1})}\omega$. By \cite[Theorem
1]{FR83}, the map
\begin{equation}\label{id}
v\in V \mapsto \widehat v|_K \in C_\sigma(K)
\end{equation}
is a surjective linear isometry, which enables us to identify $V$
with the $C_\sigma$-space $C_\sigma (K)$.
\begin{remark} \label{cs} Let $\pi: C(\Omega) \longrightarrow C(\Omega)$ be a contractive projection.
Then its image $\pi(C(\Omega))$ is an abelian closed subtriple in some C*-algebra \cite{fr}
and hence the previous discussion implies
that it is a $C_\sigma$-space.
\end{remark}
For each $a\in V\backslash \{0\}$, let $V(a)$ be the JB*-subtriple generated by $a$ in $V$.
Then there is a surjective linear isometry and triple isomorphism
\begin{equation}\label{em}
\phi: C_0(S_a) \rightarrow V(a) \subset V
\end{equation}
which identifies $V(a)$
with the abelian JB*-triple
$C_0(S_a)$ of continuous functions vanishing at infinity on the
triple spectrum $S_a \subset (0, \|a\|]$, where $S_a \cup \{0\}$
is compact \cite[Theorem 3.1.12]{book}.
Let $\phi^{**}: C_0(S_a)^{**} \rightarrow V(a)^{**}$ be the
bidual map and let
$\frak{i}$ be the identity in the von Neumann algebra $C_0(S_a)^{**}$. Then
$e = \phi^{**}(\frak i)$ is the identity in the von Neumann algebra $V(a)^{**}$ with
product and involution given by
$$x \cdot y =
\{x,e,y\}, \quad x \in V(a)^{**} \mapsto \{e,x,e\} \in V(a)^{**}.$$
While this abelian von Neumann algebraic
structure of $V(a)^{**}$ will be assumed throughout, it
should be noted that $V(a)^{**}$ need not be a subalgebra of
$V^{**}$ in its natural embedding. Nevertheless, $V(a)^{**}$ can
always be considered as a subtriple of $V^{**}$ and the identity
$e\in V(a)^{**}$ is a tripotent in $V^{**}$ satisfying
\begin{equation}\label{eae}
\{e, a,e\} = \{\phi^{**}(\frak i), \phi^{**}(\iota_a), \phi^{**}(\frak i)\} = \phi^{**}\{\frak i, \iota_a, \frak i\}
= a.
\end{equation}
For each $\rho\in K$, viewed as a complex-valued triple homomorphism on $V(a)^{**}$,
we have $\rho(e) =0$ if and only if $\rho(a)=\rho\{e,e,a\}=0$.
The norm closed triple ideal $J_a$ generated by $a$ in $V$ contains $V(a)$ and is the norm closure of $\{a, V,a\}$.
It has been shown in \cite[Lemma 2.2]{bc} that $J_a$ is linearly isometric to the abelian C*-algebra $C_0(X_a)$ of continuous functions vanishing
at infinity on
a locally compact Hausdorff space $X_a$. We need some detail here for later
application. In fact, $J_a$ is an abelian C*-algebra with the same product and involution as $V(a)^{**}$,
and $e\in V(a)^{**} \subset J_a^{**}$ is the identity of $J_a^{**}$, which can be seen from the following
computation using (\ref{eae}).
\[\{\{a,x,a\},e,\{a,y,a\}\} = \{a, \{x, \{a,e,a\}, y\},a\} \in \{a, V, a\};\]
\[ \{e, \{a,x,a\},e\} = ea^*xa^*e = ee^*ae^*xe^*ae^*e=ae^*xe^*a
= \{x,a,a\} \in J_a. \]
Given $\rho \in K = {\rm ext}\,V^*_1$ with $\rho(e)=1$, its restriction $\rho|_{J_a} \in J_a^*$
is a pure normal state of $J_a^{**}$. Conversely,
each pure normal state $\varphi$ of
$J_a^{**}$ is an extreme point of
the closed unit ball of $J_a^*$ and can be extended to an extreme
point $\widetilde \varphi \in {\rm ext}\, V^*_1$ satisfying $\widetilde \varphi(e)=1$.
Let
\[X_a=\{\rho|_{J_a}: \rho \in K, \rho(e)=1\}\]
denote the pure normal state space of $J_a^{**}$ which is locally compact in the weak* topology $w(J_a^*,J_a)$ of $J_a^*$.
We note that for each $\rho \in K$, we have $\rho(a)=0$ if and only if $\rho(V(a))=\{0\}$, which in turn is equivalent
to $\rho(J_a)= \rho(\overline{\{a,V,a\}})=\{0\}$.
\begin{lem}\label{ja} In the above notation, the set $K(e)=\{\rho\in K: \rho(e)=1\}$ with the relative weak* topology
of $V^*$ is homeomorphic to $X_a=\{\rho|_{J_a}: \rho \in K, \rho(e)=1\}$ in the topology $w(J_a^*,J_a)$.
In particular, $K(e)$ is weak* locally compact in $K$.
\end{lem}
\begin{proof} We show that the restriction map $\rho \in K(e) \mapsto \rho|_{J_a} \in X_a$ is a homeomorphism in these topologies.
It is clearly continuous and surjective. Given $\rho, \rho'\in K(e)$ such that $\rho|_{J_a} = \rho'|_{J_a}$, we have
$\rho(a)=\rho'(a)\neq 0$ since $\rho(e)=\rho'(e)=1$. For any $v\in V$, we have $\{a,a,v\} \in J_a$ and hence $|\rho (a)|^2\rho(v) =
\rho\{a,a,v\}=\rho'\{a,a,v\} = |\rho'(a)|^2\rho'(v)$, giving $\rho(v)=\rho'(v)$. This proves injectivity of the map.
Finally, to show that the inverse of the map is continuous, let $(\rho_\gamma|_{J_a})$ be a net
converging to $\rho|_{J_a} \in X_a$. Then
again, for each $v\in V$, we have $\rho_\gamma (a) \rightarrow \rho(a) \neq 0$ and $|\rho_\gamma (a)|^2\rho_\gamma(v) =
\rho_\gamma \{a,a,v\} \rightarrow\rho\{a,a,v\} = |\rho(a)|^2\rho(v)$, which implies $\rho_\gamma(v) \rightarrow \rho(v)$,
proving continuity.
\end{proof}
\begin{remark}\label{xake}
The above lemma enables us to identify the pure normal state space $X_a$ with $K(e)$ and write
$X_a=\{\rho\in K: \rho(e)=1\}$.
\end{remark}
We retain the above notation in the sequel.
\section{Separably injective $C_\sigma$-spaces}
We characterize separably injective $C_\sigma$-spaces in this section. Throughout, let $V$
be a $C_\sigma$-space.
We will identify $V$, as in the previous section, with
the $C_\sigma$-space $C_\sigma(K)$, where $K= {\rm ext}\, V^*_1$ is the set of nonzero triple homomorphisms
from $V$ to $\mathbb{C}$.
\begin{lem} \label{2}
Let $V$ be separably injective. Given $a\in V$ of unit norm and the identity
$e\in V(a)^{**}$, let $K(e) =\{\rho\in K: \rho(e)=1\}$.
Then there exists an element $v_a \in V$ such that $K(e) \subset K_a$
where
$$K_a = \{ \rho \in K: \rho (v_a)=1\}$$
and $K_a$ is weak* compact in $V^*$.
\end{lem}
\begin{proof}
Since $K_a$
is weak* closed in $K \cup \{0\}$, it is weak* compact. Let
$$\phi: C_0(S_a) \rightarrow V(a) \subset V$$
be the embedding in (\ref{em}), where the triple spectrum $S_a\subset (0,1]$ can be identified, via
the evaluation map as usual, with the pure normal state space of $C_0(S_a)^{**}$.
Let $\chi_a$ be the constant function on $S_a$ with value $1$
and consider the separable subspace
$C_0(S_a)+ \mathbb{C}\chi_a$ of $\ell^\infty (S_a)$. By separable injectivity of $V$,
the embedding $\phi: C_0(S_a)\rightarrow V(a) \subset V$
admits a norm preserving extension
$\Phi :C_0(S_a) + \mathbb{C}\chi_a \rightarrow V$. Let $$v_a = \Phi(\chi_a)\in V.$$
To complete the proof, we show that $\rho(v_a) =1$ for each $\rho \in K(e)$.
We first observe that the sequence
$(r_n)$ of odd roots of the identity function $\iota_a$ in $C_0(S_a)$ converges
pointwise to the function $\chi_a$. Let $u_n = \phi(r_n)\in V(a)$.
Let $\rho \in K(e)$. Then the map $\rho \circ \phi : C_0(S_a) \rightarrow \mathbb{C}$
is a pure normal state of $C_0(S_a)^{**}$. Hence
we have $\rho (u_n) = \rho\circ \phi (r_n) \in [0,1]$ and
$\lim_n \rho(u_n)=1$.
The norm preserving extension $\Phi$ satisfies $\|\Phi(\chi_a)\| \leq 1$ and
$$\|\Phi(\chi_a) - 2u_n\| = \|\Phi(\chi_a) - 2\phi(r_n)\|
=\|\Phi(\chi_a) - 2\Phi (r_n)\| \leq \|\chi_a- 2r_n\| \leq 1.$$
It follows that
$|\rho(\Phi(\chi_a)) - 2\rho(u_n)| \leq 1$ for all $n$, which implies
$$|\rho(\Phi(\chi_a)) - 2|\leq 1. $$ Since the only point of the closed unit disc within distance $1$ of $2$ is $1$, the bound $|\rho(\Phi(\chi_a))| \leq 1$ gives $\rho(v_a)=\rho(\Phi(\chi_a))=1$.
\end{proof}
Our next task is to show that a separably injective $C_\sigma$-space $V$ is actually linearly
isometric to an abelian C*-algebra. We adopt the following strategy. Since $V$ is abelian,
it has been noted in Section \ref{sect1} that there is a composition series $(J_\lambda)_{0\leq \lambda \leq \mu}$
of closed triple ideals in $V$ such that for each ordinal $\lambda < \mu$, the
quotient $J_{\lambda +1}/J_\lambda$ is linearly isometric to
the C*-algebra $C_0(X_\lambda)$ of continuous functions vanishing at infinity on a locally compact
Hausdorff space $X_\lambda$, and $V^{**}$ is linearly isometric to the $\ell^\infty$-sum
$\bigoplus^{\ell^\infty}_{\lambda <\mu} (J_{\lambda +1}/J_\lambda)^{**} = \bigoplus^{\ell^\infty}_{\lambda <\mu} C_0(X_\lambda)^{**}.$
By the uniqueness of predual,
$V^*$ is linearly isometric to the $\ell^1$-sum
\[\bigoplus^{\ell^1}_{\lambda <\mu} (J_{\lambda +1}/J_\lambda)^* = \bigoplus^{\ell^1}_{\lambda <\mu} C_0(X_\lambda)^*.\]
Given that $V$ is separably injective, we will refine this construction
to show that $V$ is isometric to the abelian C*-algebra $\displaystyle\bigoplus^{c_0}_{\lambda< \mu} C_0( X_{\lambda})$.
Let $V$ be separably injective and let $a\in V$ be of unit norm. Consider the closed
subtriple $V(a)$ generated by $a$ in $V$ as well as the closed triple ideal $J_a$; the latter is linearly isometric
to the C*-algebra $C_0(X_a)$ as shown before, where $X_a=\{\rho\in K: \rho(e_a)=1\}$ is weak*
locally compact by Lemma \ref{ja} and Remark \ref{xake}, and $e_a$ is the identity of the von Neumann algebra $J_a^{**}$. By separable injectivity and Lemma \ref{2},
there exists $v_a\in V$ such that $\rho(v_a)=1$ for each $\rho \in X_a$.
If $J_a= V$, then we are done. Otherwise we have the $\ell^\infty$-sum \[V^{**}=J_a^{**}\oplus_\infty
(J_a^{**})^\sqr44\,\] where $(J_a^{**})^\sqr44\,$ is a nonzero weak* closed triple ideal in $V^{**}$, orthogonal to
$J_a^{**}$, that is, $J_a^{**} \sqr44\, (J_a^{**})^\sqr44\, = \{0\}$ (cf.\,\cite[Lemma 3.3.16]{book}). The quotient map
$V^{**} \rightarrow V^{**}/J_a^{**}$ identifies $(J_a^{**})^\sqr44\,$ with the quotient $V^{**}/J_a^{**}$ and we can write
\begin{equation}\label{33}
V^{**} = J_a^{**} \oplus_\infty (V^{**}/J_a^{**}) = C_0(X_a)^{**}\oplus_\infty (V/J_a)^{**}.
\end{equation}
We have the $\ell^1$-sum $V^{*} = C_0(X_a)^{*}\oplus_1 (V/J_a)^{*}$ where $J_a$ is an M-ideal in $V$.
Consider the quotient map
\[x\in V \mapsto [x]:= x +J_a \in [V]:= V/J_a \]
which maps the closed unit ball of $V$ onto the closed unit ball of $V/J_a$ (cf.\,\cite[Corollary 5.6]{ae}).
Pick $[b] = b+J_a \in [V]$ with unit norm. Let $V([b])$ and $J_{[b]}$ be respectively the
closed subtriple and triple ideal generated by $[b]$ in $[V]$. We can repeat the previous arguments in the setting
$V([b]) \subset J_{[b]} \subset [V]$ to deduce that $J_{[b]}$ is linearly isometric to some abelian
C*-algebra $C_0(X_{[b]})$ with
\[X_{[b]} =\{\theta \in{\rm ext}\, [V]^*_1: \theta (e_{[b]})=1\}\]
where $X_{[b]}$ is locally compact in the weak* topology of $[V]^*$ by Lemma \ref{ja} and
$e_{[b]}$ is the identity of $J_{[b]}^{**}\subset [V]^{**} =V^{**}/J_a^{**}$, which identifies
with a tripotent $\widetilde e_b \in (J_a^{**})^\sqr44\,$ with $e_{[b]}= \widetilde e_b + J_{a}^{**}$.
Moreover, the quotient JB*-triple $[V]= V/J_a$ is abelian and separably injective
by \cite[Proposition 4.6]{AC13}, and hence Lemma \ref{2} implies that
there exists $v_{[b]} =\widetilde v_b + J_a\in [V]= V/J_a$ such that $\theta (v_{[b]})=1$ for each
$\theta \in X_{[b]}$.
The set ${\rm ext}\, [V]^*_1$ consists of nonzero complex-valued triple homomorphisms on $[V]$,
which can be lifted to nonzero complex triple homomorphisms on $V$ via the quotient map and we have
\[{\rm ext}\, [V]^*_1=\{\bar \rho: \rho \in K, \rho(J_a)=\{0\}\}\]
where $\bar \rho (x + J_a):= \rho(x)$.
Hence we have
\[X_{[b]}=\{ \bar\rho :
\rho \in K, \rho(a)=0, \rho(\widetilde e_b)=1\} \]
and $\bar \rho \in X_{[b]}$ implies $\rho(b) = \bar \rho([b]) \neq 0$ and $\rho(\widetilde v_b)=1$.
A weak* convergent net in $[V]^*$ lifts to a weak* convergent net in $V^*$ via the quotient map.
Considering $X_{[b]}\subset C_0(X_{[b]})^*= J_{[b]}^*$ and Lemma \ref{ja}, we see that
a net $(\bar\rho_\gamma)$ in $X_{[b]}$ converges to $\bar\rho\in X_{[b]}$
in the weak* topology of $C_0(X_{[b]})^*$ if and only if
$(\rho_\gamma)$ weak* converges to $\rho$ in $K$. The homeomorphism \[\bar\rho \in X_{[b]}\mapsto \rho \in
\{\rho \in K: \rho(a)=0, \rho(\widetilde e_b)=1\}\] enables us to identify these two spaces. We note that
\begin{equation}\label{10}
\mathbb{T}X_a \cap \mathbb{T}X_{[b]} \subset \mathbb{T}X_a \cap {\rm ext}\,[V]^*_1 = \emptyset
\end{equation}
where $\rho(a) \neq 0$ for all $\rho \in X_a$.
In the $\ell^1$-sum
\begin{equation}\label{11}
V^* =J_a^* \oplus_1 (V/J_a)^*= C_0(X_a)^{*}\oplus_1 [V]^{*},
\end{equation}
each $\omega \in V^*$ admits a decomposition
$\omega = \omega^1 + \omega^2$ in $V^*$ with $\omega ^2(J_a)=\{0\}$ and $\|\omega\|= \|\omega^1|_{J_a}\|+\|\omega^2\|$,
which provides the identification of $\omega$ as an element $(\widetilde \omega^1, \widetilde\omega^2)$ in the $\ell^1$-sum,
defined by
\[\widetilde\omega^1 = \omega^1|_{J_a} \in J_a^* \quad {\rm and} \quad \widetilde\omega^2( [\cdot])
= \omega^2 (\cdot)
\in (V/J_a)^*.\]
Hence for an extreme point $\omega \in {\rm ext}\, V^*_1 =K$, we have $\widetilde\omega^1=0$
or $\widetilde\omega^2=0$.
Given a net $(\omega_\gamma)$ in $V^*$ weak* converging to a limit $\omega\in V^*$, it can be seen that
the net $(\widetilde\omega^1_\gamma)$ converges to $\widetilde \omega^1$ in the $w(J_a^*,J_a)$-topology
and the net $(\widetilde\omega^2_\gamma)$ converges to $\widetilde \omega^2$ in the
weak* topology of $(V/J_a)^*$. In particular, if the net $(\omega_\gamma)$ is in $K$
and $\omega \in K$ with $\widetilde\omega^j \neq 0$ $(j \in \{1,2\})$,
then the convergence of $(\widetilde\omega_\gamma^j)$ to $\widetilde\omega^j$ implies that $\widetilde
\omega_\gamma^j \neq 0$ eventually, and
hence $\widetilde\omega_\gamma^{j'}=0$ for $j'\neq j$ eventually.
The closed unit ball of the $\ell^1$-sum in (\ref{11}) has extreme points
$$(\mathbb{T}X_a, 0) \cup (0, {\rm ext}\,[V]^*_1):= \{(\omega,0): \omega \in \mathbb{T}X_a\} \cup
\{(0,\omega): \omega \in {\rm ext}\,[V]^*_1 \}$$
and in the identification of (\ref{10}), we have the disjoint union $K= \mathbb{T}X_a \cup {\rm ext}\,[V]^*_1$.
Given a net $(\omega_\gamma)$ in $K$ weak* converging to some $\omega \in K$,
and given either $\omega\in \mathbb{T}X_a$ or $\omega \in {\rm ext}\,[V]^*_1$, the above observation implies
that $\omega_\gamma$ belongs to the same set eventually.
Observe that $[J_b] = J_{[b]}=(J_b+J_a)/J_a$ and if $J_{[b]} \neq [V]$, we have the $\ell^\infty$-sum
\[ V^{**} = J_a^{**} \oplus_\infty ((J_b+J_a)/J_a)^{**}\oplus_\infty ([V]/J_{[b]})^{**}
= C_0(X_a)^{**} \oplus_\infty C_0(X_{[b]})^{**}\oplus_\infty ([V]/J_{[b]})^{**}\]
where the quotient JB*-triple $[[V]]:=[V]/J_{[b]}$ is separably injective, $\rho (v_a)=1$ for $\rho \in X_a$
and $\rho(\widetilde v_b)=1$ for $\rho \in X_{[b]}$.
The closed unit ball of the $\ell^1$-sum
\[
V^* = C_0(X_a)^* \oplus_1 C_0(X_{[b]})^{*}\oplus_1 ([V]/J_{[b]})^*
\]
has extreme points
\[(\mathbb{T}X_a,0,0) \cup (0,\mathbb{T}X_{[b]},0) \cup (0,0, {\rm ext}\, [[V]]^*_1) \]
and we have the disjoint union $K= {\rm ext}\, V^*_1 = \mathbb{T}X_a \cup \mathbb{T}X_{[b]} \cup {\rm ext}\, [[V]]^*_1$.
Given a net $(\omega_\gamma)$ in $K$ weak* converging to some $\omega \in K$,
if $\omega$ belongs to one of the three sets above, then repeating the arguments as before, $\omega_\gamma$ belongs to the same set eventually.
Now transfinite induction together with separable injectivity yields a composition series
$(J_\lambda)_{0\leq \lambda \leq \mu}$ of closed triple ideals in $V$, with $ v_\lambda \in V$, such that $V^{*}$ is linearly isometric to, and identifies with, the $\ell^1$-sum
\[\bigoplus^{\ell^1}_{\lambda<\mu} (J_{\lambda+1}/J_\lambda)^{*} = \bigoplus^{\ell^1}_{\lambda<\mu} C_0( X_{\lambda})^{*}\]
where $\rho( v_\lambda)=1$ for each $\rho \in X_{\lambda}$, and the pure normal state space $X_\lambda$ of
$(J_{\lambda+1}/J_\lambda)^{**}$ identifies with the set
\[\{\rho \in K: \rho(J_\lambda)=\{0\}, \rho (\widetilde e_\lambda)=1\}\]
in which $\widetilde e_\lambda$ is the identity of $(J_{\lambda+1}/J_\lambda)^{**}$.
In this identification, we have the disjoint union
\[ K= \bigcup_{\lambda<\mu} \mathbb{T}X_\lambda\]
and for a weak* convergent net $(\omega_\gamma)$ in $K$ with limit $\omega \in \mathbb{T}X_\lambda$ for
some $\lambda$, we have $(\omega_\gamma)$ in $\mathbb{T}X_\lambda$ eventually. As a consequence of Lemma \ref{ja}, in the identification $X_\lambda \subset C_0(X_\lambda)^*$ and
$X_\lambda \subset K$, the weak* convergence in $ C_0(X_\lambda)^*$ of a net $(\rho_\gamma)$ in $X_\lambda $
to $\rho \in X_\lambda$ is the same as the weak* convergence in $K$.
\begin{lem}\label{x} Given that $V$ is separably injective and in the above notation, the subset $X_\lambda \cup\{0\}$
of $K \cup\{0\}$ is weak* compact for all $\lambda <\mu$ and also, $\mathbb{T}X_\lambda$ is relatively weak* open
in $K\cup \{0\}$.
\end{lem}
\begin{proof} Let $(\rho_\gamma)$ be a net in $X_\lambda$ weak* converging to a nonzero limit
$\omega \in V^*$. Then $\omega \in K$ and by the above remark, we must have $\omega \in\mathbb{T}X_\lambda$,
say $\omega = \alpha \rho$ with $\alpha \in \mathbb{T}$ and $\rho\in X_\lambda$. Since $X_\lambda$ is contained
in the weak* compact set $\{\rho'\in K: \rho'(v_\lambda)=1\}$, it follows that
$\alpha = \alpha \rho(v_\lambda) = \lim_\gamma \rho_\gamma(v_\lambda) =1$ and $\omega = \rho \in X_\lambda$.
This proves that $X_\lambda \cup \{0\}$ is weak* closed in $K\cup\{0\}$ and hence weak* compact.
For the second assertion, let $(\omega_\gamma)$ be a net in $(K\cup\{0\})\backslash \mathbb{T}X_\lambda$ weak* converging to some $\omega \in K$. Then again $\omega \notin \mathbb{T}X_\lambda$ for otherwise, the previous remark implies that
$\omega_\gamma$ belongs to $\mathbb{T}X_\lambda$
eventually which is impossible.
\end{proof}
The above construction enables us to show that a separably injective $C_\sigma$-space is isometric to
an abelian C*-algebra.
\begin{thrm}\label{4} Let $V$ be a separably injective $C_\sigma$-space.
Then $V$ is linearly isometric to an abelian C*-algebra.
\end{thrm}
\begin{proof}
Let $K={\rm ext}\,V^*_1$
and as shown previously, we have the $\ell^1$-sum
\[ V^* =\bigoplus^{\ell^1}_{\lambda<\mu} C_0( X_{\lambda})^{*}\]
with the disjoint union $ K= \bigcup_{\lambda<\mu} \mathbb{T}X_\lambda$.
For each $\lambda <\mu$, there is an element $v_\lambda \in V$ such that $\rho(v_\lambda)=1$ for
all $\rho \in X_\lambda$.
We show that $V$ is linearly isometric
to the $c_0$-sum $\displaystyle\bigoplus^{c_0}_{\lambda< \mu} C_0( X_{\lambda})$ which would complete the proof.
We continue to identify $V$ with $C_\sigma (K)$ in (\ref{id}). By Lemma \ref{x}, each $f\in C_\sigma(K)$
restricts to a continuous function $f|_{X_\lambda} \in C_0(X_\lambda)$.
We show that the map
$$f\in V \approx C_\sigma(K) \mapsto (f|_{X_\lambda}) \in
\bigoplus^{c_0}_{\lambda< \mu} C_0( X_{\lambda})$$ is a surjective linear isometry.
To see that $(f|_{X_\lambda})$ indeed belongs to the $c_0$-sum, we
need to show $(\|f|_{X_\lambda}\|) \in c_0(\Lambda)$, where $\Lambda=[0,\mu)$. Let
$\varepsilon >0$. By Lemma \ref{x}, for each $\lambda \in \Lambda$,
the set
$\mathbb{T}X_\lambda$ is relatively weak*
open in $K\cup\{0\}$.
Since $f$ vanishes at infinity on $K$, the set
$$U_\varepsilon = \{\omega \in K : |f(\omega)| < \varepsilon\}\cup \{0\}$$
is a relatively weak* open neighbourhood of $0$ in the one-point
compactification $K\cup\{0\}$ of $K$. We have
$$K\cup\{0\} = \bigcup_{\lambda \in \Lambda} \mathbb{T}X_\lambda \cup
U_\varepsilon$$ and by weak* compactness, there are finitely many
$\lambda_1, \ldots, \lambda_n$ such that
$$K\cup\{0\} \subset \mathbb{T}X_{\lambda_1} \cup \cdots \cup \mathbb{T}X_{\lambda_n} \cup
U_\varepsilon.$$ It follows that $\|f|_{X_\lambda}\| \leq
\varepsilon$ for $\lambda \notin \{\lambda_1, \ldots, \lambda_n\}$
which proves $(\|f|_{X_\lambda}\|) \in c_0(\Lambda)$.
Since $K
=\displaystyle \cup_\lambda \mathbb{T}\,X_\lambda$ and each $f\in
C_\sigma (K)$ satisfies $f(\alpha \omega) = \alpha f(\omega)$ for $\alpha
\in \mathbb{T}$ and $\omega\in K$, it is evident that the map is a
linear isometry.
It remains to show that the map is surjective.
Let $(g_\lambda) \in \displaystyle\bigoplus^{c_0}_{\lambda \in \Lambda} C_0( X_{\lambda})$. Define a function $f: K \rightarrow \mathbb{C}$
by
$$f(\omega) = \alpha g_\lambda (\rho_\lambda) \quad {\rm for}\quad
\omega = \alpha \rho_\lambda \in \mathbb{T} X_\lambda.$$ The function
$f$ is well-defined since the sets
$\{\mathbb{T}X_\lambda\}_\lambda$ are mutually disjoint and each
$\omega\in K$ has a unique representation $\omega = \alpha \rho\in \mathbb{T}X_\lambda$: indeed,
if $\alpha\rho =\beta \sigma \in \mathbb{T}X_\lambda $, we have $\alpha = \alpha\rho(v_\lambda)
=\beta\sigma(v_\lambda)= \beta$ and hence $\rho=\sigma$, where $X_\lambda$ is contained in the weak* compact
set $\{\rho'\in K: \rho'(v_\lambda)=1\}$.
We complete the proof by showing $f\in C_\sigma (K)$. We have
readily $f(\alpha \omega) = \alpha f(\omega)$ for $\alpha \in
\mathbb{T}$ and $\omega\in K$.
For continuity, let $(\omega_\gamma)$ be a net weak* converging to
$\omega \in K$ and say, $\omega = \alpha \rho\in
\mathbb{T}X_{\lambda}$ for some $\lambda$. By a previous remark, the net $(\omega_\gamma)$
is in $\mathbb{T}X_{\lambda}$ eventually. Therefore we have $\omega_\gamma
= \alpha_\gamma \rho_\gamma$ with $\alpha_\gamma \in \mathbb{T}$ and $\rho_\gamma \in X_{\lambda}$ eventually.
It follows that eventually $\alpha_\gamma = \alpha_\gamma \rho_\gamma (v_\lambda) \rightarrow
\alpha\rho(v_\lambda) = \alpha$ and $\rho_\gamma \rightarrow \rho$. Hence
we have
\[\lim_\gamma f(\omega_\gamma)= \lim_\gamma \alpha_\gamma
g_\lambda(\rho_\gamma) = \alpha g_\lambda(\rho) = f(\omega).\]
Finally, for any $\varepsilon >0$, there are finitely many $\lambda_1, \ldots, \lambda_n$ such that
$\|g_\lambda\| < \varepsilon$ for $\lambda \notin \{\lambda_1, \ldots, \lambda_n\}$.
For each $\lambda_j$ with $j=1, \ldots,n$, there is a weak* compact set $E_j \subset X_{\lambda_j}$
such that $\{\rho \in X_{\lambda_j}: |g_{\lambda_j}(\rho)| \geq \varepsilon\} \subset E_j $.
This gives
\begin{eqnarray*}
\{\omega \in K: |f(\omega)| \geq \varepsilon\}= \cup_{j=1}^n \mathbb{T}\{
\rho\in X_{\lambda_j}: |g_{\lambda_j}(\rho)| \geq \varepsilon\}
\subset \cup_{j=1}^n \mathbb{T}E_j \subset K
\end{eqnarray*}
where the finite union $ \cup_j \mathbb{T}E_j $
is weak* compact and therefore $f \in C_0(K)$.
\end{proof}
Finally, by the characterisation of separably injective abelian C*-algebras in \cite[Theorem 3.5]{CL16},
together with Theorem \ref{4}, we conclude with the following main result of the paper.
\newpage
\begin{thrm}\label{111} Let $V$ be a $C_\sigma$-space.
The following conditions are equivalent.
\begin{itemize}
\item[(i)] $V$ is separably injective. \item[(ii)] $V$ is linearly
isometric to the Banach space $C_0(S)$ of complex continuous
functions vanishing at infinity on a substonean locally compact
Hausdorff space $S$.
\end{itemize}
\end{thrm}
\section{Introduction}
Quantum annealing (QA) \cite{kadowaki_quantum_1998,farhi_quantum_2001,finnila_quantum_1994,Brooke1999,Santoro} usually refers
to a family of analog quantum optimization algorithms that interpolate between an initial Hamiltonian whose ground state is easy to prepare and a final Hamiltonian whose ground state is the answer to the optimization problem we want to solve~\cite{Albash-Lidar:RMP}.
Typically, QA is operated adiabatically, which means that the interpolation timescale $t_f$ (also referred to as the annealing time) is much larger than the inverse of the smallest energy gap between the ground state and the first excited state that is encountered along the interpolation.
The adiabatic theorem for closed system dynamics provides a guarantee that for a sufficiently long $t_f$, the evolution reaches the ground state of the final Hamiltonian with high probability (see, e.g., Ref.~\cite{Jansen:07} for a rigorous statement).
We can also consider QA operated non-adiabatically. Here too, the goal is to end the evolution with the system in the ground state of the final Hamiltonian, but the system can undergo diabatic transitions to excited states and return to the ground state. To further complicate matters, QA can also refer to a version of open system analog quantum optimization algorithms operating at non-zero temperature~\cite{RevModPhys.80.1061}.
In this work, we consider a particular diabatic, oracular QA algorithm for solving the glued-trees problem, which we modify by the addition of noise to the oracle.
The glued-trees problem was first introduced in Ref.~\cite{childs2003exponential}, where it was shown that any classical algorithm must necessarily take exponential time to solve this problem, and a quantum walk algorithm was presented which solves the problem in polynomial time. Subsequently, a diabatic QA algorithm was presented which also solves the problem in polynomial time~\cite{Somma:2012kx}. This is so far the only explicit QA algorithm for which an exponential speedup is known. The QA evolution in the algorithm takes the system from the ground state to the first excited state, then back down to the ground state. This transition from and back to the ground state is enabled by the Hamiltonian spectrum, which
is symmetric about the middle of evolution.
Oracular models may not be practical examples of quantum speedups because it is highly non-trivial to construct an oracle in a way that does not assume that we already know the answer to the problem at hand; and, even if we could do so, oracular Hamiltonians acting on $n$ spins typically involve $n$-body operators. However, they provide insights into the mechanisms and boundaries of quantum speedups, and can sometimes serve as stepping stones to more practical, non-oracular algorithms \cite{Simon:94,Shor:94}. In this work, we address the question of whether the exponential speedup of the QA glued-trees algorithm is robust under noise. The noise models we consider are phenomenological and add a time-independent random matrix with Gaussian entries to the interpolating Hamiltonian. Such noise is more appropriately viewed as a model of control errors than as originating from a system-bath interaction~\cite{Breuer:2002}. We consider two dichotomies of noise models. One dichotomy is between noise models which induce long-range interactions among distant nodes in the graph and noise models which only induce interactions between nearest-neighbor nodes. The other dichotomy is between noise models which break a certain reflection symmetry in the spectrum and noise models which preserve the reflection symmetry.
Our noise models are motivated by three concerns. First, they offer ways to perturb features of the problem that are considered explanatorily relevant to the performance of the QA algorithm. This will become clear later, but the main idea can be illustrated as follows. The QA algorithm described in Ref.~\cite{Somma:2012kx} works reliably because the spectrum is symmetric upon reflection about the middle of the evolution. This symmetry guarantees that if the system is excited to the first excited state in the first half of the evolution due to the presence of an exponentially small energy gap, the system will then encounter the same exponentially small gap in the second half of the evolution and return back to the ground state. Therefore, a perturbation that breaks this reflection symmetry offers a control knob to explore the importance of this symmetry. Second, given that this is an oracle problem, in order to obtain physical noise models, we need to consider physical realizations of the oracle. But oracles are generically unrealizable as local Hamiltonians. Thus, in the absence of physical implementations, we assume the noise is Gaussian at the oracle level. Finally, we choose these noise models because they allow for a numerical and analytical treatment to reasonably large system sizes.
We now summarize our results. We find that for the long-range noise models, the quantum dynamics show an exponential speedup over classical algorithms that respect the glued-trees graph-structure. However, this speedup is misleading because an exponential speedup is also observed for a \emph{classical} algorithm with long-range transition terms. More precisely, the long-range noise corresponds to a classical random walk on a graph containing edges connecting \emph{any} two columns (see Fig.~\ref{fig:gt}), which allows for a sufficiently high probability for the random walker to jump directly to the EXIT vertex. Meanwhile, we find that the quantum dynamics with the long-range noise exhibit a speedup because of a perturbative lifting of the spectral gap, which turns dynamics that were diabatic in the noiseless setting to dynamics that are adiabatic in the noisy setting. We also observe that the short-range noise models lose the exponential quantum speedup over the noiseless classical algorithm, but they do show a \emph{polynomial} speedup for sufficiently small values of the noise strength~\footnote{An algorithm $A$ has an exponential (polynomial) speedup over another algorithm $B$ if the asymptotic scaling of $A$ is an exponential (polynomial) function of the asymptotic scaling of $B$.}.
The paper is organized as follows. In Sec.~\ref{sec:gtproblem}, we describe the glued-trees problem and the QA algorithm that solves it. In Sec.~\ref{sec:noisemodels}, we describe the noise models that we study. In Sec.~\ref{sec:numericalresults}, we present numerical results on how the performance of the algorithm changes under the different noise models. In Sec.~\ref{sec:explanation}, we provide an explanation for these results and we conclude in Sec.~\ref{sec:conclusion}.
\section{The Glued-Trees problem}
\label{sec:gtproblem}
Consider two identical perfect binary trees, of depth $n$, glued together as depicted in Fig.~\ref{fig:gt}. The gluing is done such that each leaf on one tree is randomly joined to two leaves on the other tree, and vice versa. This ensures that every vertex in the graph, except the two root vertices, has degree $3$. One root node is called the ENTRANCE vertex and the other root node is called the EXIT vertex. Starting from the ENTRANCE vertex, the objective is to find the EXIT vertex~\footnote{That all the vertices, except the ENTRANCE and the EXIT vertices, have equal degree is crucial to avoid the easy solution of this problem by a backtracking classical random walk. See Ref.~\cite{childs2003exponential}.}.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth,height=0.6\columnwidth]{Figure01}
\caption{(Color online) The graph structure of the glued trees problem. $j=0,1,2,\dots,n,n+1,\dots,2n+1$ indexes the columns of the graph, starting at the ENTRANCE vertex ($j=0$) and ending at the EXIT vertex ($j=2n+1$).}
\label{fig:gt}
\end{figure}
All the vertices have labels. A classical algorithm can query the oracle with the label of any vertex, and the oracle returns the labels of the vertices connected to the given vertex. A quantum algorithm can query the oracle with the label of any vertex and the oracle will return a uniform superposition over all the vertices connected to it. Thus, the oracle encodes the adjacency matrix $A$ of the glued-trees graph. Since $n$ is the depth of one of the binary trees, the total number of vertices in the glued trees is $2^{n+2}-2 = \mathcal{O}(2^n)$, which is the minimum number of distinct labels we need. Therefore, the entire graph can be labeled using $(n+2)$-length bitstrings. But this labeling system is insufficient to make the problem hard for classical algorithms. Rather, to prove classical hardness, there need to be exponentially more labels than vertices \cite{childs2003exponential}. It turns out to be sufficient to choose the labels randomly from the set of $2n$-length bitstrings.
Under this labeling scheme, Ref.~\cite{childs2003exponential} showed that any classical algorithm that makes fewer than $2^{n/6}$ queries to the oracle, will not be able to find the EXIT vertex with probability greater than $4\times 2^{-n/6}$. This means that it will at least take a time $\Omega(2^{n/3})$ to find the EXIT vertex, because in order to boost the success probability we must repeat the algorithm $2^{n/6}$ times.
On the other hand, it was shown in Ref.~\cite{childs2003exponential} that a quantum walk algorithm which starts from the ENTRANCE vertex and evolves under the Hamiltonian equal to the adjacency matrix of the graph can find the EXIT vertex with probability $\mathcal{O}(\frac{1}{n})$ if the algorithm is run for times chosen uniformly at random in the interval $[0, \mathcal{O}(n^4)]$. This means we can get a probability of success arbitrarily close to $1$ by simply repeating the algorithm $\mathcal{O}(n)$ times, and therefore the algorithm will take at most $\mathcal{O}(n^5)$ time. This yields an exponential speedup over the classical algorithm~\footnote{The labeling scheme used for the vertices does not affect the performance of the quantum algorithm.}.
\subsection{The quantum annealing algorithm}
We now turn to the QA algorithm for the glued-trees problem presented in Ref.~\cite{Somma:2012kx}. The initial Hamiltonian is taken to be the projector onto the ENTRANCE vertex: $H_0 = -\ket{\mathrm{ENTRANCE}}\bra{\mathrm{ENTRANCE}}$, such that the initial state of the system coincides with the ground state of the Hamiltonian. The final Hamiltonian is the projector onto the EXIT vertex: $H_1 = -\ket{\mathrm{EXIT}}\bra{\mathrm{EXIT}}$. We then interpolate between these projectors while turning on and off the adjacency matrix $A$:
\begin{equation} \label{eq:QAHam}
H(s) = (1-s) \alpha H_0 - s(1-s) A + s \alpha H_1,
\end{equation}
with $s = t/{t_f}\in [0,1]$, where $t$ is the physical time and $t_f$ is the total evolution time. Also, $0 < \alpha < \frac{1}{2}$ is a constant. We set $\hbar = 1$ throughout. In Ref.~\cite{Somma:2012kx}, it was shown that if $t_f = \mathcal{O}(n^6)$, then the above interpolation ends with sufficiently high probability in the ground state of $H_1$, the EXIT vertex.
With the initial state being $\ket{\mathrm{ENTRANCE}}$, the evolution associated with the Hamiltonian in Eq.~\eqref{eq:QAHam} confines the system to the subspace spanned by the \emph{column basis}, whose elements are defined as
\begin{equation} \label{eq:colbasisdef}
\ket{\text{col}_j} \equiv \frac{1}{\sqrt{N_j}} \sum_{a \in \text{column } j} \ket{a},
\end{equation}
where $\ket{a}$ denotes the state associated with a vertex in column $j$ with label $a$ and
\begin{equation}
N_j = \begin{cases} 2^j \ , & 0 \leq j \leq n \\
2^{2n+1-j} \ , & n+1 \leq j \leq 2n+1
\end{cases}
\end{equation}
is the number of vertices in column $j$ (there are $2n+2$ columns in total). It is straightforward to show (see Appendix~\ref{app:colbasis}) that in the column basis, the matrix elements of the Hamiltonian [Eq.~\eqref{eq:QAHam}] are
\bes \label{eq:colbasisH}
\begin{align}
H_{0,0} &= -\alpha (1-s) \label{eq:colENT} \\
H_{j,j+1} = H_{j+1,j} &= -s(1-s) \text{ for } j \neq n \label{eq:coladj}\\
H_{n,n+1} = H_{n+1,n} &= -\sqrt{2} s (1-s) \label{eq:colglue} \\
H_{2n+1,2n+1} &= -\alpha s. \label{eq:colEXIT}
\end{align}
\ees
\subsubsection{Reflection symmetry}
This Hamiltonian is invariant under the composition of two transformations, which together we call the \emph{reflection symmetry}. The first transformation is the reflection of the graph around the central glue. In the column basis, this is represented by the permutation matrix $P$ which has $1$'s on the anti-diagonal and $0$'s everywhere else,
\begin{equation}
P_{ij} = \delta_{i,2n+1-j}, \quad i,j \in \{0,1,2,\dots,(2n+1)\}.
\end{equation}
The second transformation is $s \mapsto (1-s)$: the reflection of the interpolation parameter $s$ around $s=0.5$.
The reflection symmetry is the invariance of the Hamiltonian [Eq.~\eqref{eq:QAHam}] under the composition of these two transformations:
\begin{equation} \label{eq:refsymm}
H(s) = PH(1-s)P.
\end{equation}
One consequence of the reflection symmetry is that the spectrum of the Hamiltonian is symmetric under the second transformation $s \mapsto (1-s)$ alone. This is because Eq.~\eqref{eq:refsymm} implies that $s \mapsto (1-s)$ corresponds to effectively conjugating the Hamiltonian by $P$, and since $P$ is unitary, the spectrum is unchanged. Therefore,
\begin{equation} \label{eq:eigvalsym}
E_k(s) = E_k(1-s) \text{ for } k \in \{0,1,2,\dots,(2n+1)\}.
\end{equation}
Another consequence of the symmetry is that if $\ket{\phi_k(s)}$ is the $k$-th eigenstate of $H(s)$, then
\bes
\begin{align}
H(s)\ket{\phi_k(s)} &= E_k(s) \ket{\phi_k(s)} \\
\implies PH(s)P^\dagger P\ket{\phi_k(s)} &= E_k(s) P \ket{\phi_k(s)} \\
\implies H(1-s) (P\ket{\phi_k(s)}) &= E_k(s) (P \ket{\phi_k(s)}) \\
\implies H(1-s) (P\ket{\phi_k(s)}) &= E_k(1-s) (P \ket{\phi_k(s)}) \\
\implies \ket{\phi_k(1-s)} &= P\ket{\phi_k(s)}. \label{eq:eigvecsym}
\end{align}
\ees
Together, Eqs.~\eqref{eq:eigvalsym} and~\eqref{eq:eigvecsym} imply that $H(1-s)$ has the same eigenvalues as $H(s)$ and that the eigenvectors of $H(1-s)$ are the reversed-in-column-basis eigenvectors of $H(s)$.
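For concreteness, the following minimal Python sketch (ours, not from Ref.~\cite{Somma:2012kx}; the function name \texttt{glued\_trees\_H} and the chosen values of $n$, $s$ and $\alpha$ are illustrative) builds $H(s)$ of Eq.~\eqref{eq:QAHam} in the column basis from the matrix elements of Eq.~\eqref{eq:colbasisH}, and numerically checks the reflection symmetry of Eq.~\eqref{eq:refsymm}.
\begin{verbatim}
import numpy as np

def glued_trees_H(n, s, alpha=1/np.sqrt(8)):
    # Column-basis Hamiltonian: a (2n+2) x (2n+2) real symmetric matrix.
    dim = 2 * n + 2
    H = np.zeros((dim, dim))
    H[0, 0] = -alpha * (1 - s)          # ENTRANCE projector term
    H[-1, -1] = -alpha * s              # EXIT projector term
    for j in range(dim - 1):            # adjacency term; glue bond at j = n
        hop = np.sqrt(2) if j == n else 1.0
        H[j, j + 1] = H[j + 1, j] = -hop * s * (1 - s)
    return H

# Reflection symmetry check: H(s) = P H(1-s) P
n, s = 6, 0.3
P = np.fliplr(np.eye(2 * n + 2))        # anti-diagonal permutation matrix
assert np.allclose(glued_trees_H(n, s), P @ glued_trees_H(n, 1 - s) @ P)
\end{verbatim}
Diagonalizing this matrix for $n=6$ and $\alpha = 1/\sqrt{8}$ should reproduce the three lowest eigenvalue curves shown in Fig.~\ref{fig:spectrum}.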
\subsection{Dynamics}\label{sec:gtqadynamics}
As shown in Ref.~\cite{Somma:2012kx}, the key features of the Hamiltonian that result in polynomial-time performance are the scalings of the avoided level-crossings in the spectrum, depicted in Fig.~\ref{fig:spectrum}. The evolution that solves the problem in polynomial time is as follows. The system starts in the ground state of the Hamiltonian at $s=0$ (i.e., the ENTRANCE vertex). In the optimal evolution, the system diabatically transitions to the first excited state at the first exponentially small gap (between $s_1$ and $s_2$). Then, it adiabatically follows the first excited state and does not transition to the second excited state because of the polynomially large gap between the first and second excited states. Finally, the system returns back down to the ground state through the second exponentially small gap (between $s_3$ and $s_4$). At the end of the annealing evolution described above, we get the EXIT vertex with high probability, as long as the evolution time $t_f$ is chosen to scale as $\mathcal{O}(n^6)$.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\columnwidth]{Figure02}
\caption{(Color online) The smallest three eigenvalues of the Hamiltonian for the case of $n=6$ and $\alpha = 1/\sqrt{8}$. We choose a small $n$ so that the exponentially small gaps are visible. (This figure is reproduced from Ref.~\cite{Albash-Lidar:RMP}.)}
\label{fig:spectrum}
\end{figure}
Since the scaling $\mathcal{O}(n^6)$ is an analytically derived upper bound, we expect and find the scaling obtained via numerical simulations to be better. To see this, let us define the threshold annealing time to be the minimum time required for the success probability (where success is defined as reaching the EXIT vertex) to reach a threshold probability $p_\mathrm{Th}$:
\begin{equation} \label{eqt:tf}
t_f^\mathrm{Th}(n) \equiv \min \{t_f : p_\mathrm{GS}(t_f) \geq p_\mathrm{Th} \},
\end{equation}
(henceforth, we choose $p_\mathrm{Th} = 0.95$). In Fig.~\ref{fig:gtnoiselessqa}, we plot the scaling of $t_f^\mathrm{Th}(n)$ for the QA algorithm for the glued trees problem. The scaling is $\mathcal{O}(n^{2.86})$, which is significantly faster than $\mathcal{O}(n^6)$.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\columnwidth]{Figure03}
\caption{The minimum time required to reach a success probability of $p_\mathrm{Th} = 0.95$ as a function of problem size $n$ for the noiseless quantum annealing glued trees algorithm. The solid line corresponds to a scaling of $n^{2.8613}$.}
\label{fig:gtnoiselessqa}
\end{figure}
It is instructive to examine the $p_\mathrm{GS}(t_f)$ function. This is exhibited for the case $n=10$ in Fig.~\ref{fig:pgstfqanoiseless}. For $n=10$, the threshold timescale is $t_f^\mathrm{Th}(10) = 1690$.
This corresponds to the second peak in the oscillations. In general, the QA algorithm operates at annealing times shorter than the adiabatic timescale, exploiting one of the peaks of the oscillatory $p_\mathrm{GS}(t_f)$ curve rather than waiting for adiabaticity to be achieved.
It is also instructive to examine what the dynamics look like at different evolution timescales. We examine the populations in the instantaneous ground state, first excited state, and the second excited state as a function of the interpolation parameter $s$ for $n=4,20$ in Fig.~\ref{fig:gspopn4}. For $n=4$, at relatively small annealing times the evolution is close to optimal: the population starts off in the ground state, enters the first excited state at the first exponentially small gap, and returns to the ground state at the second exponentially small gap. At longer annealing times the dynamics is closer to adiabatic, with some interesting fluctuations that arise around the exponentially small gaps. For $n=20$, at the threshold annealing time, the evolution is optimal and exhibits sharp transitions.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figure04}
\caption{Probability of finding the ground state at the end of evolution as a function of $t_f$ for the noiseless glued-trees quantum anneal at problem size $n=10$. Strong oscillations are observed, indicating that the optimal time at which we should terminate the algorithm is sub-adiabatic.}
\label{fig:pgstfqanoiseless}
\end{figure}
\begin{figure*}[t]
\subfigure[]{\includegraphics[width = \columnwidth]{Figure05a}\label{fig:gtqatf250}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure05b} \label{fig:gtqatf1000}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure05c}\label{fig:gtqatf4000}}
\subfigure[]{\includegraphics[width=\columnwidth]{Figure05d}\label{fig:noiselesspopsn20}}
\caption{(Color online) Populations in the instantaneous ground state, first excited state, and second excited state as a function of the anneal parameter $s$ for the glued-trees problem without any added noise. (a) $n=4$, $t_f = 268$; (b) $n=4$, $t_f = 1000$; (c) $n=4$, $t_f = 4000$; (d) $n=20$ at $t_f = t_f^\mathrm{Th}(20) = 12125$ [see Eq.~\eqref{eqt:tf}].
}
\label{fig:gspopn4}
\end{figure*}
\begin{figure*}[t]
\subfigure[]{\includegraphics[width = \columnwidth]{Figure06a}\label{fig:LSmanyeps}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure06b}\label{fig:LAmanyeps}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure06c} \label{fig:SSmanyeps}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure06d}\label{fig:SAmanyeps}}
\caption{(Color online) The median success probability, $p_\mathrm{GS}$, at the end of an evolution of duration $t_f^\mathrm{Th}(n)$ as a function of $n$ for $\epsilon = 0, 10^{-3}, 10^{-2}, 5\times 10^{-2}, 10^{-1}, 5\times 10^{-1}$. $\epsilon=0$ is the noiseless evolution ($\epsilon$ increases from top to bottom at $n=1$). $t_f^\mathrm{Th}(n)$ is chosen so that the success probability for the noiseless probability is just above $0.95$. The error bars are obtained by bootstrap sampling over $300$ realizations of the noise. (a) The long-range symmetric noise model. (b) The long-range asymmetric noise model. (c) The short-range symmetric noise model. (d) The short-range asymmetric noise model.}
\label{fig:mdnpgsvsnmanyeps}
\end{figure*}
\section{Noise}\label{sec:noisemodels}
The noise models we consider here are phenomenological. They ignore the details of how the noise may be realized and instead posit some general properties that noisy systems might have. This method is especially well-suited to oracle algorithms. This is because oracles are typically very difficult to realize physically. Indeed, for the glued-trees problem we can show that the terms $H_0, H_1$, and $A$ in Eq.~\eqref{eq:QAHam} all need to be highly nonlocal and require experimentally difficult-to-engineer interactions (see Appendix~\ref{app:qubitgt}). Such oracle-level noise models are studied, e.g., in Refs.~\cite{Shenvi:03,temme2014runtime,cross2015quantum}, for circuit algorithms, and for some quantum walk algorithms (see~\cite{kendon2007decoherence} for a review), including the quantum walk version of the glued-trees algorithm~\cite{lockhart2014glued}.
Our noise model is inspired by one due to Roland and Cerf~\cite{PhysRevA.71.032330}. The noise model they consider is a time-dependent random-matrix added to the Hamiltonian, with the entries of the random matrix being time-dependent random variables distributed as white noise with a cut-off. They show that this noise does not significantly affect the performance of the adiabatic algorithm as long as the cut-off frequency of the white noise is either much smaller or much greater than the energy scale of the noiseless Hamiltonian. They further explore this noise model in detail for the adiabatic Grover algorithm~\cite{Roland:2002ul}.
The noise models we study are as follows. We add a time-independent random matrix $h$ to our Hamiltonian $H(s)$ [Eq.~\eqref{eq:QAHam}]. We write our noisy Hamiltonian $\tilde{H}(s)$ as
\begin{equation} \label{eqt:QAErrorHam}
\tilde{H}(s) = H(s) + \epsilon h \ .
\end{equation}
We restrict the noise matrix $h$ to be inside the subspace spanned by the column basis, i.e., $h$ has the same dimensions as $H(s)$ when written in the column basis. The four noise models we consider correspond to different ways of choosing the random matrix $h$.
\subsection{Four noise models}
\label{sec:fournoisemodels}
We construct four noise models by selecting one branch in each of two dichotomies. The first dichotomy is between long-range and short-range noise models. The second dichotomy is between reflection-symmetric and reflection-asymmetric noise models.
First we consider a noise model in which $h$ is chosen from the Gaussian Orthogonal Ensemble (GOE). This means that in any given basis, and in particular the column basis, the matrix elements of $h$ are distributed as
\begin{equation} \label{eq:goedef}
h_{ij} = h_{ji} = \begin{cases} \mathcal{N}(0,1), \quad i \neq j \\
\mathcal{N}(0,2), \quad i = j
\end{cases} \ .
\end{equation}
(Here Gaussian random variables are denoted as $\mathcal{N}(\mu,\sigma^2)$, with $\mu$ being the mean of the Gaussian and $\sigma$ the standard deviation.)
A standard way of generating such a matrix is to start with a matrix $M$ whose entries are independent $\mathcal{N}(0,1)$ random variables (and therefore non-Hermitian) and setting
\begin{equation} \label{eq:goegen}
h = \frac{M + M^T}{\sqrt{2}}.
\end{equation}
That Eq.~\eqref{eq:goedef} is obtained from Eq.~\eqref{eq:goegen} can be seen from the fact that the addition of two independent Gaussian random variables obeys
\begin{equation}
\label{eq:Gauss-sum-diff}
\mathcal{N}(\mu_1,\sigma_1^2) \pm \mathcal{N}(\mu_2,\sigma_2^2) = \mathcal{N}(\mu_1 \pm \mu_2, \sigma_1^2 + \sigma_2^2),
\end{equation}
combined with $a \mathcal{N}(\mu,\sigma^2) = \mathcal{N}\left(\mu,(a\sigma)^2\right)$ (see Appendix~\ref{app:GOE} for more details about the GOE).
We call this noise model---i.e., the model generated by adding a time-independent random matrix chosen from the GOE---the \emph{long-range asymmetric} (LA) noise model and denote $h$ in this case by $h_\mathrm{LA}$. \emph{Long-range} refers to the fact that $h$ contains matrix elements which connect all columns to all columns, and \emph{asymmetric} refers to $h$ breaking the reflection symmetry of the Hamiltonian. To see that $h$ breaks the reflection symmetry, notice that $h$ is not invariant under conjugation with the permutation matrix $P$, which, together with the fact that $h$ is time-independent, yields that $\tilde{H}$ is not reflection symmetric.
Next, we consider what we call the \emph{long-range symmetric} (LS) noise model. This noise model preserves the reflection symmetry of the problem. To generate this noise model, we first pick a matrix $\omega$ from the GOE. Then we reflection-symmetrize it:
\begin{equation}
h_\mathrm{LS} \equiv \frac{\omega + P \omega P}{\sqrt{2}}.
\end{equation}
Now $h_\mathrm{LS}$ is manifestly reflection symmetric and therefore so is $\tilde{H} = H + \epsilon h_\mathrm{LS}$. Note that $h_\mathrm{LS}$ still contains terms connecting distant columns.
From the above definition, we can check that the matrix elements of $h_\mathrm{LS}$ are distributed as
\begin{align} \label{eq:hLSdef}
&(h_\mathrm{LS})_{ij} = (h_\mathrm{LS})_{ji} \\ \nonumber
&= (h_\mathrm{LS})_{(2n+1)-i,(2n+1)-j} = (h_\mathrm{LS})_{(2n+1)-j,(2n+1)-i} \\ \nonumber
&= \begin{cases} \mathcal{N}(0,1), \quad i \neq j \\
\mathcal{N}(0,2), \quad i = j.
\end{cases}
\end{align}
The reflection symmetry of $h_\mathrm{LS}$ implies that the spectrum of $\tilde{H}$ is reflection symmetric as well in this case.
We next turn to the short-range noise models. These noise-models only connect neighboring columns in the glued-trees graph. We examine both \emph{short-range asymmetric} (SA) and \emph{short-range symmetric} (SS) noise models. In the SA model, the Hamiltonian perturbation is given by
\begin{equation}
(h_\mathrm{SA})_{ij} = (h_\mathrm{SA})_{ji} = \begin{cases} \mathcal{N}(0,1), & \abs{i-j}=1 \\
0, & \mathrm{otherwise}
\end{cases}
\end{equation}
In the SS model, we have
\begin{equation}
h_\mathrm{SS} = \frac{h_\mathrm{SA} + Ph_\mathrm{SA}P}{\sqrt{2}},
\end{equation}
which preserves the reflection symmetry of the Hamiltonian.
We remark that the parameter controlling the strength of the noise, $\epsilon$, can be absorbed into the standard deviations of the Gaussian random variables: e.g., $\epsilon \mathcal{N}(0,1) = \mathcal{N}(0,\epsilon^2)$. Therefore, the larger the noise, the greater the spread of the Gaussians according to which the matrix elements are drawn.
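For reference, one realization of each of the four noise matrices can be drawn directly from the definitions above; the following sketch (ours; the generator and variable names are illustrative) implements Eq.~\eqref{eq:goegen} together with the reflection symmetrizations defining $h_\mathrm{LS}$ and $h_\mathrm{SS}$, all in the column basis.
\begin{verbatim}
import numpy as np

def noise_matrices(n, rng=np.random.default_rng()):
    # One realization of (h_LA, h_LS, h_SA, h_SS) in the column basis.
    dim = 2 * n + 2
    P = np.fliplr(np.eye(dim))               # reflection permutation matrix

    M = rng.standard_normal((dim, dim))
    h_LA = (M + M.T) / np.sqrt(2)            # GOE matrix (long-range asymmetric)

    w = rng.standard_normal((dim, dim))
    w = (w + w.T) / np.sqrt(2)               # independent GOE matrix omega
    h_LS = (w + P @ w @ P) / np.sqrt(2)      # reflection-symmetrized GOE

    band = rng.standard_normal(dim - 1)      # nearest-neighbor-column couplings
    h_SA = np.diag(band, 1) + np.diag(band, -1)
    h_SS = (h_SA + P @ h_SA @ P) / np.sqrt(2)

    return h_LA, h_LS, h_SA, h_SS

# The noisy Hamiltonian of Eq. (QAErrorHam) is then H(s) + eps * h.
\end{verbatim}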
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figure07}
\caption{(Color online) Median success probability vs. the log (base 10) of the strength of the noise for the LS model for several, larger values of the problem size $n$. There is a fall, then a rise, and then again a fall in this success probability as a function of $\epsilon$ for all values of $n$ displayed.}
\label{fig:mdnpgsvsepsmanyn}
\end{figure}
\begin{figure*}[t]
\subfigure[]{\includegraphics[width = \columnwidth]{Figure08a}\label{fig:polyfitLSmanyeps}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure08b}\label{fig:polyfitLAmanyeps}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure08c} \label{fig:expfitSSmanyeps}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure08d}\label{fig:expfitSAmanyeps}}
\caption{(Color online) Base-2 logarithm of the median ground state success probability, as a function of problem size $n$, or the logarithm of the problem size $\log_2 n$, at different noise levels $\epsilon$. Polynomial fits of the form $\mathcal{O}(n^\alpha)$ are performed for the long-range models. Exponential fits of the form $\mathcal{O}(2^{\alpha n})$ are performed for the short range models. This is done for $\epsilon \in \{10^{-3},10^{-2}, 5\times 10^{-2}, 10^{-1}, 5\times 10^{-1}\}$ for the short-range models ($\epsilon$ increases from top to bottom at $n=1$). For the long-range models, the set of values of $\epsilon$ is the same as for the short-range models, except that we omit the $\epsilon = 10^{-2}$ case since it shows anomalous behavior (i.e., a rise and a fall) which doesn't fit a decay. The scaling coefficient $\alpha$ as a function of the noise $\epsilon$ is shown in Fig.~\ref{fig:scalingcoeffsvseps}. (a) The long-range symmetric noise model. (b) The long-range asymmetric noise model. (c) The short-range symmetric noise model. (d) The short-range asymmetric noise model.}
\label{fig:expfitmdnpgsvsnsymmanyeps}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figure09}
\caption{(Color online) The exponential scaling coefficient $\alpha$ as a function of the base-10 logarithm of the strength of the noise $\epsilon$ for the short-range noise models. (The long-range noise models do not show an exponential scaling.) The scaling coefficient $\alpha$ is obtained by performing an exponential fit of the form $\mathcal{O}(2^{\alpha n})$, to the median success probability, $p_\mathrm{GS}(t_f^\mathrm{Th})$ vs. $n$ curves [shown in Figs.~\ref{fig:expfitSSmanyeps} and~\ref{fig:expfitSAmanyeps}]. The dashed horizontal line at $\alpha = -1/3$ represents the scaling coefficient below which the speedup, over the best possible classical algorithm to solve the glued-trees problem, is lost. Thus, the short-range models lose the speedup for $\epsilon \gtrsim 10^{-1.75}$. The error-bars are 95\% confidence intervals. While some error-bars are large due to the fits being performed on a small number of data points, the trend in the data is clear.}
\label{fig:scalingcoeffsvseps}
\end{figure}
\begin{figure*}[t]
\subfigure[]{\includegraphics[width = \columnwidth]{Figure10a}\label{fig:LSgaps}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure10b}\label{fig:LAgaps}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure10c} \label{fig:SSgaps}}
\subfigure[]{\includegraphics[width = \columnwidth]{Figure10d}\label{fig:SAgaps}}
\caption{(Color online) The scaling of the median minimum gap (log-scale) as a function of problem size $n$ for the four noise models with $\epsilon \in \{10^{-4},10^{-3},10^{-2}\}$. The solid blue line represents the noiseless algorithm ($\epsilon = 0$), which has an exponentially closing minimum gap. (a) Long-range symmetric. (b) Long-range asymmetric. (c) Short-range symmetric. (d) Short-range asymmetric. The long-range models display a constant gap with problem size, while the short-range models exhibit an exponentially decreasing gap with problem size. This is consistent with the perturbative argument given in Eq.~\eqref{eq:gap-LA}.}
\label{fig:noisygapsvsn}
\end{figure*}
\section{Noisy glued-trees: Results from numerical simulations}\label{sec:numericalresults}
We simulate the Schr{\"o}dinger evolution
\begin{equation}
i \hbar \frac{d}{ds} \ket{\tilde{\psi}_{t_f}(s)} = t_f \tilde{H}(s) \ket{\tilde{\psi}_{t_f}(s)}
\end{equation}
using the different noise models and calculate the probability of finding the EXIT vertex at the end of the anneal, i.e., the success probability:
\begin{equation}
p_\mathrm{GS}[t_f^\mathrm{Th}(n)] \equiv \abs{\braket{\phi_0(1) | \tilde{\psi}_{t_f^\mathrm{Th}(n)}(1)}}^2 .
\end{equation}
We choose the annealing timescale $t_f$ to be equal to the threshold timescale of the noiseless algorithm, i.e., the timescales depicted in Fig.~\ref{fig:gtnoiselessqa}. We do this because it is natural to imagine that one operates the algorithm in the regime in which the noiseless algorithm succeeds. Note that this probability is a random variable whose value depends on the specific noise realization, and we focus on the typical (median) value of this random variable. The median $p_\mathrm{GS}$ will depend on (a) the noise model, (b) the strength of the noise $\epsilon$, and (c) the problem size $n$.
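Schematically, and only as an illustration of the procedure (the production code and the reduction to the column basis are described earlier in the paper), the computation of a single noisy success probability amounts to the following Python sketch, assuming the noisy Hamiltonian $\tilde{H}(s)$ is available as a callable returning a matrix in the reduced column basis:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import eigh

def success_probability(H_tilde, t_f, psi0):
    """Integrate i d|psi>/ds = t_f H_tilde(s) |psi> (hbar = 1) from s = 0
    to s = 1 and return the overlap squared with the ground state of
    H_tilde(1); psi0 is the initial state at s = 0."""
    rhs = lambda s, psi: -1j * t_f * (H_tilde(s) @ psi)
    sol = solve_ivp(rhs, (0.0, 1.0), np.asarray(psi0, dtype=complex),
                    rtol=1e-9, atol=1e-12)
    psi_final = sol.y[:, -1]
    vals, vecs = eigh(H_tilde(1.0))
    phi0 = vecs[:, 0]                # ground state at the end of the anneal
    return float(abs(np.vdot(phi0, psi_final)) ** 2)
\end{verbatim}
The median success probability is then obtained by repeating this evaluation over many independent draws of the noise matrix.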
In Fig.~\ref{fig:mdnpgsvsnmanyeps}, we plot the median success probability for the four different noise models specified in Sec.~\ref{sec:noisemodels} as a function of problem size $n$, equal to the depth of one of the two binary trees.
The first observation is that the median $p_\mathrm{GS}$ behavior is not significantly different between the symmetric and asymmetric noise models, for both the long-range and short-range variants, although the symmetric noise does slightly outperform the asymmetric case. This suggests that while the symmetry of the spectrum is an important aspect of the performance of the noiseless algorithm (by allowing transitions from and back to the ground state), the noisy algorithm is somewhat robust to reflection symmetry-breaking.
Next, a remarkable feature of Fig.~\ref{fig:mdnpgsvsnmanyeps} is that for large enough $n$ ($n \gtrsim 13$), the success probability is non-monotonic with respect to the strength of the noise $\epsilon$. This can be seen more clearly in Fig.~\ref{fig:mdnpgsvsepsmanyn}: as expected, there is a fall in the success probability from $\epsilon=0$ to $\epsilon=10^{-3}$, but, surprisingly, there is a rise from $\epsilon=10^{-3}$ to $\epsilon=10^{-2}$. For higher values of $\epsilon$, the probability falls off, again as expected. We explain the counterintuitive rise in the next section.
We also examine, for a given noise model and a given noise strength, whether the quantum speedup of the noiseless algorithm is retained. Since $2^{n/3}$ is the best possible time scaling that a noiseless classical random walk can achieve (see Sec.~\ref{sec:gtproblem} and Ref.~\cite{childs2003exponential}),
the $p_\mathrm{GS}(t_f^\mathrm{Th})$ vs. $n$ scaling should decline no faster than $2^{-n/3}$ for a quantum speedup to persist.
But it might not be an exponential speedup: if the success probability for the noisy quantum algorithm declines as an exponential function that decreases more slowly than $2^{-{n/3}}$, then the speedup over the classical algorithm will instead be a polynomial speedup.
We thus perform fits to the $p_\mathrm{GS}(t_f^\mathrm{Th})$ vs. $n$ curves, displayed in Fig.~\ref{fig:expfitmdnpgsvsnsymmanyeps}. For the long-range models, we exclude small values of $n$ because the asymptotic behavior starts to show only at larger values of $n$. For the short-range models, we only perform the fits on small values of $n$ because exponential decay causes the values of the success probability to be very small for larger values of $n$. The long-range models are well fit by polynomials of the form $\mathcal{O}(n^\alpha)$. This means that the long-range models have an exponential speedup over the noiseless classical algorithm. (We argue below why in fact referring to this as a quantum speedup is misleading.) On the other hand, the short-range models are well fit by exponentials of the form $\mathcal{O}(2^{\alpha n})$, which means that they do not exhibit even a polynomial speedup over the noiseless classical algorithm if $\alpha < -1/3$ (as discussed in the previous paragraph). In Fig.~\ref{fig:scalingcoeffsvseps}, we plot the scaling coefficient $\alpha$ as a function of the noise strength $\epsilon$ for the short-range noise models. As is apparent from Fig.~\ref{fig:scalingcoeffsvseps}, the quantum speedup is lost for the short-range models for $\epsilon \gtrsim 10^{-1.75}\approx 0.018$.
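The fits themselves are elementary; the following Python sketch (with illustrative, not measured, numbers) shows how a scaling coefficient $\alpha$ of this kind can be extracted:
\begin{verbatim}
import numpy as np

def scaling_coefficient(n_vals, p_median, model):
    """Least-squares estimate of alpha, assuming p ~ 2^(alpha n)
    (model='exponential', short-range noise) or p ~ n^alpha
    (model='polynomial', long-range noise)."""
    y = np.log2(np.asarray(p_median, dtype=float))
    x = np.asarray(n_vals, dtype=float)
    if model == "polynomial":
        x = np.log2(x)
    return np.polyfit(x, y, 1)[0]

# illustrative numbers only: a 2^(-n/2) decay gives alpha = -0.5 < -1/3,
# so the speedup criterion discussed above would not be met
n = np.arange(4, 12)
print(scaling_coefficient(n, 2.0 ** (-0.5 * n), "exponential"))
\end{verbatim}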
Let us now explain why the exponential speedup exhibited by the long-range noise models is misleading. For this speedup to count as a genuine quantum speedup, we must compare the quantum algorithm with an appropriate classical algorithm, so that we do not bias our analysis in favor of the quantum algorithm. How do we construct the appropriate classical algorithm in this case? Recall that in the quantum case, the long-range noise Hamiltonians have $\mathcal{N}(0,1)$-distributed off-diagonal terms. These terms connect distant columns of the graph. Thus, these models ought to be compared with classical random walks which have $\mathcal{N}(0,1)$-distributed transition probabilities between distant nodes of the graph. This needs to be normalized by a factor that is $\mathcal{O}(n)$ since there are $2n+2$ columns in total. Therefore, at any given time-step, in whichever column the random walker is, there is an $\mathcal{O}(1/n)$ probability that the random walker will transition directly to the EXIT vertex in the next time-step. Hence such a classical random walk will land at the EXIT vertex in $\mathcal{O}(n)$ time, and compared to the appropriate classical algorithm, the quantum algorithm with the long-range noise does not have an exponential speedup.
\begin{figure*}[t]
\centering
\subfigure[]{\includegraphics[width=0.66\columnwidth]{Figure11a}\label{fig:typicalspectrum}}
\subfigure[]{\includegraphics[width=0.66\columnwidth]{Figure11b}\label{fig:10-3dynamics}}
\subfigure[]{\includegraphics[width=0.66\columnwidth]{Figure11c}\label{fig:10-2dynamics}}
\caption{(Color online) Two random instances of the long-range symmetric noise for $n=20$ and $t_f = 12125$, with $\epsilon=10^{-3}$ and $\epsilon=10^{-2}$. (a) The spectral gap between the ground state and the first excited state for the two noisy instances and the noiseless case ($\epsilon = 0$), around $s = s^*$, i.e., near the location of the first exponentially closing gap in the noiseless problem. Both noisy instances increase the spectral gap from its value in the noiseless problem, with the increase being larger in the $\epsilon = 10^{-2}$ case. (b) The populations in the lowest three eigenstates of the noisy Hamiltonian as a function of the anneal parameter $s$, for $\epsilon = 10^{-3}$. The diabatic transitions do not happen as cleanly as in the noiseless case [shown in Fig.~\ref{fig:noiselesspopsn20}]. (c) As in (b), for $\epsilon=10^{-2}$. The dynamics are very close to adiabatic. }
\label{fig:popdynamicssymnoise}
\end{figure*}
\section{Noise-induced adiabaticity}\label{sec:explanation}
Two things stand out in the results presented in the previous section. The first is that for the long-range models, there is a rise in success probability from $\epsilon \approx 10^{-3}$ to $\epsilon \approx 10^{-2}$. The second is that the long-range models have an exponential speedup over the noiseless classical algorithm for a wide range of noise strengths, when compared to the short-range models. In this section, we will provide explanations for these two observations.
One phenomenon helps explain both these behaviors: In the long-range models, the noise typically leads to a larger spectral gap. To see this, consider what happens to the spectral gap at first order in perturbation theory when we add long-range noise. Consider first the asymmetric case. Let $E_0^{(1)}(s)$ and $E_1^{(1)}(s)$ be the first-order corrections to the ground-state and first-excited-state energies, respectively. We know that
\bes
\begin{align}
E_0^{(1)}(s) &= \braket{\phi_0 (s)| h_\mathrm{LA}(s) | \phi_0(s)} \\
&\sim \mathcal{N}(0,2),
\end{align}
\ees
where $\ket{\phi_0(s)}$ represents the ground state of the unperturbed (noiseless) problem at $s$. We have used the fact that $h_\mathrm{LA}(s)$ is drawn from the GOE, and that the diagonal elements of a GOE matrix are distributed with variance $2$ [see Eq.~\eqref{eq:goedef}]. A similar calculation gives that $E_1^{(1)}(s) \sim \mathcal{N}(0,2)$. Putting these together, we can approximate the spectral gap of the perturbed Hamiltonian using first-order perturbation theory as follows.
\bes
\label{eq:gap-LA}
\begin{align}
\tilde{\Delta}_\mathrm{LA}(s) &= \tilde{E}_1(s) - \tilde{E}_0(s) \\
&\approx [ E_1(s) + \epsilon E_1^{(1)}(s) ] - [ E_0(s) + \epsilon E_0^{(1)} (s) ] \\
&= \Delta(s) + \epsilon \mathcal{N}(0,4), \label{eq:gappert}
\end{align}
\ees
where $\Delta(s)$ is the spectral gap of the noiseless problem. To obtain the last line we used Eq.~\eqref{eq:Gauss-sum-diff} again.
Using the fact that $\Delta(s)$ scales either inverse-polynomially or inverse-exponentially with problem size (shown in Ref.~\cite{Somma:2012kx}; see also Fig.~\ref{fig:spectrum}), and the fact that the random variable $\mathcal{N}(0,4)$ does not scale with problem size, we can conclude that, typically, at first order in perturbation theory, the perturbed problem has an $\mathcal{O}(1)$ gap. (A similar argument establishes that the gap is $\mathcal{O}(1)$ for the case of the LS model as well.)
This argument will only work if $\epsilon \mathcal{N}(0,4)$ does not make the right-hand side of Eq.~\eqref{eq:gappert} negative; if the right-hand side is negative, the perturbative approximation breaks down. The RHS is distributed according to $\mathcal{N}(\Delta(s),4\epsilon^2)$. This is a Gaussian centered around a positive mean, and thus the chance of this distribution sampling negative values is small for small $\epsilon$. So, heuristically, we expect the perturbative argument to work in typical instances. To corroborate the conclusion of this argument, we display the scaling of the gaps with problem size in Fig.~\ref{fig:noisygapsvsn}. It is apparent that the long-range models exhibit a constant scaling with problem size, while the short-range models exhibit an exponential scaling.
Note that the perturbative argument presented above is not directly applicable for the short-range noise models, because for these models matrix elements that are arbitrarily far apart in the column basis are not normally distributed.
Let us see how the perturbative lifting of the spectral gap helps explain the non-monotonic dependence on noise strength seen in Fig.~\ref{fig:mdnpgsvsepsmanyn}. For $\epsilon \approx 0$, the algorithm succeeds because of the diabatic transitions from the ground state to the first excited state and then back down to the ground state; recall Figs.~\ref{fig:gtqatf250} and~\ref{fig:noiselesspopsn20}. As we increase $\epsilon$ from zero, the slight lifting of the gap interferes with these diabatic transitions, leading to a somewhat smaller success probability. This corresponds to the local minimum in Fig.~\ref{fig:mdnpgsvsepsmanyn}. As we increase $\epsilon$ further, the gap increases more and this causes the dynamics to turn adiabatic, which increases the success probability. This corresponds to the second peak in Fig.~\ref{fig:mdnpgsvsepsmanyn}. These two effects can be seen in Fig.~\ref{fig:popdynamicssymnoise}, which shows typical instances of the noisy spectrum and the noisy dynamics under the LS noise model, at $\epsilon=10^{-3}$ and $\epsilon=10^{-2}$, for $n=20$. Figure~\ref{fig:typicalspectrum} shows that both noise realizations increase the spectral gap from its value in the noiseless case, more so for the higher $\epsilon$ realization. Figure~\ref{fig:10-3dynamics} shows how the diabatic transitions are scrambled due to the noise. Then, as we increase the noise to $\epsilon = 10^{-2}$ in Fig.~\ref{fig:10-2dynamics}, we observe the onset of adiabaticity.
As we increase $\epsilon$ to values greater than $10^{-2}$, the success probability falls off because even if the dynamics are adiabatic, the noisy spectrum and eigenstates have little relationship with the noiseless spectrum and eigenstates. To corroborate this, in Appendix~\ref{app:pertdecay}, we show using perturbation theory that the overlap between the unperturbed ground state and the perturbed ground state decays as $1- \mathcal{O}(\epsilon^2)$, which suggests that as we increase $\epsilon$, even if the dynamics are adiabatic, the ground state found at the end of the noisy evolution has low overlap with the EXIT vertex.
The perturbative lifting of the gap also explains why the long-range models exhibit an exponential quantum speedup. Indeed, because the noise induces adiabaticity for a certain range of values of $\epsilon$, then as long as the overlap between the unperturbed ground state and the perturbed ground state remains significant, the noisy quantum system can still solve the problem by evolving adiabatically as long as the anneal timescale is greater than the adiabatic timescale. The latter is given by the inverse of the perturbed gap squared, which is $\mathcal{O}(1)$, multiplied by the norm of the Hamiltonian, which is $\mathcal{O}(\mathrm{poly}(n))$. The anneal timescale is chosen such that it provides a speedup in the noiseless case [Eq.~\eqref{eqt:tf} represents one such choice], which means the noiseless dynamics are adiabatic relative to the polynomially small gap between the first and second excited states (see Sec.~\ref{sec:gtqadynamics}), and therefore, the long-range noise dynamics will be adiabatic relative to the constant gap between the ground and first excited state.
A natural question arises at this stage. Is the anomalous behavior of success probability with increasing system size simply due to the perturbation increasing the energy scale of the system? That is, because the long-range noise matrices are selected from the GOE (and variants thereof), the norm of the perturbation increases with system size as $\sqrt{n}$; one might think that since the energies in the system increase, then so do its energy gaps, and then the larger gaps are responsible for the increase in success probability.
To test this explanation, we checked what happens after
normalization of the perturbation matrix by
its largest eigenvalue. This normalization ensures that the perturbation never adds an amount of energy that scales with system size. If it were true that the perturbative lifting of the gap is due to pumping energy into the system, we would expect the perturbative lifting to disappear upon normalization, and consequently also expect the anomalous behavior of success probability with increasing system size to disappear upon normalization. However, as seen in Fig.~\ref{fig:normLAmdnpgsvsn}, the non-monotonic dependence of the success probability on $n$ for intermediate $\epsilon$ values continues to hold even after normalization. (This will in turn lead to non-monotonic dependence of the success probability on $\epsilon$ for larger values of $n$, analogous to the behavior seen in Fig.~\ref{fig:mdnpgsvsepsmanyn}.) This is qualitatively similar to the behavior seen in Fig.~\ref{fig:LAmanyeps} for the long-range asymmetric noise model, where the perturbation matrix $h$ is not normalized.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\columnwidth]{Figure12}
\caption{(Color online) The median success probability, $p_\mathrm{GS}$, for the \emph{normalized} long-range asymmetric noise case at the end of an evolution of duration $t_f^\mathrm{Th}(n)$, with $\epsilon$ increasing from top to bottom at $n=1$. $t_f^\mathrm{Th}(n)$ is chosen so that the success probability for the noiseless algorithm is just above $0.95$. Error bars were obtained by bootstrap sampling over $300$ realizations of the noise.}
\label{fig:normLAmdnpgsvsn}
\end{figure}
\section{Summary and Conclusions}
\label{sec:conclusion}
We have analyzed the quantum annealing algorithm for the glued trees problem under four different
noise models: long-range vs. short-range and reflection-symmetric vs. reflection-asymmetric. These are oracular noise models: they add a Gaussian perturbation, of different forms, to the Hamiltonian evolution. We studied the success probability---i.e., the probability of finding the EXIT vertex---at the end of the Schr{\"o}dinger evolutions for these different noise models.
We found that the long-range noise models display a perturbative lifting of the spectral gap which causes the dynamics to transition from diabatic to adiabatic. This allows the algorithm subject to long-range noise models to solve the glued trees problem in polynomial time and hence display an exponential quantum speedup over the noiseless classical algorithm. This seems surprising, since it associates a robustness to noise with a quantum algorithm exhibiting exponential speedup. However, we argue that in fact this speedup is misleading, because it disappears when we compare the quantum algorithm to an appropriate classical analogue. More precisely, a classical random walk that has long-range transition probabilities will also be able to solve the problem in polynomial time and hence display an exponential speedup over the noiseless classical algorithm.
This analysis highlights that care must be taken in the selection of noise models for oracular algorithms. Typically, oracles are hard to realize physically, so we must select phenomenological noise models for them. But in so choosing, we might end up with a model that changes the nature of the problem, which is what occurred in the long-range noise models. More precisely, the classical long-range noisy version of the algorithm changed its complexity from exponential to polynomial, so a polynomially scaling quantum algorithm for the problem cannot count as providing an exponential quantum speedup.
It is instructive to compare this with the results of Ref.~\cite{cross2015quantum}, which analyzed the problem of learning the class of $n$-bit parity functions by making queries to a (noisy) quantum example oracle. There, the quantum algorithm has a linear speedup in the noiseless case, while it has a superpolynomial speedup when both the classical and quantum oracles are noisy. This happens upon depolarizing the qubits at the oracle's output at any constant nonzero rate.
For the glued trees problem we found a weaker result under the symmetric and asymmetric short-range noise models, which retain the exponential complexity of the classical problem. For sufficiently weak oracle noise, the quantum annealing algorithm retains a polynomial quantum speedup over the noiseless classical algorithm. But, for sufficiently strong oracle noise even the polynomial speedup is lost. The fact that for all values of the oracle noise the short-range noise models result in a loss of exponential speedup demonstrates that the exponential speedup of the glued-trees algorithm is not robust to noise.
We conjecture that more broadly, in the absence of fault tolerant error correction, exponential speedups cannot be obtained in any physical implementation of quantum annealing. This should not necessarily be a cause for pessimism: we are not ruling out polynomial speedups, which remain highly interesting and valuable.
\acknowledgements
We are especially grateful to Hidetoshi Nishimori for an important observation regarding perturbative gap lifting. We also thank Milad Marvian and Evgeny Mozgunov for useful discussions and Huo Chen and Richard Li for advice regarding parallel computation. The research is based upon work (partially) supported by the Office of
the Director of National Intelligence (ODNI), Intelligence Advanced
Research Projects Activity (IARPA), via the U.S. Army Research Office
contract W911NF-17-C-0050. The views and conclusions contained herein are
those of the authors and should not be interpreted as necessarily
representing the official policies or endorsements, either expressed or
implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government
is authorized to reproduce and distribute reprints for Governmental
purposes notwithstanding any copyright annotation thereon. Computation for the work described in this paper was supported by the University of Southern California's Center for High-Performance Computing (hpc.usc.edu).
\section{Introduction}
The last decade has seen a major shift in stellar spectroscopy: a slow collection of individual spectra has been accelerated by massive surveys, mostly using fiber-fed spectrographs with hundreds of spectra observed simultaneously. The past and ongoing efforts include RAVE \citep{Steinmetz2006,Zwitter2008,Siebert2011,Kordopatis2013}, Gaia-ESO \citep{Gilmore2012}, SEGUE \citep{Yanny2009}, APOGEE \citep{Zasowski2013}, LAMOST \citep{Luo2015}, GALAH \citep{DeSilva2015}, and of course Gaia \citep{Prusti2014}. Up-to-date overviews of the state and results of these surveys are given elsewhere in this volume.
The main goal of stellar spectroscopic surveys is to study Galactic structure and evolution. But the collected spectra allow for significant auxiliary science. The three examples discussed below are an illustration of a vast range of possibilities and are by no means exhaustive. We believe that every observer could add further relevant uses of hundreds of thousands of stellar spectra, which were in most cases selected for observation only following simple positional and magnitude constraints. The first example illustrates research of the multi-dimensional structure of the interstellar medium. The next one helps with identifying young stars in the field. The last one is an example of how even a single spectrum obtained by a stellar survey can improve the solution of an astrometric binary which is being derived by Gaia.
\section{Interstellar Medium}
In 2020, the Gaia mission (launched in December 2013) is expected to release 6-dimensional (spatial position + velocity) vectors for a significant fraction of stars on our side of the Galactic centre, thus allowing a computation of stellar orbits and of evolution of the Galaxy as a whole. Traditional studies of the Galactic interstellar medium (ISM) cannot yield information equivalent to stars, as absorption studies yield only 2-dimensional (column density) information by observing one hot star at a time. But the ISM allows one to open up its 3rd and 4th dimensions by studying diffuse interstellar bands (DIBs), weak but numerous absorption lines seen in spectra of background stars which are likely caused by distinct macromolecular carriers. High dimensionality requires measurement of the strength of these weak interstellar lines also for cool stars which by far outnumber hot stars in the Galaxy. Recent new approaches divide out the cool star spectrum by use of synthetic models of stellar atmospheres \citep{Puspitarini2015} or in a self-calibrated way by using spectra of similar stars with negligible ISM absorption observed at high Galactic latitudes by the same survey \citep{Kos2013}. By observing a given DIB toward many stars which are nearly in the same direction but at different and known distances one can reconstruct absorption sites along the line of sight. Joining observations in many directions on the sky then gives their spatial distribution. Finally, measurement of the radial velocity shift yields a 4-dimensional picture of the ISM for each DIB, and can even constrain placement of multiple clouds along each line of sight. Interstellar absorption lines of sodium and potassium atoms yield information equivalent to DIBs, but emission lines or dust absorptions are limited to up to 3 dimensions.
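As a schematic illustration of the kind of measurement involved (the actual pipelines of \citet{Puspitarini2015} and \citet{Kos2013} are considerably more elaborate), the equivalent width of a DIB on a spectrum divided by an unabsorbed reference can be estimated with a few lines of Python:
\begin{verbatim}
import numpy as np

def equivalent_width(wavelength, flux, flux_reference, window):
    """Equivalent width (in the wavelength units) of an interstellar
    feature, measured on a spectrum divided by an unabsorbed reference
    spectrum and integrated over the (lo, hi) wavelength window."""
    ratio = flux / flux_reference          # pseudo-continuum normalisation
    lo, hi = window
    m = (wavelength >= lo) & (wavelength <= hi)
    return np.trapz(1.0 - ratio[m], wavelength[m])

# synthetic example: a Gaussian DIB of depth 5% and width 1 A near 8620 A
wl = np.linspace(8610.0, 8630.0, 400)
absorbed = 1.0 - 0.05 * np.exp(-0.5 * ((wl - 8620.0) / 1.0) ** 2)
print(equivalent_width(wl, absorbed, np.ones_like(wl), (8615.0, 8625.0)))
# ~ 0.125 A, i.e. 0.05 * sqrt(2 pi) * 1.0
\end{verbatim}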
The ISM is the site of violent collisions of supernova shells, plus winds from asymptotic giant branch stars and hot-star associations. Head-on collisions in the Galactic plane are difficult to interpret, though an expected Galactic rotation pattern has been nicely identified \citep{Zasowski2015}. But observations of the on-going GALAH and partly Gaia-ESO surveys are away from the plane where interactions generally result in a net motion perpendicular to the plane. If any shells of absorbing material are identified we can assume that their motion is perpendicular to shell surfaces and reconstruct a complete velocity vector from its radial velocity component. Such information for ISM is then equivalent to the one collected for stars by Gaia.
This information can be used to study past events in the interstellar medium. \citet{Kos2014} published a quasi 3-dimensional map of intensity of diffuse interstellar band at 8620~\AA\ which shows that distribution of DIB extinction is thicker than the one of dust and that it is different on either side of the Galactic plane, a witness to asymmetries in placement of recent explosions of supernovae and to incomplete vertical mixing. Observations with the Gaia-ESO and GALAH surveys could be used to
increase the dimensionality of ISM studies to 4 dimensions
\citep[for an example of radial velocity measurements see][]{Kos2015}. They could also identify and characterize Galactic fountains blown away by supernovae in the last million years. Such flows are thought to sustain star formation in the disk by entraining fresh gas from the halo, so they provide a mechanism which explains why star formation in our and other similar galaxies did not stop when the gas present in the disk had been used up \citep{BlandHawthorn2009,Fraternali2014}.
\articlefigure[width=\textwidth]{TZwitter_Fig1.eps}{figDIBsGALAH}{
Diffuse interstellar bands and the K~I interstellar atomic line at 7699\AA\ in GALAH spectra.
}
Figure \ref{figDIBsGALAH} plots a dozen DIBs and the K~I interstellar atomic line at 7699~\AA\ in a stellar spectrum observed by GALAH. The spectrum of TYC 4011-102-1, a hot star with strong interstellar absorptions close to the Galactic plane, is shown. Each 20~\AA\ wide panel is centred on the DIB wavelength as listed in \citet{Jenniskens1994}. Plotted wavelengths are heliocentric. The right-most panel identifies two interstellar clouds for K~I at different velocities. For a majority of GALAH objects, which lie away from the Galactic plane, such complications are rare (but can be detected).
\section{Young stars in the field}
Properties of a star are entirely determined by its initial composition, mass and current age if one neglects rotation, magnetism or multiplicity. As noted by David \citet{Soderblom2014}, ``age is not a direct agent of change and cannot be measured like mass or composition. Also, age affects the core of the star, but we observe the surface which is complex.'' Large spectroscopic surveys have the possibility to measure some empirical age indicators, i.e.\ rotation, activity, and the lithium depletion boundary. The GALAH survey will bring studies of these age indicators to industrial scale with its hundreds of thousands of observed spectra. Lithium depletion studies have been motivated by lithium observations of main-sequence stars in young clusters and the halo
\citep[e.g.\ ][]{Soderblom1995}. The GALAH survey includes the Li~I 6708~\AA\ line in its red channel. Its resolving power of $\sim 28,000$ and a typical S$/$N ratio of 100 per resolution element allow for efficient measurement of stellar rotation and for studies of profiles of H$\alpha$ and H$\beta$ lines which are sensitive to chromospheric activity. Measurement of these youth indicators for field stars is important, as it may point to stars recently ejected from young stellar environments. Parallaxes and proper motions measured by Gaia, together with spectroscopically derived radial velocities, make it possible to reconstruct their Galactic orbits and so to identify recently dispersed stellar clusters. Multidimensional chemistry studies, which are within the scope of GALAH, are then the final check of the emerging picture based on chemical tagging \citep{Freeman2002}.
\articlefigure[width=\textwidth]{TZwitter_Fig2.eps}{figactivityRAVE}{
Distribution of the equivalent width of emission component of Calcium infrared triplet for active stars in RAVE (grey area). Solid histogram are normal stars which are assumed to be inactive, while dashed line marks pre-main sequence stars known to Simbad. From \citet{Zerjal2013}.}
Stellar activity identification is now entering the era of massive studies.
Figure \ref{figactivityRAVE} summarizes active star candidates found in RAVE data using the equivalent width of the emission components of the Ca~II infrared triplet lines \citep[EW$_{IRT}$, ][]{Zerjal2013}. The grey histogram is the distribution of EW$_{IRT}$ for stars with active morphology in RAVE, as identified by a locally linear embedding technique \citep{Matijevic2012}. The solid line marks normal stars which are assumed to be inactive, while the dashed line marks RAVE stars classified by Simbad to be pre-main sequence stars. $p_{log}$ is a logarithmic measure for the probability that a star with a given EW$_{IRT}$ differs from an inactive spectrum. Its values from left to right correspond to the probabilities of 5 and 2~$\sigma$ below zero, zero, and 2, 5 and 10~$\sigma$ above zero. Altogether the work identifies $\sim 14,000$ stars with chromospheric flux in Ca~II lines detectable at least at a 2~$\sigma$ confidence level.
\begin{table}[!ht]
\caption{Fractions of categorisations of the same object into subtypes (see text) for emission type objects in the Gaia-ESO survey. From \citet{Traven2015}.}
\smallskip
\label{tablefractions}
\begin{center}
\begin{tabular}{crrrrrrr}
Prevalent&\multicolumn{7}{c}{Categories of spectra of the same object (\%)}\\
category &$E_{bl}$&$E_{sp}$&$E_{dp}$&$P_{Cyg}$&$IP_{Cyg}$&$S_{abs}$&$E_{abs}$\\
\noalign{\smallskip}
\tableline
$E_{bl}$ &80.2& 0.8& 3.0& 0.4& 0.5&13.5& 1.5\\
$E_{sp}$ & 1.4&80.0& 0.1& 0.5& 4.0& 3.6&10.4\\
$E_{dp}$ & 4.0& 0.1&73.6& 1.7& 1.3&18.6& 0.6\\
$P_{Cyg}$ & 1.9& 9.7& 1.0&64.0& 3.5& 3.3&16.7\\
$IP_{Cyg}$& 1.4& 5.2& 0.1& 0.2&77.2& 5.3&10.5\\
$S_{abs}$ &12.1& 2.5&13.2& 1.5& 1.8&67.1& 1.9\\
$E_{abs}$ & 0.3& 4.3& 0.1& 1.7& 5.3& 1.6&86.7\\
\noalign{\smallskip}
\tableline
\end{tabular}
\end{center}
\end{table}
The presence of emission components in the Ca~II infrared triplet (RAVE, Gaia, and Gaia-ESO) or in Balmer lines (GALAH and Gaia-ESO) does not prove that the object is young: interacting binaries are an obvious example of old objects with emission type spectra. But such objects are not very common in the field. RAVE (Fig.\ \ref{figactivityRAVE}) found that strong emissions suggest a pre main-sequence evolutionary phase. This is consistent with results of the Gaia-ESO survey, where \citet{Traven2015} studied 22,035 spectra of stars in young open cluster fields and found that 7698 spectra (35\%) belonging to 3765 stars have intrinsic emission in H$\alpha$. Again, such a large fraction of emission type spectra in a young stellar environment suggests that emission is related to youth. But emission is a transient property and the morphological classification of emission may be changing with time. \citet{Traven2015} shows that most profiles are composed and classifies such profiles by properties of fits using two Gaussians: $E_{bl}$ stands for blended emission components, $E_{sp}$ have double sharp peaks, $E_{dp}$ are double emission, $P_{Cyg}$ are P-Cygni profiles, $IP_{Cyg}$ are inverted P-Cygni, $S_{abs}$ is self-absorption and $E_{abs}$ is emission within absorption. Off-diagonal elements in table \ref{tablefractions} report correlations between individual composed profile types. When emission blend is the prevalent category for an object, it is most often in combination with self-absorption, which is best explained by one of the two components being in transition between absorption and emission. Similarly, the double sharp peaks can change to emission with absorption if a relatively weak absorption is constantly present and one of the emission peaks diminishes. The largest off-diagonal elements connect double peaks and self-absorption. The distinction between the two categories is largely influenced by the inclination of the slopes in the profile that are liable to change in the presence of additional weaker components, or they are harder to retrieve in the case of noisier spectra. The most frequently identified morphological categories from \citet{Traven2015} are emission blend (1729 spectra), emission in absorption (1652 spectra), and self absorption (1253 spectra). We conclude that many stars have their emission transient in time or in morphological type, so that activity detected through emission is an indication of youth which is not always present and should be used in connection with the absolute position of the star on the H-R diagram, a frequently known property in the Gaia era.
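A schematic version of the two-Gaussian fit underlying this classification is sketched below in Python (on a synthetic, purely illustrative profile; the actual procedure of \citet{Traven2015} is more elaborate):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(x, a1, mu1, s1, a2, mu2, s2):
    """Unit continuum plus two Gaussian components; positive amplitudes
    model emission, negative amplitudes model absorption components."""
    g = lambda a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return 1.0 + g(a1, mu1, s1) + g(a2, mu2, s2)

# synthetic 'emission blend' profile around H-alpha (wavelengths in angstrom)
x = np.linspace(6555.0, 6570.0, 300)
y = two_gauss(x, 0.8, 6562.0, 0.6, 0.5, 6563.5, 0.9)
y += np.random.default_rng(2).normal(0.0, 0.01, x.size)
popt, _ = curve_fit(two_gauss, x, y, p0=[0.5, 6561.5, 0.5, 0.5, 6564.0, 0.5])
print(np.sign(popt[[0, 3]]))   # two positive amplitudes: blend / double peak;
                               # mixed signs would point to P-Cygni-like or
                               # self-absorption morphologies
\end{verbatim}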
\section{Astrometric binaries}
\articlefigure[width=\textwidth]{TZwitter_Fig3.eps}{figBinaries}{
Expected binary census of Gaia for different types of observing techniques. Adapted from \citet{Soderhjelm2004}.
}
Gaia will observe huge numbers of different types of binaries \citep{Zwitter2004,Eyer2015} and study them with a wide range of techniques (Fig.\ \ref{figBinaries}). One of its core strengths will be a derivation of accurate astrometric solutions even for binaries with extreme mass ratios \citep{Soderhjelm2004}. On the other hand spectroscopy from ground based surveys will be the source of detailed chemistry for any type of binary or multiple system. Astrometry has been frequently used to supplement spectroscopic observations in the past \citep[e.g.][]{Sahlmann2013}, but in Gaia the opposite will be a common case \citep[e.g.\ ][]{Torres2006}. Many astrometric binaries will have components of similar mass and luminosity. The reach of astrometry is limited in this case: the two stellar images are usually not spatially resolved, so that Gaia will be able to trace only the astrometric motion of the photocenter of the two components. Such studies yield an accurate orbital period, but since the photocenter is located somewhere between the two stars individual masses cannot be derived from astrometry alone. Here even a single spectrum obtained during a spectroscopic survey can be extremely valuable. Radial velocities of individual components in an SB2 at an orbital phase known from astrometry make it possible to derive the true sizes of both orbits, and so the complete solution of the system. A proper Bayesian analysis of simultaneous astrometric and spectroscopic information will be needed for this task \citep{Schulze2012}.
\section{\label{intro}Introduction}
In solids subject to a magnetic field $B$, the energy spectrum of charge
carriers is quantized into Landau levels (LLs). The
magneto-oscillations (MOs) observed in the Shubnikov-de Haas and
de Haas-van Alphen effects reflect the oscillations of the
density of states (DOS) with the field intensity. The DOS reaches
maxima at magnetic fields, $B_n$, for which the LLs with the index, $n$,
cross the Fermi energy, $E_F$.
The Landau plot is a plot of inverse magnetic fields, $1/B_{n}$,
versus the LL index, $n$. It is a standard tool used to determine the
frequency and phase of MOs, and the related important
characteristics of the investigated systems.
The construction of the Landau plot is based on the Onsager-Lifshitz
quasiclassical quantization rule, \cite{Onsager,Lifshitz}
\begin{equation}
A(E_F)= \frac{2\pi|e|B}{\hbar}\left(n+\gamma\right), \
\label{onsager}
\end{equation}
where $A(E_F)$ is an area of the extremal cross-section of the Fermi
surface (FS) cut by the plane perpendicular to the magnetic field
direction, $e$ is the electron charge and $\gamma$ is a constant which
describes the phase of MOs. It follows from Eq.~(\ref{onsager}) that MOs
of DOS are periodic in $1/B$ and their frequency $F$ is related to
$A(E_F)$ by
\begin{equation}
F = \frac{\hbar A(E_F)}{2\pi|e|}.
\label{quasi}
\end{equation}
The Onsager-Lifshitz quantization rule was originally designed for
three-dimensional metals, where the validity of the
quasiclassical approximation is guaranteed by a large number of LLs
below $E_F$ in accessible magnetic fields. However, the method is also
widely used when two-dimensional (2D) systems are investigated. Here,
the importance of $F$ is underscored by the fact that the
carrier concentration is proportional to the area surrounded by the Fermi
contour.
In general, the rule should not be applicable to 2D systems. Subject to
strong magnetic fields, the quantum limit with only one LL below $E_F$
can be easily reached. But in the majority of such systems,
the periodicity of MOs is preserved due to the simple parabolic
(Schr\"{o}dinger--like) energy spectra of the 2D electron layers in the
semiconductor structures, which yields the LL energies proportional to
$B$.
In 2004 a single sheet of graphene was separated from bulk graphite by
micromechanical cleavage. \cite{geim05} It was confirmed
experimentally that electrons in graphene exhibit a linear energy
dependence on the wave-vector $\vec{k}$, as predicted many years ago
by the band structure calculation. \cite{wallace} Both electron and
hole charge carriers behave like massless relativistic
particles -- Dirac fermions (DFs), and there is no gap between the
valence and conduction bands. The electron and hole Dirac cones touch
at a neutrality point.
Subject to a magnetic field $B$, the DFs form LLs
with energies proportional to $\sqrt{B}$. In the seminal papers
\cite{geim04, geim05,kim05} the Shubnikov-de Haas MOs in graphene were found
to be periodic in $1/B$, as in the 2D gas of Schr\"{o}dinger
fermions (SFs) with the parabolic energy spectra, but with the phase
shifted by $\pi$. The shift, which was clearly demonstrated by the
Landau plot of magneto-resistance oscillations, is due to the
existence of the zero-energy LL in the linear Dirac spectrum, shared
by electrons and holes. Note that $\gamma=1/2$ for SFs, and
$\gamma=0$ for DFs.
In addition to a single layer graphene, also a few layer graphene
samples can be prepared. Among them a bilayer graphene (BLG), in which two
carbon layers are placed on top of each other with a standard Bernal
stacking, is of particular interest. Probably the most remarkable
feature of this structure is the possibility to open a gap between the
valence and conduction bands through the application of an external
field or by chemical doping~\cite{McCann,Ohta,ECastro-Geim}. This
phenomenon is closely related to the gate-induced breaking of the
inversion symmetry of the crystal.\cite{xiao,mucha,nakamura,zhao}
Note also that the application of the gate voltage is a necessary
condition for the experimental observation of MOs in BLG.
Without a gate voltage, the sample is neutral, the Fermi
energy is located in the neutrality point, and no free charge carriers
should be present in perfect samples.
There are two ways of how to apply the gate voltage. If the external
voltage is applied symmetrically from both sides of a sample, just $E_F$
and the concentration of carriers are varied, and no gap is opened. The
tunable gap appears in the presence of an external electric field resulting
from the asymmetrically applied gate voltage.
Let us point out that the charge carriers in BLG are
neither SFs nor DFs, and therefore it is of interest to construct the
corresponding Landau plots to see how far the bilayer energy spectra
from these two simplest possibilities are.
This task is simplified by the fact that the electrochemical potential
(i.e., also $E_F$) is kept constant during magnetic field sweeps in
gated samples. According to Ref.~\onlinecite{mosser,imura}
carrier density oscillations are compensated by gate current
oscillations in the case of fixed $E_F$. Note that in bulk samples,
where the charge neutrality must be preserved, the carrier concentration
is considered to be fixed.
To construct the Landau plot, we will first calculate the quasiclassical
frequencies of MOs in BLG, based on its
zero-magnetic-field electronic structure.
Later on we will compare these quasiclassical frequencies with results
of the quantum-mechanical calculation of the electronic structure of
BLG subject to a perpendicular magnetic field.
\section{\label{zero}Zero-field electronic structure}
The electronic structure of BLG can be described by the
simple tight-binding model involving only the nearest neighbor
interactions.\cite{Pereira,McCann1,McCann2,Nilsson,ECastro,Koshino,
CastroNeto-Geim}
A single layer honeycomb lattice, with two atoms per unit cell, results
from two superimposed triangular lattices labeled A and B. The unit
cell is defined by the lattice vectors $\vec{a}_1$ and $\vec{a}_2$, making
an angle of 60$^\circ$; the lattice constant $a$ is equal to 2.46 \AA.
\begin{figure}[htb]
\includegraphics[width=0.8\linewidth]{Fig1.eps}
\caption{\label{fig1} (Color online) Lattice structure of a graphene
bilayer. The unit cell is a green parallelepiped.}
\end{figure}
The bilayer is formed by two graphene sheets, 1 and 2, arranged in the
Bernal stacking. The distance between layers is 3.37 \AA. Thus the
unit cell of a bilayer has four atoms, its lattice structure is
sketched in Fig.~\ref{fig1}.
In addition to the intralayer parameter $\gamma_0$ and the interlayer
parameter $t$, the corresponding Hamiltonian depends on the potential energy
difference between the two layers, which we denote $2u$. The
parameter $\gamma_0 \approx 3.1$ eV yields the Fermi velocity $v_F
\approx 1.0\times 10^6 $ m/s, defined by $\hbar v_F =
\gamma_0\sqrt{3}a/2$. We further consider that $t \approx 0.39$ eV,
and the energy $2u$ varies between $0$ and $250$ meV.\cite{zhang}
While $\gamma_0$ and $t$ are fixed by nature, we assume that $u$ and
$E_F$ are the adjustable parameters.
If we employ the continuum approximation, \cite{wallace} the Hamiltonian
$H$ in the vicinity of the $K$ point can be written as
\begin{equation}
H=
\left(\begin{array}{cccc}
H_{B_2B_2} & H_{B_2A_2} & H_{B_2A_1} & H_{B_2B_1}\\
\\
H_{A_2B_2} & H_{A_2A_2} & H_{A_2A_1} & H_{A_2B_1}\\
\\
H_{A_1B_2} & H_{A_1A_2} & H_{A_1A_1} & H_{A_1B_1}\\
\\
H_{B_1B_2} & H_{B_1A_2} & H_{B_1A_1} & H_{B_1B_1}\\
\end{array}\right),
\label{Hamilton}
\end{equation}
where the matrix elements of the first layer are given by
\begin{eqnarray}
H_{A_1A_1}& =&H_{B_1B_1}= -u ,\nonumber\\
H_{A_1B_1}&=&H^*_{B_1A_1}=\hbar v_F(k_x-i k_y) . \nonumber
\end{eqnarray}
Similarly,
the matrix elements corresponding to the second layer read
\begin{eqnarray}
H_{A_2A_2}&=&H_{B_2B_2}= u , \nonumber\\
H_{A_2B_2}&=&H^*_{B_2A_2}= \hbar v_F(k_x+i k_y) . \nonumber
\end{eqnarray}
There
are only two nonzero interlayer matrix elements
\begin{equation}
H_{A_1A_2} =H_{A_2A_1}= t . \nonumber
\end{equation}
The Hamiltonian $H'$ in the vicinity of the $K'$ point has a similar
structure, the matrix elements of $H'$ are complex conjugates of the
matrix elements of $H$.
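For readers who wish to reproduce the zero-field bands numerically, the following short Python sketch (an illustration only, using the parameter values quoted above) assembles and diagonalizes the $4\times 4$ Hamiltonian of Eq.~(\ref{Hamilton}) in the basis $(B_2,A_2,A_1,B_1)$:
\begin{verbatim}
import numpy as np

gamma0, t, a = 3.1, 0.39, 2.46          # eV, eV, angstrom
hv = gamma0 * np.sqrt(3.0) * a / 2.0    # hbar * v_F in eV * angstrom

def H_K(kx, ky, u):
    """4x4 Hamiltonian near K in the basis (B2, A2, A1, B1); k in 1/angstrom."""
    pp = hv * (kx + 1j * ky)             # hbar v_F (k_x + i k_y), layer 2
    pm = hv * (kx - 1j * ky)             # hbar v_F (k_x - i k_y), layer 1
    return np.array([[ u, pm,  0,  0],
                     [pp,  u,  t,  0],
                     [ 0,  t, -u, pm],
                     [ 0,  0, pp, -u]], dtype=complex)

u = 0.05                                 # 2u = 100 meV interlayer bias
print(np.linalg.eigvalsh(H_K(0.0, 0.0, u)))   # -sqrt(u^2+t^2), -u, u, sqrt(u^2+t^2)
print(min(np.linalg.eigvalsh(H_K(k, 0.0, u))[2]        # band E_c1
          for k in np.linspace(0.0, 0.05, 500)))       # -> u t / sqrt(4u^2+t^2)
\end{verbatim}
The printed values reproduce the band extrema given below in Eq.~(\ref{minmax}).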
\begin{figure}[htb]
\includegraphics[width=0.8\linewidth]{Fig2.eps}
\caption{\label{fig2}(Color online) The ``Mexican hat'' shape of the
valence and conduction bands of a biased bilayer. The blue and red
colors correspond to higher probability of finding charge carriers in
the layers 1 and 2, respectively. Three groups of the Fermi contour are
possible depending on the value of $E_F$: the double circles (A1), the
circles (A2), and the Fermi rings (A3).}
\end{figure}
The above Hamiltonians can be diagonalized
analytically.\cite{Pereira,Nilsson,ECastro,ECastro1,CastroNeto-Geim}
The zero-field energy branches of the conduction band, $E_{c1}(k)$ and
$E_{c2}(k)$, and the valence band, $E_{v1}(k)$ and $E_{v2}(k)$, of a
bilayer result from hybridization of Fermi cones of layers 1 and 2,
mediated by the interlayer matrix element $t$. Note that
$E_{v1}(k)=-E_{c1}(k)$ and $E_{v2}(k)=-E_{c2}(k)$ and that the valley
degeneracy is preserved, i.e., we get the same bands in
valleys $K$ and $K'$.
For $u=0$ two Fermi cones are replaced by four bonding and antibonding
hyperbolic bands. The bonding valence and conduction bands,
$E_{v1}(k)$ and $E_{c1}(k)$, touch at $k=0$, the separation between
bands of a bonding--antibonding pair is equal to $t$ on the energy
scale.
When the interlayer voltage is applied, the Fermi cones of two layers
are shifted along the energy axis, and the separation of the neutrality
points becomes equal to $2u$. The hybridization due to the interlayer parameter
$t$ is strongest near the cone cross-points. The resulting four bands are
shown in Fig.~\ref{fig2}. It turns out that for any finite $u$ a gap is
open between the topmost valence band $E_{v1}(k)$ and the bottom
conduction band $E_{c1}(k)$. The conduction band acquires a ``Mexican
hat'' shape with energy minima at nonzero $k$ and a local maximum at
$k=0$. We can write
\begin{eqnarray}
E_{c1}^{\,\,\text{max}}(0)&=&u, \nonumber\\
E_{c2}^{\,\,\text{min}}(0)&=&\sqrt{u^2+t^2}, \nonumber\\
E_{c1}^{\,\,\text{min}}(k)&=&\Delta=ut/\sqrt{4u^2 +t^2}.
\label{minmax}
\end{eqnarray}
Note that for large $k$ the band $E_{c1}(k)$ describes electrons
localized mostly in the layer 1. Near the local maximum at $k=0$ the
holes in the layer 2 prevail. Similar conclusions can be drawn for the
topmost valence band $E_{v1}(k)$.
As mentioned in the Introduction, the quasiclassical frequencies of the
bilayer, $F_1$ and $F_2$, are proportional to areas surrounded by the
Fermi circles, which depend, for a given $u$, on the Fermi energy
value $E_F$. Three different possibilities are depicted in
Fig.~\ref{fig2} for the case of conduction/valence bands. (For the
valence bands $E_F$ should be replaced by $-E_F$.)
The analytic expressions for the quasiclassical frequencies $F_1$ and
$F_2$ read
\begin{equation}
F_{1(2)} = \frac{2\hbar}{3|e|a^2\gamma_0^2}\left[E_F^2+u^2
\pm \sqrt{(E_F^2-u^2)t^2 + 4 E_F^2u^2}\right],
\label{B12u}
\end{equation}
the frequencies $F_{1(2)}$ are even functions of variables $E_F$ and
$u$.
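Equation~(\ref{B12u}) is straightforward to evaluate numerically; a minimal Python sketch (illustrative parameter values only, energies in eV, frequencies in tesla) reads:
\begin{verbatim}
import numpy as np

hbar, e = 1.0546e-34, 1.6022e-19                  # SI units
gamma0, t, a = 3.1, 0.39, 2.46e-10                # eV, eV, m
pref = 2.0 * hbar / (3.0 * e * a**2 * gamma0**2)  # tesla per eV^2

def F12(EF, u):
    """Quasiclassical frequencies F1, F2 of Eq. (B12u), in tesla, for
    E_F and u in eV; |E_F| outside the gap is assumed."""
    root = np.sqrt((EF**2 - u**2) * t**2 + 4.0 * EF**2 * u**2)
    return pref * (EF**2 + u**2 + root), pref * (EF**2 + u**2 - root)

print(F12(0.100, 0.05))   # F1 > 0, F2 < 0: a single Fermi circle (A2)
print(F12(0.049, 0.05))   # both positive: the Fermi-ring regime (A3)
\end{verbatim}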
The frequency $F_2$ is equal to zero at the local maximum
$E_{c1}^{\,\,\text{max}}(0)$, and at the minimum
$E_{c2}^{\,\,\text{min}}(0)$. For a finite $u$, the frequency $F_2$
approaches $F_1$ at $E_{c1}^{\,\,\text{min}}(k)$.
Three forms of the Fermi contour are possible depending on the value
of $E_F$. First, the large $E_F$ cuts both conduction bands and
$F_1 > F_2 > 0$. The frequency $F_1$ corresponds to electron orbits
localized mainly in the layer 1, the frequency $F_2$ corresponds to
hole orbits localized mainly in the layer 2. Second, only the band
$E_{c1}$ is cut by $E_F$. Then $F_1 > 0$ and $F_2 < 0$. In this case
$F_2$ is just a parameter and does not have the meaning of a true
frequency. At last, the $E_F$ cuts the bottom conduction band
$E_{c1}(k)$ twice, if it is less than a local energy maximum,
$E_{c1}^{\,\,\text{max}}(0)$.
Then again $F_1 > F_2 > 0$. In that case $F_1$ is the
frequency of an electron orbit in the layer 1 while $F_2$ belong to a
hole orbit in the layer 2. Close to the local minima the difference
between electrons and holes is smeared and charge carriers are
present in both layers as indicated by the change of line colors in
Fig.~\ref{fig2}.
For the special case of $u=0$, Eq.~(\ref{B12u}) reduces to
\begin{equation}
F_{1(2)} = \frac{2\hbar}{3|e|a^2\gamma_0^2}
\left(E_F \pm t\right)E_F.
\label{B12}
\end{equation}
Then the gap between the valence and conduction bands and the local
maximum $E_{c1}^{\,\,\text{max}}(0)$ both disappear.
The quasiclassical phases of MOs are not accessible via the
Onsager-Lifshitz quantization rule, Eqs.~(\ref{onsager}) and
(\ref{quasi}). To find the energy spectra beyond the quasiclassical
approximation, we need to diagonalize the magnetic Hamiltonians $H$ and
$H'$.
\section{\label{nonzero}Magnetic field effects}
The magnetic Hamiltonians can be obtained from the zero-field
Hamiltonians by modification of matrix elements $H_{A_1B_1}$,
$H_{B_1A_1}$, $H_{A_2B_2}$ and
$H_{B_2A_2}$.\cite{MacClure60,Inoue,Pereira}
The matrix elements of the
magnetic Hamiltonian in the vicinity of the $K$ point are
\begin{eqnarray}
H_{A_1B_1}& =&H^*_{B_1A_1}=\sqrt{2|e|\hbar v_F^2 B\,n},\nonumber\\
H_{A_2B_2}& =&H^*_{B_2A_2}=\sqrt{2|e|\hbar v_F^2 B\,(n+1)}. \nonumber
\end{eqnarray}
The other matrix elements remain the same as in the zero-field
Hamiltonian. Near the $K'$ point,
\begin{eqnarray}
H'_{A_1B_1}& =&H'^*_{B_1A_1}=\sqrt{2|e|\hbar v_F^2 B\,(n+1)},\nonumber\\
H'_{A_2B_2}& =&H'^*_{B_2A_2}=\sqrt{2|e|\hbar v_F^2 B\,n}. \nonumber
\end{eqnarray}
We need not diagonalize these Hamiltonians to construct the Landau
plot. If we look for magnetic fields $B_n$ at which the LLs cross
$E_F$, it is enough to find the poles of the resolvent
$G(z)=(z-H)^{-1}$, as it defines the density of states $g(E_F)$
through
\begin{equation}
g(E_F) \propto -\frac{1}{\pi}\, \mathrm{Im}\, \mathrm{Tr}\, G(E_F+i0).
\label{res}
\end{equation}
The easiest way to find the poles is to solve the corresponding
secular equation for $B_n$ assuming the fixed $E_F$.
We start with the simplest case of the unbiased BLG ($u=0$). Then
the secular equations can be given a very convenient form, utilizing
the quasiclassical frequencies of MOs, presented in the previous
section, Eq.~(\ref{B12}),
\begin{equation}
B^2 n(n+1) -B\left(n+\frac{1}{2}\right)(F_1+F_2)+F_1F_2 =0.
\label{sec_eq}
\end{equation}
The Hamiltonians $H$ and $H'$ yield identical equations for valleys
$K$ and $K'$.
While the secular polynomial is quartic in energy, it is only quadratic
in $B$. Therefore, to construct the Landau plot it is enough to solve
the quadratic equation to find $B_n$ in terms of the fixed $E=E_F$.
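A short numerical sketch of this construction (Python, illustrative frequencies only) is:
\begin{verbatim}
import numpy as np

def crossing_fields(F1, F2, n_max=15):
    """Positive roots B_n of Eq. (sec_eq),
    B^2 n(n+1) - B (n+1/2)(F1+F2) + F1 F2 = 0, for n = 1 .. n_max."""
    table = {}
    for n in range(1, n_max + 1):
        roots = np.roots([n * (n + 1), -(n + 0.5) * (F1 + F2), F1 * F2])
        table[n] = sorted(float(r.real) for r in roots if r.real > 0)
    return table

# illustrative values (E_F < t, so F2 < 0 and only the F1 series is physical)
F1, F2 = 37.0, -22.0
for n, Bn in crossing_fields(F1, F2, 6).items():
    print(n, [round(F1 / B, 3) for B in Bn])   # tends to n + 1/2 at large n,
                                               # but deviates at small n
\end{verbatim}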
The quasiclassical phase $\gamma$ can be easily obtained from
Eq.~(\ref{sec_eq}). For a large number of LLs below $E_F$ one may
assume that $n(n+1)\rightarrow (n+1/2)^2$, and then Eq.~(\ref{sec_eq})
can be written in the form
\begin{equation}
B^2\left(n+\frac{1}{2}\right)^2 -B\left(n+\frac{1}{2}\right)(F_1+F_2)+F_1F_2 =0.
\label{sec_eq_qcl}
\end{equation}
From here we obtain the asymptotic quasiclassical Landau plots
\begin{equation}
\frac{F_{1(2)}}{B_n} =n +\frac{1}{2},
\end{equation}
i.e., we found that the phases of MOs correspond to SFs with $\gamma
=1/2$, in agreement with quasiclassical treatments of systems with
inversion symmetry. Note that $F_2$ is positive only in the
rather unrealistic case $|E_F|>t$.
To get Landau plots for an arbitrary $n$ we can express the solution of
Eq.~(\ref{sec_eq}) as
\begin{equation}
\frac{2F_1F_2}{F_1+F_2}\frac{1}{B_n}= n+\frac{1}{2}\mp
\sqrt{\left(n+\frac{1}{2}\right)^2 -n(n+1)\frac{4F_1F_2}{(F_1+F_2)^2}}
\end{equation}
or, if we define dimensionless $\delta$ by
\begin{equation}
\delta = \left(\frac{F_1-F_2}{F_1+F_2}\right)^2,
\end{equation}
we can write (see also Ref.~\onlinecite{Smrcka})
\begin{equation}
\frac{F_{1(2)}}{B_n} =
\frac{n+\frac{1}{2}\mp \sqrt{\frac{1}{4}+n(n+1)\delta}}
{1\mp\sqrt{\delta}}.
\label{freq}
\end{equation}
Here the negative sign in the numerator/denominator corresponds to the
frequency $F_1$ in the quasiclassical limit, and the positive sign to
the quasiclassical frequency $F_2$. It is obvious that for $ \delta \ne
0$ the MOs are not periodic in $1/B$.
The case of the biased BLG ($u\neq 0$) must be treated separately,
as the presence of the electric field perpendicular to layer planes
breaks the inversion symmetry and lifts the valley
degeneracy.\cite{mucha,nakamura}
The secular equations can again be given a form quadratic in $B$, but
the coefficients do not depend exclusively on the quasiclassical
frequencies as in Eq.~(\ref{sec_eq}). We can write
\begin{equation}
B^2 n(n+1)-B\left[(n+\frac{1}{2})(F_1+F_2)+
F_0\right]+F_1F_2 =0
\label{sec-eq-bi}
\end{equation}
for the Hamiltonian $H$ in the vicinity of $K$.
In comparison with Eq.~(\ref{sec_eq}) there is an extra term
\begin{equation}
F_0 = \frac{4\hbar}{3|e|a^2\gamma_0^2} E_F u.
\end{equation}
In the vicinity of $K'$ we obtain a very similar equation from
the Hamiltonian $H'$; the only difference is that $F_0$ is replaced by
$-F_0$. The extra term, $F_0$, is the reason for the valley asymmetry. It is
obvious that Eq.~(\ref{sec-eq-bi}) gives two different series of
solutions, $B_n$, for positive and negative $F_0$.
The quasiclassical frequencies $F_1$ and $F_2$ are even functions of
$E_F$ and $u$. It means that there are the same frequencies not only for
$K$ and $K'$, but also for the electrons and holes with energies $E_F$
and $-E_F$, respectively.
Note also that $F_1$ and $F_2$ do not depend on the sign of
$u$. On the other hand, $F_0$ is an odd function of $E_F$ and $u$. Thus
$F_0$ breaks the $K$ -- $K'$ symmetry, and also the symmetry between the
electron and hole oscillations with the same quasiclassical
frequencies. The change of sign of $u$ also reverts the roles of $K$ and
$K'$ valleys, i.e., what is valid for $K$ with $u>0$ is valid for $K'$
with $u<0$.
Also Eq.~(\ref{sec-eq-bi}) can be rewritten as an equation similar
to Eq.~(\ref{freq}), but with an additional dimensionless parameter
$\lambda$, which depends on $F_0$,
\begin{equation}
\lambda = \frac{F_0}{F_1+F_2}.
\end{equation}
Then the analytic solution reads
\begin{equation}
\frac{F_{1(2)}}{B_n} =
\frac{n+\frac{1}{2}+\lambda\mp\sqrt{(n+\frac{1}{2}+\lambda)^2
-n(n+1)(1-\delta)}}
{1\mp\sqrt{\delta}}.
\label{freq_u}
\end{equation}
This equation reduces to Eq.~(\ref{freq}) for $\lambda = 0$.
Now it is a more difficult task to find an asymptotic expression for the
oscillation phase than in the previous case $u=0$. If we solve
Eq.~(\ref{sec-eq-bi}) for $n+1/2$ we get
\begin{equation}
n+\frac{1}{2}=\frac{F_1+F_2\pm\sqrt{(F_1-F_2)^2+4BF_0+B^2}}
{2B},
\end{equation}
which for $B$ approaching zero yields
\begin{equation}
\frac{F_{1(2)}}{B} = n+\frac{1}{2}\mp \xi.
\label{extra}
\end{equation}
Here $\xi$ is a gate-tunable correction to the oscillation phase, given by
\begin{equation}
\xi = \frac{F_0}{F_1-F_2 } = \frac{E_F u}{\sqrt{(E_F^2-u^2)t^2 +4E_F^2u^2}}.
\label{extra1}
\end{equation}
This correction differs in sign for $K$ and $K'$ and also differs for
electrons and holes from the same valley with the same absolute value
of energy.
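The valley splitting of the Landau plot implied by Eq.~(\ref{sec-eq-bi}) can be made explicit with the following Python sketch (illustrative values of $F_1$, $F_2$ and $F_0$ only):
\begin{verbatim}
import numpy as np

def biased_crossings(F1, F2, F0, n_max=8):
    """Largest positive root B_n of Eq. (sec-eq-bi) for each n, in the
    K valley (+F0) and in the K' valley (-F0)."""
    out = {}
    for n in range(1, n_max + 1):
        Bs = []
        for f0 in (+F0, -F0):
            roots = np.roots([n * (n + 1),
                              -((n + 0.5) * (F1 + F2) + f0),
                              F1 * F2])
            Bs.append(max(float(r.real) for r in roots if r.real > 0))
        out[n] = tuple(Bs)
    return out

# illustrative numbers: the K and K' crossing fields differ, i.e. the
# valley degeneracy of the Landau plot is lifted by the bias u
print(biased_crossings(F1=6.0, F2=1.5, F0=0.8, n_max=3))
\end{verbatim}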
\section{\label{results}Results and discussion}
\begin{figure}[b]
\includegraphics[width=0.8\linewidth]{Fig3.eps}
\caption{\label{fig3} (Color online)
(a) The electronic bands of the unbiased graphene bilayer ($u=0$). The
horizontal lines denote the Fermi energies which cross the electron
and hole dispersion curves. (b) The ,,phases''
$\gamma_{1(2)}(E_F)=F_{1(2)}/B-n$, for $E_F$ depicted in a), plotted as
functions of the Landau level index~$n$.}
\end{figure}
In the unbiased BLG the energy $u$ is equal to zero and the
parameter $\delta$, which appears in Eq.~(\ref{freq}), has a
particularly simple form
\begin{equation}
\delta = \frac{t^2}{E_F^2}.
\end{equation}
\begin{figure}[b]
\includegraphics[width=0.8\linewidth]{Fig4.eps}
\caption{\label{fig4} (Color online) The electron and hole Landau
levels (in the $K'$ valley)
of two layers are mixed by the interlayer interaction, $t$. For
the energy range corresponding to Fermi rings in zero magnetic field
(see $A_3$ in Fig.~\ref{fig2}),
$E_F$ cuts the Landau levels twice. This is the reason for the
anomalous phase in the quasiclassical limit $B \rightarrow 0$.}
\end{figure}
For small Fermi energies only the bottom branch $E_{c1}(k)$ of the
conduction subband is cut by $E_F$ and only the frequency $F_1$ is
defined. For $E_F$ approaching zero, the parameter $\delta$
diverges. This implies that for energies close to the band bottom
Eq.~(\ref{freq}) can be written as
\begin{equation}
\frac{F_1}{B} = \sqrt{n(n+1)},
\end{equation}
the form found for the extremal electron and hole orbits in
graphite,\cite{Smrcka} which clearly indicates the aperiodicity of
oscillations.
Fermi energies greater than $t$ are rather
unrealistic. Nevertheless, we can consider this hypothetical case in
our theoretical treatment. For $E_F = t$ and $\delta = 1$ we can write
Eq.~(\ref{freq}) as follows
\begin{eqnarray}
\frac{F_1}{B}& = &\frac{n(n+1)}{ n+\frac{1}{2}},\\
\frac{F_1}{B}& = & n+\frac{1}{2}. \nonumber
\end{eqnarray}
The Landau plots calculated for two selected values of $E_F$, $E_F<t$
and $E_F>t$, which cross the dispersion curves, are presented in
Fig.~\ref{fig3}. The Landau plots are the same for $E_F$ and $-E_F$
because inversion symmetry is preserved. One can observe that in the
unbiased bilayer the phases of MOs, corresponding to both frequencies
$F_1$ and $F_2$, approach the phase of massive fermions, $\gamma=1/2$,
for higher quantum numbers of LLs.
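The slow convergence of the oscillation phase towards the massive-fermion
value $\gamma=1/2$ can be checked directly from the limiting forms given
above. A short Python sketch using the band-bottom form
$F_1/B=\sqrt{n(n+1)}$ and the first branch of the $E_F=t$ case:
\begin{verbatim}
import numpy as np

for n in (1, 5, 10, 50, 100):
    gamma_bottom = np.sqrt(n * (n + 1)) - n      # E_F -> 0 limit
    gamma_EFt = n * (n + 1) / (n + 0.5) - n      # E_F = t, first branch
    print(n, gamma_bottom, gamma_EFt)
# both "phases" approach 1/2 only slowly with increasing n
\end{verbatim}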
In BLG, an applied electric field leads to an asymmetry
between the $K$ and $K'$ valleys that gives rise to nontrivial oscillation
phenomena in magnetic fields. To illustrate the anomalous
behavior of the oscillations, we plot in Fig.~\ref{fig4} the field
dependence of the LLs in BLG.
In single-layer graphene the LL fans of electrons and holes start at
the zero-field neutrality point. The neutrality points of the two
independent layers are shifted by $2u$, and the hole LLs of layer 1
cross the electron LLs of layer 2, as shown in Fig.~\ref{fig4}
by thin brown lines.
In BLG the shape of the LL spectrum results from the
hybridization of the LL spectra of layers 1 and 2. Due to the interlayer
interaction, represented by the matrix element $t$, we have four fans
of LLs which start at zero-field energies $E_{v2}(0)$, $E_{v1}(0)$,
$E_{c1}(0)$ and $E_{c2}(0)$.
The hole levels of layer 1 and the electron levels of
layer 2 avoid crossing, and the low-field hole LLs smoothly turn into
the electron LLs as $B$ increases. This is indicated in
Fig.~\ref{fig4} by the change of LL color from red to blue. The LLs
from a fan starting at zero-field energy $E_{c1}(0)$ have minima in
their field dependence and, therefore, can be cut twice by a single
$E_F$. Moreover, the minima are not the same for all levels and,
consequently, not all levels are cut by a single $E_F$.
This is reflected in the quasiclassical approach as the gate-dependent
correction to the MO phase, $\xi$, which is related to the energy
difference $2u$ between two layers. Note that in the region of energies
corresponding to the Fermi rings the expression~(\ref{extra1}) diverges
at $E_{c1}^{\,\,\text{min}}(k)$ and is equal to $1/2$ for $E_F =
E_{c1}^{\,\,\text{max}}(0)$. As many-body effects can play a role
in this low-concentration range, the above one-electron picture is probably
oversimplified.
\begin{figure}[tb]
\includegraphics[width=0.8\linewidth]{Fig5.eps}
\caption{\label{fig5} (Color online) The ``phase''
$\gamma_{1}=F_{1}/B-n$ calculated for the fixed quasiclassical
frequency $F_1=70$~T and various $u$, for the
electron $K$ and $K'$ valleys as a function of the LL index, $n$.}
\end{figure}
\begin{figure}[b]
\includegraphics[width=0.8\linewidth]{Fig6.eps}
\caption{\label{fig6} (Color online) The DOS of the
unbiased (a) and biased (b, c) BLG versus dimensionless
value of the Landau plot, $F_1/B$, with the fixed quasiclassical
frequency $F_1=70$~T. The frequency $F_1$ corresponds to the
situation when $F_1>0, F_2<0$, i.e., only the lowest
conduction/valence energy band is cut by $E_F$.
In (b, c) the blue peaks show DOS calculated for the $K$
valley, whereas the red ones are related to the $K'$ valley.}
\label{Lala}
\end{figure}
\begin{figure}[hb]
\includegraphics[width=0.75\linewidth]{Fig7a.eps}
\caption{\label{fig7}(Color online)
(a) The electron/hole bands of the biased graphene bilayer with the gap
$2u=0.25$~eV at $k=0$. The horizontal
lines denote the Fermi energies which cross the electron (solid lines)
and hole (dashed lines) dispersion curves.
(b) The ``phases'' $\gamma_{1(2)}(E_F)=F_{1(2)}/B-n$
calculated for $E_F$ depicted in (a) plotted as functions of the LL
index, $n$.}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=0.8\linewidth]{Fig7b.eps}
\caption{\label{fig8}(Color online)
The same as in Fig.~\ref{fig7}, only for the lowest electron/hole bands
and $\Delta<|E_F|<u$.}
\end{figure}
We start our discussion with the simplest case,
$E_{c1/v1}(0)<|E_F|<E_{c2/v2}(0)$, when only one conduction/valence
energy band is cut by $E_F$ (see Fig.~\ref{fig2}). The
single quasiclassical frequency $F_1$ corresponds to a single Fermi
area, which is the same for $K$ and $K'$.
According to Eq.~(\ref{extra}), the electron peaks in DOS are shifted by
$\xi$ to the left in the $K$ valley, whereas the peaks in the $K'$
valley are shifted by $\xi$ to the right. The shift magnitude
ranges from zero to $1/2$ depending on the energy difference between two
layers. In Fig.~\ref{fig5} the ``phase'' $\gamma_1$ is plotted as a
function of $n$ for three different cases with $u$ equal to 0, 0.05 and
0.125 eV. The Fermi energies are chosen to keep the same Fermi area
(and the same fixed $F_1$) in all three systems. Only for $u=0$ are the
curves identical for $K$ and $K'$; for $u\neq 0$ the curves are
substantially different.
In Fig.~\ref{fig6} the shifts of peaks in the above three cases are shown
explicitly. There is a single series of oscillations for the unbiased
bilayer, as the valley degeneracy is preserved in a system with the
inversion symmetry.
The gate-tunable valley splitting results in two
series of oscillations which differ for different $u$. Let us
emphasize that all series of oscillations have the same
quasiclassical frequency $F_1$, but the quasiclassical phases
depend on the choice of $u$ and on the valley index.
We complete our discussion with cases when the Fermi energy cuts the
conduction/valence bands twice. The Landau plots of the biased
bilayer with $u=0.125$~eV, which is probably the highest
experimentally accessible value,~\cite{zhang}
calculated for four selected Fermi energies, two in the
conduction band, and two in the valence band, are presented in
Fig.~\ref{fig7}. They correspond to the first and second types of
Fermi contours shown in Fig.~\ref{fig2}.
The situation is more complicated in the region of energies,
$\Delta<|E_F|<u$, for which $E_F$ cuts the lowest subband
$E_{c1/v1}(k)$ twice, which is characteristic for the third type of
the Fermi contours, as shown in Fig.~\ref{fig2}. The bottom of
$E_{c1}(k)$ is at $\approx 0.105$~eV. The parameter $\xi$ is far from
the values expected for the phase of quasiclassical oscillations; it
decreases/increases strongly when $E_F$ approaches the bottom of
$E_{c1}(k)$/$E_{v1}(k)$. For $E_F=u$, $\xi$ becomes close to $-1/2$
for $E_F$ in the conduction band and to $1/2$ for $E_F$ in the valence
band.
\section{\label{cocl}Conclusions}
Using a four-band continuum model, we calculated analytically the
Landau plots in biased and unbiased BLG subject to
external perpendicular magnetic fields.
It turns out that the magneto-oscillations are only asymptotically
periodic, and that in the unbiased bilayers their phase is equal to
the phase of massive fermions. The convergence to the quasiclassical
limit is slow and depends strongly on the value of $E_F$; it
is slower for higher values of $E_F$.
Anomalous behavior of oscillation phases was found in biased bilayers
with broken inversion symmetry. The oscillation frequencies again
tend to the quasiclassically predicted ones, which are the same for $K$
and $K'$, but the quantum approach yields the gate-tunable corrections
to oscillation phases, which differ in sign for $K$ and $K'$. These
valley-dependent phase corrections give rise, instead of a single
quasiclassical series of oscillations, to two series with the same
frequency but shifted in phase.
We also found that for $E_F$ in the region of energies corresponding
to the Fermi rings in the quasiclassical approach, only a limited
number of LLs can be cut by the Fermi energy, and thus only a limited
number of magneto-oscillations can be observed. Moreover, their
quasiclassical phases reach very large values. As many-body effects
can play a role in the corresponding concentration range, the above
one-electron picture is probably oversimplified.
\begin{acknowledgments}
The authors acknowledge the support of the Academy of Sciences of the
Czech Republic project KAN400100652, the Ministry of Education of the
Czech Republic project LC510, and the PHC Barrande project 19535NF and MEB
020928.
\end{acknowledgments}
\section{Introduction}
Spintronics is based on the increasing effort to replace or supplement electronic devices by devices that exploit spin-transport phenomena.
In magnonic devices, in particular, one tries to avoid charge currents altogether, utilizing magnons -- the elementary excitations of a magnet's ground state -- for spin transport
\cite{Chumak14_MagnonTransistor,Klinger14_SpinWaveLogicDevices,Kruglyak10_MagnonicsReview,Chumak15_MagnonSpintronics}.
The great potential of this idea has been demonstrated, for instance, by the magnon transistor \cite{Chumak14_MagnonTransistor}, which forms a building block for magnon-based logic \cite{Klinger14_SpinWaveLogicDevices}.
A further development in this context is the use of antiferromagnets \cite{Lebrun18_LongDistTransportHematite,Khymyn16_TransfromSpincurrentByAFMinsulator} which, for instance, allow one to build a spin-valve structure \cite{Cramer18_SpinValve} -- a multilayer system designed to pass spin waves through the central, antiferromagnetic layer only when the two outer, ferromagnetic layers are magnetized in the same direction.
For an antiparallel magnetization, the magnons are blocked.
A common class of spin-transport experiments in antiferromagnets is to monochromatically pump spin waves from a ferromagnet via ferromagnetic resonance into an antiferromagnet and to detect the signal via the inverse spin-Hall effect in an attached heavy-metal layer \cite{Wang14_AFMSpinTransportYIGintoNiO,Wang15_FMRexcitationOfAFMinsulators,Qiu2016_SpinCurrentProbeForAFMPhaseTransitions}.
Alternatively, one can excite thermal spin waves via the spin-Seebeck effect \cite{Lin16_SSEthroughAFMinsulator,Prakash16_SSEthroughNiO,Cramer18_SEEAcrossAFM}.
Either way, the setup as a whole is necessarily a trilayer system, where two layers are magnetically ordered and the two materials may have different ordering temperatures.
This raises the question of the temperature dependence of the spin transport, especially when the two critical temperatures are quite different and proximity effects at the interface may play a role.
First experiments in such systems exist, including temperature ranges well above the critical temperature of one of the constituents. Surprisingly, even then there seems to be a spin current above the N\'{e}el{} temperature of the antiferromagnet, as demonstrated e.g.\ in \cite{Cramer18_SEEAcrossAFM,Goennenwein_2018_MRatNeelTemp}.
For a deeper understanding of the temperature dependent magnetic behavior of these multilayer systems, it is necessary to study the impact of one layer onto the magnetic behavior of the other, a class of effects that is called magnetic proximity effect \cite{Manna14_ReviewExchangeBiasAndMagneticProximityEffect}.
Magnetic proximity effects have been investigated in bilayers composed of an itinerant ferromagnet coupled to a paramagnet, where magnetic moments are induced in the paramagnet \cite{Zuckermann73_MagneticProximityEffect,Cox79_MagnProximityEffectItinerantFM,Mata82_ModelMagnProximityEffectItinerantFMfilms}, but it is rather ubiquitous for all kinds of heterostructures and also core-shell nanoparticles \cite{Carey93_InterlayerCouplingCoONiO,Borchers93_InterlayerCouplingCoONiO,Lenz07_MagnProximityAFM_FM_Bilayer,Maccherozzi08_MagnProximity_Fe_SemiconductorInterface,Golosovsky09_MagnProximityEffect_FM_AFMCoreShell}.
Typical signatures of proximity effects are a magnetization in a paramagnetic constituent, an enhanced ordering temperature in the material with the lower ordering temperature, an increased coercivity, and also the occurrence of an exchange bias effect.\ \cite{Manna14_ReviewExchangeBiasAndMagneticProximityEffect}
Theoretically, proximity effects in bilayers of ferro- and antiferromagnets have been investigated using mean-field techniques \cite{Jensen05_TheroyMagnProximityEffect_FM_AFM}, Monte Carlo simulations \cite{nowakPRB02}, and multi-scale techniques \cite{szunyoghPRB11}. However, these studies neglect the influence of magnons that might pass the interface of a magnetic bilayer, since magnons can only be treated via spin-dynamics calculations. It is, hence, the purpose of this study to investigate the magnetic proximity effect including magnon spectroscopy, thereby adding to a more complete understanding of the temperature-dependent magnetic behavior of bi- and trilayers close to the interface.
The outline of this work is as follows: in \cref{sec:ASM} we describe our model and the two setups which we treat in the following -- a magnetic trilayer system built of three ferromagnets, where the central layer has a lower Curie temperature, and a corresponding ferromagnet-antiferromagnet-ferromagnet system.
We investigate the temperature-dependence of the spatially resolved order parameters and susceptibility in \cref{subsec:FM-FM-FM,subsec:FM-lAFM-FM}, and the magnon spectra in \cref{subsec:MagnonSpectra}.
We show that each of these properties can probe the proximity effect, especially in the vicinity of the critical temperature of the central layer, and we identify a magnonic contribution to the proximity effect that rests on the different spectra and polarizations of magnons in the different layers.
\section{Model, methodology and geometry} \label{sec:ASM}
We conduct our work within an atomistic spin model, where every magnetic atom at position $\vec{r}_l$, $l = 1,...,N$, is described by a classical magnetic moment $\vec{\mu}_l = \mu_s\vec{S}_l$ of magnitude $|\vec{\mu}_l | = \mu_s$.
Assuming a model of Heisenberg type, the Hamiltonian of the system reads
\begin{align}
\operatorname{H} = -\frac{1}{2} \sum_{ \mathclap{ \substack{j,k = 1 \\ k\in \mathrm{NN}(j)} } }^{N} J_{jk} \vec{S}_j\cdot \vec{S}_k - d_z\sum_{j=1}^{N} S_{j,z}^2
\end{align}
with the Heisenberg exchange interaction $J_{jk}$, restricted to nearest neighbors (NN), and a uniaxial anisotropy, parameterized by the anisotropy constant $d_z > 0$.
The equation of motion is the Landau-Lifshitz-Gilbert equation \cite{Landau35_LL_equation} with Gilbert damping $\alpha$ \cite{Gilbert55_Gilbert_damp,Gilbert04_Gilbert_damp_IEEE},
\begin{align}
\partial_t \vec{S}_l & = -\frac{\gamma}{\mu_s(1+\alpha^2)}\left[ \vec{S}_l\times \vec{H}_l + \alpha\vec{S}_l \times (\vec{S}_l \times \vec{H}_l) \right]
\end{align}
with gyromagnetic ratio $\gamma > 0$ and the effective field
\begin{align}
\vec{H}_l & = -\frac{\partial \operatorname{H}}{\partial \vec{S}_l} + \vec{\xi}_l .
\end{align}
The coupling to the heat bath at temperature $k_\mathrm{B}T$ \cite{Brown63_ThermalFluctuactionsMagnParticles} leads to thermal fluctuations in form of a Gaussian white noise $\vec{\xi}$ satisfying $\mean{ \vec{\xi}_l }=0$ and
\begin{align}
\mean{ \xi_{l,\beta}(t) \xi_{k,\zeta}(t') } & = \frac{2 \mu_s \alpha k_\mathrm{B}T} {\gamma}\delta_{lk}\delta_{\beta\zeta}\delta(t - t')
\end{align}
with $\beta,\zeta \in \{ x,y,z \} $. These stochastic differential equations are solved numerically using the stochastic Heun algorithm. \cite{Nowak07_SpinModels}
The simulations are implemented in a highly efficient code developed in \textsc{C/C++} and \textsc{CUDA}, running on GPUs. A high degree of optimization is necessary because of the rather large system size ($N \approx 10^5$ spins) in combination with very long equilibration times close to the critical temperature.
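To make the integration scheme concrete, the following Python fragment sketches a single stochastic Heun (predictor--corrector) step for one spin.
It is meant only to illustrate the structure of the algorithm and the scaling of the thermal noise; the parameter values are illustrative, the effective field is reduced to the anisotropy term (the exchange contribution $\sum_k J_{jk}\vec{S}_k$ is omitted for brevity), and the fragment is not the optimized \textsc{CUDA} code used for the results below.
\begin{verbatim}
import numpy as np

gamma = 1.76e11        # gyromagnetic ratio in 1/(s T)
mu_s  = 9.274e-24      # magnetic moment in J/T (Bohr magneton)
alpha = 0.5            # Gilbert damping
kBT   = 1.6e-22        # thermal energy in J (about 1 meV)
d_z   = 1.6e-23        # anisotropy constant in J (about 0.1 meV)
dt    = 1.0e-16        # time step in s

def h_eff(S):
    # deterministic effective field -dH/dS (anisotropy only, in Joule,
    # since the spins are dimensionless unit vectors)
    return np.array([0.0, 0.0, 2.0 * d_z * S[2]])

def llg_rhs(S, H):
    pre = -gamma / (mu_s * (1.0 + alpha ** 2))
    SxH = np.cross(S, H)
    return pre * (SxH + alpha * np.cross(S, SxH))

def heun_step(S, rng):
    # one Gaussian noise realization per step, used in both stages
    xi = rng.normal(0.0, np.sqrt(2.0 * mu_s * alpha * kBT / (gamma * dt)), 3)
    f0 = llg_rhs(S, h_eff(S) + xi)
    S_pred = S + dt * f0                    # Euler predictor
    f1 = llg_rhs(S_pred, h_eff(S_pred) + xi)
    S_new = S + 0.5 * dt * (f0 + f1)        # trapezoidal corrector
    return S_new / np.linalg.norm(S_new)    # LLG conserves |S|; renormalize

rng = np.random.default_rng(0)
S = np.array([0.0, 0.0, 1.0])
for _ in range(1000):
    S = heun_step(S, rng)
\end{verbatim}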
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{graphics/SketchTriLayer.png}
\caption{Geometry of the investigated trilayer: in the central layer a lower exchange constant $J_\mathrm{B}$ is used, leading here to a much lower critical temperature $T_\mathrm{c}$ than in the outer layers.}
\label{fig:ExchangeTrilayer}
\end{figure}
The system of interest is a trilayer stacked along the $z$ direction -- the three layers denoted A, B and C -- composed of spins arranged on a simple cubic lattice with lattice constant $a$, see \cref{fig:ExchangeTrilayer}.
The Heisenberg coupling constant varies along the system by a factor of $10$: within each layer it takes isotropic values $J_{jk} = J_\mathrm{A},\pm J_\mathrm{B},J_\mathrm{C}$, where $10 J_\mathrm{B} = J_\mathrm{A} = J_\mathrm{C}$. This choice results in very different critical temperatures.
At the interfaces we choose a coupling of intermediate strength $J_{jk} = \nicefrac{J_\mathrm{A}}{2}$ for lattice sites $j,k$ at the interfaces of layers A and B as well as layers B and C.
There are two different setups: a purely ferromagnet trilayer (termed FM-FM-FM) with $J_{\mathrm{B}} > 0$, and a layered antiferromagnet sandwiched between two ferromagnets (denoted FM-lAFM-FM).
In the latter case, the exchange is ferromagnetic, $J_{lk} = J_\mathrm{B} > 0$, in the $x$-$y$ plane and antiferromagnetic along the $z$ direction, $J_{lk} = -J_\mathrm{B} < 0$.
The use of the layered antiferromagnet ensures that the interfaces are ideal in either case (parallel alignment of the spins in the ground state), corresponding to completely uncompensated interfaces.\\
As a test case, we choose the following values for our model parameters: $J_\mathrm{A} = J_\mathrm{C} = \SI{10}{\milli\electronvolt}$, $J_\mathrm{B} = \SI{1}{\milli\electronvolt}$ and $d_z = \SI{0.1}{\milli\electronvolt}$.
Furthermore, we set $\gamma = \gamma_0$, the free electron's gyromagnetic ratio, and $\mu_s = \mu_\mathrm{B}$, Bohr's magneton.
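For clarity, the assignment of the nearest-neighbor exchange constants can be summarized in a few lines of Python.
The sketch below assumes, for illustration, that the monolayers with $z<32a$ belong to layer A, those with $32a\leq z<64a$ to layer B, and the rest to layer C; the precise indexing of the interface monolayers is a choice made here for the example only.
\begin{verbatim}
J_A, J_B = 10.0, 1.0     # meV

def layer(z, a=1.0):
    if z < 32 * a:
        return "A"
    if z < 64 * a:
        return "B"
    return "C"

def coupling(z_j, z_k, layered_afm=False, a=1.0):
    """Exchange constant J_jk of a nearest-neighbor bond between z_j and z_k."""
    lj, lk = layer(z_j, a), layer(z_k, a)
    if lj != lk:
        return 0.5 * J_A                  # interface bonds
    if lj in ("A", "C"):
        return J_A                        # outer ferromagnetic layers
    if layered_afm and z_j != z_k:
        return -J_B                       # AFM coupling along z in the lAFM
    return J_B                            # FM coupling inside layer B
\end{verbatim}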
\section{Results}
We study the magnetic trilayers outlined above with respect to the magnetic proximity effect in terms of three aspects: their temperature-dependent order parameter profiles, their temperature-dependent susceptibility profiles, and their magnon spectra.
\subsection{Magnetization of a Ferromagnetic Trilayer} \label{subsec:FM-FM-FM}
In a first step, we consider the equilibrium order parameter profile along the $z$ direction.
For a ferromagnet this is the magnetization, which we resolve monolayer-wise along $z$ direction,
\ba{
\mean{ m_z }(z) = \frac{1}{N_{xy}}\sum_{r_{j,z} = z} \langle S_{j,z} \rangle,
}
with $N_{xy}$ being the number of spins per monolayer and $\mean{\ldots}$ denoting the thermal average, which we calculate in our simulations as a time average.
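In practice, this monolayer-resolved magnetization is accumulated from the stored spin configurations.
A schematic Python version (not the production code) reads:
\begin{verbatim}
import numpy as np

def layer_magnetization(S, z_index):
    """Monolayer-resolved magnetization m_z(z) of one spin configuration.

    S       : array of shape (N, 3) with the spin components
    z_index : integer monolayer index of each spin, shape (N,)
    """
    n_layers = z_index.max() + 1
    mz = np.zeros(n_layers)
    np.add.at(mz, z_index, S[:, 2])           # sum S_z per monolayer
    counts = np.bincount(z_index, minlength=n_layers)
    return mz / counts                         # divide by N_xy
\end{verbatim}
The thermal average $\mean{m_z}(z)$ is then obtained by averaging this quantity over the recorded time steps.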
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{graphics/FMfmFM-bluegreen_mod.pdf}
\caption{%
Magnetization profiles along $z$ axis in a FM-FM-FM trilayer with coupling ratio $\nicefrac{J_\mathrm{B}}{J_\mathrm{A}} = \num{0.1}$ for different temperatures.
To illustrate the influence of the damping constants we present in the left part our results for $\alpha = \num{0.005}$ (blue) and in the right part only data for $\alpha = \num{0.5}$ (green).
The bulk value of the equilibrium magnetization with coupling constant $J_\mathrm{B}$ is shown as black dashed lines for comparison.
The corresponding critical temperature is $k_\mathrm{B}T_\mathrm{c} \approx \SI{1.5}{meV}$.
}
\label{fig:FM-mz}
\end{figure}
\Cref{fig:FM-mz} depicts this magnetization for the FM-FM-FM system. Vertical lines indicate the interfaces at $z = 32a$ and $z = 64a$, separating the central layer B with low exchange constant $J_\mathrm{B}$ from the outer layers A and C with a coupling constant that is ten times higher. We tested two very different values of the damping constant, $\alpha = 0.005$ and $\alpha = 0.5$, corresponding to the blue and green lines in the figure, respectively.
There is only a small difference visible close to the transition temperature $k_\mathrm{B}T_\mathrm{c} \approx 1.5J_\mathrm{B}$, where the curve for the smaller damping is not fully converged to its equilibrium profile. We conclude that -- apart from this small deviation -- our results are equilibrium properties that do not depend on $\alpha$.
The outer layers show a rather stable magnetization with respect to an increasing temperature due to the higher exchange constant, while the central layer undergoes a phase transition where the magnetization drops to zero.
However, there is a significant difference from a bulk material with exchange constant $J_\mathrm{B}$ (indicated as black dashed lines): while at low temperatures the magnetization values of the bulk and of the central layer of the trilayer are in good agreement, the magnetization of the central layer remains significantly higher in the vicinity of the critical temperature. In particular, there is a residual magnetization in the central layer directly at the critical temperature, a first signature of a magnetic proximity effect.
Analyzing the magnetization profiles further we find an enhanced difference from the bulk value closer to the interfaces.
The magnetization decays exponentially from the high value of the outer layer to the low value in the middle of the central layer. The corresponding temperature-dependent decay constant, which quantifies the penetration depth of the magnetic order, is shown in \cref{fig:PenetrationDepth}.
These data, with a peak at the critical temperature, clearly demonstrate the occurrence of critical behavior. From these observations we conclude that the magnetic order of the outer layers with the higher coupling constant penetrates into the central layer -- another signature of the magnetic proximity effect.
The length scale of this proximity effect is the correlation length of the system -- a quantity which in our case is only of the order of a few lattice constants though it should diverge at the critical temperature. Furthermore, its value might be much larger in materials with a larger range of the exchange interaction, beyond nearest neighbors.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{graphics/PenetrationDepth.pdf}
\caption{%
Temperature dependent penetration depth of the magnetic order, averaged from fitting exponential decays from the left and the right interface. Error bars are smaller than the symbol sizes.
}
\label{fig:PenetrationDepth}
\end{figure}
This proximity effect can also be observed in the monolayer-dependent magnetic susceptibility,
\begin{align}
\chi^\text{FM}_{zz}(z) & = \frac{N}{k_\mathrm{B}T} \Big( \langle m_z(z) M_z \rangle - \langle m_z(z) \rangle \langle M_z \rangle \Big) \label{eq:def_layerDepSuscep}
\end{align}
with the layer magnetization $m_z(z)$ and the total magnetization $M_z$ of the trilayer.
Note that this statistical definition equals the response-function definition $\chi^\text{FM}_{zz}(z) = \nicefrac{\del m_z(z)}{\del B_z}$, for a homogeneous magnetic field $\vec{B} = B_z\vec{e}_z$.
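Given time series of the layer magnetizations and of the total magnetization, Eq.~(\ref{eq:def_layerDepSuscep}) amounts to a simple covariance estimate.
A minimal Python sketch:
\begin{verbatim}
import numpy as np

def layer_susceptibility(mz_t, Mz_t, N, kBT):
    """Monolayer-resolved susceptibility chi_zz(z).

    mz_t : array (n_samples, n_layers), layer magnetization per time step
    Mz_t : array (n_samples,), total magnetization per time step
    N    : total number of spins; kBT : thermal energy
    """
    cross = (mz_t * Mz_t[:, None]).mean(axis=0)
    return N / kBT * (cross - mz_t.mean(axis=0) * Mz_t.mean())
\end{verbatim}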
As shown in \cref{fig:fmfmfmthinequilibriumlayeredsus}, the critical behavior of the susceptibility in the central layer is suppressed, especially for those monolayers close to the interface (blue line).
In the middle of the central layer, there remains a maximum of the susceptibility around the expected critical temperature of the central layer, a remnant of the critical behavior of the corresponding bulk system.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{graphics/FMsus.pdf}
\caption{%
Temperature dependence of the monolayer-wise susceptibility of the central layer of the FM-FM-FM exchange trilayer.
For this calculation a thinner central layer of eleven atomic monolayers is used.
The first layer (blue line) is directly at the interface, and the highest number (yellow line) denotes the monolayer in the middle of the central layer.
}
\label{fig:fmfmfmthinequilibriumlayeredsus}
\end{figure}
\subsection{Comparison to a FM-lAFM-FM Trilayer} \label{subsec:FM-lAFM-FM}
In the following, our previous results for a purely ferromagnetic trilayer are compared to the FM-lAFM-FM setup, with an antiferromagnet in the central layer. In the latter case, the order parameter of the central layer B is the N\'{e}el{} vector $\mean{n_z} = \frac{1}{2} \left(\langle m_z^\uparrow \rangle - \langle m_z^\downarrow \rangle \right)$, where $m_z^{\uparrow\downarrow}$ are the corresponding sublattice magnetizations. In the outer layers A and C, the normal magnetization remains the order parameter as before.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{graphics/FM-AFM-bluegreen_mod.pdf}
\caption{%
Order parameter profiles for a FM-FM-FM trilayer (black solid line) compared to a FM-lAFM-FM trilayer (green crosses) with coupling ratio $\nicefrac{J_\mathrm{B}}{J_\mathrm{A}} = \num{0.1}$ for different temperatures.
}
\label{fig:FM-AFM}
\end{figure}
In \cref{fig:FM-AFM} the spatial- and temperature-dependent order parameter profiles of the two trilayers are compared (symbols for the AFM and lines for the FM).
Interestingly, they do not show any significant difference.
This is due to the fact that equilibrium properties result solely from the Hamiltonian of the system.
For the case of a two-sublattice AFM (with sublattices $\vec{S}^{\mathrm{A},\mathrm{B}}$) with only nearest-neighbor interaction there exists a transformation $J \mapsto -J$, $\vec{S}^\mathrm{B} \mapsto -\vec{S}^\mathrm{B}$ which maps the system to the corresponding ferromagnet.
The Hamiltonian is symmetric with respect to this transformation and, hence, the profiles are equal in equilibrium.
Note, however, that the argument above is solely classical and quantum corrections may lead to additional effects where equilibrium properties of FMs and AFMs deviate.
However, even in the central layer -- including its proximity effect -- we observe the same behavior for the N\'{e}el{} vector of the central AFM in the FM-lAFM-FM trilayer as for the magnetization of the central FM in the FM-FM-FM trilayer. This is quite surprising, since it means that a FM can even generate antiferromagnetic order via a proximity effect.
Looking at the exchange fields, however, it becomes clear that -- because of the fully uncompensated interface -- the nearest-neighbor exchange interaction of the FM acts on the layered AFM as a field that induces layer-wise the same order as in a FM.
Nevertheless, as we will show in the following, the magnon spectra in the two investigated trilayers will be affected differently by the proximity effect.
\subsection{Magnon Spectra} \label{subsec:MagnonSpectra}
In a further step, the magnonic spectra are calculated by a Fourier transform of the $N$ spins in time,
\begin{equation}
\hat{S}_l(\omega) = \frac{1}{\sqrt{2\pi}} \int \left[ S_{l,x}(t) -i S_{l,y}(t) \right] e^{-i\omega t} \,\dd t,
\end{equation}
where the spin-wave amplitude for our easy-axis magnets is given by the $x$- and $y$ component of the spins.
For our numerical study, this property is calculated by a fast Fourier transform on discrete instances in time.
The frequency- and temperature dependent amplitude $\mathcal{I}(\omega,T)$ is then calculated as an average over all lattice sites
\begin{equation}
\mathcal{I}(\omega,T) = \frac{1}{N} \sum_l \abs{ \hat{S}_l(\omega) }^2.
\end{equation}
This quantity is proportional to the magnon number $n(\omega,T) = \mathrm{DOS}(\omega,T) \cdot f(\omega,T)$, where $\mathrm{DOS}$ is the density of states per volume and $f$ the magnon distribution function, in our classical spin model given by the Rayleigh-Jeans distribution $f(\omega,T) = \nicefrac{k_\mathrm{B}T}{\hbar \omega}$.
Consequently, the quantity $\mathcal{I}(\omega,T)$ corresponds to a measurement of the magnon spectra, for instance by Brillouin light scattering \cite{Hillebrands00_ReviewBLSMultilayers}.
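Numerically, the spectra are obtained from the stored transverse spin components by a discrete (fast) Fourier transform.
A schematic Python version of this post-processing step:
\begin{verbatim}
import numpy as np

def magnon_spectrum(Sx_t, Sy_t, dt):
    """Site-averaged spin-wave intensity I(omega).

    Sx_t, Sy_t : arrays (n_steps, N) with the transverse spin components
    dt         : sampling interval of the stored configurations
    """
    s_minus = Sx_t - 1j * Sy_t                       # S_x - i S_y
    s_hat = np.fft.fft(s_minus, axis=0) * dt / np.sqrt(2.0 * np.pi)
    omega = 2.0 * np.pi * np.fft.fftfreq(Sx_t.shape[0], d=dt)
    intensity = np.mean(np.abs(s_hat) ** 2, axis=1)  # average over sites
    return np.fft.fftshift(omega), np.fft.fftshift(intensity)
\end{verbatim}
Both positive and negative frequencies are returned, which is essential for resolving the two oppositely polarized magnon branches of the antiferromagnet discussed below.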
\begin{figure*}
\centering
\includegraphics[width=0.48\linewidth]{graphics/FMfmFM-Spectra-withTheo.pdf}
\includegraphics[width=0.48\linewidth]{graphics/FMafmFM-Spectra.pdf}
\caption{%
Magnon spectra of the central layer (solid black lines) of a FM-FM-FM trilayer (left) and of a FM-lAFM-FM trilayer (right) with coupling ratio $\nicefrac{J_\mathrm{B}}{J_\mathrm{A}} = 0.1$ at different temperatures, compared to the according bulk spectra (colored dashed lines).
For the lowest temperature (top graphs), a theoretical curve calculated in the limit of low temperatures is added.
}
\label{fig:MagnonSpectra}
\end{figure*}
\Cref{fig:MagnonSpectra}, left panel, shows the magnon spectra for the central layer of the FM-FM-FM trilayer (black solid lines) compared to a bulk ferromagnet (colored dashed lines) for increasing temperatures (top to bottom). Correspondingly, the right panel depicts the FM-lAFM-FM trilayer case in the same coloring.
Despite the fact that -- as shown before -- the two different order parameters follow exactly the same behavior, the spectra differ. The ferromagnet has only a single magnon branch with positive frequencies, whereas the antiferromagnet has two of opposite sign. Furthermore, the dispersion relation and, hence, the density of states are different \cite{Cramer18_SEEAcrossAFM}.
To reveal distinctive features we compare the trilayer spectra to the corresponding bulk spectra.
The figure also depicts the numerical bulk spectra (calculated by simulations of a pure bulk system) and, for low temperatures, the theoretical bulk spectra calculated from the dispersion relations of linear spin-wave theory.
These dispersion relations for a three dimensional simple cubic ferromagnet or layered antiferromagnet respectively read \cite{Cramer18_SEEAcrossAFM,eriksson_bergman_bergqvist_hellsvik_2017}
\begin{align}
& \frac{\mu_s}{\gamma} \omega_\text{FM}(\vec{k}) = 2d_z + 2J \sum_{ \mathclap{ \Theta\in \{x,y,z\} } }\left[1 - \cos(ak_\Theta)\right]
\end{align}
and
\begin{align}
& \frac{\mu_s}{\gamma} \omega_\text{lAFM}(\vec{k}) = \nonumber \\
& \pm \sqrt{ \Big[ 6|J| + 2d_z - 2|J| \smash{ \sum_{ \mathclap{ \Theta\in\{x,y\} } } } \cos(ak_\Theta) \Big]^2 - \Big[ 2|J|\cos(a k_z) \Big]^2}.
\end{align}
For the layered AFM, one has to include contributions from the antiferromagnetic coupling between the layers (along the $z$ direction) and ferromagnetic coupling within the layers ($x$-$y$ direction).
From these dispersion relations the density of states follows by integration; multiplied with the Rayleigh-Jeans distribution it yields the theoretical curves in \Cref{fig:MagnonSpectra}.
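For orientation, the band edges quoted below can be estimated by evaluating these dispersion relations on a grid of the cubic Brillouin zone with the parameters of \cref{sec:ASM}; the following sketch reproduces the quoted values to within a few percent (the exact numbers depend on the constants used and on whether the small anisotropy is included).
\begin{verbatim}
import numpy as np

meV   = 1.602176634e-22          # J
gamma = 1.76085963e11            # 1/(s T)
mu_s  = 9.2740100783e-24         # J/T
J, dz = 1.0 * meV, 0.1 * meV     # central-layer parameters

k = np.linspace(-np.pi, np.pi, 101)              # a*k along each axis
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")

w_fm = (gamma / mu_s) * (2 * dz + 2 * J * (3 - np.cos(kx) - np.cos(ky)
                                             - np.cos(kz)))
bracket = 6 * J + 2 * dz - 2 * J * (np.cos(kx) + np.cos(ky))
w_lafm = (gamma / mu_s) * np.sqrt(bracket ** 2 - (2 * J * np.cos(kz)) ** 2)

print("FM   band edge: %.1f /ps" % (w_fm.max() * 1e-12))
print("lAFM band edge: %.1f /ps" % (w_lafm.max() * 1e-12))
\end{verbatim}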
Comparing trilayer systems and bulk we find distinct features at high and low frequencies.
The high-frequency features appear around the maximal frequencies of the spectra of the central layer. These maximal frequencies can be read off from the dispersion relations; they are $\omega_\text{max}^\text{FM} \approx \SI{36}{\per\ps}$ for the FM and $\omega_\text{max}^\text{lAFM} \approx \SI{31}{\per\ps}$ for the lAFM.
Remarkably, there are occupied states above this upper band edge.
These are most likely magnons from the outer FM layers, which have a ten times larger frequency range due to the higher coupling constant.
These magnons can penetrate the central layer in the form of evanescent waves. Consequently,
even within the allowed frequency regime of the central layer, there are more high-frequency states occupied in the central layer than in the according bulk magnet. In the spectrum of the lAFM this manifests even as a peak at $\omega \approx \SI{28}{\per\pico\s}$.
In the low-frequency regime, there are significant deviations from a pure bulk spectrum. Not only is the amplitude much lower in the central layer, but the position of the first maximum also appears to be located at slightly higher frequencies. For the ferromagnetic trilayer we conclude that the low-frequency magnons can easily leave the central layer.
Since ferromagnetic magnons reduce the overall magnetization, the absence of magnons leads to the observation of an increased magnetization as compared to the bulk value.
This is perfectly in accordance with the observation from the order parameter curves \cref{subsec:FM-FM-FM}.
For the lAFM the resulting picture is more complicated, since only magnons with positive frequency can propagate into the outer FM layers, affecting the spectra asymmetrically even though magnons with negative frequencies can still migrate into the FM as evanescent waves. Indeed, we find a slight asymmetry with respect to positive and negative frequencies.
However, this asymmetry is not due to this effect alone, since, because of the odd number of monolayers within the lAFM, one of the sublattices is favoured and therefore also one of the branches.
A closer look also reveals further features: the lAFM has a temperature-dependent band gap.
The central layer of the trilayer exhibits the same effect, but close to the critical temperature, e.g.\ for $k_\mathrm{B}T = \SI{1.4}{\milli\eV}$, the central layer still has a visible band gap, whereas in a bulk it has essentially vanished.
Thus it seems that the outer FM layers effectively cool the central lAFM, stabilizing the magnetic order and thereby weakening the effect of the vanishing band gap at the critical temperature.
\section{Conclusion}
We investigate and compare magnetic proximity effects in a FM-FM-FM and a FM-lAFM-FM exchange trilayer numerically using three different approaches.
For spatially resolved and temperature-dependent order parameter profiles we show that magnetic order can be induced from the outer layers with higher critical temperature into the central layer with lower critical temperature. This is even true for a central antiferromagnetic layer and the order parameter profiles are the same for both types of trilayers.
In addition, we studied the susceptibility profiles, finding a suppressed critical behavior in the vicinity of the interface as a further signature of the magnetic proximity effect.
Most interestingly, magnon spectroscopy uncovers additional features, which could be summarized as magnonic proximity effects: in the central layer there is a magnon occupation above the allowed frequency range and an additional peak close to the upper band edge of the AFM can be observed.
These effects are due to high-frequency spin waves from the outer layers with higher exchange coupling, which penetrate the central layer as evanescent modes.
Nevertheless, the overall magnon number is lower -- a cooling effect due to the influence of the outer layers -- and the temperature dependence of the frequency gap is weakened.
Most importantly, the magnon spectrum of the central AFM becomes asymmetric since in the outer ferromagnetic layers only one polarization is allowed, an effect that was already exploited in a magnonic spin valve \cite{Cramer18_SpinValve}.
Our findings thus contribute to the understanding of magnetic equilibrium and spin-transport phenomena in magnetic bi- and trilayers, especially at higher temperatures, approaching the critical temperature of one of the layers \cite{Cramer18_SEEAcrossAFM,Goennenwein_2018_MRatNeelTemp}.
\section{Introduction}
The observation of the cutoff in the spectrum of Ultra-High Energy
Cosmic Rays (UHECRs) \citep{Abbasi:2007sv,Abraham:2008ru} as predicted
by \citet{Greisen:1966jv,Zatsepin:1966jv} provides
compelling evidence for the shortening of the UHECR propagation length at
high energies. The highest energy events then must have come from
relatively close sources (within $250$ Mpc). At these length scales
the matter in the Universe is distributed inhomogeneously, being
organized into clusters and superclusters. One should, therefore, expect
the flux of highest-energy cosmic rays to be anisotropic.
In astrophysical scenarios, it is natural to assume that the number of
sources within $250$~Mpc is large, and that these sources trace the
distribution of matter. Under these assumptions, the anisotropy at
Earth depends only on the nature and size of UHECR
deflections. Measurement of the anisotropy, therefore, gives direct
experimental access to parameters that determine the deflections,
notably to the UHECR charge composition and cosmic magnetic fields.
Several investigations of anisotropy in arrival directions of UHECRs
have been previously undertaken. At small angular scales,
correlations with different classes of putative sources were claimed
(e.g. \citealt{Gorbunov:2004bs,Abbasi:2005qy,Cronin:2007zz,Abraham:2007si}).
At larger angular scales and energies below 10 EeV possible anisotropy
towards the Galactic center was reported in
\citet{Hayashida:1998qb,Hayashida:1999ab,Bellido:2000tr}, but not
supported by more recent studies \citep{Santos:2007na}. At higher
energies, \citet{Stanev:1995my} found evidence against an isotropic
flux above 40 EeV through correlations with the supergalactic plane,
but this was not confirmed by other authors
\citep{Hayashida:1996bc,Kewley:1996zt, Bird:1998nu}. Finally, using
the Pierre Auger Observatory (PAO) data, \citet{Kashti:2008bw} have
found correlations between UHECR arrival directions and the
large-scale structure of the Universe which are incompatible with an
isotropic flux (see, however, \citealt{Koers:2008ba}).
In this paper, we analyze the data accumulated by the HiRes experiment
for anisotropy associated with the large-scale structure of the
Universe. The HiRes experiment has been described previously
\citep{HiResStatus1999,Boyer:2002zz,Hanlon:2008}. It studied
ultrahigh energy cosmic rays from $10^{17.2}$ eV to $10^{20.2}$ eV
using the fluorescence technique. HiRes operated two fluorescence
detectors located atop desert mountains in west-central Utah. The
data set used in this study consists of events observed by both
detectors, analyzed jointly in what is commonly called ``stereo
mode''. In this mode the angular resolution in cosmic rays' pointing
directions is about $0.8$ degrees, and the energy resolution is about
10\%. The HiRes experiment operated in stereo mode between December,
1999, and April, 2006. At the highest energies HiRes has the largest
data set in the Northern hemisphere. The large number of events, good
angular resolution (better than the bending angles expected from
Galactic and extragalactic magnetic fields) and the wide energy range
covered make the HiRes data particularly suitable for anisotropy
studies. The exact data set used in this study was described
previously in \citet{Abbasi:2008md}.
We consider here a generic model that assumes many sources within
$250$~Mpc tracing the distribution of matter, which we refer to as the
``matter tracer'' model. We also assume that deflections of UHECR do
not exceed the angular size of the nearby structures, that is
10-20$^\circ$. In this regime, both regular and random deflections in
magnetic fields can be modeled with a one-parameter distribution, for
which we take a Gaussian distribution centered at zero with width
$\theta_{\rm s}$. This width is treated as a free parameter, whose
value we aim to constrain from the data. Constraints on $\theta_{\rm
s}$ may then be used to obtain information on the strength of
Galactic and extragalactic magnetic fields. In keeping with our
assumption of small deflections, we assume a proton composition in
this study, which is consistent with the $X_{\rm max}$ analysis based
on the same dataset (for confirmation see \citealt{Abbasi:2009nf}).
The HiRes data is compared to model predictions using the ``flux
sampling'' test put forward by \citet{Koers:2008ba}. This test has
good discrimination power at small statistics and is insensitive to
the details of deflections. The comparison is performed at three
different threshold energies that have been used in previous studies:
10 EeV, 40 EeV, and 57 EeV
\citep{Hayashida:1996bc,Abbasi:2005qy,Cronin:2007zz}. An {\em a
priori} significance of 5\%, corresponding to a confidence level
(CL) of 95\%, is chosen for this work.
The paper is organized as follows. In section~\ref{section:modeling}
we discuss the modeling of UHECR arrival
directions. Section~\ref{section:data} concerns the HiRes data used in
the analysis, while section~\ref{section:fluxsampling} describes the
flux sampling method. We present our results in
section~\ref{sec:results} and conclude in
section~\ref{sec:conclusions}.
\section{Modeling of UHECR arrival directions}
\label{section:modeling}
\emph{Galaxy catalog ---} The distribution of matter in the local
Universe is modeled with the 2 Micron All-Sky Redshift Survey (2MRS;
\citealt{2MRS}) galaxy catalog, using galaxies as samplers of the
underlying matter density field.\footnote{This sample was kindly
provided by John Huchra.} The 2MRS is a flux-limited sample of
galaxies, that is, the sample containing all galaxies with
observed magnitude in the $K_s$ band $m \leq 11.25$. It contains
spectroscopically measured redshifts for all but a few galaxies. A
number of cuts have been applied to the galaxy sample. First, the
Galactic plane, where the sample is incomplete, has been excluded from
the sample by removing all galaxies with $|b|<10\ensuremath{^\circ}$. Second,
objects with $D<5$ Mpc are removed because such objects should be
treated on an individual basis.\footnote{This corresponds to the
\emph{ad hoc} assumption that there are no UHECR sources within 5
Mpc. Different analyses are more appropriate to test this
possibility.} Finally, the catalog is cut at 250~Mpc because the
sample becomes too sparse. The resulting sample provides an accurate
statistical description at smearing angles $\theta_s>2^\circ$. The
flux from sources beyond 250~Mpc is taken to be isotropic. A total of
15508 galaxies remain in the HiRes field of view after the cuts. To
compensate for observational selection effects in the (flux-limited)
2MRS catalog, weights $ w^{\rm cat}_i$ are assigned to the galaxies
with the sliding-box method as described in \citet{Koers:2009pd}.
\emph{Energy loss ---} UHECR fluxes are affected by energy loss due to
redshift and interactions with the Cosmic Microwave Background (CMB).
To account for the resulting flux suppression, the integral flux,
$\varphi_i$, from a single source is expressed as follows:
\begin{equation}
\label{eq:ptflux}
\varphi_i (E, D_i) = \frac{J (E) f (E, D_i)}{4 \pi D_i^2} \, ,
\end{equation}
where $E$ is the threshold energy, $D_i$ is the source distance, $J$
stands for the integral injection spectrum at the source, and $f$
represents the flux fraction that remains after interactions and
redshift. We take an injection spectrum $J(E) \propto E^{-1.2}$
extending to very high energies. The function $f$ is determined using
a numerical propagation code based on the continuous loss
approximation that is described in \citet{Koers:2008hv,Koers:2008ba}.
Energy loss due to interactions with the extragalactic background
light is neglected. In Figure~\ref{fig:ffunc}, top panel, the
fraction $f$ is shown as a function of distance for the different
energies considered in this work.
The strength of the isotropic flux component that is added to account
for sources beyond 250 Mpc also depends on UHECR energy loss. Using
the computer code described in the previous paragraph, we estimate
the fraction $g$ of total flux contributed by sources within 250 Mpc
to be 0.4, 0.7, and 1.0 for threshold energies $E=10$ EeV, 40 EeV, and
57 EeV, respectively (see Figure 1, bottom panel).
\begin{figure}
\includegraphics[angle=-90, width=8cm]{fig1a}
\includegraphics[angle=-90, width=8cm]{fig1b}
\caption{\label{fig:ffunc} {\em Top panel:} Fraction $f$ of integral
CR flux that survives after interactions with the CMB and
cosmological redshift as a function of distance $D$ for threshold
energies 10 EeV (solid line), 40 EeV (dashed), and 57 EeV (dotted).
{\em Bottom panel:} Fraction $g$ of total flux that is produced by
sources within $250$~Mpc, as a function of energy $E$. }
\end{figure}
\emph{Deflections ---} UHECR protons (as well as nuclei) are deflected
by Galactic and intergalactic magnetic fields. These deflections are
taken into account by an angular smearing procedure, which replaces
the point-source flux, $\varphi$, by a flux density distribution:
\begin{equation}
\varphi_i \to \varphi_i \, w^s (\theta_i) \, ,
\end{equation}
where $w^s (\theta_i)$ represents the probability density that an
UHECR is deflected by $\theta_i$, the angle between galaxy $i$ and the
line of sight. This procedure also accounts for the detector's
angular resolution and prevents unphysical fluctuations due to the use
of a catalog of point sources. In the absence of detailed knowledge
on the structure of Galactic and extragalactic magnetic fields, we
adopt a simple Gaussian probability density distribution with
characteristic angle, $\theta_{\rm s}$. This angle is treated as a free
model parameter. The Gaussian distribution is a fair approximation
when the deflections are small. For large deflections, details on the
structure of the Galactic and extragalactic magnetic fields become
important. Accounting for these details goes beyond the scope of the
present study.
\emph{Exposure ---} The HiRes exposure is modeled using our Monte
Carlo simulation of the experiment \citep{Abbasi:2006mb,
Bergman:2006vt}. The aim of this simulation was to create a set of
Monte Carlo events that would be in all essential respects identical to the
actual data. In making the simulation we incorporated the properties of
cosmic-ray air showers as measured by previous experiments
\citep{Bird:1993yi, AbuZayyad:2000zz, AbuZayyad:2000ay,
Abbasi:2004nz}. We used cosmic ray showers generated by the Corsika
and QGSJet programs \citep{Heck:1998vt, Kalmykov:1997te} and simulated
the generation of fluorescence light (see references in
\citealt{Abbasi:2007sv}) and its propagation through the atmosphere
(see references in \citealt{Abbasi:2007sv}). A complete simulation of
the optics and electronics (trigger and front-end electronics) of our
detectors was performed. The result was an excellent simulation of
our experiment as evidenced by the very good agreement between data
and simulated events in the distribution of all kinematic variables,
e.g. zenith angle, impact parameter to detector, etc. By assigning
Monte Carlo events times of occurrence taken from the actual on-time
of the experiment we are able to calculate the exposure on the sky
very accurately.
\emph{Model flux maps ---} The integral UHECR flux from a given
direction is expressed as follows:
\begin{equation}
\Phi = \sum_i \varphi_i \, w^{\rm cat}_i \, w^{\rm s} (\theta_i)
+ \Phi_{\rm iso} \, ,
\end{equation}
where $i$ enumerates galaxies in the 2MRS sample, $w^{\rm cat}_i$
denotes the weight assigned to galaxy $i$ in the catalog, $w^{\rm s}
(\theta_i)$ is the deflection probability distribution, and
$\Phi_{\rm iso}$ is the UHECR flux arising from sources beyond 250
Mpc.
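Schematically, the construction of the model map amounts to a weighted,
smeared sum over the catalog plus an isotropic floor. The following Python
sketch illustrates this on an (assumed) equal-area pixelisation of the sky;
exposure, the galactic-plane mask, and other details of the actual analysis
are omitted:
\begin{verbatim}
import numpy as np

def model_flux_map(pix_dirs, gal_dirs, phi_gal, w_cat, theta_s_deg, g_frac):
    """Smeared model flux on a grid of sky pixels.

    pix_dirs    : (M, 3) unit vectors of the map pixels (equal-area assumed)
    gal_dirs    : (Ngal, 3) unit vectors towards the catalog galaxies
    phi_gal     : (Ngal,) point-source fluxes J(E) f(E, D_i) / (4 pi D_i^2)
    w_cat       : (Ngal,) catalog completeness weights
    theta_s_deg : Gaussian smearing angle in degrees
    g_frac      : fraction of the total flux produced within 250 Mpc
    """
    ts = np.radians(theta_s_deg)
    cos_th = np.clip(pix_dirs @ gal_dirs.T, -1.0, 1.0)
    theta = np.arccos(cos_th)                      # pixel-galaxy separation
    w_s = np.exp(-0.5 * (theta / ts) ** 2)         # Gaussian deflection kernel
    flux = w_s @ (phi_gal * w_cat)                 # structured component
    flux /= flux.mean()                            # normalize its sky average
    return g_frac * flux + (1.0 - g_frac)          # add isotropic component
\end{verbatim}
Multiplying the resulting map by the detector exposure then gives the
probability map used below.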
The probability to observe a CR from a given direction is proportional
to the product of flux and exposure. We denote this probability as
\begin{equation}
\widetilde{\Phi} = \Phi \, \Xi \, ,
\end{equation}
where $\Xi$ stands for exposure. In Figure \ref{fig:skymap:struct}, the
distribution of $\widetilde{\Phi}$ over the sky is shown for three
different threshold energies. The contrast in the flux distributions
becomes more pronounced with increasing energy. Also shown are the
arrival directions of UHECRs in the HiRes data to which the model flux
has to be compared.
\begin{figure}
\includegraphics[width=8cm]{fig2a}
\vspace{0.2cm}
\includegraphics[width=8cm]{fig2b}
\vspace{0.2cm}
\includegraphics[width=8cm]{fig2c}
\vspace{0.2cm}
\caption{\label{fig:skymap:struct} Hammer projection (galactic
coordinates) of $\widetilde{\Phi}$ (flux times exposure) with
threshold energies 10 EeV (top panel), 40 EeV (middle), and 57 EeV
(bottom). Darker gray indicates a higher value; the bands are
chosen such that each band contains 1/5 of the total flux (weighted
with exposure). Excluded regions, viz. the galactic plane
($|b|<10\ensuremath{^\circ}$) and the region outside the HiRes~ field of view,
are shown in white. White dots indicate HiRes events. All maps are
produced with $\theta_{\rm s} = 6\ensuremath{^\circ}$.}
\end{figure}
\section{Data}
\label{section:data}
The data set used in this study was described previously in
\citet{Abbasi:2008md}, including selection criteria and a correction
to the energy scale. Our sample of the 2MRS catalog does not cover
the region near the Galactic plane with $|b|<10\ensuremath{^\circ}$. We therefore
removed cosmic ray events with $|b|<10\ensuremath{^\circ}$ from the analysis. The
resulting sample contains:
\begin{itemize}
\item 309 events with $E>10$~EeV;
\item 27 events with $E>40$~EeV;
\item 10 events with $E>57$~EeV.
\end{itemize}
The arrival directions of these events are shown as white dots in
Fig.~\ref{fig:skymap:struct}.
\section{Statistical test}
\label{section:fluxsampling}
The compatibility of a model flux map with the set of UHECR arrival
directions is quantified by the flux sampling method introduced by
\citet{Koers:2008ba}. The idea of the method is as follows. To any set
of arrival directions one associates the set of flux values that are
obtained by sampling a given flux map (such as the map shown in
fig.~\ref{fig:skymap:struct}),
i.e. by extracting the flux values at corresponding points on the
sphere. The two-dimensional distribution of arrival directions thus
translates into a one-dimensional distribution of flux values. If the
reference model is true, this flux distribution will tend to high
values since events fall preferentially into regions where the model
flux (times exposure) is high. If, on the other hand, the reference
model is not true, the flux distribution is more uniform because the
correlation between arrival directions and regions of high model flux
is (partly) destroyed. By comparing the flux distribution to a model
flux distribution, the compatibility between a set of arrival
directions and model predictions can be quantified. This comparison is
performed by the Kolmogorov-Smirnov (KS) test, which yields a test
statistic $D$. The relevant statistical quantities, in particular powers
and $p$-values, are computed from the distribution of this test
statistic. Note that this test does not involve any additional
parameters like bin size.
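The logic of the test can be summarized in a short Python sketch, again
assuming an equal-area pixelisation of the flux map (times exposure); this
is a schematic implementation for illustration, not the code used for the
numbers quoted below.
\begin{verbatim}
import numpy as np

def ks_statistic(sample_flux, map_flux):
    """KS distance between sampled flux values and the flux-weighted model CDF."""
    ref = np.sort(map_flux)
    cdf_ref = np.cumsum(ref) / ref.sum()   # pixels weighted by their own flux
    s = np.sort(sample_flux)
    cdf_model = np.interp(s, ref, cdf_ref)
    n = s.size
    d_plus = np.max(np.arange(1, n + 1) / n - cdf_model)
    d_minus = np.max(cdf_model - np.arange(n) / n)
    return max(d_plus, d_minus)

def flux_sampling_test(map_flux, event_flux, n_mc=10000, seed=0):
    """Observed D and Monte-Carlo p-value under the reference model."""
    rng = np.random.default_rng(seed)
    p = map_flux / map_flux.sum()          # event probability per pixel
    d_obs = ks_statistic(event_flux, map_flux)
    d_mc = np.array([ks_statistic(rng.choice(map_flux, event_flux.size, p=p),
                                  map_flux) for _ in range(n_mc)])
    return d_obs, np.mean(d_mc >= d_obs)
\end{verbatim}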
The ability of the test to discriminate between models may be
quantified in terms of the statistical power, $P$, i.e. the probability
to rule out, at a given confidence level, the reference model when an
alternative model is true. Within numerical uncertainties, the
statistical power is equal to the fraction of event sets generated
under the alternative model that leads to rejecting the reference
model. Figure~\ref{fig:power-Nvsths} shows the number of events
required for a power $P=0.5$ (i.e., a 50\% probability) to rule out
(at 95\% CL) the matter tracer model when the true flux is
isotropic. The number of events increases with increasing smearing
angle and decreasing energy: the decreasing flux contrasts in the
matter tracer model call for an increase in statistics to achieve the
same discriminatory power. Observe that the event numbers indicated in
Figure~\ref{fig:power-Nvsths} are of the same order as the data
analyzed in this work. We thus expect that there is sufficient data
to obtain meaningful constraints at 95\% CL.
\begin{figure}
\includegraphics[angle=-90, width=8cm]{fig3}
\caption{\label{fig:power-Nvsths}
Number of events required for a 50\% probability to rule
out, at 95\% CL, the matter tracer model if the true flux
is isotropic.}
\end{figure}
\section{Results}
\label{sec:results}
\subsection{Scan over smearing angles}
\begin{figure}
\includegraphics[angle=-90, width=8cm]{fig4a}
\includegraphics[angle=-90, width=8cm]{fig4b}
\caption{\label{fig:pvalues} The $p$-value indicating
 the level of (in)compatibility between the HiRes data and model
 predictions, as a function of the smearing angle $\theta_{\rm s}$. Solid lines
indicate a $p$-value equal to 0.05, below which the model is ruled
out at 95\% CL. The points represent numerical results (with
estimated uncertainties of 20\%); the lines are smooth
interpolations between these points. Top panel: data vs. matter-tracer model;
bottom panel: data vs. isotropic distribution.
}
\end{figure}
The level of compatibility between data and model predictions
is quantified by a $p$-value, which represents the model probability
of obtaining a measurement that is at least as extreme as
the actual measurement. With our {\em a priori} choice of
significance, a $p$-value smaller than $0.05$ rules out the model.
The probability of falsely excluding
the model is then $5$\%, translating to a CL of $95$\%.
Figure \ref{fig:pvalues} shows the $p$-values obtained by the flux
sampling method for the HiRes data and predictions of the matter
tracer model. The smearing angle, $\theta_{\rm s}$, is treated as a
free parameter. That is, at each value of $\theta_{\rm s}$ and each
threshold energy a flux map is generated and compared to the HiRes
data as described above. The results can be summarized as follows:
\begin{itemize}
\item[\emph{(a)}] For the threshold energies of 40 EeV and 57 EeV, the
tests show disagreement between data and the matter
tracer model for $\theta_{\rm s} \leq 10\ensuremath{^\circ}$. Within this
parameter range, a source distribution tracing the distribution of
matter is excluded at a 95\% confidence level.
\item[\emph{(b)}] For the threshold energy of 10 EeV, the test shows
agreement between data and the matter tracer model.
\end{itemize}
The incompatibility between data and matter tracer model is
illustrated by the non-correlation between the observed arrival
directions and regions of high model flux shown in the two lower
panels of Figure \ref{fig:skymap:struct}.
We have also tested the data for compatibility with an isotropic model
flux and found no disagreement, at 95\% CL, for any of the three tested
threshold energies (the data with threshold energy 57 EeV are
marginally consistent with an isotropic flux).
\subsection{Case study: $E = 57$ EeV, $\theta_{\rm s} = 3.2 \ensuremath{^\circ}$}
At energy threshold $E > 57$~EeV a correlation between the arrival
directions of UHECRs and the location of AGNs contained in the
12$^{\rm th}$ edition of the V{\'e}ron-Cetty \& V{\'e}ron catalog
\citep{2006A&A...455..773V} was reported by the PAO
\citep{Cronin:2007zz, Abraham:2007si}. This correlation was found to
be maximal for $\psi= 3.2\ensuremath{^\circ}$, where $\psi$ denotes the maximum
angular distance between UHECRs and AGNs. In the Northern hemisphere,
correlation with AGN was not confirmed by the HiRes experiment
\citep{Abbasi:2008md}.
Since AGNs are tracers of the distribution of matter in the Universe,
the PAO result is suggestive of a more general correlation between
UHECRs and the local structure of the Universe on an angular scale of
a few degrees. The methods presented in this paper allow a check on
the existence of such correlations in the HiRes data.
The results presented in the previous section disfavor a correlation
between UHECRs and the local structure of the Universe. In fact, the
flux sampling test yields $p$-values smaller than $10^{-2}$ for the
matter tracer model with $\theta_{\rm s} \lesssim 6\ensuremath{^\circ}$, with a
$p$-value of $7\times 10^{-4}$ for $\theta_{\rm s} = 3.2\ensuremath{^\circ}$.
(Note that $\theta_{\rm s}$ is not in $1:1$ correspondence with
$\psi$; both quantities are however representative of the angular
scale of the problem). Focusing on the case of $\theta_{\rm s} =
3.2\ensuremath{^\circ}$ in more detail, Figure \ref{fig:Ddist} shows the
distribution of the test statistic $D$ for the matter tracer model and
for an isotropic flux for this smearing angle. The vertical line shows
the value $D_{\rm obs} = 0.59$ obtained for the HiRes data. This
demonstrates the strong incompatibility between the HiRes data and the
matter tracer model for smearing angle $\theta_{\rm s} = 3.2\ensuremath{^\circ}$
and threshold energy $E=57$ EeV.
\begin{figure}
\includegraphics[angle=-90, width=8cm]{fig5}
\caption{\label{fig:Ddist} Model distribution of test statistic $D$
when testing the matter tracer model, for both the matter tracer
model (``Structure'') and an isotropic flux distribution
(``Isotropy''). Here $E=57$ EeV and $\theta_{\rm s} =3.2\ensuremath{^\circ}$.
The vertical line indicates the observed value $D_{\rm obs}=0.59$.}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
To summarize, we have confronted the stereo data collected by
the HiRes experiment with predictions of the matter tracer model, a
generic model of cosmic ray origin and propagation. The model assumes
a large number of cosmic ray sources within $250$~Mpc whose
distribution traces that of matter, and relatively small deflections
characterized by a single parameter, the typical deflection angle
$\theta_s$. We have found that the HiRes data with energy thresholds
$E=40$~EeV and $E=57$~EeV are incompatible with the matter tracer
model for $\theta_s<10^\circ$ at 95\%~CL. With an energy threshold
$E=10$~EeV the HiRes data are compatible with the matter tracer model.
At all three energy thresholds, the data are compatible with an
isotropic flux at 95\%~CL.
In the present analysis we have treated the deflections as random and
Gaussian, which is only appropriate for small deflection angles and
a limited number of events. The actual deflections are expected to
contain a coherent component due to the Galactic magnetic field. With
the accumulation of UHECR events by PAO in the Southern hemisphere and
by Telescope Array in the Northern hemisphere, our analysis
will become sensitive to the nature of deflections and thus, with
proper modifications of the statistical procedure, may give direct
access to the parameters of cosmic magnetic fields.
\section*{Acknowledgments}
This work is supported by the National Science Foundation under
contracts NSF-PHY-9321949, NSF-PHY-9322298, NSF-PHY-9974537,
NSF-PHY-0071069, NSF-PHY-0098826, NSF-PHY-0140688, NSF-PHY-0245328,
NSF-PHY-0307098, and NSF-PHY-0305516, by Department of Energy grant
FG03-92ER40732, by the BSP under IUAP VI/11, by the FNRS contract
1.5.335.08 and by the IISN contract 4.4509.10. We gratefully
acknowledge the contribution from the technical staffs of our home
institutions and thank the University of Utah Center for High
Performance Computing for their contributions. The cooperation of
Colonels E. Fisher, G. Harter, and G. Olsen, the US Army and the
Dugway Proving Ground staff is appreciated.
\bibliographystyle{apsrev}
\section{Introduction}
The covariant exterior derivative on associated vector bundles \cite{NaturalOperations}, which carry the structure of a fibered manifold, is one of the primary tools of modern differential geometry \cite{KobayashiNomizu, NaturalOperations}. Moreover, fundamental physical theories such as gauge theories (e.g., electrodynamics and the theories of the strong and weak interactions) are formulated using the covariant derivative approach \cite{Thirring, Bleecker, BennTucker, CovariantDerivativeInPhysics, Sternber, Rund, GagueTheories}. Therefore, a practical way of inverting the covariant exterior derivative, at least locally, is in demand. To the best of our knowledge, an efficient and algorithmic way of doing so has not been available until now.
The covariant exterior derivative consists of the usual exterior derivative, which can be inverted locally by means of a homotopy operator from the Poincar\'{e} lemma, and of the wedge product with a connection $1$-form. The general approach to the homotopy operator is a classical subject; see, e.g., \cite{deRham}. However, the benefits of using the homotopy operator associated with the linear homotopy \cite{EdelenExteriorCalculus, EdelenIsovectorMethods} went largely unnoticed, although it has many valuable properties that can be used to solve local problems in mathematical physics \cite{KyciaPoincare, KyciaPoincareCohomotopy, CopoincareHamiltonianSystem}.
In its most practical formulation, the Poincar\'{e} lemma states that on a star-shaped subset $U \subset \mathbb{R}^{n}$ every closed form (an element of the kernel of the exterior derivative $d$) is also exact (an element of the image of $d$). This is equivalent to the existence of a local inverse of $d$. Such an inverse can be defined in many ways; however, as we will see, the one given by the linear homotopy operator is the most useful. To this end, let $x_{0}\in U$ be the center of the linear homotopy $F:U\times [0;1] \rightarrow U$, $F(x,t)=x_{0} + t(x-x_{0})$. Then the homotopy operator defined by $F$ is
\begin{equation}
H\omega = \int_{0}^{1} \mathcal{K}\lrcorner\omega|_{F(x,t)}\,t^{k-1}dt,
\end{equation}
for $\mathcal{K}=(x-x_{0})^{i}\partial_{i}$, and where $\omega \in \Lambda^{k}(U)$ is a $k$-differential form. It was noticed in \cite{EdelenExteriorCalculus, EdelenIsovectorMethods} that it is nilpotent $H^{2}=0$ and possesses useful properties like: $HdH=H$, $dHd=d$. However, the most useful is the homotopy invariance formula
\begin{equation}
dH + Hd = I - s_{x_{0}}^{*},
\label{Eq_homotopyFormula}
\end{equation}
where $s_{x_{0}}^{*}$ is the pullback along the constant map $s_{x_{0}}:x_{0} \hookrightarrow U$. The map $s_{x_{0}}^{*}$ can be nonzero only on $\Lambda^{0}(U)$.
The homotopy operator $H$ and exterior derivative $d$ on $U$ define the decomposition \cite{EdelenExteriorCalculus, EdelenIsovectorMethods}: $\Lambda^{*}(U)=\mathcal{E}(U)\oplus \mathcal{A}(U)$ into an exact (equivalent to closed by the Poincare lemma) vector space: $\mathcal{E}(U)=\{\omega \in \Lambda^{*}(U) | d\omega =0\}$, and the module over $C^{\infty}(U)$ of antiexact forms: $\mathcal{A}(U)=\{ \omega \in \Lambda^{*}(U) | \mathcal{K}\lrcorner \omega = 0, \omega|_{x=x_{0}}=0 \}$. It can be also shown that $\mathcal{E} = im(dH)$ and $\mathcal{A}=im(Hd)$.
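As a minimal illustration of the operator $H$, the homotopy invariance formula (\ref{Eq_homotopyFormula}), and the nilpotency $H^{2}=0$, the following sketch works on $\mathbb{R}^{2}$ with center $x_{0}=0$ and uses the \texttt{sympy} library; representing a $k$-form by a dictionary mapping increasing index tuples to its coefficients is a choice made only for this sketch.
\begin{verbatim}
import sympy as sp

x, y, t = sp.symbols('x y t')
coords = [x, y]

def d(form):
    # Exterior derivative; keys of `form` are increasing index tuples,
    # e.g. f*dx + g*dy is {(0,): f, (1,): g}.
    out = {}
    for idx, c in form.items():
        for i, xi in enumerate(coords):
            if i in idx:
                continue
            new = tuple(sorted(idx + (i,)))
            sign = (-1) ** new.index(i)
            out[new] = sp.simplify(out.get(new, 0) + sign * sp.diff(c, xi))
    return out

def H(form, k):
    # Linear homotopy operator with center x0 = 0 on a k-form: contract with
    # K = x d/dx + y d/dy, evaluate the coefficients at (t*x, t*y), and
    # integrate against t**(k-1) over [0, 1].
    out = {}
    scale = {x: t * x, y: t * y}
    for idx, c in form.items():
        for pos, i in enumerate(idx):
            rest = idx[:pos] + idx[pos + 1:]
            sign = (-1) ** pos
            integrand = coords[i] * c.subs(scale, simultaneous=True) * t**(k - 1)
            out[rest] = sp.simplify(out.get(rest, 0)
                                    + sign * sp.integrate(integrand, (t, 0, 1)))
    return out

w = {(0,): y}                          # the 1-form  y dx
dHw, Hdw = d(H(w, 1)), H(d(w), 2)
print({i: sp.simplify(dHw.get(i, 0) + Hdw.get(i, 0) - w.get(i, 0))
       for i in [(0,), (1,)]})         # all zero: dH + Hd = I on 1-forms
print(H(Hdw, 1))                       # all coefficients zero: H(H(dw)) = 0
\end{verbatim}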
Likewise, for a star-shaped subset $U$ of a Riemannian manifold $(M, g)$ with a metric tensor $g:TM\times TM \rightarrow \mathbb{R}$, we have the Hodge star $\star:\Lambda^{k}(M)\rightarrow \Lambda^{n-k}(M)$, $0\leq k \leq n=dim(M)$, and the codifferential $\delta= \star^{-1}d\star \eta$, where $\eta\omega = (-1)^{k}\omega$ for $\omega\in \Lambda^{k}$. In this setup the dual theory, the co-Poincar\'{e} lemma, and the cohomotopy operator $h=\eta \star^{-1}H\star$ are defined \cite{KyciaPoincareCohomotopy, CopoincareHamiltonianSystem}. The cohomotopy operator $h$ defines the decomposition $\Lambda^{*}(U)=\mathcal{C}(U)\oplus \mathcal{Y}(U)$ into the coexact vector space $\mathcal{C}(U)=ker(\delta)=im(\delta h)$ and the anticoexact module $\mathcal{Y}(U)=\{ \omega \in \Lambda^{*}(U) | \mathcal{K}^{\flat}\wedge \omega=0, \; \omega|_{x=x_{0}}=0 \}=im(h\delta)$ over $C^{\infty}(U)$. Here $\flat:TM\rightarrow T^{*}M$ is the musical isomorphism induced by the metric structure on $M$, and the inverse isomorphism is $\sharp: T^{*}M\rightarrow TM$. These statements are easily obtained from the (anti)exact theory by using the identity
\begin{equation}
\alpha^{\sharp}\lrcorner \star\phi = \star(\phi\wedge\alpha),
\label{Eq.HodgeDuality}
\end{equation}
that dualizes (anti)exact theory to (anti)coexact one, see \cite{BennTucker, KyciaPoincareCohomotopy, CopoincareHamiltonianSystem}.
Both these decompositions into (co)(anti)exact forms of $\Lambda^{*}$ allow us to solve plenty of practical problems of mathematical physics \cite{EdelenExteriorCalculus, EdelenIsovectorMethods, KyciaPoincare, KyciaPoincareCohomotopy, CopoincareHamiltonianSystem}.
The Poincar\'{e} lemma can be extended in various directions \cite{deRham}, including the non-abelian case \cite{NonabelianPoincareLemma}. We also want to acknowledge the work on the Poincar\'{e} lemma for the space of paths induced by a connection form \cite{PoincareLemmaForConnection}, which has applications to the Yang-Mills equations. In this paper we use the linear homotopy operator and its properties mentioned above for the covariant exterior derivative and covariant constancy equations; extensions of our results to more complicated cases are left for future work.
In this paper we use the (co)(anti)exact decomposition to solve equations involving the covariant exterior derivative in a star-shaped local trivialization of a fiber bundle. We obtain a practical algorithm for constructing covariantly constant differential forms. The paper is organized as follows: In the next section we define the setup for our discussion. Then we provide the formula for inverting the covariant exterior derivative. We also discuss the problem of the horizontal projection of our results from the fibered set to the base space. Next, we embed these formulas in Bittner's operational calculus framework, summarized for the reader's convenience in Appendices \ref{Appendix_BittnersOperatorCalculus} and \ref{Appendix_BittnersOperatorCalculusAsCategory}. Then we use this knowledge to provide an algorithm for solving curvature equations. Finally, we Hodge-dualize the previous results to manifolds with a metric structure. In the Appendices we discuss Bittner's operator calculus, the application of the results to operator-valued connections, and the integral equation version of our results.
\section{Setup}
\label{Section_Setup}
We will focus only on the smooth case in this paper, so manifolds and objects on them will be smooth.
A prototypical example of our motivation is the cotangent bundle. The general differential operator that increases the degree of a form by one is a sum of the exterior derivative $d$ and exterior multiplication by a one-form $A$. This operator
\begin{equation}
d^{\nabla}=d+A\wedge\_,
\label{Eq.dcov}
\end{equation}
appears in many other contexts in differential geometry, which we listed below. Therefore it is desirable to have a good formula to invert it.
The simplest setup for this operator is an open subset $U$ of Euclidean space. Since we will use the Poincare lemma, we will assume that $U$ is star-shaped. Then $d^{\nabla}$ acts on $\Lambda^{*}(U)$ - differential forms on $U$ with values in real numbers. One can also consider differential forms with values in a vector space $V$ isomorphic to $\mathbb{R}^{l}$, $l\in \mathbb{N}_{+}$, $l>1$, i.e., the vector space $\Lambda^{*}(U,V)$. Then $A$ is a matrix of differential $1$-forms. There are many problems in differential geometry that can be brought by local trivialization to this setup, e.g., noted at the beginning, a local subset of the cotangent bundle.
The other convenient setup is a space of fibers \cite{EdelenExteriorCalculus}. It locally looks like (in a local trivialization) a subset $U$ of the product of two smooth manifolds $B\times F$, where $B$ will be called the base space and $F$ a fiber. We assume that there is a local diffeomorphism of an open subset $U$ of $B\times F$ onto an open subset of $\mathbb{R}^{n}\times \mathbb{R}^{m}$; we denote this image also by $U$. Moreover, it will be assumed that $U$ is star-shaped, for the same reason as above. In this space we can distinguish vertical directions, those along $F$, and horizontal directions, singled out by the kernel of a $1$-form $A\in \Lambda^{1}(U)$. Since a form that distinguishes (a priori) a horizontal part is called a connection, we will call $A$ the connection $1$-form. In this space we naturally define (\ref{Eq.dcov}) as an operator acting on $\Lambda^{*}(U)$. Moreover, if we allow the forms on $U$ to have values in a vector space $V$, then by taking $A$ to be a $1$-form valued in endomorphisms of $V$, or by introducing a base of $V$ and a matrix of $1$-forms, we can also define $d^{\nabla}$ on $\Lambda^{*}(U,V)$, the space of $V$-valued forms on $U$.
The method presented below also has some limited application to the associated vector bundle, as we will discuss now. To better understand what we can gain from our method, we review the structure of the associated vector bundle from standard sources \cite{Sternber, LoringTu, KobayashiNomizu, NaturalOperations}. Consider a smooth manifold $P$ with a smooth free action of a Lie group $G$, with Lie algebra $\mathfrak{g}$, by $P\times G \rightarrow P$ denoted as $(p,g)\rightarrow p g^{-1}$. Then $M:=\sfrac{P}{G}$ is a smooth manifold, and the projection $\pi: P \rightarrow M$ defines a principal bundle. We can then define a $\mathfrak{g}$-valued differential form, called a connection, which distinguishes the fibers, identified with $G$, and transforms by the adjoint action of $G$. If the group $G$ also acts on a vector space $V$, then we can associate $V$ with the principal bundle by the following equivalence relation
\begin{equation}
(p,v)\sim (pg^{-1},gv), \quad p\in P, v\in V, g \in G,
\end{equation}
and call it $V(P)$. The projection operator for $V(P)$ is the same as for the principal bundle $P$, forgetting the $V$ component. On $V(P)$ we have \cite{Sternber, LoringTu, Husemoller, KobayashiNomizu, NaturalOperations} that the sections $s:M \rightarrow V(P)$ are in one-to-one correspondence with functions $P\rightarrow V$ that are equivariant with respect to the action of $G$. This correspondence extends from functions to $V$-valued forms on $P$ that vanish on vertical vectors of $P$ and are equivariant; such forms are called basic forms. Because of that, basic forms are an important subset of all $V$-valued forms on $P$, denoted by $\Lambda^{*}(P,V)$. Moreover, the action of $G$ on $V$ lifts to an action of $\mathfrak{g}$ on $V$, so we can define exterior multiplication of a connection form by an element of $\Lambda^{*}(P,V)$. The exterior multiplication of two forms from $\Lambda^{*}(P,V)$ can be defined if there exists on $V$ a bilinear product covariant with respect to the $G$-action, e.g., matrix multiplication when $V$ is a matrix algebra, or the Lie bracket when $V$ is a Lie algebra. Then, in a local star-shaped subset $U$ of $P$, we can consider the operator $d^{\nabla}$ on $\Lambda^{*}(U,V)$. The action of the connection form $A\wedge\_$ should be replaced by a tensor product of exterior multiplication and the $\mathfrak{g}$-action. However, in coordinates, this always reduces to the situation when $A$ is a matrix-valued one-form and the Lie algebra action is matrix-matrix or matrix-vector multiplication, e.g., the adjoint representation for $V=\mathfrak{g}$. Therefore we are in the fibered-manifold setup discussed above, applied to $\Lambda^{*}(U,V)$. The only problem is that in the setup of the associated vector bundle we prefer to use basic forms, since they are in 1:1 correspondence with forms on the base manifold, so we can operate using lifts and projections without any problems. Therefore we impose on our forms from $\Lambda^{*}(U,V)$ the additional constraints of being basic: these forms should vanish on vertical vectors and be equivariant. That is an additional condition, beyond inverting $d^{\nabla}$ in the covariant constancy equation ($d^{\nabla}\phi=0$), if we want basic forms. Therefore, if we invert $d^{\nabla}$ on $V(P)$, the result need not be a basic form, and hence it need not be representable on, or projectable uniquely to, $M$ by some element of $\Lambda^{*}(M,V)$. These additional constraints can prevent the existence of a nontrivial solution of the covariant constancy equation on the base manifold. The problem of horizontal projection will be addressed below.
Summing up, we will assume that $U$ is a star-shaped subset of $\mathbb{R}^{n}$ and $A$ is a matrix of one-forms, since other cases can be related to this one. We will try to invert $d^{\nabla}$ on this set. Note that the resulting space of forms $\Lambda^{*}(U)$, treated as a vector space, can be equipped with norms, e.g., the supremum norm taken over the coefficients of a differential form; see, e.g., \cite{deRham}.
\section{Inversion formula for covariant exterior derivative}
The following two theorems provide the local inverse of the $d^{\nabla}$ operator.
\subsection{Homogeneous equation}
First, we solve the homogeneous equation of covariant constancy:
\begin{Theorem}
\label{Th_homogenous_solution}
The unique solution to the equation
\begin{equation}
d^{\nabla} \phi =0, \quad \phi\in \Lambda^{k}(U),
\label{Eq_homogenous_equation}
\end{equation}
with the condition $dH\phi= c\in \mathcal{E}$, is given for $k=0$ by
\begin{equation}
\phi = c\exp(-H(A)),
\end{equation}
where $c \in \mathbb{R}$.
For $k>0$ it is given by
\begin{equation}
\phi = \sum_{l=0}^{\infty} (-1)^{l} (H(A\wedge \_))^{l} c,
\label{Eq.Solution_homogenous_k_gt_0}
\end{equation}
where $c=d\alpha$ for some $\alpha \in \Lambda^{k-1}(U)$, $(H(A\wedge \_))^{0}=Id$, and
\begin{equation}
(H(A\wedge \_))^{l} = \underbrace{H(A\wedge ( \ldots (H(A\wedge \_ )\ldots )}_{l},
\end{equation}
is the $l$-fold composition of the operator $H\circ A \wedge \_$.
The series in (\ref{Eq.Solution_homogenous_k_gt_0}) is convergent for
\begin{equation}
||x-x_{0}|| < \frac{k}{||A||_{\infty}}
\label{Eq_convergence_homogenous_solution}
\end{equation}
where the supremum is taken over the line $L=\{x_{0}+t(x-x_{0}) \,|\, t\in[0;1]\}$, and the norm of the form is the norm of its coefficients, i.e., it is treated as the norm of a covariant vector.
\end{Theorem}
The proof uses a perturbation series approach together with the decomposition into (anti)exact forms. It can also be seen as solving the integral equation version of (\ref{Eq_homogenous_equation}) obtained by applying the $H$ operator and using the (anti)exact decomposition; see Appendix \ref{Appendix_Integral_equations}. Therefore, some parts of the proof resemble the methods used to prove the convergence of the Neumann series for integral equations \cite{IntegralEquations}.
\begin{Proof}
\textbf{For $k=0$} we have
\begin{equation}
d\phi =-A\phi,
\end{equation}
so for $\phi\neq 0$ we have $d\ln(\phi)=-A$. Applying $d$ to this relation gives $dA=0$, so $A=dHA$, and therefore $d(\ln(\phi)+HA)=0$, or equivalently, $\phi = C\exp(-HA)$ for a constant $C$. For $C=0$ we obtain $\phi=0$, which is also a solution. Evaluating at the center $x_{0}$, where $HA$ vanishes, we get $\phi(x_{0})=C=c$.
\textbf{For $k>0$} we notice first, by taking exterior derivative of (\ref{Eq_homogenous_equation}), that $d(A\wedge \phi)=0$, i.e., $A\wedge \phi \in \mathcal{E}$, i.e., $A\wedge \phi = dH(A\wedge\phi)$.
In order to introduce formal perturbation series, we modify the equation (\ref{Eq_homogenous_equation}) to
\begin{equation}
d\phi + \lambda A\wedge \phi=0,
\end{equation}
introducing a real number $\lambda \neq 0$.
Searching the solution in the form of a formal power series
\begin{equation}
\phi = \phi_{0} + \lambda \phi_{1} + \lambda^{2} \phi_{2} + \ldots,
\end{equation}
we get the set of equations with respect to the degree of $\lambda$:
\begin{itemize}
\item {\textbf{$O(\lambda^{0})$}: The equation is $d\phi_{0}=0$, which by star-shapedness of $U$ is solved by
\begin{equation}
\phi_{0}=d\alpha_{0}
\end{equation}
for $\alpha_{0}\in \Lambda^{k-1}(U)$.}
\item{\textbf{$O(\lambda^{1})$}: The equation is $d\phi_{1}+A\wedge \phi_{0}=0$. Taking $d$ we get $d(A\wedge \phi_{0})=0$, i.e., $A\wedge \phi_{0}=dH(A\wedge\phi_{0})$. That means, $d(\phi_{1}+H(A\wedge \phi_{0}))=0$, and the solution is
\begin{equation}
\phi_{1} = d\alpha_{1}-H(A\wedge\phi_{0}),
\end{equation}
for $\alpha_{1} \in \Lambda^{k-1}(U)$.
}
\item{\textbf{$O(\lambda^{2})$}: The equation is $d\phi_{2}+A\wedge\phi_{1}=0$, and the same procedure gives that the solution is
\begin{equation}
\phi_{2}=d\alpha_{2}-H(A\wedge\phi_{1}),
\end{equation}
for $\alpha_{2}\in \Lambda^{k-1}(U)$.}
\item{\textbf{$O(\lambda^{l})$}: In general case the equation is $d\phi_{l}+A\wedge\phi_{l-1}=0$ which gives
\begin{equation}
\phi_{l}=d\alpha_{l}-H(A\wedge\phi_{l-1}),
\end{equation}
for $\alpha_{l}\in \Lambda^{k-1}(U)$.}
\end{itemize}
Collecting all the terms we get for $\lambda=1$ the formal solution in terms of the series
\begin{equation}
\phi = (1-H(A\wedge\_)+H(A\wedge H(A\wedge\_))-\ldots) \sum_{l=0}^{\infty} d\alpha_{l}.
\end{equation}
Selecting $\{\alpha_{i}\}_{i=0}^{\infty}$ in such a way that the series is uniformly convergent, denoting its sum by $\alpha:=\sum_{l=0}^{\infty} \alpha_{l}$, and taking into account that $dH\phi=d\alpha=c$ (using $H^{2}=0$), we get (\ref{Eq.Solution_homogenous_k_gt_0}).
For convergence, we estimate
\begin{equation}
\begin{array}{c}
\left|H(A\wedge \omega ) \right| = \left| \int_{0}^{1}\mathcal{K} \lrcorner A\wedge\omega(x_{0}+t(x-x_{0})) t^{k-1} dt \right| \leq \int_{0}^{1}||x-x_{0}|| ||A||_{\infty} ||\omega||_{\infty}t^{k-1}dt \\
= ||x-x_{0}|| ||A||_{\infty}||\omega||_{\infty}\frac{1}{k},
\end{array}
\end{equation}
where $||\_||_{\infty}$ is a supremum (a matrix norm in the case when $A$ is matrix-valued) on the line $L$ connecting $x_{0}$ with $x$. We therefore have
\begin{equation}
\begin{array}{c}
||\phi|| = ||(1- H(A\wedge\_)+H(A\wedge(H(A\wedge\_)))- \ldots)c|| \leq \\
\left(1+||x-x_{0}|| \frac{||A||_{\infty}}{k} + \left(||x-x_{0}|| \frac{||A||_{\infty}}{k}\right)^{2} +\ldots \right) ||c||,
\end{array}
\end{equation}
and the series (\ref{Eq.Solution_homogenous_k_gt_0}) is absolutely convergent for (\ref{Eq_convergence_homogenous_solution}).
Uniqueness is proved in a standard way by reductio ad absurdum. Assume that there are two distinct solutions $\phi_{1} \neq \phi_{2}$ with the same initial conditions $dH\phi_{1}=dH\phi_{2}$. Then the form $\psi:=\phi_{1}-\phi_{2}$ is also the solution with $dH\psi=0$. However, from the form of the solution obtained above, we see that if $dH\psi=0$, then $\psi=0$, so $\phi_{1}=\phi_{2}$, a contradiction.
\end{Proof}
From the above proof we have the following:
\begin{Corollary}
If $\phi \in ker(A\wedge \_)$ and is a solution of $d^{\nabla} \phi =0$ then $\phi\in \mathcal{E}$, i.e., $dH\phi=\phi$.
\end{Corollary}
\begin{Remark}
At each stage of the series (\ref{Eq.Solution_homogenous_k_gt_0}) we have
\begin{equation}
A\wedge \left(H(A\wedge\_)\right)^{l} c \in \mathcal{E}(U).
\end{equation}
\end{Remark}
Moreover,
\begin{Remark}
We can formally write (\ref{Eq.Solution_homogenous_k_gt_0}) as
\begin{equation}
\phi = \frac{1}{I+H(A\wedge\_)}\,c.
\label{Eq_inverison_of_1HA}
\end{equation}
This notation will be firmly stated within the framework of Operational Calculus below.
\end{Remark}
\begin{Remark}
In solving (\ref{Eq_homogenous_equation}) the natural initial condition for the iterative procedure described by the series (\ref{Eq.Solution_homogenous_k_gt_0}) is a form $c \in \mathcal{E}^{k}$. In this sense the exact form $c$ parametrizes the solution.
\end{Remark}
We can now formulate the algorithm for solving (\ref{Eq_homogenous_equation}).
\begin{Algorithm}
\label{Algorithm_1}
In order to solve
\begin{equation}
d^{\nabla} \phi =0,
\end{equation}
for $\phi\in \Lambda^{k}(U)$, $k>0$, pick an initial condition $\gamma_{0}\in \mathcal{E}^{k}(U)$ and compute iteratively
\begin{equation}
\gamma_{l}=H(A\wedge \gamma_{l-1}), \quad l\geq 1.
\end{equation}
Then the solution is
\begin{equation}
\phi = \sum_{l=0}^{\infty} (-1)^{l} \gamma_{l}.
\end{equation}
The series is convergent for $||x-x_{0}|| <\frac{k}{||A||_{\infty}}$, with the supremum norm taken along the line $L$.
\end{Algorithm}
We now provide a simple example that explains the Algorithm \ref{Algorithm_1}.
\begin{Example}
\label{Ex1}
Let us solve the equation
\begin{equation}
d\phi+A\wedge\phi=0,
\end{equation}
on $\mathbb{R}^{2}$ with coordinates $(x,y)$, where $A =dy$, and with initial condition $\gamma_{0}=dx\in \mathcal{E}$ and the center $x_{0}=0$, i.e., $\mathcal{K}=x\partial_{x}+y\partial_{y}$.
We have
\begin{itemize}
\item {$\gamma_{1} = \int_{0}^{1}\mathcal{K} \lrcorner \left( dy\wedge dx \right) t dt = \frac{1}{2!}(ydx-xdy)$,}
\item {$\gamma_{2} = \int_{0}^{1}\mathcal{K} \lrcorner \left(\frac{1}{2!}y dy\wedge dx \right) t^{2} dt = \frac{1}{3!}(y^{2}dx-yxdy)$,}
\item {$\gamma_{3} = \int_{0}^{1} \mathcal{K}\lrcorner \left( \frac{1}{3!} y^{2} dy\wedge dx \right) t^{3} dt = \frac{1}{4!}(y^{3}dx-y^{2}xdy)$,}
\item {$\gamma_{k}=\frac{1}{(k+1)!}\left(y^{k}dx-y^{k-1}xdy \right)$.}
\end{itemize}
Then, summing the terms with alternating signs, we get
\begin{equation}
\phi = \sum_{l=0}^{\infty} (-1)^{l}\gamma_{l} = (1- e^{-y})\frac{dx}{y} + (e^{-y}-1+y)\frac{xdy}{y^{2}}.
\label{Ex1.solution}
\end{equation}
The solution has a removable singularity at $y=0$.
The projection onto the initial condition is given by $dH$, and we have $dH \phi = dx$, as required.
By straightforward computations we have
\begin{equation}
\begin{array}{c}
d\phi = \frac{1}{y}(1-e^{-y})dx\wedge dy, \\
A\wedge \phi = \frac{1}{y}(1-e^{-y})dy\wedge dx,
\end{array}
\end{equation}
so $d^{\nabla} \phi=0$ as required.
One can note that the solution $\phi$ is well-defined on the whole of $\mathbb{R}^{2}$, so its radius of convergence is significantly larger than Theorem \ref{Th_homogenous_solution} suggests.
Moreover, if we treat $\mathbb{R}^{2}$ as a fibered bundle with horizontal direction $dx$ and vertical direction $dy$, then the form (\ref{Ex1.solution}) is neither horizontal nor vertical. So, projecting $\phi$ onto its $dx$ component and then lifting back along $A=dy$, we recover neither the original form nor its covariant constancy.
\end{Example}
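The example can be cross-checked with a short sketch that reuses the helpers \texttt{d} and \texttt{H} from the listing in the Introduction; it iterates Algorithm \ref{Algorithm_1} for $A=dy$ and $\gamma_{0}=dx$ and shows that the truncated alternating sum is covariantly constant up to the first neglected term of the series.
\begin{verbatim}
def wedge1(a, form):
    # Left wedge product of a 1-form `a` with a form, in the same
    # dictionary representation as above.
    out = {}
    for (i,), ai in a.items():
        for idx, c in form.items():
            if i in idx:
                continue
            new = tuple(sorted((i,) + idx))
            sign = (-1) ** new.index(i)
            out[new] = sp.simplify(out.get(new, 0) + sign * ai * c)
    return out

A = {(1,): sp.Integer(1)}           # A = dy
gamma = {(0,): sp.Integer(1)}       # gamma_0 = dx, the exact initial condition
phi = dict(gamma)
for l in range(1, 8):               # truncate the series at l = 7
    gamma = H(wedge1(A, gamma), 2)  # gamma_l = H(A wedge gamma_{l-1})
    for idx, c in gamma.items():
        phi[idx] = sp.expand(phi.get(idx, 0) + (-1)**l * c)

residual = d(phi)                   # residual of  d phi + A wedge phi
for idx, c in wedge1(A, phi).items():
    residual[idx] = sp.expand(residual.get(idx, 0) + c)
print(residual)                     # {(0, 1): y**7/40320}: only the first
                                    # neglected term of the series survives
\end{verbatim}
The coefficients of \texttt{phi} are the partial sums of the Taylor expansions of the coefficients of the closed-form solution (\ref{Ex1.solution}).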
\begin{Example}
\label{Ex2}
Continuing Example \ref{Ex1}, we can also check easily (by assuming the solution in the form $\phi_{2}=f(y)dx$) that the solution is
\begin{equation}
\phi_{2}=e^{-y}dx.
\end{equation}
The exact form that serves as the initial condition for this solution is
\begin{equation}
dH\phi_{2}= d\left(x\,\frac{1-e^{-y}}{y}\right) = \frac{1-e^{-y}}{y}\,dx + \frac{x\left(ye^{-y}+e^{-y}-1\right)}{y^{2}}\,dy.
\end{equation}
This is the condition for starting the algorithm to obtain $\phi_{2}$.
The solution $\phi_{2}$ is horizontal when treating $dx$ as a horizontal direction.
\end{Example}
\subsection{Inhomogeneous equation}
The next step is to provide a solution for the inhomogeneous covariant constancy equation. We begin with the particular case when the inhomogeneity is an exact form.
\begin{Theorem}
\label{Th_nonhomogenous_solution_exactRHS}
The unique solution of
\begin{equation}
d^{\nabla} \phi = J,
\label{Eq_nonhomogenous_covariant_equation}
\end{equation}
for $\phi \in \Lambda^{k}(U)$, $A\in \Lambda^{1}(U)$, $J\in \mathcal{E}^{k+1}(U)$, with $dH\phi=c\in \mathcal{E}(U)$ is
for $k=0$
\begin{equation}
\phi = \exp(-HA)\left(c+H(J\exp(HA))\right).
\label{Eq_solution_nonhomogenous_k_eq_0}
\end{equation}
For $k>0$ the solution is
\begin{equation}
\phi=\phi_{H} + \phi_{I}, \quad \phi_{I}=\sum_{l=0}^{\infty}(-1)^{l} (H(A\wedge\_))^{l} HJ,
\label{Eq_solution_nonhomogenous_k_gt_0}
\end{equation}
where $\phi_{H}$ is a solution of the homogeneous equation ($J=0$) given in Theorem \ref{Th_homogenous_solution}.
The series in (\ref{Eq_solution_nonhomogenous_k_gt_0}) is convergent for $||x-x_{0}||<\frac{k}{||A||_{\infty}}$, where the supremum norm is taken over the line $L=\{x_{0}+t(x-x_{0})|t\in[0;1]\}$.
\end{Theorem}
\begin{Proof}
\textbf{For $k=0$} we have the solution of the homogeneous equation $\phi=C\exp(-HA)$. By varying the constant, i.e., taking $C\in \Lambda^{0}(U)$, and substituting back into the equation, we obtain
$dC = J\exp(HA)\in \mathcal{E}(U)$, and as a result, $J\exp(HA)=dH(J\exp(HA))$. This gives $d(C-H(J\exp(HA)))=0$, i.e., $C=D+H(J\exp(HA))$ for a real number $D$. This, using $dH\phi=D=c$, gives (\ref{Eq_solution_nonhomogenous_k_eq_0}).
\textbf{For $k>0$} we proceed as in the proof of the previous theorem. We replace the equation (\ref{Eq_nonhomogenous_covariant_equation}) by
\begin{equation}
d\phi+\lambda A\wedge\phi = J,
\end{equation}
for a nonzero real number $\lambda$. Introducing the formal ansatz $\phi=\sum_{l=0}^{\infty}\lambda^{l}\phi_{l}$ we get:
\begin{itemize}
\item {\textbf{$O(\lambda^{0})$}: The equation is $d\phi_{0}=J\in \mathcal{E}$, and therefore, $J=dHJ$, so
\begin{equation}
\phi_{0}=d\alpha_{0}+HJ,
\end{equation}
for $\alpha_{0}\in \Lambda^{k-1}$. }
\item { \textbf{$O(\lambda^{l})$}: We get the recurrence for the solution
\begin{equation}
\phi_{l}=d\alpha_{l} - H(A\wedge \phi_{l-1}),
\end{equation}
for $\alpha_{l}\in \Lambda^{k-1}$.}
\end{itemize}
Summing up the terms we have
\begin{equation}
\phi = \underbrace{ \sum_{l=0}^{\infty}(-1)^{l} (H(A\wedge\_))^{l} \sum_{p=0}^{\infty}d\alpha_{p} }_{\phi_{h}} + \underbrace{\sum_{l=0}^{\infty}(-1)^{l}(H(A\wedge\_))^{l} HJ}_{\phi_{I}}.
\end{equation}
As before, we can select $\{\alpha_{p}\}_{p=0}^{\infty}$ to form uniformly convergent series, so setting $\alpha=\sum_{p=0}^{\infty}\alpha_{p}$ and using the condition $dH\phi=d\alpha=c$ we get (\ref{Eq_solution_nonhomogenous_k_gt_0}).
The proof of convergence is the same as in the previous proof.
\end{Proof}
\begin{Remark}
The solution (\ref{Eq_solution_nonhomogenous_k_gt_0}) can be written as
\begin{equation}
\phi=\phi_{h}+G(J),
\end{equation}
where $G$ resembles a Green's function used in the theory of the second order Laplace-Beltrami operator $\triangle = d\delta+\delta d$, see, e.g., \cite{Thirring}. However, here we do not assume a metric structure, so the approach is general. Moreover, no boundary conditions were imposed on $G$.
\end{Remark}
\begin{Corollary}
When $\phi\in ker(A\wedge\_)$ and $d^{\nabla} \phi=J$ with $dH\phi=c$, then the solution is
\begin{equation}
\phi=c+HJ.
\end{equation}
\end{Corollary}
\begin{Proof}
We have $d\phi=J$ so $J=dHJ$ and therefore $d(\phi-HJ)=0$, so by the Poincare lemma $\phi=d\alpha + HJ$ for some form $\alpha$. Since $dH\phi=d\alpha=c$ we get the solution.
\end{Proof}
\begin{Corollary}
We can also note that we can decompose $\phi$ into $\phi=\phi_{1}+\phi_{2}$ where $\phi_{2}\in \ker(A\wedge\_)$. Then we can choose $\phi_{1}$ to be a solution of (\ref{Eq_nonhomogenous_covariant_equation}), and $\phi_{2}\in \mathcal{E}$ be an arbitrary exact form in the kernel of $A\wedge \_$.
\end{Corollary}
\begin{Example}
Continuing Example \ref{Ex1}, we solve the equation
\begin{equation}
d^{\nabla} \phi = J, \quad J=xdx, \quad A=dy,
\end{equation}
where $J$ is an exact form. We have
\begin{equation}
HJ = \frac{1}{2}x^{2}.
\end{equation}
First, we use the series solution (\ref{Eq_solution_nonhomogenous_k_gt_0}) and then we compare it with (\ref{Eq_solution_nonhomogenous_k_eq_0}). We have
\begin{itemize}
\item {$H(A\wedge HJ)=H(\frac{1}{2}x^{2}dy)=\frac{1}{2}x^{2}y\int_{0}^{1}t^{2}dt = \frac{1}{3!}x^{2}y$,}
\item {$(H(A\wedge\_))^{2}HJ=\frac{1}{4!}x^{2}y^{2}$,}
\item {$\ldots$.}
\item {$(H(A\wedge\_))^{l}HJ=\frac{1}{(l+2)!}x^{2}y^{l}$.}
\end{itemize}
The inhomogeneous part of the solution (\ref{Eq_solution_nonhomogenous_k_gt_0}) is given by the series
\begin{equation}
\phi_{I}=\frac{1}{2!}x^{2}-\frac{1}{3!}x^{2}y+\ldots = \left(\frac{x}{y}\right)^{2}(e^{-y}-1+y).
\end{equation}
Likewise, applying (\ref{Eq_solution_nonhomogenous_k_eq_0}), we have
\begin{equation}
\begin{array}{c}
\phi_{I}=\exp(-HA) H(J\exp(HA)) = e^{-y}H(e^{y}xdx)=e^{-y}x^{2}\int_{0}^{1}te^{ty}dt = \\
e^{-y}x^{2}\frac{d}{dy}\left(\frac{1}{y}\int_{0}^{y}e^{z}dz\right) = \left(\frac{x}{y}\right)^{2}(e^{-y}-1+y),
\end{array}
\end{equation}
as previously. Therefore the inhomogeneous contributions calculated either by (\ref{Eq_solution_nonhomogenous_k_gt_0}) or (\ref{Eq_solution_nonhomogenous_k_eq_0}) agree, as required.
\end{Example}
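The same computation can be reproduced with the helpers \texttt{d}, \texttt{H}, and \texttt{wedge1} from the earlier sketches; the sample data are those of the example above.
\begin{verbatim}
J = {(0,): x}                       # J = x dx, an exact 1-form
term = H(J, 1)                      # HJ = x**2/2
phi_I = dict(term)
for l in range(1, 8):
    term = H(wedge1(A, term), 1)    # (H(A wedge _))^l HJ = x**2 y**l/(l+2)!
    phi_I[()] = sp.expand(phi_I[()] + (-1)**l * term[()])

closed = sp.series(x**2*(sp.exp(-y) - 1 + y)/y**2, y, 0, 8).removeO()
print(sp.expand(phi_I[()] - closed))   # 0: the series reproduces
                                       # (x/y)**2 (exp(-y) - 1 + y)
\end{verbatim}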
Finally, we will consider the complete equation where $J$ is an arbitrary, not necessarily exact, form. We have
\begin{Theorem}
The solution of the inhomogeneous covariant constancy equation
\begin{equation}
d^{\nabla} \phi = J,\quad d^{\nabla} = d + A\wedge \_,
\label{Eq.FullInhomogenous_CovariancyConstantEquation}
\end{equation}
where $\phi\in\Lambda^{k}(U)$, $A\in \Lambda^{1}(U)$, $J\in \Lambda^{k+1}(U)$ is given by
\begin{equation}
\phi = \phi_{1}+\phi_{2}+\phi_{3},
\end{equation}
where $\phi_{1}$ fulfils
\begin{equation}
d^{\nabla} \phi_{1}=J_{e} - d(\phi_{2}+\phi_{3}),
\end{equation}
and $\phi_{2}$ fulfils
\begin{equation}
A\wedge \phi_{2} = J_{a},
\label{Eq_Ja_condition}
\end{equation}
where $J_{e}:=dHJ$ is the exact part of $J$, and $J_{a}:=HdJ$ is the antiexact part of $J$.
The $\phi_{3}\in \ker(A\wedge\_)$ is an arbitrary form.
Moreover $A\wedge \phi_{1}\in \mathcal{E}^{k+1}(U)$ and $A\wedge\phi_{2}\in \mathcal{A}^{k+1}(U)$.
\end{Theorem}
\begin{Proof}
Since $A\wedge \phi \in \Lambda^{k+1}(U)$ and $k+1>0$, it can be decomposed into exact and antiexact parts. Therefore, we can find three forms $\phi=\phi_{1}+\phi_{2}+\phi_{3}$ such that
\begin{equation}
A\wedge \phi_{1} \in \mathcal{E}, \quad A\wedge \phi_{2}\in \mathcal{A}, \quad A\wedge \phi_{3}=0.
\end{equation}
Decomposing $J=J_{e}+J_{a}$ and substituting into the equation (\ref{Eq.FullInhomogenous_CovariancyConstantEquation}) and splitting into exact ($\mathcal{E}$) and antiexact ($\mathcal{A}$) parts gives
\begin{equation}
\underbrace{d(\phi_{1}+\phi_{2}+\phi_{3})+A\wedge\phi_{1} -J_{e}}_{\mathcal{E}} +\underbrace{A\wedge \phi_{2} - J_{a}}_{\mathcal{A}}=0.
\end{equation}
Since the decomposition into exact and antiexact parts is a direct sum, both braced terms must vanish separately, which gives the claimed equations. This ends the proof.
\end{Proof}
\begin{Remark}
We can note that $\phi_{2}=\phi_{2}(J_{a})$, i.e., $\phi_{2}$ depends on the antiexact part $J_{a}$ of $J$. However, in general, $\phi_{2}$ is not fixed completely by the condition (\ref{Eq_Ja_condition}), as will be presented below in examples.
\end{Remark}
We can now construct a practical way of solving the equation (\ref{Eq.FullInhomogenous_CovariancyConstantEquation}).
\begin{Algorithm}
For the equation (\ref{Eq.FullInhomogenous_CovariancyConstantEquation}):
\begin{enumerate}
\item {solve the algebraic constraint:
\begin{equation}
A\wedge \phi_{2} = J_{a},
\end{equation}
for $\phi_{2}$,}
\item {solve the algebraic constraint:
\begin{equation}
A\wedge \phi_{3}=0,
\end{equation}
for $\phi_{3}$,
}
\item {solve the differential equation:
\begin{equation}
d\phi_{1}+A\wedge\phi_{1}=J_{e}-d(\phi_{2}+\phi_{3}),
\end{equation}
for $\phi_{1}$, which is an inhomogeneous covariant constancy equation with an exact right-hand side,
}
\item {
compose the full solution
\begin{equation}
\phi=\phi_{1}+\phi_{2}+\phi_{3}.
\end{equation}
}
\end{enumerate}
\end{Algorithm}
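The splitting of $J$ into its exact and antiexact parts, which enters the above procedure, is given by the operators $dH$ and $Hd$; a small sketch (reusing the helpers \texttt{d} and \texttt{H} from the earlier listings, with a sample one-form chosen only for illustration) reads:
\begin{verbatim}
Jfull = {(0,): x*y}          # the 1-form  x*y dx, neither exact nor antiexact
J_e = d(H(Jfull, 1))         # exact part      dH J = (2xy/3) dx + (x**2/3) dy
J_a = H(d(Jfull), 2)         # antiexact part  Hd J = (xy/3) dx - (x**2/3) dy
print({i: sp.simplify(J_e.get(i, 0) + J_a.get(i, 0))
       for i in [(0,), (1,)]})                  # J_e + J_a = J
print(sp.simplify(x*J_a[(0,)] + y*J_a[(1,)]))   # 0: K _| J_a = 0 (antiexact)
\end{verbatim}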
We provide a simple example of solving algebraic constraints. As a negative example, we propose the following
\begin{Example}
We continue the Example \ref{Ex1}. We consider
\begin{equation}
d^{\nabla} \phi=J_{a}, \quad J_{a}=\frac{1}{2}(xdy-ydx), \quad A=dy.
\end{equation}
Since the solution of the exact part, from Theorem \ref{Th_homogenous_solution}, is $\phi_{1}=ce^{-y}$ and $\phi_{3}=0$, we will focus only on the algebraic constraint
\begin{equation}
A\wedge \phi_{2} = J_{a}.
\label{Ex_Jaconstraint_0}
\end{equation}
Assuming that $\phi_{2}=f(x,y) \in \Lambda^{0}(U)$ and substituting into (\ref{Ex_Jaconstraint_0}), we get $f\,dy=\frac{1}{2}(xdy-ydx)$, which cannot hold since the right-hand side contains a $dx$ component. Therefore, there are no solutions to this problem.
\end{Example}
As a positive example, we propose the following
\begin{Example}
Consider a star-shaped $U \subset \mathbb{R}^{3}$ with coordinates $x,y,z$. We will try to solve the constraint
\begin{equation}
A\wedge \phi = J_{a}, \quad J_{a}=xdy\wedge dz -y dx\wedge dz+z dx\wedge dy, \quad A=dy.
\end{equation}
It is a part of the process of solving the equation $d^{\nabla} \phi=J_{a}$. In order to solve this constraint, assume that
\begin{equation}
\phi =f(x,y,z)dx + g(x,y,z)dy + h(x,y,z) dz,
\end{equation}
for $f,g,h \in \Lambda^{0}(U)$. Since $A\wedge\phi=-f\,dx\wedge dy+h\,dy\wedge dz$, comparing the $dx\wedge dy$ and $dy\wedge dz$ components of the constraint one gets
\begin{equation}
f = -z, \quad h=x,
\end{equation}
and $g$ is arbitrary; note that the $dx\wedge dz$ component of $J_{a}$ cannot be produced by $A\wedge\_$, so the constraint fixes only the components within the image of $A\wedge\_$.
\end{Example}
As a final remark of this section note that the operators for the inverse of $d^{\nabla}$ are nonlocal. They can be expressed by curvature using
\begin{Proposition}
\begin{equation}
H(A\wedge d\alpha) = dH(A\wedge \alpha) + H(F\wedge\alpha)-H(A\wedge A\wedge\alpha) -A\wedge \alpha
\end{equation}
where $F=d^{\nabla} A$ is the curvature. For a connection valued in an abelian Lie algebra, $A\wedge A=0$.
\end{Proposition}
Using the above proposition and making a recursive substitution 'inside-out' in (\ref{Eq.Solution_homogenous_k_gt_0}), we are led to a complicated expression, and therefore, we do not follow this path. One can notice that the curvature (and connection form) enter the solution in a highly nonlocal and nonlinear way.
In the next section we discuss the issue of the existence of horizontal forms.
\section{Horizontal projection}
In this section we analyze possible problems when considering the horizontality and covariance constancy of forms on fibered sets. This issue is essential if we want to relate solutions on fibered set/bundle to the forms on base space as in the case of the associated bundle. Since we only want to illustrate the issue, we consider only scalar-valued forms from $\Lambda(U)$ for simplicity. For vector-valued forms additional constraints related to the matrix structure of the $A$ form must be considered. The idea of this section is based on an adaptation of the proof of the Retraction theorem (Theorem 1.3 of \cite{Bryant}).
Within the setup of the fibered space $U$ and a one-form $A\in \Lambda^{1}(U)$, let us denote by $VU$ a complement of $\ker(A)$ in $TU$; we call it the vertical tangent space, with dimension $k=\dim(VU)$.
\begin{Proposition}
For a one-form $A$ we can select $k=dim(VU)$ linearly independent vectors $\{X_{i}\}_{i=1}^{k}$ such that
\begin{equation}
X_{i}\lrcorner A = 1.
\label{Eq_Xi_definition}
\end{equation}
Moreover, the one-form $A$ can be decomposed into the sum of linearly independent one-forms
\begin{equation}
A = \sum_{i=1}^{k}\omega_{i}, \quad \omega_{1}\wedge\ldots\wedge \omega_{k}\neq 0.
\end{equation}
For each such one-form $\omega_{i}$ we can select a vector $X_{i}$ such that
\begin{equation}
X_{j}\lrcorner\omega_{i}=\delta_{ij}.
\label{Eq_omega_orthogonality}
\end{equation}
\end{Proposition}
\begin{Proof}
For the proof, assume that there is $X_{i}$ such that, in addition, $X_{i}\lrcorner \omega_{j}=a\neq 0$ for some $j\neq i$. Then, since $\omega_{i}$ and $\omega_{j}$ are linearly independent, we get a contradiction. Linear independence of the vectors can be proved similarly.
\end{Proof}
Now the vertical space $VU = span(\{X_{i}\}_{i=1}^{k})$. We can construct projectors:
\begin{equation}
P_{i} = I-\omega_{i}\wedge (X_{i}\lrcorner \_),
\end{equation}
with the property
\begin{equation}
X_{i}\lrcorner P_{i} =0.
\end{equation}
We can see that, since (\ref{Eq_omega_orthogonality}) is valid, the operators $\{P_{i}\}$ commute pairwise, i.e.,
\begin{equation}
P_{i}\circ P_{j}=P_{j}\circ P_{i}.
\end{equation}
The projectors $P_{i}$ are homomorphisms of the exterior algebra since we have
\begin{equation}
P_{i}(\alpha\wedge\beta)=P_{i}(\alpha)\wedge P_{i}(\beta),
\end{equation}
where $\omega_{i}\wedge \omega_{i}=0$ was used. Note that when $\omega_{i}$ (and therefore $A$) is not scalar-valued, this property does not hold in general.
We can now project a differential form $\phi \in \Lambda^{*}(U)$ onto a horizontal form by means of
\begin{equation}
\Delta:=P_{1}\circ \ldots \circ P_{k}.
\end{equation}
that is
\begin{equation}
X_{i} \lrcorner \Delta \phi =0,
\end{equation}
for all $1\leq i\leq k$.
Now we can examine the relation between a solution of $d^{\nabla} \phi =0$ and its horizontal part $\Delta \phi$. Obviously, if $[d^{\nabla}, \Delta ] = 0$, then the horizontal part of $\phi$ is also covariantly constant; however, in general this is not the case. To gain more insight into the problem, notice that we can write
\begin{equation}
\Delta = I -\sum_{i} \omega_{i}\wedge X_{i}\lrcorner\_ - \sum_{i<j} \omega_{i}\wedge\omega_{j} \wedge (X_{i}\lrcorner X_{j}\lrcorner\_) + \sum_{i<j<l} \omega_{i}\wedge \omega_{j}\wedge\omega_{l}\wedge (X_{i}\lrcorner X_{j} \lrcorner X_{l}\lrcorner \_)+\ldots,
\end{equation}
where all summation indices run over the set $\{1,\ldots, k\}$. Therefore, if $d^{\nabla} \phi=0$, then $d^{\nabla} \Delta \phi=0$ provided that
\begin{equation}
d^{\nabla} \Delta\phi = \sum_{i} d^{\nabla}(\omega_{i}\wedge X_{i}\lrcorner\phi) + \sum_{i<j} d^{\nabla}(\omega_{i}\wedge\omega_{j}\wedge (X_{i}\lrcorner X_{j}\lrcorner \phi)) + \ldots = 0.
\end{equation}
Requiring all the summands to vanish gives sufficient conditions for the horizontal projection $\Delta\phi$ of a covariantly constant form to be covariantly constant as well.
We illustrate it using an example.
\begin{Example}
Continuing Example \ref{Ex1}, we have $X_{1}=\partial_{y}$ and
\begin{equation}
\Delta \phi = (1-e^{-y})\frac{dx}{y}.
\end{equation}
However, $d^{\nabla} \Delta \phi \neq 0$.
In Example \ref{Ex2} we have $\Delta \phi_{2}=\phi_{2}$, so $P_{1}|_{\phi_{2}} = I$ and therefore $d^{\nabla} \Delta \phi_{2}=d^{\nabla} \phi_{2}=0$. Therefore, in this case the operator $\Delta$ commutes with $d^{\nabla}$ on the solution.
\end{Example}
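A direct computation with the helpers from the previous sketches confirms this behavior for the solution of Example \ref{Ex1}:
\begin{verbatim}
phi1 = {(0,): (1 - sp.exp(-y))/y,
        (1,): x*(sp.exp(-y) - 1 + y)/y**2}      # phi from Example 1

def P1(form):
    # P_1 = I - omega_1 wedge (X_1 _| _)  with  omega_1 = dy, X_1 = d/dy;
    # on 1-forms it removes the dy-part.
    interior = form.get((1,), 0)                # X_1 _| form
    out = dict(form)
    out[(1,)] = sp.simplify(out.get((1,), 0) - interior)
    return out

Dphi = P1(phi1)                                 # Delta phi = (1 - exp(-y))/y dx
cov = d(Dphi)                                   # d(Delta phi) + A wedge Delta phi
for idx, c in wedge1(A, Dphi).items():
    cov[idx] = sp.simplify(cov.get(idx, 0) + c)
print(sp.simplify(cov[(0, 1)]))                 # nonzero: d^nabla(Delta phi) != 0
\end{verbatim}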
In the next section we connect the results obtained so far with Operational Calculus.
\section{Relation to Bittner's operator calculus}
We can relate Bittner's operator calculus outlined in Appendix \ref{Appendix_BittnersOperatorCalculus} to the exterior derivative and homotopy operator. We expand the analogy that was introduced in \cite{KyciaPoincareCohomotopy}.
We define $L_{0}=\mathcal{E}\oplus\mathcal{A}$ and $L_{1}=\mathcal{E}$ as presented in Fig. \ref{Fig.DecompositionOperatorCalculusExterior}.
\begin{figure}
\centering
\xymatrix{ & 0 & & & \\
L_{1}:=& \ar[u]_{\hat{d}} \mathcal{E} \ar@/^/[drr]^{H} & & & \\
L_{0} := & \ar[dr]_{\hat{d}} \mathcal{E} & \oplus & \mathcal{A} \ar@/^/[ull]^{d} \ar[dl]^H & \\
& & 0 &
}
\caption{Operator calculus mapped to exterior calculus.}
\label{Fig.DecompositionOperatorCalculusExterior}
\end{figure}
We have obviously:
\begin{itemize}
\item {$S:=d:L_{0}\rightarrow L_{1}$ - derivative with $ker(S)=ker(d)=\mathcal{E}$.}
\item {$T:=H:L_{1}\rightarrow \mathcal{A}\subset L_{0}$ - integral.}
\end{itemize}
Obviously, $ST|_{L_{1}}=dH|_{\mathcal{E}} = I$ since $dH$ is the projection operator onto $\mathcal{E}$.
In order to identify the $s$ operator, we rewrite the homotopy invariance formula (\ref{Eq_homotopyFormula}) as
\begin{equation}
Hd = I - \underbrace{(s_{x_{0}}^{*} + dH)}_{s},
\end{equation}
i.e.
\begin{equation}
s:=\left\{
\begin{array}{ccc}
s_{x_{0}}^{*} & for & \Lambda^{0}(U) \\
dH & for & \Lambda^{k}(U), \quad k>0.
\end{array}
\right.
\end{equation}
Obviously, $s$ defined above is a projection operator ($s^2=s$) onto $ker(S)=ker(d)=\mathcal{E}$.
We will use the symbols $S$, $T$, and $s$ in these specific substitutions from exterior algebra.
We can now associate the operator $R=-A\wedge\_$ with the notion of an abstract (non-commutative) logarithm, in the sense that the operator $I-HR$ has no nontrivial zero divisors.
\begin{Proposition}
If $d\phi=R\phi$ then we have
\begin{equation}
(I-HR)\phi =0 \Rightarrow \phi=0.
\end{equation}
\end{Proposition}
\begin{Proof}
For the solution $\phi$ from (\ref{Eq.Solution_homogenous_k_gt_0}) we have $d\alpha = (I+H(A\wedge\_))\phi = (I-HR)\phi = 0$, and therefore, again from (\ref{Eq.Solution_homogenous_k_gt_0}), $\phi=0$, as required.
\end{Proof}
The following corollary gives a precise meaning to the formal fraction notation in (\ref{Eq_inverison_of_1HA}).
\begin{Corollary}
The operator $(I-HR)=(I+H(A\wedge\_))$ has no nontrivial zero divisors. Therefore, we can construct the Mikusi\'{n}ski ring of elements of $\Lambda^{*}(U)$ and operators $\{ I, I-HR\}$, where $\phi \in \Lambda^{*}(U)$ is represented by $\frac{\phi}{I}$, and other elements are of the form $\frac{\phi}{I-HR}$.
\end{Corollary}
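For the truncated series \texttt{phi} computed in the sketch after Example \ref{Ex1}, the meaning of this fraction can be illustrated directly: applying $I+H(A\wedge\_)$ returns the exact initial condition up to the first neglected term.
\begin{verbatim}
lhs = dict(phi)
for idx, c in H(wedge1(A, phi), 2).items():
    lhs[idx] = sp.expand(lhs.get(idx, 0) + c)
print(lhs)    # equals dx up to terms of order y**7: (I + H(A wedge _)) phi
              # telescopes back to the initial condition gamma_0 = dx
\end{verbatim}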
The following section provides a practical approach to solving the curvature equation.
\section{Curvature equation}
The curvature $F$ is the square of the differential operator $S-R$
\begin{equation}
F:=(S-R)^{2}.
\end{equation}
We fix, as in the previous section, $S=d$, $R=-A\wedge\_$, and $s=dH+s_{x_{0}}^{*}$, so that we obtain the usual curvature of $A$.
The curvature equation, due to its tensorial character, is an algebraic equation; however, we will analyze its solutions using the above machinery of (anti)exact forms to gain deeper insight into their structure.
We can easily solve the curvature equation using the method developed for the covariant constancy equation. First, we solve the homogeneous curvature equation in the following theorem.
\begin{Theorem}
\label{Th_homogenous_curvature}
In order to solve the homogeneous curvature equation
\begin{equation}
F\phi = (S-R)^{2}\phi=0,
\end{equation}
with $\phi\in \Lambda^{k}(U)$ rewrite it as a coupled system
\begin{equation}
\begin{array}{cc}
\phi_{2} := (S-R)\phi_{1}, & s\phi_{1}=0 \\
(S-R)\phi_{2}=0, & s\phi_{2}=c_{2},
\end{array}
\end{equation}
for $\phi_{1}$ and $\phi_{2}$ and then add a solution of the first-order equation
\begin{equation}
(S-R)\phi=0, \quad s\phi=c_{1}\in ker(S).
\end{equation}
\end{Theorem}
Next, the solution of the inhomogeneous curvature equation will be provided.
\begin{Theorem}
The solution of the inhomogeneous curvature equation
\begin{equation}
(S-R)^{2}\phi=J, \quad s\phi=c_{1}\in ker(S), \quad s(S-R)\phi=c_{2} \in ker(S),
\end{equation}
for $\phi \in \Lambda^{k}(U)$, $J \in \Lambda^{k+2}(U)$, $c_{1} \in \mathcal{E}^{k}(U)$, $c_{2} \in \mathcal{E}^{k+1}(U)$, $R\in \Lambda^{1}(U)$ is a linear combination of
\begin{itemize}
\item {First order equation
\begin{equation}
\begin{array}{cc}
(S-R)\phi=0 & s\phi=c_{1}.
\end{array}
\end{equation}
The solution is as in the homogeneous case of Theorem \ref{Th_homogenous_curvature}.
}
\item {Second order equation
\begin{equation}
(S-R)^{2}\phi=J, \quad s\phi=0, \quad s(S-R)\phi=c_{2}.
\end{equation}
It can be solved by replacing it with the first-order system of coupled equations:
\begin{equation}
\begin{array}{cc}
\phi_{2} = (S-R)\phi_{1}, & s\phi_{1}=0 \\
(S-R)\phi_{2}=J, & s\phi_{2}=c_{2},
\end{array}
\end{equation}
with $\phi=\phi_{1}$ and $\phi_{2}$.
}
\end{itemize}
\end{Theorem}
\section{Hodge duals}
When $U$ has a metric structure, so that the Hodge star $\star$ can be defined, we can use the dual (anti)coexact decomposition to solve dual equations. By dualizing Theorem \ref{Th_homogenous_solution} we have
\begin{Corollary}
Solution of the equation
\begin{equation}
\delta\phi+A^{\sharp}\lrcorner\phi=0,
\end{equation}
where $\phi\in\Lambda^{k}(U)$, $A\in\Lambda^{1}(U)$ is given by
\begin{equation}
\phi = \frac{1}{I+h(A^{\sharp}\lrcorner\_)}c = (I-h(A^{\sharp}\lrcorner\_)+h(A^{\sharp}\lrcorner(h(A^{\sharp}\lrcorner\_)))-\ldots)c,
\end{equation}
where $c\in \mathcal{C}(U)=ker(\delta)|_{U}$. The series is convergent for $||x-x_{0}||<\frac{k}{||A||_{\infty}}$, where the supremum norm is taken over the line joining $x_{0}$ with $x$.
\end{Corollary}
Likewise, from Theorem \ref{Th_nonhomogenous_solution_exactRHS}, the solution of the inhomogeneous equation with a coexact right-hand side is provided by
\begin{Theorem}
Solution of the equation
\begin{equation}
\delta\phi+A^{\sharp}\lrcorner\phi=J,
\end{equation}
where $\phi\in\Lambda^{k}(U)$, $A\in\Lambda^{1}(U)$, $J\in\mathcal{C}^{k-1}(U)$
is given by
\begin{equation}
\phi= \frac{1}{I+h(A^{\sharp}\lrcorner\_)}c + \sum_{l=0}^{\infty} (-1)^{l} (h(A^{\sharp}\lrcorner\_))^{l} hJ
\end{equation}
where $c\in \Lambda^{k}(U)$. The first term is a solution to the homogenous equation. The series is convergent for $||x-x_{0}||<\frac{k}{||A||_{\infty}}$.
\end{Theorem}
Finally, the solution of the covariant constancy equation with arbitrary inhomogeneity is given by
\begin{Theorem}
Solution of the equation
\begin{equation}
\delta\phi+A^{\sharp}\lrcorner\phi=J,
\end{equation}
where $\phi\in\Lambda^{k}(U)$, $A\in\Lambda^{1}(U)$, $J\in\Lambda^{k-1}(U)$ can be composed from three elements $\phi=\phi_{1}+\phi_{2}+\phi_{3}$, where $\phi_{1}$ is a solution of
\begin{equation}
(\delta + A^{\sharp}\lrcorner )\phi_{1}=J_{c} -\delta(\phi_{2}+\phi_{3}),
\end{equation}
the $\phi_{2}$ is a solution of a constraint equation
\begin{equation}
A^{\sharp}\lrcorner\phi_{2}=J_{y},
\end{equation}
and $\phi_{3}$ is a solution of
\begin{equation}
A^{\sharp}\lrcorner\phi_{3}=0,
\end{equation}
where $J_{c}=\delta hJ$ is the coexact part of $J$, and $J_{y}=h\delta J$ is the anticoexact part of $J$.
\end{Theorem}
We can also consider the square of the operator $\delta +A^{\sharp}\lrcorner\_$; however, it is related to the results of the previous section, since by (\ref{Eq.HodgeDuality}) we have, for $\alpha \in \Lambda^{k}(U)$, $(\delta + A^{\sharp}\lrcorner)\star \alpha=(-1)^{k+1}\star (d+\_\wedge A) \alpha$.
\section{Conclusions}
The formulas for inverting the covariant exterior derivative in a local star-shaped subset of a fibered set are provided. Using this prescription, the solution of the curvature equation is given. Moreover, a close relation of the methods developed here to Operational Calculus, especially to Bittner's calculus, is established. Since Operational Calculus was invented to solve (linear) differential equations appearing in engineering as easily as algebraic equations, we believe this link helps simplify notation and promotes an efficient way of making local calculations in differential geometry and its applications. Using the (anti)(co)exact decomposition of forms allows one to solve equations involving covariant exterior derivatives as efficiently as standard ODEs.